CN115601382A - Vehicle door clamped object detection method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115601382A
Authority
CN
China
Prior art keywords
image frame
vehicle
detection result
image
clamped
Prior art date
Legal status
Pending
Application number
CN202211357663.XA
Other languages
Chinese (zh)
Inventor
黄永
赛影辉
李涛
Current Assignee
Wuhu Automotive Prospective Technology Research Institute Co ltd
Chery Automobile Co Ltd
Original Assignee
Wuhu Automotive Prospective Technology Research Institute Co ltd
Chery Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhu Automotive Prospective Technology Research Institute Co ltd, Chery Automobile Co Ltd filed Critical Wuhu Automotive Prospective Technology Research Institute Co ltd
Priority to CN202211357663.XA priority Critical patent/CN115601382A/en
Publication of CN115601382A publication Critical patent/CN115601382A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a vehicle door clamped object detection method, apparatus, device and storage medium, and belongs to the fields of automobiles and computers. The method comprises the following steps: acquiring a first image frame and a second image frame of a vehicle from the junction of the door and the body of the vehicle; processing the first image frame and the second image frame respectively to obtain a detection result of the first image frame and a detection result of the second image frame; and acquiring article data between the vehicle door and the vehicle body according to the two detection results. The technical scheme reduces traffic accidents caused by objects clamped between the vehicle door and the vehicle body. Because different image frames correspond to different acquisition moments, whether an object is clamped between the door and the body is determined from image frames acquired at different moments, which improves the accuracy of the detection result compared with detection on a single frame.

Description

Vehicle door clamped object detection method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of automobiles and computers, and more particularly, to a method, an apparatus, a device and a storage medium for detecting an object clamped in a door of a vehicle.
Background
In autumn and winter, users often wear long garments and accessories such as long coats, long skirts and scarves. However, when a user wearing such an item gets into a vehicle, a moment of carelessness can leave the item caught between the door and the vehicle body after the door is closed, which may cause a traffic accident.
Disclosure of Invention
The embodiments of the present application provide a vehicle door clamped object detection method, apparatus, device and storage medium, which can reduce traffic accidents caused by objects clamped between a vehicle door and a vehicle body. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a method for detecting an object clamped in a vehicle door, where the method includes:
acquiring a first image frame and a second image frame of a vehicle from a junction of a door and a body of the vehicle; wherein the acquisition moments of the first image frame and the second image frame are different;
processing the first image frame and the second image frame respectively to obtain a detection result of the first image frame and a detection result of the second image frame; wherein the detection result of the first image frame is used for indicating whether there is an article intersecting the vehicle in the first image frame, and the detection result of the second image frame is used for indicating whether there is an article intersecting the vehicle in the second image frame;
and acquiring article data between the vehicle door and the vehicle body according to the detection result of the first image frame and the detection result of the second image frame, wherein the article data is used for indicating whether an object is clamped between the vehicle door and the vehicle body.
In another aspect, an embodiment of the present application provides a vehicle door clamped object detection apparatus, the apparatus comprising:
an image acquisition module, configured to acquire a first image frame and a second image frame of a vehicle from the junction of a door and a body of the vehicle; wherein the acquisition moments of the first image frame and the second image frame are different;
an image processing module, configured to process the first image frame and the second image frame respectively to obtain a detection result of the first image frame and a detection result of the second image frame; wherein the detection result of the first image frame is used for indicating whether there is an article intersecting the vehicle in the first image frame, and the detection result of the second image frame is used for indicating whether there is an article intersecting the vehicle in the second image frame;
and a data acquisition module, configured to acquire article data between the vehicle door and the vehicle body according to the detection result of the first image frame and the detection result of the second image frame, wherein the article data is used for indicating whether an object is clamped between the vehicle door and the vehicle body.
In another aspect, an embodiment of the present application provides a computer device comprising a processor and a memory, where the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the above vehicle door clamped object detection method.
In a further aspect, an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored, and the computer program is loaded and executed by a processor to implement the above vehicle door clamped object detection method.
In yet another aspect, a computer program product is provided which, when run on a computer device, causes the computer device to perform the above vehicle door clamped object detection method.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
the method comprises the steps of respectively processing different image frames to obtain detection results corresponding to the different image frames, and further determining whether an object to be clamped exists between the vehicle door and the vehicle body based on the detection results, so that a user can conveniently and reasonably adjust the object to be clamped between the vehicle door and the vehicle body, and traffic accidents caused by the object to be clamped between the vehicle door and the vehicle body are reduced; and different image frames correspond to different acquisition moments, whether the clamped object exists between the vehicle door and the vehicle body is determined based on the image frames acquired at the different acquisition moments, and compared with the detection of a single-frame image, the accuracy of the clamped object detection result is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic view of a vehicle detection system provided by one embodiment of the present application;
FIG. 2 is a flowchart of a vehicle door clamped object detection method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a training mode of a vehicle clamped object detection model;
FIG. 4 and FIG. 5 are schematic diagrams exemplarily showing a clamped object determination manner;
FIG. 6 is a schematic diagram illustrating the flow of a vehicle door clamped object detection method;
FIG. 7 is a block diagram of a vehicle door clamped object detection device according to an embodiment of the present disclosure;
FIG. 8 is a block diagram of a vehicle door clamped object detection device according to another embodiment of the present disclosure;
fig. 9 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of a vehicle detection system according to an embodiment of the present application is shown. The vehicle detection system includes an image capture device 10 and a computer device 20.
The image capture device 10 is used to capture images from the interface of the door and body of a vehicle. Illustratively, the image capture device 10 is a camera. In some embodiments, the image capture device 10 is an onboard device of the vehicle, such as an onboard camera disposed at the location of a rear view mirror of the vehicle.
The computer device 20 is used for processing the images acquired by the image capture device 10 to determine whether an object is clamped between the door and the body of the vehicle. Exemplarily, as shown in fig. 1, the computer device obtains a first image frame of the boundary between the vehicle door and the vehicle body while the vehicle is stationary, obtains a second image frame of the same boundary when the vehicle reaches a target vehicle speed, and then performs image processing on both frames using a vehicle clamped object detection model to obtain the detection result of each frame. It determines that an object is clamped between the door and the body when both detection results indicate the presence of an article intersecting the vehicle, and determines that no object is clamped when the detection result of the first image frame and/or the second image frame indicates that no such article exists.
In some embodiments, the image capture device 10 and the computer device 20 communicate with each other through a network.
Referring to fig. 2, a flowchart of a method for detecting an object clamped in a vehicle door according to an embodiment of the present application is shown. The method can be applied to a vehicle-mounted terminal of a vehicle, and the execution subject of each step can be the computer device 20 in the embodiment of fig. 1. The method may comprise at least one of the following steps (201-203):
step 201, acquiring a first image frame and a second image frame of a vehicle from a boundary of a door and a body of the vehicle.
The vehicle may be any type of vehicle, which is not limited in the embodiments of the present application. In the embodiments of the present application, to reduce traffic accidents caused by objects clamped between the vehicle door and the vehicle body, the computer device acquires the first image frame and the second image frame from the boundary of the door and the body when the vehicle starts moving off.
In some embodiments, a computer device obtains, via an image capture device, a first image frame and a second image frame of a vehicle from an interface of a door and a body of the vehicle.
In a possible implementation, the image capturing device is a vehicle-mounted device. In some embodiments, at the time of vehicle assembly, the image capture device is configured for the vehicle at a suitable location based on the image capture angle of the image capture device such that the first and second image frames are subsequently captured in time by the image capture device.
In another possible embodiment, the image capture device is not a vehicle-mounted device. In some embodiments, upon determining that the vehicle has started moving off, the computer device obtains the available image capture device closest to the vehicle; that device captures the first image frame and the second image frame and transmits them to the computer device.
Of course, in other possible embodiments, the image capture devices include both an on-board device and an off-board device. In some embodiments, when it is determined that the vehicle has started moving off, the first image frame and the second image frame are preferentially captured by the on-board device; if the on-board device cannot capture them, they are captured by the off-board device instead.
In the embodiment of the application, the acquisition time of the first image frame and the acquisition time of the second image frame are different.
In one possible implementation, the computer device acquires the first image frame and the second image frame based on the vehicle speed. In some embodiments, the computer device acquires the first image frame from the boundary of the door and the body while the vehicle is stationary, and acquires the second image frame from the same boundary when the vehicle reaches a target vehicle speed. The target vehicle speed is the speed reached as the vehicle moves off and may therefore be referred to as the start vehicle speed.
In another possible embodiment, the computer device acquires the first image frame and the second image frame based on driving time. In some embodiments, the computer device acquires the first image frame from the boundary of the vehicle door and the vehicle body at a first time and acquires the second image frame at a second time. The first time is the moment at which the vehicle starts moving off, and the second time is a later moment at which the interval from the first time reaches an interval threshold. The interval threshold may take any value and can be flexibly set and adjusted according to practical requirements, which is not limited in the embodiments of the present application.
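The two acquisition strategies above (speed-triggered and time-triggered) can be sketched as a small capture policy. This is an illustrative sketch only: the patent does not specify the target vehicle speed or the interval threshold, so the values, class name, and field names below are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FrameCapturePolicy:
    """Decides when the two frames described above are captured.

    `target_speed_kmh` is a hypothetical "start vehicle speed"; the
    patent leaves its value open.
    """
    target_speed_kmh: float = 5.0
    captured: List[str] = field(default_factory=list)

    def on_sample(self, t_s: float, speed_kmh: float) -> Optional[str]:
        # First frame: taken while the vehicle is still stationary.
        if "first" not in self.captured and speed_kmh == 0.0:
            self.captured.append("first")
            return "first"
        # Second frame: taken once the target speed is reached
        # (the speed-based variant described above).
        if "first" in self.captured and "second" not in self.captured \
                and speed_kmh >= self.target_speed_kmh:
            self.captured.append("second")
            return "second"
        return None

policy = FrameCapturePolicy()
events = [policy.on_sample(t, v) for t, v in [(0.0, 0.0), (0.5, 2.0), (1.0, 6.0)]]
# events == ["first", None, "second"]
```

The time-based variant would differ only in the second condition (elapsed time since the first capture reaching the interval threshold instead of speed reaching the target).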
Step 202, the first image frame and the second image frame are processed respectively to obtain a detection result of the first image frame and a detection result of the second image frame.
In this embodiment, after acquiring the first image frame and the second image frame, the computer device processes them respectively to obtain a detection result for each. The detection result of the first image frame is used for indicating whether there is an article intersecting the vehicle in the first image frame, and the detection result of the second image frame is used for indicating whether there is an article intersecting the vehicle in the second image frame.
In one possible implementation, the computer device processes the first image frame and the second image frame respectively using a vehicle clamped object detection model, which is a deep learning model, to obtain the two detection results. Exemplarily, the training process of the model is as shown in fig. 3: an initial sample set is obtained; image augmentation and image annotation are performed on the samples in the initial sample set to obtain a final sample set; the final sample set is divided into a training sample set, a verification sample set and a test sample set; the training sample set is used to train the model and adjust its parameters; the verification sample set is then used to validate the trained model; finally, the test sample set is used to test the validated model.
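The division of the final sample set into training, verification and test subsets described above can be sketched as follows. The 70/15/15 ratio and the function name are illustrative assumptions, since the patent leaves the split proportions unspecified.

```python
import random

def split_samples(samples, train=0.7, val=0.15, seed=0):
    """Split an annotated sample set into training / verification / test
    subsets, as in the training process described above.  The ratios are
    assumptions for illustration."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)              # randomize before splitting
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_samples(list(range(100)))
# len(train_set), len(val_set), len(test_set) == (70, 15, 15)
```

Each subset plays the role the text assigns it: the first for parameter adjustment, the second for validation of the trained model, the third for the final test.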
In another possible embodiment, the computer device processes the first image frame and the second image frame respectively through a fixed image processing sequence to obtain their detection results. In some embodiments, after acquiring the two frames, the computer device sequentially performs grayscale conversion, noise filtering, edge detection and edge curve drawing on each of them to obtain the detection result of the first image frame and the detection result of the second image frame. In some embodiments, to improve the efficiency of obtaining the edge curve image, the computer device segments each frame before processing it: it extracts a Region of Interest (ROI) from the first image frame and performs grayscale conversion, noise filtering, edge detection and edge curve drawing on that ROI to obtain the edge curve image of the first image frame; likewise, it extracts an ROI from the second image frame and applies the same sequence to obtain the edge curve image of the second image frame.
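The fixed sequence above (grayscale conversion, noise filtering, edge detection, then edge curve drawing) can be approximated with a minimal NumPy sketch. A production system would more likely use library routines such as Gaussian blur and Canny edge detection; the box filter, gradient operator, and threshold factor here are assumptions chosen to keep the example self-contained.

```python
import numpy as np

def detect_edges(rgb: np.ndarray, thresh: float = 0.2) -> np.ndarray:
    """Simplified stand-in for the fixed processing sequence: grayscale
    conversion, noise filtering, edge detection, and a binary edge map
    (from which edge curves would then be traced)."""
    # 1. Grayscale conversion (ITU-R BT.601 luma weights).
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # 2. Noise filtering: 3x3 box blur built from shifted sums.
    p = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # 3. Edge detection: central-difference gradient magnitude.
    gy, gx = np.gradient(blur)
    mag = np.hypot(gx, gy)
    # 4. Threshold to a binary edge map.
    if mag.max() == 0:
        return np.zeros_like(mag, bool)
    return mag > thresh * mag.max()

# A synthetic frame with a vertical brightness step yields a vertical edge.
img = np.zeros((8, 8, 3)); img[:, 4:] = 1.0
edges = detect_edges(img)
```

In practice the sequence would be applied to the ROI rather than the full frame, as the text notes, which shrinks the array the filters run over.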
And step 203, acquiring article data between the vehicle door and the vehicle body according to the detection result of the first image frame and the detection result of the second image frame.
In the embodiment of the present application, after obtaining the detection result of the first image frame and the detection result of the second image frame, the computer device acquires the article data between the vehicle door and the vehicle body from those two results. The article data is used for indicating whether an object is clamped between the vehicle door and the vehicle body.
In some embodiments, first article data is generated if the detection result of the first image frame indicates the presence of an article intersecting the vehicle and the detection result of the second image frame also indicates the presence of an article intersecting the vehicle. The first article data indicates that a clamped object exists between the vehicle door and the vehicle body. For example, as shown in fig. 4, when an article intersecting the vehicle exists in both the first image frame and the second image frame, that article is determined to be an object clamped between the vehicle door and the vehicle body.
In some embodiments, second article data is generated if the detection result of the first image frame indicates that there is no article intersecting the vehicle and/or the detection result of the second image frame indicates that there is no article intersecting the vehicle. The second article data indicates that no clamped object exists between the vehicle door and the vehicle body. Exemplarily, as shown in fig. 5, when an article intersecting the vehicle is present in the first image frame but not in the second image frame, it is determined that no object is clamped between the door and the body; likewise, when neither frame contains an article intersecting the vehicle, it is determined that no object is clamped. Of course, if no article intersecting the vehicle appears in the first image frame but one appears in the second image frame, it is also determined that no object is clamped between the door and the vehicle body.
In some embodiments, when the article data indicates that an object is clamped between the vehicle door and the vehicle body, the computer device issues early-warning prompt information and suppresses control operations on the accelerator pedal of the vehicle. The early-warning prompt information is used to alert the user that an object is clamped between the door and the body; while the control operations are suppressed, the vehicle does not respond to operation of the accelerator pedal.
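The combination rule in the cases above, together with the warning and accelerator-suppression response, reduces to a single decision function: an object is judged to be clamped only when BOTH per-frame results report an article intersecting the vehicle. The function and key names below are illustrative, not from the patent.

```python
def evaluate_entrapment(first_has_item: bool, second_has_item: bool) -> dict:
    """Combine the two per-frame detection results as described above.

    first_has_item / second_has_item: whether each frame's detection
    result indicates an article intersecting the vehicle.
    """
    clamped = first_has_item and second_has_item
    return {
        "object_clamped": clamped,        # the "article data"
        "warn_user": clamped,             # early-warning prompt information
        "suppress_accelerator": clamped,  # ignore accelerator pedal input
    }

evaluate_entrapment(True, True)["object_clamped"]    # True: both frames agree
evaluate_entrapment(True, False)["object_clamped"]   # False: item freed by frame 2
```

Requiring agreement across two acquisition moments is what gives the scheme its robustness over single-frame detection: a garment that is merely near the door seam when stationary, but clear of it once the vehicle moves, produces no alarm.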
In summary, in the technical scheme provided by the embodiments of the present application, different image frames are processed respectively to obtain a detection result corresponding to each, and whether an object is clamped between the vehicle door and the vehicle body is then determined from those detection results. This allows a user to deal with a clamped object in time and reduces traffic accidents caused by objects clamped between the door and the body. Moreover, because different image frames correspond to different acquisition moments, the judgement is made from frames acquired at different times, which improves the accuracy of the clamped object detection result compared with detection on a single frame.
Next, a manner of acquiring the detection result of the first image frame and the detection result of the second image frame will be described.
In a possible implementation manner, the computer device respectively processes the first image frame and the second image frame by using the vehicle clamped object detection model to obtain a detection result of the first image frame and a detection result of the second image frame. In an exemplary embodiment, the step 202 includes at least one of:
1. performing feature extraction processing on the first image frame and the second image frame respectively to obtain a first feature map of the first image frame and a first feature map of the second image frame;
2. performing image enhancement processing on the first feature map of the first image frame and the first feature map of the second image frame respectively to obtain a second feature map of the first image frame and a second feature map of the second image frame;
3. acquiring the detection result of the first image frame based on the second feature map of the first image frame, and acquiring the detection result of the second image frame based on the second feature map of the second image frame.
In some embodiments, after acquiring the first image frame and the second image frame, the computer device performs feature extraction processing on them respectively to obtain a first feature map of each. Exemplarily, for the first image frame, the computer device performs feature extraction processing on the first image frame to obtain a low-level global feature map, normalizes that map to obtain a normalized feature map, and obtains the first feature map of the first image frame from the normalized feature map; similarly, for the second image frame, the computer device performs feature extraction processing on the second image frame to obtain its low-level global feature map, normalizes it, and obtains the first feature map of the second image frame from the normalized feature map.
In some embodiments, after acquiring the first feature map of the first image frame and the first feature map of the second image frame, the computer device performs image enhancement processing on each of them to obtain a second feature map of the first image frame and a second feature map of the second image frame. Exemplarily, for the first image frame, the computer device convolves its first feature map to obtain a convolution feature map, performs multi-scale feature fusion on the convolution feature map to obtain a fusion feature map, and performs semantic feature enhancement and positioning feature enhancement on the fusion feature map to obtain the second feature map of the first image frame; the first feature map of the second image frame is processed in the same way to obtain the second feature map of the second image frame.
Illustratively, for the first image frame, the computer device downsamples the fusion feature map to achieve semantic feature enhancement, obtaining a semantic enhancement feature map, and then upsamples the semantic enhancement feature map to achieve positioning feature enhancement, obtaining the second feature map of the first image frame; the fusion feature map of the second image frame is downsampled and upsampled in the same way to obtain the second feature map of the second image frame.
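The enhancement path described above (normalization, downsampling for semantic enhancement, upsampling back for positioning enhancement, then fusion) can be illustrated with a toy NumPy sketch. A real detector would use learned convolutions (e.g. a feature-pyramid-style neck), so this only mirrors the data flow; all names and the pooling choices are assumptions.

```python
import numpy as np

def enhance_features(feat: np.ndarray) -> np.ndarray:
    """Toy stand-in for the enhancement path: normalize the incoming
    (first) feature map, pool to a coarser scale, upsample back, and
    fuse the two scales by element-wise addition."""
    # Normalization of the first feature map.
    norm = (feat - feat.mean()) / (feat.std() + 1e-6)
    # "Semantic feature enhancement": 2x downsampling by average pooling.
    h, w = norm.shape
    coarse = norm[:h - h % 2, :w - w % 2] \
        .reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # "Positioning feature enhancement": 2x nearest-neighbour upsampling.
    up = coarse.repeat(2, axis=0).repeat(2, axis=1)
    # Multi-scale fusion: element-wise addition of the two scales.
    return norm[:up.shape[0], :up.shape[1]] + up

fused = enhance_features(np.arange(16.0).reshape(4, 4))
# fused.shape == (4, 4)
```

The coarse branch carries context (what the object is), the fine branch carries localization (where its edges lie); summing them is the simplest fusion that preserves both, which is the intuition behind the semantic/positioning split in the text.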
In another possible embodiment, the computer device processes the first image frame and the second image frame respectively through a fixed image processing sequence to obtain a detection result of the first image frame and a detection result of the second image frame. In an exemplary embodiment, the step 202 includes at least one of the following steps:
1. respectively carrying out edge detection processing on the first image frame and the second image frame to obtain an edge curve image of the first image frame and an edge curve image of the second image frame;
2. acquiring a detection result of the first image frame based on an edge curve image of the first image frame; and acquiring a detection result of the second image frame based on the edge curve image of the second image frame.
In the embodiment of the application, after acquiring the first image frame and the second image frame, the computer device performs edge detection on the first image frame and the second image frame respectively to obtain an edge curve image of the first image frame and an edge curve image of the second image frame. Wherein the edge curve image is used to record the edge curve of each object (including the vehicle) in the image frame.
Next, taking the first image frame as an example, a detection result acquisition mode of the first image frame will be described.
In some embodiments, after acquiring the first image frame, the computer device performs image segmentation on the first image frame to acquire a corresponding region of interest from it. Illustratively, when segmenting the first image frame, the computer device acquires position information of the region of interest within the first image frame, and then segments the region of interest from the first image frame based on that position information. The position information is preset based on the acquisition position of the first image frame; illustratively, the acquisition range corresponding to the first image frame is determined from the acquisition position, and the position information is then determined according to the positions of the vehicle door and the vehicle body within that acquisition range.
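The position-information-based segmentation amounts to a crop, which can be illustrated as follows. The bound values are hypothetical placeholders for the calibration derived from the camera's acquisition position; the application does not give concrete coordinates.

```python
import numpy as np

# Hypothetical preset position information for the door/body junction,
# expressed as (top, bottom, left, right) pixel bounds within the frame.
# Real values would be calibrated from the camera's acquisition position.
ROI_BOUNDS = (100, 400, 50, 300)

def crop_region_of_interest(frame: np.ndarray, bounds=ROI_BOUNDS) -> np.ndarray:
    """Segment the region of interest from the image frame using the preset bounds."""
    top, bottom, left, right = bounds
    return frame[top:bottom, left:right]

frame = np.zeros((480, 640), dtype=np.uint8)  # stand-in grayscale frame
roi = crop_region_of_interest(frame)          # 300 x 250 region of interest
```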
In some embodiments, after acquiring the region of interest of the first image frame, the computer device performs image grayscale processing on the region of interest of the first image frame to obtain a grayscale image of the first image frame, and filters noise in the grayscale image of the first image frame to obtain a filtered image of the first image frame. Illustratively, the computer device filters noise in the grayscale image of the first image frame using gaussian filtering to obtain a filtered image of the first image frame. Of course, in the exemplary embodiment, other suitable filtering manners may be selected according to practical situations to filter noise in the grayscale image of the first image frame, which is not limited in the embodiment of the present application.
In some embodiments, after acquiring the filtered image of the first image frame, the computer device performs edge detection on the filtered image of the first image frame by using an edge detection operator to obtain an edge image of the first image frame. Illustratively, the computer device performs edge detection on the filtered image of the first image frame by using a Canny operator to obtain an edge image of the first image frame. Of course, in the exemplary embodiment, other suitable edge detection operators may be selected according to actual situations to perform edge detection on the filtered image of the first image frame, which is not limited in the embodiment of the present application.
In some embodiments, after obtaining the edge image of the first image frame, the computer device performs feature extraction on the edge image of the first image frame to obtain a gray histogram corresponding to the edge image of the first image frame, and then draws an edge curve image of the first image frame based on pixel point distribution information in the gray histogram. Then, a detection result of the first image frame is acquired based on the edge curve image of the first image frame.
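The classical pipeline of the preceding paragraphs (grayscale conversion, Gaussian filtering, edge detection, gray-histogram extraction) can be sketched with NumPy alone. A plain gradient magnitude stands in here for the Canny operator; a production system would use a full Canny implementation, and the kernel radius and sigma below are illustrative assumptions.

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Image grayscale processing using ITU-R BT.601 luma weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_filter(img: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Separable Gaussian filtering to suppress noise in the grayscale image."""
    r = 2
    x = np.arange(-r, r + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    k /= k.sum()
    img = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, img)

def edge_map(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge image (a simple stand-in for the Canny operator)."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def gray_histogram(edges: np.ndarray, bins: int = 64) -> np.ndarray:
    """Gray histogram of the edge image, from which the edge curve image is drawn."""
    hist, _ = np.histogram(edges, bins=bins, range=(0.0, float(edges.max()) + 1e-9))
    return hist

# Synthetic region of interest with a vertical step edge at column 16.
roi = np.zeros((32, 32, 3))
roi[:, 16:, :] = 255.0
edges = edge_map(gaussian_filter(to_gray(roi)))
hist = gray_histogram(edges)
```

The edge response concentrates around the step column, while the flat left region stays near zero, which is what the subsequent edge-curve drawing relies on.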
The detection result obtaining manner of the second image frame is the same as the detection result obtaining manner of the first image frame, and reference is specifically made to the description of the first image frame, which is not repeated herein.
In addition, with reference to fig. 6, a complete flow of the vehicle door object-to-be-clamped detection method in the present application will be described. The method comprises the following specific steps:
Step 601, when the vehicle begins to start, acquiring a first image frame and a second image frame at the junction of the vehicle door and the vehicle body via a rearview mirror camera.
Step 602, calling a vehicle clamped object detection model to respectively process the first image frame and the second image frame to obtain detection information of the first image frame and detection information of the second image frame.
Step 603, determining whether an object is clamped between the vehicle door and the vehicle body based on the detection information of the first image frame and the detection information of the second image frame: if a clamped object exists between the vehicle door and the vehicle body, executing step 604; if no clamped object exists between the vehicle door and the vehicle body, executing step 605.
Step 604, sending out early warning prompt information and inhibiting the accelerator pedal of the vehicle.
Step 605, determining that the accelerator pedal of the vehicle is normal.
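Steps 602 to 605 reduce to the following decision sketch. The boolean per-frame results stand in for the detection information produced by the clamped-object detection model, and the returned dictionary is an assumed representation of the warning and accelerator-inhibition actions, not an actual vehicle interface.

```python
def door_pinch_flow(detect_first: bool, detect_second: bool) -> dict:
    """Sketch of steps 603-605: both frame-level detections must report an
    article intersecting the vehicle before a clamped object is confirmed."""
    if detect_first and detect_second:
        # step 604: warn the user and inhibit the accelerator pedal
        return {"warning": True, "accelerator_inhibited": True}
    # step 605: accelerator pedal operates normally
    return {"warning": False, "accelerator_inhibited": False}

result = door_pinch_flow(True, True)
```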
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 7, a block diagram of a vehicle door clamped object detection apparatus according to an embodiment of the present application is shown. The apparatus has the function of implementing the above vehicle door clamped object detection method, and the function may be implemented by hardware or by hardware executing corresponding software. The apparatus may be a computer device, or may be disposed in a computer device. The apparatus 700 may include: an image acquisition module 710, an image processing module 720, and a data acquisition module 730.
The image acquisition module 710 is used for acquiring a first image frame and a second image frame of a vehicle from the junction of a door and a body of the vehicle; wherein the acquisition time instants of the first image frame and the second image frame are different.
An image processing module 720, configured to process the first image frame and the second image frame respectively to obtain a detection result of the first image frame and a detection result of the second image frame; wherein the detection result of the first image frame is used for indicating whether an article is intersected with the vehicle in the first image frame, and the detection result of the second image frame is used for indicating whether an article is intersected with the vehicle in the second image frame.
A data obtaining module 730, configured to obtain article data between the vehicle door and the vehicle body according to the detection result of the first image frame and the detection result of the second image frame, where the article data is used to indicate whether there is an object clamped between the vehicle door and the vehicle body.
In an exemplary embodiment, the data obtaining module 730 is further configured to:
if the detection result of the first image frame indicates that an article intersected with the vehicle exists and the detection result of the second image frame indicates that the article intersected with the vehicle exists, generating first article data, wherein the first article data is used for indicating that an object clamped between the vehicle door and the vehicle body exists;
if the detection result of the first image frame indicates that no article intersected with the vehicle exists and/or the detection result of the second image frame indicates that no article intersected with the vehicle exists, generating second article data, wherein the second article data is used for indicating that no object is clamped between the vehicle door and the vehicle body.
In an exemplary embodiment, the image acquisition module 710 is further configured to:
acquiring the first image frame from the boundary of the vehicle door and the vehicle body under the condition that the vehicle is in a static state;
and acquiring the second image frame from the boundary of the vehicle door and the vehicle body when the vehicle speed of the vehicle reaches a target vehicle speed.
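The two acquisition conditions above can be expressed as a small trigger function. The 5 km/h target speed is an assumed calibration value; the application does not specify the target vehicle speed.

```python
def frame_to_capture(vehicle_stationary: bool, speed_kmh: float,
                     target_speed_kmh: float = 5.0):
    """Return which image frame to acquire under the current vehicle state.

    The first frame is captured while the vehicle is stationary; the second
    once the vehicle speed reaches the (assumed) target speed."""
    if vehicle_stationary:
        return "first"
    if speed_kmh >= target_speed_kmh:
        return "second"
    return None  # between the two trigger points: capture nothing
```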
In an exemplary embodiment, the image processing module 720 is further configured to:
and respectively processing the first image frame and the second image frame by adopting a vehicle clamped object detection model to obtain a detection result of the first image frame and a detection result of the second image frame.
In an exemplary embodiment, the image processing module 720 is further configured to:
respectively carrying out feature extraction processing on the first image frame and the second image frame to obtain a first feature map of the first image frame and a first feature map of the second image frame;
respectively carrying out image enhancement processing on the first feature map of the first image frame and the first feature map of the second image frame to obtain a second feature map of the first image frame and a second feature map of the second image frame;
acquiring a detection result of the first image frame based on the second feature map of the first image frame; and acquiring a detection result of the second image frame based on the second feature map of the second image frame.
In an exemplary embodiment, the image processing module 720 is further configured to:
Respectively carrying out edge detection processing on the first image frame and the second image frame to obtain an edge curve image of the first image frame and an edge curve image of the second image frame;
acquiring a detection result of the first image frame based on an edge curve image of the first image frame; and acquiring a detection result of the second image frame based on the edge curve image of the second image frame.
In an exemplary embodiment, as shown in fig. 8, the apparatus 700 further comprises: an information sending module 740 and a pedal inhibition module 750.
The information sending module 740 is configured to send out early warning prompt information when the article data indicates that an object is clamped between the vehicle door and the vehicle body, where the early warning prompt information is used to prompt that an object is clamped between the vehicle door and the vehicle body.
A pedal inhibition module 750, configured to inhibit a control operation on an accelerator pedal of the vehicle.
In summary, in the technical scheme provided by the embodiments of the application, different image frames are processed separately to obtain their respective detection results, and whether an object is clamped between the vehicle door and the vehicle body is then determined based on those detection results. This makes it convenient for a user to handle a clamped object in a timely manner and reduces traffic accidents caused by objects clamped between the vehicle door and the vehicle body. Moreover, because the image frames are acquired at different moments, determining whether a clamped object exists based on frames from different acquisition moments improves the accuracy of the detection result compared with detection on a single frame.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Referring to fig. 9, a block diagram of a computer device 900 according to an embodiment of the present application is shown. The computer device may be an in-vehicle terminal in a target vehicle, and the device may implement the above-described vehicle door clamped object detection method. Specifically:
the computer device 900 includes a processing unit (e.g., a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), etc.) 901, a system memory 904 including a RAM (Random Access Memory) 902 and a ROM (Read-Only Memory) 903, and a system bus 905 connecting the system memory 904 and the processing unit 901. The computer device 900 also includes a basic input/output (I/O) system 906 that facilitates transfer of information between components within the computer device, and a mass storage device 907 for storing an operating system 913, application programs 914, and other program modules 912.
The basic input/output system 906 includes a display 908 for displaying information and an input device 909 such as a mouse, keyboard, etc. for a user to input information. The display 908 and the input device 909 are connected to the central processing unit 901 through an input/output controller 910 connected to the system bus 905. The basic input/output system 906 may also include an input/output controller 910 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, an input-output controller 910 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 907 is connected to the processing unit 901 through a mass storage controller (not shown) connected to the system bus 905. The mass storage device 907 and its associated computer-readable media provide non-volatile storage for the computer device 900. That is, the mass storage device 907 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
Without loss of generality, the computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash Memory or other solid state Memory, CD-ROM, DVD (Digital Video Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 904 and mass storage device 907 described above may be collectively referred to as memory.
According to embodiments of the present application, the computer device 900 may also be connected, via a network such as the Internet, to a remote computer on the network and run thereon. That is, the computer device 900 may be connected to the network 912 through the network interface unit 911 coupled to the system bus 905, or the network interface unit 911 may be used to connect to other types of networks or remote computer systems (not shown).
The memory stores a computer program that is loaded and executed by the processor to implement the above vehicle door clamped object detection method.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the above vehicle door clamped object detection method.
Optionally, the computer-readable storage medium may include: ROM (Read Only Memory), RAM (Random Access Memory), SSD (Solid State drive), or optical disc. The Random Access Memory may include a ReRAM (resistive Random Access Memory) and a DRAM (Dynamic Random Access Memory).
In an exemplary embodiment, a computer program product is also provided, which, when executed by a processor, is configured to implement the above vehicle door clamped object detection method.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In addition, the step numbers described herein only show an exemplary possible execution sequence among the steps, and in some other embodiments, the steps may also be executed out of the numbering sequence, for example, two steps with different numbers are executed simultaneously, or two steps with different numbers are executed in a reverse order to the illustrated sequence, which is not limited in this application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for detecting an object clamped in a vehicle door is characterized by comprising the following steps:
acquiring a first image frame and a second image frame of a vehicle from a junction of a door and a body of the vehicle; wherein the acquisition moments of the first image frame and the second image frame are different;
processing the first image frame and the second image frame respectively to obtain a detection result of the first image frame and a detection result of the second image frame; wherein the detection result of the first image frame is used for indicating whether an article is intersected with the vehicle in the first image frame, and the detection result of the second image frame is used for indicating whether an article is intersected with the vehicle in the second image frame;
and acquiring article data between the vehicle door and the vehicle body according to the detection result of the first image frame and the detection result of the second image frame, wherein the article data is used for indicating whether an object is clamped between the vehicle door and the vehicle body.
2. The method according to claim 1, wherein the acquiring article data between the vehicle door and the vehicle body according to the detection result of the first image frame and the detection result of the second image frame comprises:
if the detection result of the first image frame indicates that an article intersected with the vehicle exists and the detection result of the second image frame indicates that the article intersected with the vehicle exists, generating first article data, wherein the first article data is used for indicating that an object clamped between the vehicle door and the vehicle body exists;
if the detection result of the first image frame indicates that no article intersected with the vehicle exists and/or the detection result of the second image frame indicates that no article intersected with the vehicle exists, generating second article data, wherein the second article data is used for indicating that no object is clamped between the vehicle door and the vehicle body.
3. The method of claim 1, wherein said obtaining a first image frame and a second image frame of a vehicle from a door to body interface of the vehicle comprises:
acquiring the first image frame from the boundary of the vehicle door and the vehicle body under the condition that the vehicle is in a static state;
and acquiring the second image frame from the boundary of the vehicle door and the vehicle body when the vehicle speed of the vehicle reaches a target vehicle speed.
4. The method according to claim 1, wherein the processing the first image frame and the second image frame to obtain the detection result of the first image frame and the detection result of the second image frame comprises:
and respectively processing the first image frame and the second image frame by adopting a vehicle clamped object detection model to obtain a detection result of the first image frame and a detection result of the second image frame.
5. The method according to claim 4, wherein the processing the first image frame and the second image frame by using the vehicle clamped object detection model to obtain the detection result of the first image frame and the detection result of the second image frame comprises:
respectively carrying out feature extraction processing on the first image frame and the second image frame to obtain a first feature map of the first image frame and a first feature map of the second image frame;
respectively carrying out image enhancement processing on the first feature map of the first image frame and the first feature map of the second image frame to obtain a second feature map of the first image frame and a second feature map of the second image frame;
acquiring a detection result of the first image frame based on the second feature map of the first image frame; and acquiring a detection result of the second image frame based on the second feature map of the second image frame.
6. The method according to claim 1, wherein the processing the first image frame and the second image frame to obtain the detection result of the first image frame and the detection result of the second image frame comprises:
respectively carrying out edge detection processing on the first image frame and the second image frame to obtain an edge curve image of the first image frame and an edge curve image of the second image frame;
acquiring a detection result of the first image frame based on an edge curve image of the first image frame; and acquiring a detection result of the second image frame based on the edge curve image of the second image frame.
7. The method according to any one of claims 1 to 6, wherein after acquiring the article data between the vehicle door and the vehicle body according to the detection result of the first image frame and the detection result of the second image frame, the method further comprises:
sending early warning prompt information under the condition that the article data indicate that the object clamped between the vehicle door and the vehicle body exists, wherein the early warning prompt information is used for prompting that the object clamped between the vehicle door and the vehicle body exists;
suppressing a control operation for an accelerator pedal of the vehicle.
8. A vehicle door clamped object detection device is characterized by comprising:
the device comprises an image acquisition module, a display module and a display module, wherein the image acquisition module is used for acquiring a first image frame and a second image frame of a vehicle from a junction of a vehicle door and a vehicle body of the vehicle; wherein the acquisition moments of the first image frame and the second image frame are different;
the image processing module is used for respectively processing the first image frame and the second image frame to obtain a detection result of the first image frame and a detection result of the second image frame; wherein the detection result of the first image frame is used for indicating whether an article is intersected with the vehicle in the first image frame, and the detection result of the second image frame is used for indicating whether an article is intersected with the vehicle in the second image frame;
and the data acquisition module is used for acquiring article data between the vehicle door and the vehicle body according to the detection result of the first image frame and the detection result of the second image frame, wherein the article data is used for indicating whether an object is clamped between the vehicle door and the vehicle body.
9. A computer device characterized in that the computer device comprises a processor and a memory, the memory having stored therein a computer program that is loaded and executed by the processor to implement the vehicle door clamped object detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored in the storage medium, which is loaded and executed by a processor to implement the vehicle door clamped object detecting method according to any one of claims 1 to 7.
CN202211357663.XA 2022-11-01 2022-11-01 Vehicle door clamped object detection method, device, equipment and storage medium Pending CN115601382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211357663.XA CN115601382A (en) 2022-11-01 2022-11-01 Vehicle door clamped object detection method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115601382A true CN115601382A (en) 2023-01-13

Family

ID=84850386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211357663.XA Pending CN115601382A (en) 2022-11-01 2022-11-01 Vehicle door clamped object detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115601382A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination