CN117079219B - Vehicle running condition monitoring method and device applied to trailer service

Vehicle running condition monitoring method and device applied to trailer service

Info

Publication number
CN117079219B
Authority
CN
China
Prior art keywords
image
vehicle
pixel
transported vehicle
moving object
Prior art date
Legal status
Active
Application number
CN202311289717.8A
Other languages
Chinese (zh)
Other versions
CN117079219A (en)
Inventor
王明月
陈程
王明泉
Current Assignee
Shandong Vehicle Trailer Network Technology Co ltd
Original Assignee
Shandong Vehicle Trailer Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Vehicle Trailer Network Technology Co ltd
Priority to CN202311289717.8A
Publication of CN117079219A
Application granted
Publication of CN117079219B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a vehicle running condition monitoring method and device applied to trailer services. The method comprises: acquiring video stream data collected by a plurality of cameras mounted on a trailer vehicle; detecting a moving object in the video stream data based on the background portion of the video stream data, and clipping the video stream data according to the moving object to obtain continuous frame images containing the moving object; calculating, for the continuous frame images, the shortest distance between the moving object and the transported vehicle, and obtaining the contact area of the moving object on the transported vehicle when the shortest distance is zero; adjusting the acquisition angles of the cameras so that they face the direction of the contact area, and capturing an image of the suspected damage side of the transported vehicle; and cropping the suspected damage side image according to the contact area to obtain a local image, then comparing the local image with a reference image of the transported vehicle to determine the damage position of the transported vehicle from the comparison result.

Description

Vehicle running condition monitoring method and device applied to trailer service
Technical Field
The present disclosure relates to the field of vehicle transportation monitoring technologies, and in particular, to a method and an apparatus for monitoring a vehicle running condition applied to a trailer service.
Background
With economic development and the continuous improvement of living standards, per-capita vehicle ownership keeps rising, which in turn has driven the growth of the trailer (vehicle transport) business. Vehicles are valuable items, however, so the safety of their transport and the protection of the parties' rights when problems occur are particularly important.
At present, during the execution of a trailer service, monitoring of the running condition of the transported vehicle, including whether it is damaged, is mostly done by manually inspecting the vehicle after it reaches its destination. In this case, if the transported vehicle is damaged, disputes over the division of responsibility often arise, because existing trailer services cannot monitor the condition of the transported vehicle in real time during transport. Moreover, manually inspecting the vehicle for damage requires considerable labor and time, minor damage is easily missed by manual inspection, and the efficiency and accuracy of damage monitoring are therefore low.
Disclosure of Invention
The embodiments of the present application provide a vehicle running condition monitoring method and device applied to trailer services, intended to solve the following technical problems: existing trailer services monitor the running condition of the transported vehicle with low efficiency and accuracy, and cannot provide real-time monitoring during transport.
The embodiments of the present application adopt the following technical solutions:
In one aspect, an embodiment of the present application provides a vehicle running condition monitoring method applied to a trailer service, the method comprising: acquiring video stream data collected by a plurality of cameras mounted on a trailer vehicle; determining a background portion in the video stream data by using a preset algorithm, detecting a moving object in the video stream data based on the background portion, and clipping the video stream data according to the moving object to obtain continuous frame images containing the moving object; calculating, for the continuous frame images, the shortest distance between the moving object and the transported vehicle, and obtaining the contact area of the moving object on the transported vehicle when the shortest distance is zero; adjusting the acquisition angles of the cameras so that they face the direction of the contact area, and capturing a suspected damage side image of the transported vehicle; and cropping the suspected damage side image of the transported vehicle according to the contact area to obtain a local image, and comparing the local image with a reference image of the transported vehicle to determine the damage position of the transported vehicle from the comparison result.
In one possible implementation of the present application, determining a background portion in the video stream data by using a preset algorithm and detecting a moving object in the video stream data based on the background portion specifically comprises: determining the image sequence corresponding to the video stream data, and converting the image sequence to grayscale to obtain a grayscale image sequence; applying a horizontal Sobel operator and a vertical Sobel operator to the grayscale image sequence and computing their weighted sum, so as to extract gradient information and obtain a gradient image sequence; extracting a current-frame gradient map, a previous-frame gradient map and a subsequent-frame gradient map from the gradient image sequence, and calculating the absolute differences between the current-frame gradient map and the previous-frame gradient map and between the current-frame gradient map and the subsequent-frame gradient map; multiplying the two absolute differences to obtain a difference absolute-value image for the current frame; and binarizing the difference absolute-value image so as to determine the moving object in the binarized image.
In one possible implementation of the present application, after determining the moving object in the binarized image, the method further comprises: in the binarized image, taking the pixel at the midpoint of the moving object as the center pixel; extracting the eight-pixel area formed by the center pixel's neighbours in the horizontal, vertical and four diagonal directions; and calculating the pixel duty ratio of the moving object within the eight-pixel area, and determining the moving object to be noise when the pixel duty ratio is smaller than a preset duty ratio threshold.
In one possible implementation of the present application, calculating the shortest distance between the moving object and the transported vehicle specifically comprises: generating, in the continuous frame images, a straight line that passes through the midpoint of the moving object and is parallel to the lower edge of the image; screening out the images in which the straight line passes through the transported vehicle; determining, in the pixel coordinate system of each screened image, the first intersection point coordinates where the straight line intersects the edge of the moving object and the second intersection point coordinates where the straight line intersects the edge of the transported vehicle; and calculating the pixel distance between the first intersection point coordinates and the second intersection point coordinates.
In one possible implementation of the present application, when the pixel distance is smaller than a preset distance threshold, the preset distance threshold being related to the number of pixels occupied by the moving object on the straight line, the method further comprises: screening out the images whose pixel distances are smaller than the preset distance threshold, and sorting them by their frame numbers in the video stream data to obtain a contact image sequence; extracting, for the contact image sequence, side edge features of the transported vehicle, the side edge features comprising at least the front window lower boundary line, the front window left boundary midpoint, the rear window lower boundary line, the front door lower boundary line, the rear door lower boundary line, the front door seam line and the rear door seam line of the transported vehicle; constructing a front door coordinate system and a rear door coordinate system of the transported vehicle from these side edge features; the front door coordinate system takes as its origin the midpoint of the region bounded by the front door seam line, the extension of the front window lower boundary line, the front door lower boundary line, and the straight line through the front window left boundary midpoint parallel to the front door seam line, takes the straight line through the origin parallel to the front door lower boundary line as its horizontal axis, and takes the straight line through the origin parallel to the front door seam line as its vertical axis; the rear door coordinate system takes as its origin the midpoint of the region bounded by the rear door seam line, the front door seam line, the rear door lower boundary line and the extension of the rear window lower boundary line, takes the straight line through the origin parallel to the rear door lower boundary line as its horizontal axis, and takes the straight line through the origin parallel to the rear door seam line as its vertical axis; determining the contact point coordinates between the moving object and the transported vehicle, the contact point coordinates comprising at least the second intersection point coordinates and/or the first intersection point coordinates; and extracting the quadrant of the front door coordinate system or the rear door coordinate system into which the contact point coordinates fall, and determining that quadrant region as the contact area of the moving object on the transported vehicle.
In one possible implementation of the present application, comparing the local image with the reference image of the transported vehicle specifically comprises: determining a sliding window of a preset pixel size, the preset pixel size being related to the pixel size of the local image; traversing the reference image with the sliding window, and taking the mean pixel value of the pixels inside the window as the reference window pixel value; traversing the local image with the sliding window, and taking the mean pixel value of the pixels inside the window as the local window pixel value; calculating the absolute difference between the local window pixel value and the reference window pixel value; when the absolute difference is larger than a preset difference threshold, extracting the window area corresponding to the sliding window from the local image; stitching adjacent window areas together, in the order in which the sliding window traverses the local image, to obtain a stitched region; subtracting the local image from the stitched region and taking the absolute value to obtain an absolute-value image; using a preset multiple of the local window pixel value as a threshold, performing classification assignment on the pixels of the absolute-value image; and determining, from the result, the damage position of the transported vehicle in the absolute-value image and hence in the local image, the damage position being expressed in pixel coordinates.
In one possible implementation of the present application, after determining the damage position of the transported vehicle in the local image, the method further comprises: marking the pixels corresponding to the damage position in the local image; determining the circumscribed rectangle of the marked pixels and computing its aspect ratio; and displaying the circumscribed rectangle and the aspect ratio in the suspected damage side image of the transported vehicle, and pushing the result.
In one possible implementation of the present application, the plurality of cameras are mounted on a plurality of rotatable wheel discs, the mounting relationship between cameras and rotatable wheel discs being at least one-to-one or many-to-one; the rotatable wheel disc is a disc-shaped structure mounted on the inner side of the frame of the trailer vehicle, facing the transported vehicle.
In one possible implementation of the present application, adjusting the acquisition angles of the plurality of cameras specifically comprises: reading the code of the camera that collected the video stream data, so as to determine, from the camera code, the position of the transported vehicle on the trailer vehicle; determining, from this position information, the rotatable wheel disc facing the transported vehicle, and rotating it so that a camera mounted on it faces the transported vehicle; and adjusting the acquisition angle of that camera and shooting, to obtain the suspected damage side image of the transported vehicle.
In another aspect, an embodiment of the present application further provides a vehicle running condition monitoring device applied to a trailer service, the device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle running condition monitoring method applied to a trailer service described above.
The vehicle running condition monitoring method and device applied to the trailer service have the following beneficial effects:
the method comprises the steps of extracting a moving target from video stream data acquired by a camera arranged on a trailer vehicle, calculating the distance between the extracted moving target and a transported vehicle, judging whether the extracted moving target and the transported vehicle are contacted through the distance between the extracted moving target and the transported vehicle, adjusting the angle of the camera to acquire an image of a suspected damaged side when the extracted moving target and the suspected damaged side are contacted, and comparing the image with a reference image of the transported vehicle, so that the damaged position of the transported vehicle can be determined through a comparison result, real-time monitoring of the transported vehicle in the transportation process is realized through the camera arranged on the trailer vehicle, the damaged position generated after the vehicle is contacted with the moving target can be accurately marked, and the vehicle is referred to by vehicle detection personnel, thereby improving the monitoring precision of the running condition of the vehicle and simultaneously improving the detection efficiency of the vehicle.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. In the drawings:
fig. 1 is a flowchart of a vehicle running condition monitoring method applied to a trailer service according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a vehicle running condition monitoring device applied to a trailer service according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions in the present application, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
The following describes the method in the embodiment of the present application in detail by means of the accompanying drawings.
Fig. 1 is a flowchart of a vehicle running condition monitoring method applied to a trailer service according to an embodiment of the present application, where, as shown in fig. 1, the vehicle running condition monitoring method in the embodiment of the present application at least includes the following execution steps:
step 101, acquiring video stream data acquired by a plurality of cameras arranged on a trailer.
In the vehicle running condition monitoring method provided by the embodiments of the present application, the execution subject may be a server or a computing device with computing capability.
The vehicle running condition monitoring method in the embodiments of the present application is mainly applied to the trailer service field. Two vehicles are involved in the execution of a trailer service: the trailer vehicle and the transported vehicle. The transported vehicle is loaded onto the trailer vehicle for transport; it may be a new vehicle being delivered to a dealership for sale, or an accident vehicle being taken for repair. The vehicle condition monitored in the embodiments of the present application is therefore mainly the condition of the transported vehicle.
First, a plurality of cameras are mounted on the trailer vehicle. These cameras collect video stream data of the transported vehicle during transport; the video stream data are received, and the running condition of the transported vehicle is monitored based on them. It should be noted that the video stream data may be obtained either by sending requests to the cameras or by receiving periodic automatic uploads from them, which is not limited in the embodiments of the present application.
Step 102: determining a background portion in the video stream data by using a preset algorithm, detecting a moving object in the video stream data based on the background portion, and clipping the video stream data according to the moving object to obtain continuous frame images containing the moving object.
After receiving the video stream data, it is necessary to determine a moving object that the transported vehicle may contact during transportation based on the video stream data.
In one possible implementation of the embodiments of the present application, a background portion exists in the video stream data. The background portion refers to the things at fixed positions in the video stream data; it can also be understood as the part that keeps the same position and content in the image sequence obtained after splitting the video stream data by frame number. Because the pixel values or gray values of the background portion are relatively stable from frame to frame, the moving object in the video stream data can be determined by image differencing. Specifically, the image sequence obtained from the video stream data is first converted to grayscale to obtain a grayscale image sequence. Each image in the grayscale image sequence is then processed with a horizontal Sobel operator and a vertical Sobel operator and the responses are combined by weighted summation, yielding a gradient image sequence. Next, the absolute differences between the i-th frame and the (i-1)-th frame and between the i-th frame and the (i+1)-th frame of the gradient image sequence are calculated, giving the differences between the i-th frame and its preceding and following frames; the two absolute difference maps are multiplied to obtain the difference absolute-value image of the i-th frame. Finally, the difference absolute-value image is binarized so that the moving object stands out more clearly, and the moving object in the video stream data is thereby obtained.
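For illustration only, a minimal sketch of this three-frame gradient differencing step is given below using OpenCV and NumPy. The Sobel kernel size, the 0.5/0.5 weights and the binarization threshold are assumptions for the sketch, not values specified in this disclosure.

```python
import cv2
import numpy as np

def gradient_map(gray):
    # Weighted sum of the horizontal and vertical Sobel responses.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.addWeighted(np.abs(gx), 0.5, np.abs(gy), 0.5, 0)

def moving_object_mask(prev_frame, cur_frame, next_frame, thresh=30):
    # Convert each frame to grayscale and compute its gradient map.
    grads = [gradient_map(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY))
             for f in (prev_frame, cur_frame, next_frame)]
    g_prev, g_cur, g_next = grads
    # Absolute differences against the previous and the subsequent frame.
    d1 = np.abs(g_cur - g_prev)
    d2 = np.abs(g_cur - g_next)
    # Multiplying the two difference maps suppresses regions that changed in only one pair.
    diff = d1 * d2
    diff = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Binarize to obtain the moving-object mask for the current frame.
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```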
Further, since noise may interfere with the determination of the moving object, in the embodiments of the present application the moving object is screened using the binarized image. Specifically, in the binarized image, the pixel at the center point of the detected moving object is taken as the center pixel, where the center point can be represented by the shape center of the moving object. The eight-pixel area corresponding to the center pixel is then extracted, i.e. the area centered on the center pixel and formed by its neighbouring pixels in the horizontal, vertical and four diagonal directions. Whether the moving object is noise is judged by the pixel duty ratio of the moving object within this eight-pixel area: a moving object whose pixel duty ratio is smaller than a preset duty ratio threshold (for example, 5 pixels) is determined to be noise. A moving object determined to be noise is simply ignored and is not considered in the subsequent moving object analysis.
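A minimal sketch of this eight-neighbourhood noise check follows, assuming `mask` is the binarized image from the previous step and `(cx, cy)` is the center pixel of a candidate moving object; expressing the duty-ratio threshold as a fraction of the 3x3 window is an assumption made for the sketch.

```python
import numpy as np

def is_noise(mask, cx, cy, ratio_threshold=5 / 9):
    # The eight-pixel area: the center pixel plus its horizontal, vertical and
    # diagonal neighbours (a 3x3 window, clipped at the image border).
    h, w = mask.shape
    y0, y1 = max(cy - 1, 0), min(cy + 2, h)
    x0, x1 = max(cx - 1, 0), min(cx + 2, w)
    window = mask[y0:y1, x0:x1]
    # Fraction of foreground (moving-object) pixels inside the window.
    duty_ratio = np.count_nonzero(window) / window.size
    return duty_ratio < ratio_threshold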
Further, after the moving object in the video stream data is determined, the continuous frame images containing the moving object can be obtained by extracting, from the image sequence corresponding to the video stream data, the sub-sequence between the frame in which the moving object first appears and the frame in which it last appears, i.e. extracting the images that contain the moving object. Alternatively, the frames containing the moving object can be clipped directly from the video stream data to obtain the continuous frame images.
Step 103: calculating, for the continuous frame images, the shortest distance between the moving object and the transported vehicle, and obtaining the contact area of the moving object on the transported vehicle when the shortest distance is zero.
After the continuous frame images corresponding to the moving object are obtained, the shortest distance between the moving object and the transported vehicle is calculated for each frame. The shortest distance makes it easier to judge whether the moving object has come into contact with the transported vehicle, and hence whether the moving object has caused damage to it.
In one possible implementation of the embodiments of the present application, calculating the shortest distance between the moving object and the transported vehicle proceeds as follows. For each frame in the continuous frame images, a straight line is generated that passes through the midpoint of the moving object and is parallel to the lower boundary of the image, i.e. a horizontal line through the midpoint of the moving object. The images in which this line passes through the transported vehicle are then screened out; this is done because contact is only possible when the line passes through both the moving object and the transported vehicle. In the pixel coordinate system of each screened image, the first intersection point, where the line meets the edge of the moving object, and the second intersection point, where the line meets the edge of the transported vehicle, are extracted, and the pixel distance between the two intersection points is taken as the shortest distance between the moving object and the transported vehicle. It should be noted that the intersection with the edge of the moving object is preferably taken on the side of the moving object closest to the transported vehicle, and the intersection with the edge of the transported vehicle is preferably taken on the side of the transported vehicle closest to the moving object; the pixel distance between these two intersection points can then be taken as the shortest distance between the moving object and the transported vehicle.
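A minimal sketch of this shortest-distance estimate is shown below, assuming `object_mask` and `vehicle_mask` are binary masks of the moving object and the transported vehicle in the same pixel coordinate system; representing the "straight line" simply as the image row through the object's midpoint is an assumption of the sketch.

```python
import numpy as np

def shortest_pixel_distance(object_mask, vehicle_mask):
    ys, xs = np.nonzero(object_mask)
    if len(xs) == 0:
        return None
    # Row through the midpoint of the moving object, parallel to the lower image edge.
    row = int(round(ys.mean()))
    obj_cols = np.nonzero(object_mask[row])[0]
    veh_cols = np.nonzero(vehicle_mask[row])[0]
    # Keep only frames whose row actually crosses both the object and the transported vehicle.
    if len(obj_cols) == 0 or len(veh_cols) == 0:
        return None
    # First intersection: object edge nearest the vehicle; second: vehicle edge nearest the object.
    if obj_cols.mean() < veh_cols.mean():          # object lies to the left of the vehicle
        first_x, second_x = obj_cols.max(), veh_cols.min()
    else:                                          # object lies to the right of the vehicle
        first_x, second_x = obj_cols.min(), veh_cols.max()
    return abs(int(second_x) - int(first_x))
```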
Further, when the shortest distance is zero, the distance between the moving object and the transported vehicle is zero, i.e. they are in contact. The images whose shortest distance is zero are therefore extracted and sorted by their frame numbers in the video stream data to obtain a contact image sequence.
In one example of the present application, the intersection of the straight line with the edge of the moving object may not be the point on that edge closest to the transported vehicle; for example, when the moving object has an irregular shape, the intersection may not lie on the most protruding point of the object's edge curve. In that case, even if the moving object and the transported vehicle are in contact, the shortest distance calculated from the two intersection points may not be zero. To avoid this situation and still ensure that moving objects that contact the transported vehicle are captured, the embodiments of the present application may also judge contact from the relation between the shortest distance and a preset distance threshold: if the shortest distance between the two intersection points is smaller than the preset distance threshold, the moving object and the transported vehicle are considered to be in contact, and the corresponding images are extracted and sorted by their frame numbers in the video stream data to obtain the contact image sequence. It should be noted that the preset distance threshold is related to the size of the moving object, and can therefore be determined from the number of pixels the moving object occupies on the straight line.
Further, after the contact image sequence is obtained (in every image of the sequence the moving object is in contact with the transported vehicle), the contact area on the transported vehicle is extracted from the contact image sequence.
Specifically, the side features of the transported vehicle are first extracted from the contact images. The side features are used because, during actual execution of a trailer service, contact between a moving object and the transported vehicle most likely occurs on the side of the vehicle; the vehicle side is therefore taken as the example in the embodiments of the present application. In one example of the present application, the side features include at least the front window lower boundary line, the front window left boundary midpoint, the rear window lower boundary line, the front door lower boundary line, the rear door lower boundary line, the front door seam line and the rear door seam line of the transported vehicle. These side features are extracted in order to divide the side of the transported vehicle into a front door region and a rear door region, and the contact location is then determined with respect to the divided regions. After the side features are obtained, a front door coordinate system and a rear door coordinate system of the transported vehicle are constructed from them. Specifically, the front door coordinate system takes as its origin the midpoint of the region bounded by the front door seam line, the extension of the front window lower boundary line, the front door lower boundary line, and the straight line through the front window left boundary midpoint parallel to the front door seam line (these four lines essentially delimit the front door region); its horizontal axis is the straight line through the origin parallel to the front door lower boundary line, and its vertical axis is the straight line through the origin parallel to the front door seam line. The rear door coordinate system takes as its origin the midpoint of the region bounded by the rear door seam line, the front door seam line, the rear door lower boundary line and the extension of the rear window lower boundary line; its horizontal axis is the straight line through the origin parallel to the rear door lower boundary line, and its vertical axis is the straight line through the origin parallel to the rear door seam line. The vehicle side for which the features are extracted and the coordinate systems are constructed is the side of the transported vehicle that is in contact with the moving object in the contact images.
Then, after the front door coordinate system and the rear door coordinate system are obtained, the coordinates of the contact point between the moving object and the transported vehicle are determined. In one example of the present application, the contact point coordinates include at least the second intersection point coordinates and/or the first intersection point coordinates. This is because, when the pixel distance between the two intersection points is zero, the two intersection points coincide (the first and second intersection point coordinates are equal); when the pixel distance is merely smaller than the preset threshold, neither the second nor the first intersection point coordinates may be the exact contact point, and the contact point may lie at other coordinates, for example to the left of the second intersection point where the straight line meets the edge of the transported vehicle.
Finally, after the contact point coordinates are obtained, the contact area between the moving object and the transported vehicle is determined from the quadrant of the front door coordinate system or the rear door coordinate system into which the contact point falls. For example, if the contact point falls into the first quadrant of the front door coordinate system, the upper-left area of the front door corresponding to that quadrant is determined to be the contact area. Taking a partial area of the door region as the contact area for the subsequent image comparison prevents an overly large contact area from being determined at this stage, which would add computational load to the comparison; at the same time, the small-area comparison gives a preliminary localization of the damage, improving the accuracy of the damage position determined later.
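A minimal sketch of mapping a contact point into a quadrant of a door region follows. For simplicity it assumes each door region has already been recovered from the side features as an axis-aligned pixel rectangle, whereas the disclosure derives the regions from the seam lines and boundary lines directly; the quadrant naming convention is also an assumption.

```python
from dataclasses import dataclass

@dataclass
class DoorRegion:
    name: str        # "front_door" or "rear_door"
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def quadrant(self, px, py):
        # Origin at the region midpoint; image y grows downward, so "upper" means smaller y.
        if not (self.x_min <= px <= self.x_max and self.y_min <= py <= self.y_max):
            return None
        ox = (self.x_min + self.x_max) / 2
        oy = (self.y_min + self.y_max) / 2
        horiz = "right" if px >= ox else "left"
        vert = "upper" if py <= oy else "lower"
        return f"{self.name}:{vert}_{horiz}"

def contact_region(contact_point, door_regions):
    # Return the first door quadrant the contact point falls into, if any.
    for region in door_regions:
        q = region.quadrant(*contact_point)
        if q is not None:
            return q
    return None
```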
Step 104: adjusting the acquisition angles of the cameras so that they face the direction of the contact area, and capturing the suspected damage side image of the transported vehicle.
After the contact area on the transported vehicle is obtained, the cameras on the trailer vehicle are adjusted so that they capture images facing the contact area, yielding the suspected damage side image of the transported vehicle.
In one possible implementation of the embodiments of the present application, the cameras mounted on the trailer vehicle may be divided into several groups, each group mainly responsible for collecting images of a corresponding transported vehicle. During adjustment, the acquisition direction of the relevant group of cameras is changed so that images of the transported vehicle can be collected from different angles and of different areas.
To realize the above scheme, the cameras in the embodiments of the present application are mounted on circular rotatable wheel discs. It should be noted that one camera may be mounted on each rotatable wheel disc, or several cameras may share one disc; the acquisition angle of a camera mounted on a rotatable wheel disc is adjustable, for example by rotating the rear of the camera. To facilitate image acquisition and monitoring of the transported vehicle, the rotatable wheel disc may be mounted on the inner side of the frame of the trailer vehicle, facing the transported vehicle, for example on the frame section closest to the side of the transported vehicle.
Further, when the acquisition angle of a camera is adjusted to face the direction of the contact area, it must first be determined which camera collected the video stream data: the code of the camera that transmitted the video stream data is read, and the position of the transported vehicle is found from that code, i.e. which transported vehicle on the trailer vehicle the collected video stream data refers to. The rotatable wheel disc corresponding to that transported vehicle is then determined and rotated so that a camera mounted on it faces the transported vehicle. Since the camera's acquisition angle may still not face the transported vehicle exactly, the acquisition angle is finally adjusted so that the camera faces the vehicle, and the suspected damage side image of the transported vehicle is captured.
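A minimal control-flow sketch of this adjustment step is given below. The camera codes, the mapping from camera code to wheel-disc identifier, and the `disc_controller`/`camera_controller` interfaces are all hypothetical names introduced for illustration; the disclosure does not specify a concrete control API.

```python
# Hypothetical mapping from camera code to the rotatable wheel disc it is mounted on.
CAMERA_TO_DISC = {"CAM_03": "DISC_B", "CAM_04": "DISC_B"}

def capture_suspected_damage_side(camera_code, contact_region, disc_controller, camera_controller):
    # 1. Locate the transported vehicle via the code of the camera that produced the stream.
    disc_id = CAMERA_TO_DISC[camera_code]
    # 2. Rotate the wheel disc so that a mounted camera faces the transported vehicle.
    disc_controller.rotate_to_face_vehicle(disc_id)
    # 3. Fine-tune the camera's own acquisition angle toward the contact area and shoot.
    camera_controller.point_at(camera_code, contact_region)
    return camera_controller.capture(camera_code)
```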
It should be noted that the rotatable wheel disc is adjusted first and the camera angle afterwards so that the scheme can cope with cameras whose angle adjustment range is small; adjusting the camera alone might not bring its acquisition direction to face the transported vehicle. This improves the applicability of the scheme.
Step 105: cropping the suspected damage side image of the transported vehicle according to the contact area to obtain a local image, and comparing the local image with a reference image of the transported vehicle to determine the damage position of the transported vehicle from the comparison result.
The suspected damage side image is cropped using the contact area. The cropping can be done by overlaying the contact area on the suspected damage side image and cutting it out, or by cropping the door position corresponding to the contact area from the suspected damage side image, thereby obtaining the local image.
The local image is then compared with a reference image of the transported vehicle, and the damage position of the transported vehicle is determined from the comparison result. It should be noted that the reference image of the transported vehicle can be obtained from the vehicle's factory information (which includes factory images); this is not described in detail in the embodiments of the present application.
In one possible implementation of the embodiments of the present application, comparing the local image with the reference image specifically includes the following. A sliding window of a preset pixel size is first set, where the preset pixel size is related to the pixel size of the local image; in the embodiments of the present application the pixel size can be expressed as a number of pixels. The reference image and the local image are then each traversed with this sliding window; during traversal, the pixel values of all pixels inside the window are averaged, giving a local window pixel value for the local image and a reference window pixel value for the reference image. The absolute difference between the reference window pixel value and the local window pixel value is calculated, and the window areas whose absolute difference exceeds a preset difference threshold are extracted from the local image.
Further, the extracted window areas are stitched together according to their positions in the local image to obtain a stitched region; when the transported vehicle is damaged, the scratches are likely to be continuous, so the differing window areas are necessarily adjacent and can be stitched. The absolute difference between the stitched region and the local image is then computed to obtain an absolute-value image, in which the damage position is already roughly visible. To make the damage position clearer, the embodiments of the present application further perform classification assignment on the absolute-value image: for example, pixels whose value exceeds the threshold are assigned 1 and pixels below the threshold are assigned 0, so that the damage position stands out in the absolute-value image, and the damage position of the transported vehicle in the local image is thereby obtained.
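A minimal sketch of this sliding-window comparison follows, assuming `local_img` and `reference_img` are aligned grayscale images of the same size. The window size, difference threshold and classification multiple are illustrative values; the translated text is also ambiguous about what the stitched region is differenced against, so this sketch takes the absolute difference against the reference image within the stitched region and thresholds it at a multiple of the mean differing-window value, which is one plausible reading rather than the definitive procedure.

```python
import numpy as np

def damage_mask(local_img, reference_img, win=16, diff_threshold=20.0, multiple=0.5):
    """Return a boolean mask of candidate damage pixels in the local image."""
    h, w = local_img.shape
    local_f = local_img.astype(np.float32)
    ref_f = reference_img.astype(np.float32)
    covered = np.zeros((h, w), dtype=bool)
    window_means = []
    # Traverse both images with the same sliding window and compare window means.
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            local_mean = local_f[y:y + win, x:x + win].mean()
            ref_mean = ref_f[y:y + win, x:x + win].mean()
            if abs(local_mean - ref_mean) > diff_threshold:
                covered[y:y + win, x:x + win] = True   # stitch adjacent differing windows
                window_means.append(local_mean)
    if not window_means:
        return covered   # no differing windows: no candidate damage
    # Absolute-value image restricted to the stitched region, then classify each pixel
    # against a preset multiple of the local-window pixel value.
    abs_img = np.abs(local_f - ref_f)
    threshold = multiple * float(np.mean(window_means))
    return covered & (abs_img > threshold)
```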
Thus, the damage position of the transported vehicle is obtained.
To allow vehicle inspectors to use the damage position when inspecting the vehicle and so improve inspection accuracy and efficiency, in the embodiments of the present application, after the damage position of the transported vehicle is determined, the circumscribed rectangle of the damage position is determined in the local image, and the rectangle and its aspect ratio are displayed in the local image or in the suspected damage side image of the transported vehicle and pushed to the inspector's terminal. In this way, when the damage is minor, the inspector can still detect it accurately, improving the recognition of small damage; when the damage is larger, it can be located directly, improving the efficiency of damage recognition.
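A minimal sketch of this annotation step is shown below, assuming `mask` is the boolean damage mask from the comparison step and `side_img` is a BGR suspected damage side image. The overlay style (rectangle color, text placement) is illustrative only.

```python
import cv2
import numpy as np

def annotate_damage(side_img, mask):
    pts = cv2.findNonZero(mask.astype(np.uint8))
    if pts is None:
        return side_img, None
    # Circumscribed (bounding) rectangle of the marked pixels and its aspect ratio.
    x, y, w, h = cv2.boundingRect(pts)
    aspect_ratio = w / h if h else float("inf")
    annotated = side_img.copy()
    cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(annotated, f"aspect ratio: {aspect_ratio:.2f}", (x, max(y - 5, 15)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return annotated, aspect_ratio   # push `annotated` to the inspector's terminal
```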
Based on the same inventive concept, an embodiment of the present application further provides a vehicle running condition monitoring device applied to a trailer service, whose structure is shown in fig. 2.
Fig. 2 is a schematic structural diagram of a vehicle running condition monitoring device applied to a trailer service according to an embodiment of the present application. As shown in fig. 2, the vehicle running condition monitoring device 200 applied to a trailer service specifically includes: at least one processor 201; and a memory 203 communicatively coupled (via bus 202) to the at least one processor 201; the memory 203 stores instructions executable by the at least one processor 201 to enable the at least one processor 201 to perform the vehicle running condition monitoring method applied to a trailer service described in the above embodiments.
In one or more possible implementations of the embodiments of the present application, the processor is configured to: acquire video stream data collected by a plurality of cameras mounted on a trailer vehicle; determine a background portion in the video stream data by using a preset algorithm, detect a moving object in the video stream data based on the background portion, and clip the video stream data according to the moving object to obtain continuous frame images containing the moving object; calculate, for the continuous frame images, the shortest distance between the moving object and the transported vehicle, and obtain the contact area of the moving object on the transported vehicle when the shortest distance is zero; adjust the acquisition angles of the cameras so that they face the direction of the contact area, and capture a suspected damage side image of the transported vehicle; and crop the suspected damage side image of the transported vehicle according to the contact area to obtain a local image, and compare the local image with a reference image of the transported vehicle to determine the damage position of the transported vehicle from the comparison result.
It will be appreciated by those skilled in the art that the present description may be provided as a method, system, or computer program product. Accordingly, the present specification embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description embodiments may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for apparatus, devices, non-volatile computer storage medium embodiments, the description is relatively simple, as it is substantially similar to method embodiments, with reference to the section of the method embodiments being relevant.
The foregoing describes specific embodiments of the present disclosure. In some cases, the acts or steps recited in the present specification may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely one or more embodiments of the present description and is not intended to limit the present description. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like, which is within the spirit and principles of one or more embodiments of the present specification, is intended to be included within the scope of the present specification.

Claims (7)

1. A vehicle running condition monitoring method applied to a trailer service, the method comprising:
acquiring video stream data collected by a plurality of cameras mounted on a trailer vehicle;
determining a background portion in the video stream data by using a preset algorithm, detecting a moving object in the video stream data based on the background portion, and clipping the video stream data according to the moving object to obtain continuous frame images containing the moving object;
calculating, for the continuous frame images, the shortest distance between the moving object and the transported vehicle, and obtaining the contact area of the moving object on the transported vehicle when the shortest distance is zero, comprising: generating, in the continuous frame images, a straight line that passes through the midpoint of the moving object and is parallel to the lower edge of the image; screening out the images in which the straight line passes through the transported vehicle; determining, in the pixel coordinate system of each screened image, the first intersection point coordinates where the straight line intersects the edge of the moving object and the second intersection point coordinates where the straight line intersects the edge of the transported vehicle; and calculating the pixel distance between the first intersection point coordinates and the second intersection point coordinates;
when the pixel distance is smaller than a preset distance threshold, the preset distance threshold being related to the number of pixels occupied by the moving object on the straight line, the method further comprises: screening out the images whose pixel distances are smaller than the preset distance threshold, and sorting them by their frame numbers in the video stream data to obtain a contact image sequence; extracting, for the contact image sequence, side edge features of the transported vehicle, the side edge features comprising at least the front window lower boundary line, the front window left boundary midpoint, the rear window lower boundary line, the front door lower boundary line, the rear door lower boundary line, the front door seam line and the rear door seam line of the transported vehicle; constructing a front door coordinate system and a rear door coordinate system of the transported vehicle from the side edge features; wherein the front door coordinate system takes as its origin the midpoint of the region bounded by the front door seam line, the extension of the front window lower boundary line, the front door lower boundary line, and the straight line through the front window left boundary midpoint parallel to the front door seam line, takes the straight line through the origin parallel to the front door lower boundary line as its horizontal axis, and takes the straight line through the origin parallel to the front door seam line as its vertical axis; the rear door coordinate system takes as its origin the midpoint of the region bounded by the rear door seam line, the front door seam line, the rear door lower boundary line and the extension of the rear window lower boundary line, takes the straight line through the origin parallel to the rear door lower boundary line as its horizontal axis, and takes the straight line through the origin parallel to the rear door seam line as its vertical axis; determining the contact point coordinates between the moving object and the transported vehicle, the contact point coordinates comprising at least the second intersection point coordinates and/or the first intersection point coordinates; and extracting the quadrant of the front door coordinate system or the rear door coordinate system into which the contact point coordinates fall, and determining that quadrant region as the contact area of the moving object on the transported vehicle;
adjusting the acquisition angles of the cameras so that they face the direction of the contact area, and capturing a suspected damage side image of the transported vehicle;
cropping the suspected damage side image of the transported vehicle according to the contact area to obtain a local image, and comparing the local image with a reference image of the transported vehicle to determine the damage position of the transported vehicle from the comparison result, comprising: determining a sliding window of a preset pixel size, the preset pixel size being related to the pixel size of the local image; traversing the reference image with the sliding window, and taking the mean pixel value of the pixels inside the window as the reference window pixel value; traversing the local image with the sliding window, and taking the mean pixel value of the pixels inside the window as the local window pixel value; calculating the absolute difference between the local window pixel value and the reference window pixel value; when the absolute difference is larger than a preset difference threshold, extracting the window area corresponding to the sliding window from the local image; stitching adjacent window areas together, in the order in which the sliding window traverses the local image, to obtain a stitched region; subtracting the local image from the stitched region and taking the absolute value to obtain an absolute-value image; using a preset multiple of the local window pixel value as a threshold, performing classification assignment on the pixels of the absolute-value image; and determining, from the result, the damage position of the transported vehicle in the absolute-value image and hence in the local image, the damage position being expressed in pixel coordinates.
2. The vehicle running condition monitoring method applied to a trailer service according to claim 1, wherein determining a background portion in the video stream data by using a preset algorithm and detecting a moving object in the video stream data based on the background portion specifically comprises:
determining the image sequence corresponding to the video stream data, and converting the image sequence to grayscale to obtain a grayscale image sequence;
applying a horizontal Sobel operator and a vertical Sobel operator to the grayscale image sequence and computing their weighted sum, so as to extract gradient information of the grayscale image sequence and obtain a gradient image sequence;
extracting a current frame gradient map, a previous frame gradient map and a subsequent frame gradient map from the gradient image sequence, and calculating absolute values of differences between the current frame gradient map and the previous frame gradient map and between the current frame gradient map and the subsequent frame gradient map respectively;
multiplying the absolute difference values to obtain a difference absolute value image of the current frame;
and carrying out binarization processing on the difference absolute value image so as to determine the moving object in the binarized image.
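As a rough Python illustration of claim 2 (assuming OpenCV is available): each frame is grayed, a gradient map is formed as a weighted sum of the horizontal and vertical Sobel responses, the two frame-to-frame absolute gradient differences are multiplied, and the product is binarized. The equal 0.5/0.5 weights and the binarization threshold are placeholders, not values taken from the patent.

import cv2
import numpy as np

def moving_object_mask(prev_bgr, curr_bgr, next_bgr, bin_thresh=30):
    """Three-frame difference on Sobel gradient maps; returns a binary mask."""
    def gradient_map(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal Sobel
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical Sobel
        # Weighted summation of the two directional responses.
        return cv2.addWeighted(np.abs(gx), 0.5, np.abs(gy), 0.5, 0)

    g_prev, g_curr, g_next = map(gradient_map, (prev_bgr, curr_bgr, next_bgr))

    d1 = cv2.absdiff(g_curr, g_prev)   # |current - previous|
    d2 = cv2.absdiff(g_curr, g_next)   # |current - subsequent|
    product = d1 * d2                  # keeps only changes seen against both neighbours

    _, mask = cv2.threshold(product, bin_thresh, 255, cv2.THRESH_BINARY)
    return mask.astype(np.uint8)       # non-zero pixels mark the moving object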
3. The vehicle running condition monitoring method applied to a trailer service according to claim 2, wherein after determining the moving object in the binarized image, the method further comprises:
in the binarized image, taking the pixel at which the midpoint of the moving object is located as a center pixel;
extracting the eight-pixel region formed by the neighbors of the center pixel in the horizontal, vertical and four diagonal directions;
and calculating the pixel proportion of the moving object within the eight-pixel region, and determining the moving object to be noise when the pixel proportion is smaller than a preset proportion threshold.
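A minimal sketch of the eight-neighbourhood noise test in claim 3, assuming the binary mask produced above and a hypothetical proportion threshold of 3/8:

import numpy as np

def is_noise(binary_mask, center_rc, ratio_thresh=3 / 8):
    """Return True when too few of the centre pixel's eight neighbours belong to the object."""
    r, c = center_rc
    h, w = binary_mask.shape
    neighbours = [(r + dr, c + dc)
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if not (dr == 0 and dc == 0)]
    # Count object pixels among the horizontal, vertical and diagonal neighbours.
    hits = sum(1 for rr, cc in neighbours
               if 0 <= rr < h and 0 <= cc < w and binary_mask[rr, cc])
    return hits / 8 < ratio_thresh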
4. The vehicle running condition monitoring method applied to a trailer service according to claim 1, wherein after determining the damage position of the transported vehicle in the local image, the method further comprises:
marking the pixels corresponding to the damage position in the local image;
determining the circumscribed rectangle of the marked pixels, and calculating the aspect ratio of the circumscribed rectangle;
and displaying the circumscribed rectangle and the aspect ratio in the suspected damage side image of the transported vehicle, and pushing the result.
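The marking step of claim 4 amounts to drawing the circumscribed (bounding) rectangle of the damage pixels and reporting its aspect ratio. The sketch below uses OpenCV drawing calls; the colour, line width and text placement are arbitrary choices, and pushing the annotated image to a client is left out.

import cv2
import numpy as np

def annotate_damage(side_image_bgr, damage_pixels_rc):
    """Draw the circumscribed rectangle of the damage pixels and its aspect ratio.

    damage_pixels_rc -- N x 2 array of (row, col) damage coordinates.
    Returns the annotated copy of the suspected damage side image.
    """
    pts = np.asarray(damage_pixels_rc)
    # boundingRect expects (x, y) points, so swap (row, col) -> (col, row).
    x, y, w, h = cv2.boundingRect(pts[:, [1, 0]].astype(np.int32))
    aspect_ratio = w / h if h else float("inf")

    annotated = side_image_bgr.copy()
    cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(annotated, f"aspect ratio: {aspect_ratio:.2f}",
                (x, max(y - 10, 0)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    return annotated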
5. The vehicle running condition monitoring method applied to a trailer service according to claim 1, wherein the cameras are mounted on rotatable wheel discs, and the mounting relationship between the cameras and the rotatable wheel discs at least comprises one-to-one and many-to-one;
the rotatable wheel disc is a disc-shaped structure that is arranged on the inner side of the frame of the trailer vehicle and faces the transported vehicle.
6. The vehicle running condition monitoring method applied to a trailer service according to claim 5, wherein adjusting the acquisition angles of the cameras specifically comprises:
reading the code of the camera that acquired the video stream data, so as to determine, according to the camera code, the position information of the transported vehicle on the trailer vehicle;
determining, according to the position information, the rotatable wheel disc facing the transported vehicle, and rotating that wheel disc so that the camera mounted on it faces the transported vehicle;
and adjusting the acquisition angle of the camera for shooting, so as to obtain the suspected damage side image of the transported vehicle.
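Claim 6 is a hardware procedure rather than an algorithm, but its control flow can still be sketched. Everything below is hypothetical: the camera-code lookup table, the TurntableController interface and the per-quadrant pan/tilt offsets stand in for whatever turntable and camera SDK the trailer actually uses.

CAMERA_SLOT_TABLE = {"CAM-03": ("slot-2", "disc-2")}   # camera code -> (vehicle slot, wheel disc)

class TurntableController:
    """Stand-in for the real wheel-disc / pan-tilt hardware interface."""
    def rotate_disc(self, disc_id, degrees):
        print(f"rotating {disc_id} by {degrees} deg")
    def set_camera_angle(self, disc_id, pan, tilt):
        print(f"aiming camera on {disc_id}: pan={pan}, tilt={tilt}")

def capture_suspected_side(camera_code, contact_quadrant, controller=None):
    """Aim the camera on the rotatable wheel disc at the contact region (sketch only)."""
    controller = controller or TurntableController()
    slot, disc = CAMERA_SLOT_TABLE[camera_code]
    controller.rotate_disc(disc, 90)   # turn the disc so the camera faces the transported vehicle
    # Rough pan/tilt offsets per quadrant of the door coordinate system (assumed values).
    pan = {1: 15, 2: -15, 3: -15, 4: 15}[contact_quadrant]
    tilt = -10 if contact_quadrant in (1, 2) else 10
    controller.set_camera_angle(disc, pan, tilt)
    # The actual frame grab would go through the camera SDK; omitted here.
    return slot, disc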
7. A vehicle running condition monitoring device applied to a trailer service, the device comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle running condition monitoring method applied to a trailer service according to any one of claims 1 to 6.
CN202311289717.8A 2023-10-08 2023-10-08 Vehicle running condition monitoring method and device applied to trailer service Active CN117079219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311289717.8A CN117079219B (en) 2023-10-08 2023-10-08 Vehicle running condition monitoring method and device applied to trailer service

Publications (2)

Publication Number Publication Date
CN117079219A CN117079219A (en) 2023-11-17
CN117079219B true CN117079219B (en) 2024-01-09

Family

ID=88711924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311289717.8A Active CN117079219B (en) 2023-10-08 2023-10-08 Vehicle running condition monitoring method and device applied to trailer service

Country Status (1)

Country Link
CN (1) CN117079219B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117275126B (en) * 2023-11-22 2024-03-05 盛视科技股份有限公司 Vehicle information acquisition driving control method and lane information acquisition system
CN117314538B (en) * 2023-11-27 2024-05-10 长威信息科技发展股份有限公司 Service charging method and system for expressway rescue service vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10392013B2 (en) * 2017-09-30 2019-08-27 A-Hamid Hakki Collision detection and avoidance system

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103522970A (en) * 2013-05-31 2014-01-22 Tcl集团股份有限公司 Vehicle driving safety detection method and system based on machine vision
CN104166841A (en) * 2014-07-24 2014-11-26 浙江大学 Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
WO2018058356A1 (en) * 2016-09-28 2018-04-05 驭势科技(北京)有限公司 Method and system for vehicle anti-collision pre-warning based on binocular stereo vision
CN106878674A (en) * 2017-01-10 2017-06-20 哈尔滨工业大学深圳研究生院 A kind of parking detection method and device based on monitor video
CN107705326A (en) * 2017-09-15 2018-02-16 温州大学 A kind of intrusion detection method that crosses the border in security sensitive region
CN108280444A (en) * 2018-02-26 2018-07-13 江苏裕兰信息科技有限公司 A kind of fast motion object detection method based on vehicle panoramic view
CN108875648A (en) * 2018-06-22 2018-11-23 深源恒际科技有限公司 A method of real-time vehicle damage and component detection based on mobile video stream
CN113525362A (en) * 2020-04-17 2021-10-22 华为技术有限公司 Automatic driving danger target determination method and device
CN112242058A (en) * 2020-05-29 2021-01-19 北京新能源汽车技术创新中心有限公司 Target abnormity detection method and device based on traffic monitoring video and storage medium
CN113829978A (en) * 2020-06-08 2021-12-24 伟摩有限责任公司 Cargo inspection, monitoring and securing in autonomous driving trucks
JP7067852B1 (en) * 2022-02-01 2022-05-16 株式会社ファンクリエイト Calculation method of road surface damage position
CN114530058A (en) * 2022-03-03 2022-05-24 恒大恒驰新能源汽车研究院(上海)有限公司 Collision early warning method, device and system
CN115578441A (en) * 2022-08-30 2023-01-06 感知信息科技(浙江)有限责任公司 Vehicle side image splicing and vehicle size measuring method based on deep learning
CN115965636A (en) * 2022-12-07 2023-04-14 北京万集科技股份有限公司 Vehicle side view generating method and device and terminal equipment
CN116167969A (en) * 2022-12-16 2023-05-26 北京集度科技有限公司 Lens smudge detection method, device, vehicle, storage medium and program product
CN116246251A (en) * 2023-02-10 2023-06-09 河南工业和信息化职业学院 Vehicle bottom safety detection method and system
CN116342644A (en) * 2023-02-13 2023-06-27 华能南京金陵发电有限公司 Intelligent monitoring method and system suitable for coal yard
CN116778371A (en) * 2023-03-13 2023-09-19 上海临港绝影智能科技有限公司 Abnormal event detection method, device, equipment, vehicle and storage medium
CN116453064A (en) * 2023-06-16 2023-07-18 烟台黄金职业学院 Method for identifying abnormal road conditions of tunnel road section based on monitoring data

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An Edge Traffic Flow Detection Scheme Based on Deep Learning in an Intelligent Transportation System; Chen Chen et al.; IEEE Transactions on Intelligent Transportation Systems; Vol. 22, No. 3; pp. 1840-1852 *
Real-time image enhancement for an automatic automobile accident detection through CCTV using deep learning; Manu S. Pillai et al.; Soft Computing; Vol. 25; pp. 11929-11940 *
Research on recognition and collision-avoidance control of hot-body obstacles ahead of the vehicle based on VIDAR; Gong Xiaotong; China Master's Theses Full-text Database, Engineering Science and Technology II; Vol. 2023, No. 2; C035-1054 *
Research on intrusion detection algorithms for smart construction sites; Liu Huan; China Master's Theses Full-text Database, Engineering Science and Technology II; Vol. 2023, No. 1; C038-1892 *

Similar Documents

Publication Publication Date Title
CN117079219B (en) Vehicle running condition monitoring method and device applied to trailer service
US8902053B2 (en) Method and system for lane departure warning
US10976262B2 (en) Mobile and automated apparatus for the detection and classification of damages on the body of a vehicle
KR100459476B1 (en) Apparatus and method for queue length of vehicle to measure
Barcellos et al. A novel video based system for detecting and counting vehicles at user-defined virtual loops
CN110298300B (en) Method for detecting vehicle illegal line pressing
CN107273802B (en) Method and device for detecting fault of brake shoe drill rod ring of railway train
CN109489724B (en) Tunnel train safe operation environment comprehensive detection device and detection method
CN107516423B (en) Video-based vehicle driving direction detection method
CN110544271B (en) Parabolic motion detection method and related device
CN106650730A (en) Turn signal lamp detection method and system in car lane change process
CN112270253A (en) High-altitude parabolic detection method and device
CN110807765A (en) Suspension insulator string inclination detection method and system based on image processing
US20200394802A1 (en) Real-time object detection method for multiple camera images using frame segmentation and intelligent detection pool
CN113191239A (en) Vehicle overall dimension dynamic detection system based on computer vision
CN105469604A (en) An in-tunnel vehicle detection method based on monitored images
Abdagic et al. Counting traffic using optical flow algorithm on video footage of a complex crossroad
CN115984780A (en) Industrial solid waste warehouse-in and warehouse-out distinguishing method and device, electronic equipment and medium
CN109978879A (en) Case angle based on loading goods train video monitor enters slot condition detection method
CN110634124A (en) Method and equipment for area detection
EP3195261B1 (en) Identification of a pantograph represented in an image
CN114882709A (en) Vehicle congestion detection method and device and computer storage medium
CN107358170B (en) Vehicle violation line pressing identification method based on mobile machine vision
US20210256273A1 (en) Ternary image obtaining method and apparatus, and vehicle
US20190197349A1 (en) Image identification method and image identification device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant