CN117132872A - Intelligent collision recognition system for material transport vehicle on production line

Intelligent collision recognition system for material transport vehicle on production line

Info

Publication number
CN117132872A
CN117132872A (Application CN202310990386.4A)
Authority
CN
China
Prior art keywords
vehicle
video
vehicles
production line
collision
Prior art date
Legal status
Pending
Application number
CN202310990386.4A
Other languages
Chinese (zh)
Inventor
黄建红
陈颖
Current Assignee: Zhejiang Saihong Zhongzhi Network Technology Co., Ltd.
Original Assignee: Zhejiang Saihong Zhongzhi Network Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Zhejiang Saihong Zhongzhi Network Technology Co., Ltd.
Priority to CN202310990386.4A
Publication of CN117132872A
Legal status: Pending


Classifications

    • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
    • G06N 5/04 - Computing arrangements using knowledge-based models; inference or reasoning models
    • G06V 10/82 - Image or video recognition or understanding using pattern recognition or machine learning with neural networks
    • G06V 10/945 - User interactive design; environments; toolboxes
    • G06V 10/96 - Management of image or video recognition tasks
    • G06V 20/40 - Scenes; scene-specific elements in video content
    • Y02P 90/30 - Computing systems specially adapted for manufacturing (enabling technologies with a potential contribution to greenhouse gas emissions mitigation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the field of intelligent transportation and discloses an intelligent collision recognition system for material transport vehicles on a production line. The system comprises: an image acquisition device for capturing video of logistics vehicle traffic on the production line; an edge reasoning server for processing that video and judging whether a vehicle collision has occurred; and a remote data center for displaying vehicle collision accident information and storing vehicle collision accident video. The invention provides a vision-algorithm-based method that identifies collision accidents of logistics vehicles on the production line, automatically stores collision accident data, accurately and promptly locates the collision position, and provides a basis for subsequent investigation of accident causes and for optimizing the task planning of the material transport vehicles.

Description

Intelligent collision recognition system for material transport vehicle on production line
Technical Field
The invention relates to the field of intelligent transportation, in particular to an intelligent collision recognition system for a material transport vehicle on a production line.
Background
In the manufacturing field, large amounts of material must be transported and transferred between production lines and process steps. Whether materials can be transferred quickly and accurately directly affects processing efficiency on the production line, and improving that efficiency is how manufacturing enterprises increase their competitiveness.
Material transport vehicles replace manual handling for transferring materials, greatly reducing the workload and physical strain on workers. However, because material flows between production lines are heavy, transport vehicles are numerous and the production environment is complex, these vehicles are prone to collisions while transporting materials. Collisions damage vehicles and cause material losses, and the resulting blockage and stagnation of transport vehicles reduces the working efficiency of the entire production line.
Disclosure of Invention
In order to solve at least one of the problems mentioned in the background art, the invention provides an intelligent collision recognition system for a material transport vehicle on a production line.
The intelligent collision recognition system for material transport vehicles on a production line comprises an image acquisition device, an edge reasoning server and a remote data center, wherein the image acquisition device is used for capturing video of logistics vehicle traffic on the production line; the edge reasoning server is used for processing the logistics vehicle traffic video and judging whether a vehicle collision has occurred; the remote data center comprises a display device and a storage device and is used for displaying vehicle collision accident information and storing vehicle collision accident video. For one such system covering a group of production lines, the quantitative relation among the image acquisition devices, the edge reasoning servers and the remote data center is:
1 remote data center = m edge reasoning servers
1 edge reasoning server = n image acquisition devices
where m takes values in the range [30, 40] and n in the range [15, 30].
The working process of the system comprises the following steps:
Step 1: the image acquisition device captures video of logistics vehicle traffic on the production line and pushes the captured video to the edge reasoning server, specifically comprising the following steps:
Step 101: the image acquisition device captures the vehicle traffic video on the production line and stores its timing information; the resolution is set to 640 × 640, the frame rate to 50 FPS and the format to RGB;
Step 102: the video stream is encoded in real time according to its timing information;
Step 103: the encoded stream is pushed to the edge reasoning server over the RTMP streaming protocol.
Step 2: the edge reasoning server receives the video pushed by the image acquisition device, processes and evaluates it in real time, and decides according to the result whether to push a vehicle collision event and the associated video, specifically comprising the following steps:
Step 201: the edge reasoning server receives the stream pushed by the image acquisition device and decodes it into video according to the timing information in the stream; inside the edge reasoning server the decoded video is split into two paths:
the first path is cached in memory and waits for a push instruction; if no push instruction is received, the cache window is 15 seconds; if a push instruction is received, caching restarts with the current time point as reference;
the second path is loaded into a neural network processor (NPU) to await inference calculation.
Step 202: the video is frame-skipped, taking 1 frame from every 5-10 consecutive frames, and a region of interest (ROI) is marked on each retained frame, with the lower-left corner of each frame as the coordinate origin (0, 0); the x-axis extent of the ROI is (0, 640) and the y-axis extent is (0, 480);
Step 203: inference is performed on the images using the improved detection algorithm and tracking algorithm;
wherein the improved detection algorithm, an improved yolov5, detects vehicles and personnel, the specific improvements comprising:
the first improvement is that after the attention mechanism CBAM module is fused to the feature extraction network backhaul, the specific calculation formula is as follows:
wherein M is c Represents channel attention, M s Representing spatial attention, F represents an input feature map, F Representing a feature map after channel attention, F "representing a feature map after channel attention and spatial attention,
in the second improvement, the GIoU Loss is used as a Loss function, and a specific calculation formula is as follows:
L GIoU =1-GIoU
wherein IoU represents the intersection ratio of the prediction bounding box and the real bounding box, A c Representing the area of the smallest rectangular box that both the prediction and the real boundary boxes contain, u represents the union of the prediction and the real boundary boxes, L GIOU The loss of GIoU is shown, the improvement III is that DIoU is adopted as a judging standard of a non-maximum value inhibition stage, and a specific calculation formula is as follows:
wherein b represents the center store of the prediction bounding box, b gt Represents the center point of the real bounding box, c representsThe shortest diagonal length of the prediction bounding box and the smallest bounding box of the real bounding box, s i Represents a classification score, epsilon represents a threshold for non-maximum suppression,
the output targets of the detection algorithm are as follows:
[ "category: motor vehicle "," confidence level "," coordinates of vehicle detection frame ")
[ "category: personnel "," confidence level "," coordinates of personnel detection frame ")
The improved tracking algorithm re-identifies and tracks vehicles based on an improved deepsort tracking algorithm; the specific improvement replaces the original loss function with a new loss function combining a center loss function and a cross-entropy loss function; the calculation is:
L = L_c_eL + γ · L_cL
where L_c_eL denotes the cross-entropy loss function, L_cL denotes the center loss function, the parameter γ balances the two functions and takes values in [0, 1], L_sL is the Softmax function used in the cross-entropy term, c denotes the classification category, N the number of samples, N_b the sample batch size, and x_i the input image feature.
the tracking algorithm outputs the target as:
[ "vehicle weight identification ID number", "coordinate point of vehicle track" ]
Step 204, judging whether a vehicle collision accident occurs according to the result of the reasoning calculation, wherein the judging conditions comprise:
the specific calculation formula of the first condition is as follows:
a i x+b i =a j x+b j
a i x+b i and a j x+b j And when the above equation is satisfied and a solution exists, judging that the two vehicle tracks intersect, wherein the equation of the vehicle track line is obtained by fitting the coordinate point track of the vehicle by a unitary linear regression equation, and the specific calculation formula is as follows:
wherein x and y respectively represent coordinate points of an x axis and a y axis of the vehicle track,
and if the two vehicles are stopped, the specific calculation formula is as follows:
wherein t is j And t i Representing the corresponding different times, y, when reasoning about images j and i j And y i Representing the y-axis coordinates of the same vehicle trajectory in image i and image j, u representing the travel speed of each vehicle, if the speed of the vehicle is 0, it is determined that the vehicle has stopped,
and in the third condition, whether the detection frames of the two vehicles overlap or not is judged according to the specific calculation formula:
wherein iou represents the ratio of intersection and union of two vehicle detection frames, A and B represent the detection frames of two vehicles respectively, U represents intersection, U represents union, if iou is greater than 0, it is determined that the detection frames of two vehicles overlap,
a fourth condition, whether the ROI area detects a personnel target;
the judgment logic for finally judging whether the vehicle collision accident occurs is as follows:
condition one &conditiontwo & condition three & condition four
Wherein & & represents logical AND operation, when all conditions are satisfied, i.e., the judgment logic is true, it is judged that a vehicle collision accident is currently occurring,
further, if it is determined that a vehicle collision accident has occurred, a flow L1 is performed, otherwise the video in the current memory is cleared,
the process L1 refers to issuing a rear-end collision event, and specifically includes the following steps:
firstly, compiling various parameters of the current vehicle collision into message Info, and sending the message Info to a remote data center, wherein the message Info is specifically in a JSON format and has the structure that:
info= { "event": "vehicle crash", "Time": "vehicle crash occurrence Time", "number": "camera number", "address": "specific location of rear-end event occurrence" }
Secondly, sending an instruction to a memory, coding the video cached currently by taking a current time node T1 as a reference, wherein the coding time node is from T1-15 seconds to T1+15 seconds,
and finally, waiting for the video coding to be completed, and pushing the coded code stream to a remote data center through an RTMP streaming media service protocol.
Step 3, the remote data center monitors the edge reasoning server, analyzes and displays the vehicle collision accident after receiving the vehicle collision accident and the video sent by the edge reasoning server, and stores the corresponding vehicle collision accident video, and specifically comprises the following steps:
step 301, the remote data center monitors the reasoning server, if the message Info sent by the edge reasoning server is received, the message Info is parsed, and the parsed vehicle collision time, the vehicle collision position and the vehicle collision time are printed on a display device interface.
Step 302, waiting for the data stream pushed by the inference server, decoding the data stream into a video stream if the data stream pushed by the inference server is received, and storing the video stream to the local according to the naming mode of time+address.
Compared with the prior art, the intelligent collision recognition system for material transport vehicles on a production line provided by the invention has the following beneficial effects:
The invention provides a vision-algorithm-based method that identifies collision accidents of logistics vehicles on the production line, automatically stores collision accident data, accurately and promptly locates the collision position, and provides a basis for analyzing collision causes and for rescheduling and re-planning the logistics transport vehicles.
The invention designs parallel logic for video caching and computation around the actual task scenario, reducing data storage and exchange while avoiding loss of key data, thereby improving operating efficiency.
The invention improves the detection and tracking algorithms, raising algorithm accuracy while reducing network parameters and computation and increasing computation speed.
Drawings
FIG. 1 is a block diagram of the intelligent collision recognition system for material transport vehicles on a production line according to the present invention;
FIG. 2 is a logic diagram of the vehicle collision accident judgment of the present invention.
Detailed Description
In order to make the objects and features of the present invention more comprehensible, the present invention is described in detail below by way of examples and with reference to the accompanying drawings.
The intelligent collision recognition system for material transport vehicles on a production line comprises an image acquisition device, an edge reasoning server and a remote data center, wherein the image acquisition device is used for capturing video of logistics vehicle traffic on the production line; the edge reasoning server is used for processing the logistics vehicle traffic video and judging whether a vehicle collision has occurred; the remote data center comprises a display device and a storage device and is used for displaying vehicle collision accident information and storing vehicle collision accident video. As shown in FIG. 1, for one such system covering a group of production lines, the quantitative relation among the image acquisition devices, the edge reasoning servers and the remote data center is:
1 remote data center = m edge reasoning servers
1 edge reasoning server = n image acquisition devices
where m takes values in the range [30, 40] and n in the range [15, 30].
The working process of the system comprises the following steps.
Step 1: the image acquisition device captures video of logistics vehicle traffic on the production line and pushes the captured video to the edge reasoning server.
This specifically comprises the following steps.
Step 101: the image acquisition device captures the vehicle traffic video on the production line and stores its timing information; the resolution is set to 640 × 640, the frame rate to 50 FPS and the format to RGB.
Step 102: the video stream is encoded in real time according to its timing information.
Step 103: the encoded stream is pushed to the edge reasoning server over the RTMP streaming protocol.
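By way of illustration, the following is a minimal sketch of steps 101-103 using OpenCV for capture and an ffmpeg subprocess as the RTMP encoder; the camera index, the RTMP URL and the choice of ffmpeg with H.264 are assumptions, since the embodiment fixes only the resolution (640 × 640), the frame rate (50 FPS) and the RGB format:

```python
import subprocess
import cv2  # OpenCV for capture is an assumption; the patent names no library

WIDTH, HEIGHT, FPS = 640, 640, 50           # parameters fixed in step 101
RTMP_URL = "rtmp://edge-server/live/cam01"  # hypothetical edge-server address

cap = cv2.VideoCapture(0)                   # camera index is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, WIDTH)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, HEIGHT)
cap.set(cv2.CAP_PROP_FPS, FPS)

# ffmpeg encodes the raw frames in real time (step 102) and pushes the
# stream over RTMP (step 103); OpenCV delivers BGR frames, hence bgr24 here.
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "bgr24",
     "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS), "-i", "-",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "-f", "flv", RTMP_URL],
    stdin=subprocess.PIPE)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ffmpeg.stdin.write(frame.tobytes())     # one 640x640 BGR frame per write
```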
Step 2: the edge reasoning server receives the video pushed by the image acquisition device, processes and evaluates it in real time, and decides according to the result whether to push a vehicle collision event and the associated video.
This specifically comprises the following steps:
Step 201: the edge reasoning server receives the stream pushed by the image acquisition device and decodes it into video according to the timing information in the stream; inside the edge reasoning server the decoded video is split into two paths:
The first path is cached in memory and waits for a push instruction; if no push instruction is received, the cache window is 15 seconds; if a push instruction is received, caching restarts with the current time point as reference.
The second path is loaded into a neural network processor (NPU) to await inference calculation.
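A minimal sketch of the first path's 15-second rolling cache described in step 201; the deque container and the wall-clock timestamps are implementation assumptions:

```python
import time
from collections import deque

FPS, WINDOW_S = 50, 15                     # 15-second cache period from step 201

class RollingCache:
    """First path of step 201: hold the most recent 15 s of decoded frames."""
    def __init__(self):
        self.frames = deque(maxlen=FPS * WINDOW_S)  # old frames drop off automatically

    def on_frame(self, frame):
        self.frames.append((time.time(), frame))

    def on_push_instruction(self, t1):
        """Return the frames from T1-15 s up to T1; the caller then keeps
        appending until T1+15 s to complete the 30-second event clip."""
        return [f for ts, f in self.frames if t1 - WINDOW_S <= ts <= t1]
```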
Step 202: the video is frame-skipped, taking 1 frame from every 5-10 consecutive frames, and a region of interest (ROI) is marked on each retained frame, with the lower-left corner of each frame as the coordinate origin (0, 0); the x-axis extent of the ROI is (0, 640) and the y-axis extent is (0, 480).
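A sketch of the frame skipping and ROI cropping of step 202; note that the embodiment's origin is the lower-left corner while image arrays index rows from the top, so the y-extent must be flipped (k = 5 here, within the stated 5-10 range):

```python
K = 5                               # keep 1 frame out of every K consecutive frames
ROI_X, ROI_Y = (0, 640), (0, 480)   # ROI extents from step 202
FRAME_H = 640                       # frame height from step 101

def skip_and_crop(frames):
    """Yield every K-th frame cropped to the ROI."""
    rows = slice(FRAME_H - ROI_Y[1], FRAME_H - ROI_Y[0])  # bottom-origin y -> top-origin rows
    cols = slice(ROI_X[0], ROI_X[1])
    for i, frame in enumerate(frames):
        if i % K == 0:
            yield frame[rows, cols]
```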
Step 203: vehicles and personnel are detected with the improved yolov5, which is modified in the following ways.
Improvement 1: the attention-mechanism CBAM module is fused into the feature extraction network (Backbone); the calculation is:
F′ = M_c(F) ⊗ F
F″ = M_s(F′) ⊗ F′
where M_c denotes channel attention, M_s denotes spatial attention, ⊗ denotes element-wise multiplication, F denotes the input feature map, F′ the feature map after channel attention, and F″ the feature map after both channel and spatial attention.
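A minimal PyTorch sketch of a CBAM block implementing F′ = M_c(F) ⊗ F and F″ = M_s(F′) ⊗ F′; the reduction ratio 16 and the 7 × 7 spatial kernel are conventional CBAM defaults rather than values fixed by the embodiment:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, r=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(), nn.Linear(ch // r, ch))
    def forward(self, f):
        avg = self.mlp(f.mean(dim=(2, 3)))          # average-pooled descriptor
        mx = self.mlp(f.amax(dim=(2, 3)))           # max-pooled descriptor
        return torch.sigmoid(avg + mx)[:, :, None, None]   # M_c(F)

class SpatialAttention(nn.Module):
    def __init__(self, k=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, k, padding=k // 2)
    def forward(self, f):
        s = torch.cat([f.mean(1, keepdim=True), f.amax(1, keepdim=True)], dim=1)
        return torch.sigmoid(self.conv(s))          # M_s(F')

class CBAM(nn.Module):
    """F' = M_c(F) * F, then F'' = M_s(F') * F' (element-wise products)."""
    def __init__(self, ch):
        super().__init__()
        self.ca, self.sa = ChannelAttention(ch), SpatialAttention()
    def forward(self, f):
        f1 = self.ca(f) * f
        return self.sa(f1) * f1
```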
Improvement 2: GIoU Loss is used as the loss function; the calculation is:
GIoU = IoU - (A_c - u) / A_c
L_GIoU = 1 - GIoU
where IoU denotes the intersection-over-union of the predicted bounding box and the ground-truth bounding box, A_c denotes the area of the smallest rectangular box enclosing both boxes, u denotes the area of their union, and L_GIoU denotes the GIoU loss.
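A sketch of the GIoU loss for boxes in corner format (x1, y1, x2, y2); the box format is an assumption:

```python
import torch

def giou_loss(pred, gt):
    """L_GIoU = 1 - GIoU for (..., 4) corner-format box tensors."""
    x1 = torch.max(pred[..., 0], gt[..., 0])
    y1 = torch.max(pred[..., 1], gt[..., 1])
    x2 = torch.min(pred[..., 2], gt[..., 2])
    y2 = torch.min(pred[..., 3], gt[..., 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_g = (gt[..., 2] - gt[..., 0]) * (gt[..., 3] - gt[..., 1])
    union = area_p + area_g - inter
    iou = inter / union
    # A_c: area of the smallest rectangle enclosing both boxes
    cw = torch.max(pred[..., 2], gt[..., 2]) - torch.min(pred[..., 0], gt[..., 0])
    ch = torch.max(pred[..., 3], gt[..., 3]) - torch.min(pred[..., 1], gt[..., 1])
    a_c = cw * ch
    giou = iou - (a_c - union) / a_c
    return 1.0 - giou
```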
Improvement 3: DIoU is adopted as the criterion in the non-maximum suppression stage; the calculation is:
s_i = s_i, if IoU - ρ²(b, b_gt) / c² < ε
s_i = 0, if IoU - ρ²(b, b_gt) / c² ≥ ε
where b denotes the center point of the predicted bounding box, b_gt the center point of the ground-truth bounding box, ρ(·) the distance between the two center points, c the diagonal length of the smallest box enclosing the predicted and ground-truth boxes, s_i the classification score, and ε the non-maximum-suppression threshold.
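A sketch of DIoU-based non-maximum suppression, suppressing a lower-scoring box when IoU minus the normalized center distance ρ²/c² reaches ε; the corner box format and the default ε = 0.45 are assumptions:

```python
import numpy as np

def diou_nms(boxes, scores, eps=0.45):
    """Keep box indices; a box survives a kept box M while IoU - rho^2/c^2 < eps."""
    order = scores.argsort()[::-1]          # candidates by descending score
    keep = []
    while order.size > 0:
        m = order[0]
        keep.append(int(m))
        rest = order[1:]
        # plain IoU between M and the remaining boxes
        x1 = np.maximum(boxes[m, 0], boxes[rest, 0]); y1 = np.maximum(boxes[m, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[m, 2], boxes[rest, 2]); y2 = np.minimum(boxes[m, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        iou = inter / (area(boxes[[m]]) + area(boxes[rest]) - inter)
        # rho^2: squared center distance; c^2: squared diagonal of the enclosing box
        cm = (boxes[m, :2] + boxes[m, 2:]) / 2
        cr = (boxes[rest, :2] + boxes[rest, 2:]) / 2
        rho2 = ((cm - cr) ** 2).sum(axis=1)
        cw = np.maximum(boxes[m, 2], boxes[rest, 2]) - np.minimum(boxes[m, 0], boxes[rest, 0])
        c_h = np.maximum(boxes[m, 3], boxes[rest, 3]) - np.minimum(boxes[m, 1], boxes[rest, 1])
        diou = iou - rho2 / (cw ** 2 + c_h ** 2)
        order = rest[diou < eps]            # s_i kept while DIoU < epsilon, else suppressed
    return keep
```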
The output targets of the detection algorithm are of the form:
["category: motor vehicle", "confidence", "coordinates of the vehicle detection box"]
["category: personnel", "confidence", "coordinates of the personnel detection box"]
The improved tracking algorithm re-identifies and tracks vehicles based on an improved deepsort tracking algorithm; the specific improvement replaces the original loss function with a new loss function combining a center loss function and a cross-entropy loss function; the calculation is:
L = L_c_eL + γ · L_cL
where L_c_eL denotes the cross-entropy loss function, L_cL denotes the center loss function, the parameter γ balances the two functions and takes values in [0, 1], L_sL is the Softmax function used in the cross-entropy term, c denotes the classification category, N the number of samples, N_b the sample batch size, and x_i the input image feature.
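A PyTorch sketch of the combined loss, here taken as cross-entropy plus γ times the center loss; treating the class centres as a learnable nn.Parameter is an implementation assumption:

```python
import torch
import torch.nn as nn

class CenterPlusCrossEntropy(nn.Module):
    """L = L_ce + gamma * L_center, with gamma in [0, 1] balancing the terms."""
    def __init__(self, num_classes, feat_dim, gamma=0.5):
        super().__init__()
        self.gamma = gamma
        self.ce = nn.CrossEntropyLoss()      # softmax cross-entropy term
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, logits, labels):
        l_ce = self.ce(logits, labels)
        # center loss: mean squared distance of each feature x_i to its class centre
        l_c = ((features - self.centers[labels]) ** 2).sum(dim=1).mean() / 2
        return l_ce + self.gamma * l_c
```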
The tracking algorithm outputs targets of the form:
["vehicle re-identification ID", "coordinate points of the vehicle track"]
Step 204: whether a vehicle collision accident has occurred is judged from the inference results.
As shown in FIG. 2, the vehicle collision accident judgment conditions include:
Condition 1: whether the tracks of two vehicles intersect; the calculation is:
a_i·x + b_i = a_j·x + b_j
where a_i·x + b_i and a_j·x + b_j denote the track lines of the two vehicles; when this equation holds and has a solution, the two vehicle tracks are judged to intersect.
The track-line equation is obtained by fitting the vehicle's coordinate-point track with a unary linear regression y = a·x + b, whose least-squares coefficients are:
a = Σ_k (x_k - x̄)(y_k - ȳ) / Σ_k (x_k - x̄)², b = ȳ - a·x̄
where x and y denote the x-axis and y-axis coordinates of the vehicle track points.
Condition 2: whether the two vehicles have stopped; the calculation is:
u = (y_j - y_i) / (t_j - t_i)
where t_j and t_i denote the times at which images j and i are inferred, y_j and y_i denote the y-axis coordinates of the same vehicle's track in images j and i, and u denotes the travel speed of each vehicle; if the vehicle's speed is 0, it is judged to have stopped.
Condition 3: whether the detection boxes of the two vehicles overlap; the calculation is:
iou = |A ∩ B| / |A ∪ B|
where iou denotes the ratio of the intersection to the union of the two vehicles' detection boxes, A and B denote the detection boxes of the two vehicles, ∩ denotes intersection and ∪ denotes union; if iou is greater than 0, the detection boxes of the two vehicles are judged to overlap.
Condition 4: whether a person target is detected in the ROI region.
The final judgment logic for whether a vehicle collision accident has occurred is:
condition 1 && condition 2 && condition 3 && condition 4
where && denotes logical AND; when all four conditions are satisfied, i.e. the judgment logic is true, a vehicle collision accident is judged to be occurring, as sketched below.
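A sketch of the four-condition judgment; the track and box representations, the time base, and the use of a slope comparison as the intersection test are assumptions (two non-parallel fitted lines always intersect somewhere, so a practical check would also confirm that the intersection lies inside the ROI):

```python
import numpy as np

def fit_line(track):
    """Unary linear regression y = a*x + b over a track's (x, y) points."""
    xs, ys = np.asarray(track, dtype=float).T
    a, b = np.polyfit(xs, ys, 1)
    return a, b

def speed(track, t_i, t_j):
    """Condition 2: u = (y_j - y_i) / (t_j - t_i), assuming the track's first and
    last points correspond to the inference times t_i and t_j."""
    return (track[-1][1] - track[0][1]) / (t_j - t_i)

def boxes_overlap(box_a, box_b):
    """Condition 3: iou > 0, with boxes as (x1, y1, x2, y2)."""
    ix = min(box_a[2], box_b[2]) - max(box_a[0], box_b[0])
    iy = min(box_a[3], box_b[3]) - max(box_a[1], box_b[1])
    return ix > 0 and iy > 0

def crash(track_1, track_2, box_1, box_2, t_i, t_j, person_in_roi):
    a1, _ = fit_line(track_1)
    a2, _ = fit_line(track_2)
    intersect = not np.isclose(a1, a2)          # condition 1: track lines not parallel
    stopped = (np.isclose(speed(track_1, t_i, t_j), 0.0)
               and np.isclose(speed(track_2, t_i, t_j), 0.0))  # condition 2
    overlap = boxes_overlap(box_1, box_2)       # condition 3
    return intersect and stopped and overlap and person_in_roi  # && of conditions 1-4
```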
Further, if a vehicle collision accident is judged to have occurred, flow L1 is executed; otherwise the video currently in memory is cleared.
The flow L1 refers to the following process of issuing a collision event:
First, the parameters of the current vehicle collision are assembled into a message Info and sent to the remote data center; Info is in JSON format with the structure:
Info = { "Event": "vehicle collision", "Time": "time the collision occurred", "Number": "camera number", "Address": "specific location where the collision occurred" }
Second, an instruction is sent to the memory and the currently cached video is encoded with the current time node T1 as reference, the encoded span running from T1 - 15 seconds to T1 + 15 seconds.
Finally, once video encoding completes, the encoded stream is pushed to the remote data center over the RTMP streaming protocol; a sketch of this flow follows.
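A sketch of flow L1; send_to_data_center is a hypothetical stand-in for the message transport, which the embodiment does not fix, and the ffmpeg invocation mirrors the capture-side sketch:

```python
import json
import subprocess
import time

def send_to_data_center(info_json):
    """Hypothetical transport helper; a real system might use HTTP or MQTT."""
    print("Info ->", info_json)

def issue_collision_event(camera_no, address, clip_frames, rtmp_url):
    # Assemble the Info message in the JSON structure given above.
    info = json.dumps({
        "Event": "vehicle collision",
        "Time": time.strftime("%Y-%m-%d %H:%M:%S"),
        "Number": camera_no,
        "Address": address,
    })
    send_to_data_center(info)

    # Encode the cached T1-15 s .. T1+15 s clip and push it over RTMP.
    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "bgr24",
         "-s", "640x640", "-r", "50", "-i", "-",
         "-c:v", "libx264", "-f", "flv", rtmp_url],
        stdin=subprocess.PIPE)
    for frame in clip_frames:                  # numpy frames from the rolling cache
        ffmpeg.stdin.write(frame.tobytes())
    ffmpeg.stdin.close()
    ffmpeg.wait()
```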
Step 3: the remote data center monitors the edge reasoning server; upon receiving a vehicle collision event and video sent by the edge reasoning server, it parses and displays the collision information and stores the corresponding vehicle collision accident video.
This specifically comprises the following steps:
Step 301: the remote data center monitors the reasoning server; if a message Info sent by the edge reasoning server is received, Info is parsed, and the parsed collision time, collision location and camera number are displayed on the display device interface.
Step 302: the data center waits for the data stream pushed by the reasoning server; if the stream is received, it is decoded into a video stream and saved locally, named by time + address.
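A sketch of steps 301 and 302 on the data-center side; printing stands in for the display interface, and the .mp4 container in the time + address file name is an assumption:

```python
import json

def on_info_message(raw):
    """Step 301: parse Info and show its fields on the display interface."""
    info = json.loads(raw)
    print(f"[{info['Time']}] {info['Event']} at {info['Address']} "
          f"(camera {info['Number']})")
    return info

def save_event_clip(info, video_bytes):
    """Step 302: store the decoded clip, named by time + address."""
    name = f"{info['Time'].replace(':', '-')}_{info['Address']}.mp4"
    with open(name, "wb") as f:
        f.write(video_bytes)
```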
The working process of the invention has been carried out once according to the method disclosed herein.
While the invention has been described in detail in this specification with reference to the general description and the specific embodiments thereof, it will be apparent to one skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the invention and are intended to be within the scope of the invention as claimed.

Claims (10)

1. The intelligent collision recognition system for material transport vehicles on a production line, characterized by comprising an image acquisition device, an edge reasoning server and a remote data center, wherein the image acquisition device is used for capturing video of logistics vehicle traffic on the production line; the edge reasoning server is used for processing the logistics vehicle traffic video and judging whether a vehicle collision has occurred; and the remote data center comprises a display device and a storage device, used respectively for displaying vehicle collision accident information and storing vehicle collision accident video;
for one such system covering a group of production lines, the quantitative relation among the image acquisition devices, the edge reasoning servers and the remote data center is:
1 remote data center = m edge reasoning servers
1 edge reasoning server = n image acquisition devices
where m takes values in the range [30, 40] and n in the range [15, 30];
the image acquisition device captures video of logistics vehicle traffic on the production line and pushes the captured video to the edge reasoning server;
the edge reasoning server receives the video pushed by the image acquisition device, processes and evaluates it in real time, and decides according to the result whether to push a vehicle collision event and the associated video;
the remote data center monitors the edge reasoning server; upon receiving a vehicle collision event and video sent by the edge reasoning server, it parses and displays the collision information and stores the corresponding vehicle collision accident video.
2. The intelligent collision recognition system for material transport vehicles on a production line as set forth in claim 1, characterized by further comprising:
step 101, capturing, by the image acquisition device, the logistics vehicle traffic video on the production line and storing its timing information, with the resolution set to 640 × 640, the frame rate to 50 FPS and the format to RGB;
step 102, encoding the video in real time according to its timing information;
step 103, pushing the encoded stream to the edge reasoning server over the RTMP streaming protocol.
3. The intelligent collision recognition system for material transport vehicles on a production line as set forth in claim 1, characterized by further comprising:
step 201, receiving, by the edge reasoning server, the stream pushed by the image acquisition device and decoding it into video according to the timing information in the stream;
step 202, frame-skipping the video and marking a region of interest (ROI) on each retained frame, with the lower-left corner of each frame as the coordinate origin (0, 0), the x-axis extent of the ROI being (0, 640) and the y-axis extent being (0, 480);
step 203, performing inference on the images based on the improved detection algorithm and tracking algorithm;
wherein the detection algorithm outputs targets of the form:
["category: motor vehicle", "confidence", "coordinates of the vehicle detection box"]
["category: personnel", "confidence", "coordinates of the personnel detection box"]
and the tracking algorithm outputs targets of the form:
["vehicle re-identification ID", "coordinate points of the vehicle track"]
step 204, judging from the inference results whether a vehicle collision accident has occurred; if so, executing flow L1, otherwise clearing the video currently in memory.
4. The intelligent collision recognition system for material transport vehicles on a production line according to claim 3, characterized in that in step 201 the stream is decoded into a video stream and the decoded video is split into two paths inside the edge reasoning server:
the first path is cached in memory and waits for a push instruction; if no push instruction is received, the cache window is 15 seconds; if a push instruction is received, caching restarts with the current time point as reference;
the second path is loaded into a neural network processor (NPU) to await inference calculation.
5. The intelligent collision recognition system for material transport vehicles on a production line according to claim 3, characterized in that the frame skipping of step 202 takes 1 frame from every k consecutive frames for inference calculation, where k takes values in the range [5, 10].
6. The intelligent collision recognition system for material transport vehicles on a production line according to claim 3, characterized in that performing inference on the images based on the improved detection algorithm in step 203 means detecting vehicles and personnel based on an improved yolov5 detection algorithm, the specific improvements comprising:
the improved detection algorithm fuses the attention-mechanism CBAM module into the feature extraction network (Backbone), with the calculation:
F′ = M_c(F) ⊗ F
F″ = M_s(F′) ⊗ F′
where M_c denotes channel attention, M_s denotes spatial attention, ⊗ denotes element-wise multiplication, F denotes the input feature map, F′ the feature map after channel attention, and F″ the feature map after both channel and spatial attention; or
the improved detection algorithm uses GIoU Loss as the loss function, with the calculation:
GIoU = IoU - (A_c - u) / A_c
L_GIoU = 1 - GIoU
where IoU denotes the intersection-over-union of the predicted bounding box and the ground-truth bounding box, A_c denotes the area of the smallest rectangular box enclosing both boxes, u denotes the area of their union, and L_GIoU denotes the GIoU loss; or
the improved detection algorithm adopts DIoU as the criterion in the non-maximum suppression stage, with the calculation:
s_i = s_i, if IoU - ρ²(b, b_gt) / c² < ε
s_i = 0, if IoU - ρ²(b, b_gt) / c² ≥ ε
where b denotes the center point of the predicted bounding box, b_gt the center point of the ground-truth bounding box, ρ(·) the distance between the two center points, c the diagonal length of the smallest box enclosing the predicted and ground-truth boxes, s_i the classification score, and ε the non-maximum-suppression threshold.
7. The intelligent collision recognition system for material transport vehicles on a production line according to claim 3, characterized in that performing inference on the images based on the improved tracking algorithm in step 203 means re-identifying and tracking vehicles based on an improved deepsort tracking algorithm, the specific improvement being to replace the original loss function with a new loss function combining a center loss function and a cross-entropy loss function, with the calculation:
L = L_c_eL + γ · L_cL
where L_c_eL denotes the cross-entropy loss function, L_cL denotes the center loss function, the parameter γ balances the two functions and takes values in [0, 1], L_sL is the Softmax function used in the cross-entropy term, c denotes the classification category, N the number of samples, N_b the sample batch size, and x_i the input image feature.
8. The intelligent collision recognition system for material transport vehicles on a production line according to claim 3, characterized in that judging in step 204 whether a vehicle collision accident has occurred comprises:
first, judging whether the tracks of two vehicles intersect, with the calculation:
a_i·x + b_i = a_j·x + b_j
where a_i·x + b_i and a_j·x + b_j denote the track lines of the two vehicles; when this equation holds and has a solution, the two vehicle tracks are judged to intersect, the track-line equation being obtained by fitting the vehicle's coordinate-point track with a unary linear regression y = a·x + b, whose least-squares coefficients are:
a = Σ_k (x_k - x̄)(y_k - ȳ) / Σ_k (x_k - x̄)², b = ȳ - a·x̄
where x and y denote the x-axis and y-axis coordinates of the vehicle track points; and/or
then judging whether the two vehicles have stopped, with the calculation:
u = (y_j - y_i) / (t_j - t_i)
where t_j and t_i denote the times at which images j and i are inferred, y_j and y_i denote the y-axis coordinates of the same vehicle's track in images j and i, and u denotes the travel speed of each vehicle; if the vehicle's speed is 0, it is judged to have stopped;
then judging whether the detection boxes of the two vehicles overlap, with the calculation:
iou = |A ∩ B| / |A ∪ B|
where iou denotes the ratio of the intersection to the union of the two vehicles' detection boxes, A and B denote the detection boxes of the two vehicles, ∩ denotes intersection and ∪ denotes union; if iou is greater than 0, the detection boxes of the two vehicles are judged to overlap;
finally, judging whether a person target is detected in the ROI region;
when the two vehicles have stopped, their tracks intersect, their detection boxes overlap and a person target is detected in the ROI region, a vehicle collision accident is judged to be currently occurring.
9. The intelligent collision recognition system for material transport vehicles on a production line according to claim 3, characterized in that the flow L1 of step 204 comprises:
assembling the parameters of the current vehicle collision into a message Info and sending it to the remote data center, Info being in JSON format with the structure:
Info = { "Event": "vehicle collision", "Time": "time the collision occurred", "Number": "camera number", "Address": "specific location where the collision occurred" }
sending an instruction to the memory and encoding the currently cached video with the current time node T1 as reference, the encoded span running from T1 - 15 seconds to T1 + 15 seconds;
and, once video encoding completes, pushing the encoded stream to the remote data center over the RTMP streaming protocol.
10. The intelligent collision recognition system for material transport vehicles on a production line as set forth in claim 1, characterized by further comprising:
monitoring, by the remote data center, the reasoning server; if a message Info sent by the edge reasoning server is received, parsing Info and displaying the parsed collision time and collision location on the display device interface;
and waiting for the data stream pushed by the reasoning server; if the stream is received, decoding it into a video stream and saving it locally, named by time + address.
CN202310990386.4A, filed 2023-08-08 (priority 2023-08-08): Intelligent collision recognition system for material transport vehicle on production line, published as CN117132872A, status Pending.

Priority Applications (1)

Application Number: CN202310990386.4A; Priority Date: 2023-08-08; Filing Date: 2023-08-08; Title: Intelligent collision recognition system for material transport vehicle on production line

Applications Claiming Priority (1)

Application Number: CN202310990386.4A; Priority Date: 2023-08-08; Filing Date: 2023-08-08; Title: Intelligent collision recognition system for material transport vehicle on production line

Publications (1)

Publication Number: CN117132872A; Publication Date: 2023-11-28

Family

ID=88851959

Family Applications (1)

Application Number: CN202310990386.4A; Publication: CN117132872A; Status: Pending; Title: Intelligent collision recognition system for material transport vehicle on production line

Country Status (1)

Country Link
CN (1) CN117132872A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117455918A (en) * 2023-12-25 2024-01-26 深圳市辉熙智能科技有限公司 Anti-blocking feeding method and system based on image analysis
CN117455918B (en) * 2023-12-25 2024-03-26 深圳市辉熙智能科技有限公司 Anti-blocking feeding method and system based on image analysis


Legal Events

Date Code Title Description
PB01 Publication