CN113705549B - Road rescue work node determination method and device and related equipment - Google Patents


Info

Publication number
CN113705549B
Authority
CN
China
Prior art keywords
rescue
node
target image
image
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111267060.6A
Other languages
Chinese (zh)
Other versions
CN113705549A (en)
Inventor
洪子梦
施媛媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Shanghai ICT Co Ltd
CM Intelligent Mobility Network Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Shanghai ICT Co Ltd
CM Intelligent Mobility Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Shanghai ICT Co Ltd, CM Intelligent Mobility Network Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202111267060.6A priority Critical patent/CN113705549B/en
Publication of CN113705549A publication Critical patent/CN113705549A/en
Application granted granted Critical
Publication of CN113705549B publication Critical patent/CN113705549B/en
Priority to PCT/CN2022/126059 priority patent/WO2023071874A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention provides a method and a device for determining road rescue work nodes and related equipment. The method for determining a road rescue work node comprises: acquiring images collected by a plurality of cameras, wherein the plurality of cameras are located at different positions on a rescue vehicle; and inputting the images into a scene classification recognition model and outputting a road rescue work node, wherein the scene classification recognition model is obtained by training on a plurality of pre-acquired image sets, and the road rescue work node comprises a node of leaving the rescue work station, a node of arriving at the high-speed rescue site, a node of starting rescue trailer work, a node of ending rescue work and leaving the rescue site, or a node of returning to the rescue work station. The accuracy of the output road rescue work node is thereby improved.

Description

Road rescue work node determination method and device and related equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a method and a device for determining road rescue work nodes and related equipment.
Background
With the continuous development of the automobile market, the incidence of road traffic accidents has risen, and the automobile road rescue industry has emerged in response. As the road rescue industry has been continuously optimized and developed, a series of specifications and requirements have been put forward for problems such as rescue personnel management and standard process control. However, road rescue work nodes are currently determined by sensors such as door sensors or power takeoffs, so the determination results carry large errors.
Disclosure of Invention
The embodiment of the invention provides a method and a device for determining road rescue work nodes and related equipment, and aims to solve the problem that the error of the determination result of the current road rescue work node is large.
In order to solve the problems, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for determining a road rescue work node, including:
acquiring images acquired by a plurality of cameras, wherein the plurality of cameras are positioned at different positions on the rescue vehicle;
inputting the images into a scene classification recognition model, and outputting road rescue work nodes, wherein the scene classification recognition model is obtained by training a plurality of image sets acquired in advance, and the road rescue work nodes comprise a node leaving a rescue work station, a node reaching a high-speed rescue site, a node starting rescue trailer work, a node ending rescue work and leaving the rescue site or a node returning to the rescue work station.
In a second aspect, an embodiment of the present invention further provides a device for determining a road rescue work node, including:
the device comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring images acquired by a plurality of cameras, and the cameras are positioned at different positions on the rescue vehicle;
and the output module is used for inputting the images into a scene classification recognition model and outputting road rescue work nodes, wherein the scene classification recognition model is obtained by training a plurality of pre-acquired image sets, and the road rescue work nodes comprise a node leaving a rescue work station, a node reaching a high-speed rescue site, a node starting a rescue trailer work, a node ending rescue work leaving the rescue site or a node returning to the rescue work station.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a transceiver, a memory, a processor, and a program stored on the memory and executable on the processor; the processor is configured to read the program in the memory to implement the steps of the method according to the first aspect.
In a fourth aspect, the embodiment of the present invention further provides a readable storage medium for storing a program, where the program, when executed by a processor, implements the steps in the method according to the foregoing first aspect.
In the embodiment of the invention, images acquired by a plurality of cameras are acquired, wherein the plurality of cameras are positioned at different positions on a rescue vehicle; inputting the images into a scene classification recognition model, and outputting road rescue work nodes, wherein the scene classification recognition model is obtained by training a plurality of image sets acquired in advance, and the road rescue work nodes comprise a node leaving a rescue work station, a node reaching a high-speed rescue site, a node starting rescue trailer work, a node ending rescue work and leaving the rescue site or a node returning to the rescue work station.
Therefore, a scene classification recognition model is obtained through training of a plurality of pre-obtained image sets, and then the images collected by a plurality of cameras are recognized by the scene classification recognition model, so that the road rescue work nodes are output, and the road rescue work nodes comprise a node leaving a rescue work station, a node arriving at a high-speed rescue site, a node starting rescue trailer work, a node ending rescue work leaving the rescue site or a node returning to the rescue work station, so that the error of the output result of the road rescue work nodes can be reduced, namely the accuracy of the output result of the road rescue work nodes is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of a method for determining a road rescue work node according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a road rescue work node determination device provided in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the embodiments of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Further, as used herein, "and/or" means at least one of the connected objects; for example, "A and/or B and/or C" covers seven cases: A alone, B alone, C alone, A and B, B and C, A and C, and A, B and C together.
Referring to fig. 1, fig. 1 is a flowchart of a method for determining a road rescue work node according to an embodiment of the present invention, as shown in fig. 1, including the following steps:
step 101, images acquired by a plurality of cameras are acquired, wherein the plurality of cameras are located at different positions on the rescue vehicle.
The number and installation positions of the cameras are not limited. For example, a camera may be installed on each of the chassis, the inner wall of the cab, the outer wall of the cab, and at least one side of the body of the rescue vehicle (the body may comprise a first side, a second side and a third side; the first side and the third side are opposite sides, one end of each being connected to the cab; the second side is connected to the other ends of the first side and the third side, that is, the second side and the cab are opposite sides). The cameras are used to collect images.
The camera can acquire images in real time, or at fixed intervals. In addition, the camera may acquire an image when a preset condition is met, where the preset condition may include at least one of the following: an acquisition instruction is received, the rescue vehicle is located in a target area (the target area includes a rescue work station stop point or an expressway), or the rescue vehicle is started.
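The "at least one of" capture trigger described above can be sketched as a simple predicate. This is a minimal illustrative sketch; the function and parameter names are not from the patent.

```python
def should_capture(instruction_received: bool,
                   in_target_area: bool,
                   vehicle_started: bool) -> bool:
    """Capture an image when at least one preset condition holds:
    an acquisition instruction was received, the rescue vehicle is in a
    target area (work-station stop point or expressway), or the vehicle
    has been started."""
    return instruction_received or in_target_area or vehicle_started
```

Because the conditions are combined with "at least one of," a single true condition suffices to trigger capture.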
And 102, inputting the images into a scene classification recognition model, and outputting road rescue work nodes, wherein the scene classification recognition model is obtained by training a plurality of pre-acquired image sets, and the road rescue work nodes comprise a node leaving a rescue work station, a node reaching a high-speed rescue field, a node starting a rescue trailer, a node ending the rescue work and leaving the rescue field or a node returning to the rescue work station.
Therefore, a scene classification recognition model is obtained through training of a plurality of pre-obtained image sets, and then the images collected by a plurality of cameras are recognized by the scene classification recognition model, so that the road rescue work nodes are output, and the road rescue work nodes comprise a node leaving a rescue work station, a node arriving at a high-speed rescue site, a node starting rescue trailer work, a node ending rescue work leaving the rescue site or a node returning to the rescue work station, so that the error of the output result of the road rescue work nodes can be reduced, namely the accuracy of the output result of the road rescue work nodes is improved.
It should be noted that the specific structure of the scene classification recognition model is not limited herein. For example: as an optional implementation manner, the scene classification recognition model is an EfficientNet convolutional neural network model. Because the speed and the precision of the EfficientNet convolutional neural network model are high, the training speed of the EfficientNet convolutional neural network model is high, the training time is shortened, the identification precision of the EfficientNet convolutional neural network model is high, and the accuracy of the determination result of the road rescue work node is improved.
The EfficientNet convolutional neural network model comprises network structures with various different parameters, such as: the EfficientNet convolutional neural network model comprises network structures with eight different parameters from EfficientNet-B0 to EfficientNet-B7, and the training speeds and the recognition accuracies of the network structures with the different parameters are different.
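For reference, the published compound-scaling parameters of the eight EfficientNet variants can be tabulated as below (width coefficient, depth coefficient, input resolution, dropout rate); the B2 row matches the parameters the embodiment cites later. The dictionary is illustrative reference data, not code from the patent.

```python
# Published compound-scaling parameters of the EfficientNet family:
# (width coefficient, depth coefficient, input resolution, dropout rate)
EFFICIENTNET_PARAMS = {
    "B0": (1.0, 1.0, 224, 0.2),
    "B1": (1.0, 1.1, 240, 0.2),
    "B2": (1.1, 1.2, 260, 0.3),
    "B3": (1.2, 1.4, 300, 0.3),
    "B4": (1.4, 1.8, 380, 0.4),
    "B5": (1.6, 2.2, 456, 0.4),
    "B6": (1.8, 2.6, 528, 0.5),
    "B7": (2.0, 3.1, 600, 0.5),
}

# The variant used in this embodiment:
width, depth, resolution, dropout = EFFICIENTNET_PARAMS["B2"]
```

Larger variants trade slower training and inference for higher recognition accuracy, which is why the embodiment settles on the mid-sized B2.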
It should be noted that the steps in this embodiment may be applied to a vehicle-mounted terminal on the rescue vehicle. The vehicle-mounted terminal may be referred to as an algorithm box for localized recognition on the rescue vehicle, and an offline scene classification recognizer of the scene classification recognition model may be deployed in the algorithm box. The network-side device may be a rescue monitoring platform, and the vehicle-mounted terminal and the rescue monitoring platform may communicate with each other through a network.
It should be noted that the steps of collecting the images, acquiring the images, outputting the road rescue work node, and the like may all be implemented on the vehicle-mounted terminal, that is, offline on the vehicle-mounted terminal, and the road rescue work node is uploaded to the network-side device only after it is output. Communication with the network-side device is therefore not required before the road rescue work node is output, which reduces the consumption of computing resources and the power consumption of each device.
As an optional implementation, before acquiring images acquired by a plurality of cameras, the method further includes:
acquiring the plurality of image sets by the plurality of cameras;
inputting the image sets into an initial model for iterative training to obtain the scene classification recognition model, wherein the initial model is an EfficientNet-B2 convolutional neural network model.
That is, the present embodiment adopts an EfficientNet-B2 convolutional neural network model, i.e., the model parameters are those of EfficientNet-B2: the width coefficient is 1.1, the depth coefficient is 1.2, the input image resolution is 260, and the dropout rate is 0.3.
In addition, the inference speed of the above EfficientNet-B2 convolutional neural network model may be less than 100 ms per frame. A threshold of 0.3 means that the recognition result of the road rescue work node for a frame of image is stored only when its probability is greater than 0.3 (i.e., 30%); a recognition result whose probability is less than or equal to 0.3 is not stored.
It should be noted that the recognition result with the highest probability among the candidate road rescue work nodes of each frame of image may be determined as the recognition result of that frame. For example, if one candidate road rescue work node of a certain frame has probability 0.5 and another has probability 0.7, where the node corresponding to 0.5 is the node of leaving the rescue work station stop point and the node corresponding to 0.7 is the node of arriving at the high-speed rescue site, then the road rescue work node of that frame is the node of arriving at the high-speed rescue site.
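The store-above-threshold rule and the highest-probability selection described above can be combined into one small function. This is a minimal sketch under the stated assumptions; the function name and node labels are illustrative.

```python
def select_node(probabilities: dict, threshold: float = 0.3):
    """Discard node hypotheses whose probability does not exceed the
    threshold (0.3 in the embodiment), then return the remaining node
    with the highest probability, or None if nothing survives."""
    kept = {node: p for node, p in probabilities.items() if p > threshold}
    if not kept:
        return None
    return max(kept, key=kept.get)

# Mirrors the 0.5-vs-0.7 example above: the 0.7 hypothesis wins.
node = select_node({
    "leaving rescue work station stop point": 0.5,
    "arriving at high-speed rescue site": 0.7,
})
```

A frame whose best hypothesis falls at or below 0.3 yields `None` and is simply not stored.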
It should be noted that the recognition result of the image collected by each camera needs to correspond to that camera's installation position; if it obviously does not, the recognition result is unreliable. For example, suppose a third image collected by the third camera appears to show the trailer structure in the second position. Since the third camera is located on the vehicle body, while the trailer structure can only be captured by the second camera located on the chassis, the recognition result can be judged unreliable in this case.
In the embodiment, the initial model is the EfficientNet-B2 convolutional neural network model, and the network model has a high convergence rate in the training process, so that the training speed is improved, and the time consumed by training is reduced. Meanwhile, a plurality of image sets are adopted to train the initial model, and the images in each image set can belong to the same road rescue working node, so that the training speed of the model can be further improved.
It should be noted that the initial model may also adopt a convolutional neural network model with other parameters.
As an alternative embodiment, the acquiring the plurality of image sets by the plurality of cameras includes:
acquiring a plurality of shot images through the plurality of cameras;
the plurality of captured images are classified into the plurality of image sets by an image classification technique.
The states of the components of the rescue vehicle shown in the images of each image set can belong to the same state. Since the EfficientNet-B2 convolutional neural network model has a good image classification effect, it can be regarded as providing an image classification technique, which is why the initial model adopts the EfficientNet-B2 convolutional neural network model.
Therefore, the EfficientNet-B2 convolutional neural network model can be used directly to classify the captured images into the plurality of image sets, without setting up a separate model for image classification. This further improves the training speed of the model and reduces the consumption of computing resources. At the same time, because the image classification technique performs feature operations on the whole image, the workload of the model in the sample processing stage is greatly reduced.
Of course, other models may be used to classify the captured images, and the models may be provided with image classification techniques.
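The grouping of captured images into per-label image sets can be sketched as follows. The `classify` argument stands in for whatever image-classification model is used (e.g. the EfficientNet-B2 classifier); it is a hypothetical stub here, not an API from the patent.

```python
from collections import defaultdict

def build_image_sets(images, classify):
    """Group captured images into training image sets by predicted
    label, so that each set holds images of one component state."""
    sets = defaultdict(list)
    for img in images:
        sets[classify(img)].append(img)
    return dict(sets)
```

Each resulting set can then be fed to the initial model as one road-rescue-work-node training category.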
As an optional implementation manner, the plurality of cameras include a first camera, a second camera and a third camera, the first camera is located inside a cab of the rescue vehicle, the second camera is located on a chassis of the rescue vehicle, and the third camera is located on a body of the rescue vehicle;
the inputting the image into a scene classification recognition model and outputting a road rescue work node comprises:
inputting at least one of a first image collected by the first camera, a second image collected by the second camera and a third image collected by the third camera into the scene classification recognition model, and outputting a road rescue work node.
The first camera may be located on the inner wall of the cab; the second camera may be located at the middle of the chassis, or at the end of the chassis away from the cab (i.e., the aforementioned second side); and the third camera may be located on at least one side of the body of the rescue vehicle, for example on at least one of the first side and the second side.
In this embodiment, at least one of the first image acquired by the first camera, the second image acquired by the second camera, and the third image acquired by the third camera may be input into the scene classification recognition model, so that the model can determine the road rescue work node by combining the first, second, and third images acquired from different positions, thereby improving the accuracy of the determination result of the road rescue work node and reducing its error.
Meanwhile, the images captured by the first camera, the second camera, and the third camera can completely and accurately reflect the current node of the rescue vehicle, so the determination result of the road rescue work node has high accuracy.
Of course, the camera may also be installed at other positions, and the scene classification recognition model may also determine the road rescue work node according to images acquired by the cameras at other positions, which is not described herein again specifically, and reference may be made to the relevant principle of the above embodiment.
As an optional implementation, the inputting at least one of the first image captured by the first camera, the second image captured by the second camera, and the third image captured by the third camera into the scene classification recognition model and outputting a road rescue work node includes:
inputting at least one of a first image collected by the first camera, a second image collected by the second camera and a third image collected by the third camera into the scene classification recognition model, and outputting a road rescue work node according to a target rule, wherein the number of the first image, the second image and the third image is multiple;
wherein the target rule comprises at least one of:
under the condition that the plurality of first images include at least some first target images and the speed of the rescue vehicle is greater than a first threshold, determining the road rescue work node as the node of leaving the rescue work station stop point, wherein the display content of the first target images includes a driver in the cab of the rescue vehicle;
determining a road rescue work node as the arrival high-speed rescue site node under the condition that the first images comprise the first target image and a second target image and the speed of the rescue vehicle is equal to a first threshold, wherein the display content of the second target image does not comprise a driver in a cab of the rescue vehicle, and the time corresponding to the first target image is positioned before the time corresponding to the second target image;
determining a road rescue work node as the starting rescue trailer work node under the condition that a third target image and a fourth target image are included in the plurality of second images and/or a fifth target image is included in the plurality of third images, wherein the display content of the third target image comprises that the trailer structure of the rescue vehicle is located at a first position, the display content of the fourth target image comprises that the trailer structure of the rescue vehicle is located at a second position, and the display content of the fifth target image comprises at least one of a rescued vehicle and a rescue worker;
under the condition that the plurality of second images comprise the third target image, the fourth target image and a sixth target image, determining that a road rescue work node is a node for finishing rescue work and leaving a rescue scene, wherein the display content of the fourth target image comprises that the time length of the trailer structure of the rescue vehicle at the second position is longer than the first time length, and the display content of the sixth target image comprises that a rescued vehicle is positioned on the rescue vehicle;
and determining a road rescue work node as the parking point node of the returned rescue work station under the condition that the plurality of first images comprise a seventh target image and an eighth target image, the plurality of third images comprise a ninth target image, and the speed of the rescue vehicle is equal to a first threshold value, wherein the display content of the seventh target image comprises a driver in a cab of the rescue vehicle, the display content of the eighth target image does not comprise the driver in the cab of the rescue vehicle, the time corresponding to the seventh target image is located before the time corresponding to the eighth target image, and the display content of the ninth target image comprises that no rescued vehicle is arranged on the rescue vehicle.
The first target image indicates the presence of a driver inside the cab; the first threshold may be 0 km/h; the second target image indicates the absence of a driver inside the cab; the third target image indicates that the trailer structure is in an initial position (i.e., the first position is the initial position); the fourth target image indicates that the trailer structure has left the initial position (i.e., the second position differs from the first position, indicating that the trailer structure is performing towing work); and the fifth target image includes at least one of a rescued vehicle and a rescue worker. Requiring the trailer structure to remain in the second position for longer than the first time length reduces the influence on the judgment result of phenomena such as the trailer structure briefly entering the second position due to road bumps, so it can be accurately determined that the trailer structure is really performing trailer work.
In the embodiment, because the judgment standards of different road rescue working nodes are different, the road rescue working nodes can be accurately and quickly determined according to the corresponding standards, and the accuracy and the determination speed of the determination result of the road rescue working nodes are improved.
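The target rules above amount to matching label sequences from each camera against per-node conditions. The sketch below covers the first three rules under stated assumptions; label strings, function names, and the sequence representation are all illustrative, not the patent's implementation.

```python
FIRST_THRESHOLD_KMH = 0  # the "first threshold" of the embodiment

def determine_node(speed, cab_seq, bottom_seq, side_seq):
    """cab_seq, bottom_seq, side_seq are time-ordered label sequences
    from the first (cab), second (chassis) and third (body) cameras."""
    # Rule 1: driver in cab and vehicle moving -> leaving the station.
    if "inside-person" in cab_seq and speed > FIRST_THRESHOLD_KMH:
        return "leave rescue work station stop point"
    # Rule 2: stationary, driver present then absent -> arrived at site.
    if (speed == FIRST_THRESHOLD_KMH
            and "inside-person" in cab_seq and "inside-no-person" in cab_seq
            and cab_seq.index("inside-person") < cab_seq.index("inside-no-person")):
        return "arrive at high-speed rescue site"
    # Rule 3: trailer structure seen in both positions (third and fourth
    # target images), or rescue scene visible from the side camera.
    if ({"trailer-first-position", "trailer-second-position"} <= set(bottom_seq)
            or "rescued-vehicle-or-worker" in side_seq):
        return "start rescue trailer work"
    return None
```

The remaining two nodes follow the same pattern with the sixth through ninth target images.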
As an optional implementation, the method further comprises: and sending the road rescue work node to network side equipment. Therefore, the network side equipment can determine the road rescue work node where each rescue vehicle is located, so that the rescue vehicles can be conveniently allocated, the management of the rescue vehicles is improved, and the allocation efficiency of the rescue vehicles is improved.
It should be noted that, while the road rescue work node is determined, a time node corresponding to the road rescue work node may be recorded, and the road rescue work node and the corresponding time node may be uploaded to the network side device at the same time. Therefore, the monitoring effect and the checking efficiency of each rescue vehicle are enhanced.
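Recording the time node alongside the determined work node and uploading both can be sketched as building a single payload. The field names and JSON encoding are assumptions for illustration; the patent does not specify a wire format.

```python
import json
import time

def report_node(work_node: str) -> str:
    """Record the time node corresponding to the determined road rescue
    work node and build the payload uploaded to the network-side device
    (the rescue monitoring platform)."""
    return json.dumps({
        "work_node": work_node,
        "time_node": int(time.time()),  # timestamp recorded at determination
    })
```

The platform can then reconstruct each vehicle's timeline of nodes for dispatch and auditing.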
The above embodiment is illustrated below by a specific example.
The judgment condition for the node of leaving the rescue work station stop point is as follows: after the vehicle ignition is started and the vehicle is powered on, the vehicle speed is greater than 0 km/h (i.e., the first threshold), and the in-vehicle camera (i.e., the first camera) detects "inside-person" information (i.e., the information of the first target image) in a plurality of consecutive frames; the conclusion is then given that the rescue vehicle has left the rescue work station stop point (i.e., is at the node of leaving the rescue work station stop point), and the time node is recorded.
The judgment condition for the node of arriving at the high-speed rescue site is as follows: the vehicle speed has returned to 0 km/h, and the picture information of the in-vehicle camera changes from the detection result "inside-person" (i.e., the information of the first target image) to "inside-no-person" (i.e., the information of the second target image); the conclusion is then given that the rescue vehicle has arrived at the high-speed rescue site node, and the time node is recorded.
The judgment condition for the node of starting rescue trailer work is as follows: the bottom camera (i.e., the second camera) detects "bottom-start job" (i.e., the information of the fourth target image); at the start of an actual rescue, the picture acquired by the bottom camera usually shows the trailer board or trailer arm (i.e., the trailer structure) of the rescue vehicle moving (i.e., both the third target image and the fourth target image are included, and whether the trailer structure is moving is determined from them). Alternatively, the right-side camera (i.e., the third camera) detects "right-side-start job"; in actual rescue work, the picture acquired by the right-side camera usually includes the rescued vehicle parked behind and rescue workers wearing work uniforms (i.e., the information of the fifth target image). The node of starting rescue trailer work is determined by stable detection of these two label categories over consecutive frames, and the time node is recorded.
The judgment condition for the node of ending rescue work and leaving the rescue site is as follows: the bottom camera detects "bottom-working" information and then "bottom-finishing" information in sequence. During the actual rescue, the working picture acquired by the bottom camera shows the trailer board or trailer arm of the rescue vehicle leaving its original fixed position (i.e., whether the rescued vehicle is on the rescue vehicle can be determined by combining the third target image and the fourth target image), and the finishing picture acquired by the bottom camera shows the rescued vehicle placed and fixed on the rescue vehicle (i.e., the display content of the sixth target image). The node of ending rescue work and leaving the rescue site is determined by stable detection of these two label categories, and the time node is recorded.
The judgment conditions for returning to the parking point node of the rescue work station are as follows: the recognition of the front 4 working nodes, the rescue vehicle speed is reduced to be still, the information detected by the offline recognizer of the AI box at the vehicle interior camera is changed from 'inside-occupied' (i.e. the information of the seventh target image) to 'inside-unoccupied' (i.e. the information of the eighth target image), and the right camera detects a 'right-blank' picture (i.e. the information of the ninth target image) for a plurality of consecutive frames. And giving a judgment conclusion that the rescue vehicle leaves the node of the parking point of the return rescue work station and recording the time node.
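The "stable detection over multiple consecutive frames" relied on by the judgment conditions above can be sketched as a small debouncing helper. This is an illustrative sketch only: the frame threshold and the label strings are assumptions for illustration, not values specified by the patent.

```python
from collections import deque


class StableLabelDetector:
    """Report a label only once it has been seen in n_frames consecutive frames."""

    def __init__(self, n_frames=5):
        self.n_frames = n_frames
        self.window = deque(maxlen=n_frames)  # sliding window of recent labels

    def update(self, label):
        """Feed one per-frame classification result; return the label when stable."""
        self.window.append(label)
        if len(self.window) == self.n_frames and len(set(self.window)) == 1:
            return self.window[0]
        return None


# A noisy per-frame stream only settles on "bottom-start job" after three
# identical consecutive frames.
detector = StableLabelDetector(n_frames=3)
stream = ["bottom-on job", "bottom-start job", "bottom-start job", "bottom-start job"]
stable = [detector.update(label) for label in stream]
# stable == [None, None, None, "bottom-start job"]
```

Once a stable label such as "bottom-start job" or "right side-start job" is returned, the corresponding work node can be recorded together with the current timestamp.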
It should be noted that the image sets can be described as follows:
The images of rescue work pictures collected by the camera at the bottom of the rescue vehicle (i.e. the second camera) are classified as: "bottom-start job", "bottom-on job", "bottom-end job", "bottom-vehicle suspension". The images collected by the camera on the right side of the rescue vehicle (i.e. the third camera) are classified as: "right side-empty plate", "right side-start job", "right side-end job". The images collected by the camera inside the rescue vehicle (i.e. the first camera) are classified as: "inside-occupied", "inside-unoccupied". Nine image categories are obtained in total. The initial model is trained on these image sets, so that the trained scene classification recognition model can quickly and accurately determine the current road rescue work node of the rescue vehicle by recognizing the states of the components of the rescue vehicle.
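The nine categories can be organized as a per-camera lookup table. The sketch below uses the translated English label strings from this description; the grouping and helper function are illustrative, not part of the patented method.

```python
# Nine image categories, grouped by the camera that produces them.
IMAGE_CATEGORIES = {
    "bottom": [  # second camera, on the chassis
        "bottom-start job",
        "bottom-on job",
        "bottom-end job",
        "bottom-vehicle suspension",
    ],
    "right": [  # third camera, on the vehicle body
        "right side-empty plate",
        "right side-start job",
        "right side-end job",
    ],
    "inside": [  # first camera, inside the cab
        "inside-occupied",
        "inside-unoccupied",
    ],
}


def camera_for_label(label):
    """Return the camera a predicted label belongs to (sanity check on model output)."""
    for camera, labels in IMAGE_CATEGORIES.items():
        if label in labels:
            return camera
    raise ValueError(f"unknown label: {label!r}")


# Nine categories in total, matching the description above.
assert sum(len(v) for v in IMAGE_CATEGORIES.values()) == 9
```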
Referring to fig. 2, fig. 2 is a structural diagram of a roadside assistance work node determination apparatus according to an embodiment of the present invention. As shown in fig. 2, the roadside assistance work node determination device 200 includes:
an obtaining module 201, configured to obtain images acquired by multiple cameras, where the multiple cameras are located at different positions on a rescue vehicle;
and the output module 202 is configured to input the images into a scene classification recognition model and output a road rescue work node, where the scene classification recognition model is obtained by training on a plurality of pre-acquired image sets, and the road rescue work node includes a node of leaving the parking point of the rescue work station, a node of arriving at the high-speed rescue site, a node of starting rescue trailer work, a node of ending rescue work and leaving the rescue site, or a node of returning to the parking point of the rescue work station.
Optionally, the plurality of cameras include a first camera located inside a cab of the rescue vehicle, a second camera located on a chassis of the rescue vehicle, and a third camera located on a body of the rescue vehicle;
the output module 202 is further configured to input at least one of the first image acquired by the first camera, the second image acquired by the second camera, and the third image acquired by the third camera into the scene classification recognition model, and output a road rescue work node.
Optionally, the output module 202 is further configured to input at least one of a first image acquired by the first camera, a second image acquired by the second camera, and a third image acquired by the third camera into the scene classification recognition model, and output a road rescue work node according to a target rule, where the number of the first image, the number of the second image, and the number of the third image are all multiple;
wherein the target rule comprises at least one of:
under the condition that at least some first target images are included in the plurality of first images and the speed of the rescue vehicle is greater than a first threshold, determining the road rescue work node as the node of leaving the parking point of the rescue work station, where the display content of the first target images includes a driver in the cab of the rescue vehicle;
determining the road rescue work node as the node of arriving at the high-speed rescue site under the condition that the plurality of first images include the first target image and a second target image and the speed of the rescue vehicle is equal to the first threshold, where the display content of the second target image does not include a driver in the cab of the rescue vehicle, and the time corresponding to the first target image is before the time corresponding to the second target image;
determining the road rescue work node as the node of starting rescue trailer work under the condition that a third target image and a fourth target image are included in the plurality of second images and/or a fifth target image is included in the plurality of third images, where the display content of the third target image shows the trailer structure of the rescue vehicle at a first position, the display content of the fourth target image shows the trailer structure of the rescue vehicle at a second position, and the display content of the fifth target image includes at least one of a rescued vehicle and a rescue worker;
under the condition that the plurality of second images include the third target image, the fourth target image and a sixth target image, determining the road rescue work node as the node of ending rescue work and leaving the rescue site, where the display content of the fourth target image shows that the trailer structure of the rescue vehicle stays at the second position for longer than a first time length, and the display content of the sixth target image shows a rescued vehicle located on the rescue vehicle;
and determining the road rescue work node as the node of returning to the parking point of the rescue work station under the condition that the plurality of first images include a seventh target image and an eighth target image, the plurality of third images include a ninth target image, and the speed of the rescue vehicle is equal to the first threshold, where the display content of the seventh target image includes a driver in the cab of the rescue vehicle, the display content of the eighth target image does not include the driver in the cab of the rescue vehicle, the time corresponding to the seventh target image is before the time corresponding to the eighth target image, and the display content of the ninth target image shows no rescued vehicle on the rescue vehicle.
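Taken together, the target rules above amount to a priority-ordered mapping from detected target images plus vehicle speed to a work node. A minimal sketch, assuming target images are encoded as string tags per camera (an encoding invented here for illustration; the patent does not prescribe one):

```python
def determine_work_node(first_imgs, second_imgs, third_imgs, speed, threshold=0.0):
    """Map detected target images plus vehicle speed to a road rescue work node.

    Each *_imgs argument is the set of target-image kinds detected in the
    corresponding camera's frames (first = in-cab, second = bottom,
    third = right-side). Rules are checked from most to least specific.
    """
    # Return to work-station parking point: occupied -> unoccupied cab,
    # empty trailer board, vehicle stationary.
    if ("target7" in first_imgs and "target8" in first_imgs
            and "target9" in third_imgs and speed == threshold):
        return "return to rescue work station parking point"
    # End of rescue work: trailer structure moved, rescued vehicle fixed on board.
    if {"target3", "target4", "target6"} <= second_imgs:
        return "end rescue work and leave rescue site"
    # Start of trailer work: trailer structure moving, or rescued vehicle/worker seen.
    if {"target3", "target4"} <= second_imgs or "target5" in third_imgs:
        return "start rescue trailer work"
    # Arrival at rescue site: driver present then absent, vehicle stationary.
    if {"target1", "target2"} <= first_imgs and speed == threshold:
        return "arrive at high-speed rescue site"
    # Departure from the station: driver present, vehicle moving.
    if "target1" in first_imgs and speed > threshold:
        return "leave rescue work station parking point"
    return None


node = determine_work_node({"target1"}, set(), set(), speed=10.0)
# node == "leave rescue work station parking point"
```

The ordering matters: the end-of-work rule is checked before the start-of-work rule because its image set is a strict superset of the latter's.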
Optionally, the scene classification recognition model is an EfficientNet convolutional neural network model.
Optionally, the device 200 for determining a road rescue work node further includes:
an acquisition module for acquiring the plurality of image sets by the plurality of cameras;
and the training module is used for inputting the image sets into an initial model for iterative training to obtain the scene classification recognition model, wherein the initial model is an EfficientNet-B2 convolutional neural network model.
Optionally, the acquisition module comprises:
the acquisition submodule is used for acquiring a plurality of shot images through the plurality of cameras;
a classification sub-module for classifying the plurality of captured images into the plurality of image sets by an image classification technique.
Optionally, the device 200 for determining a road rescue work node further includes:
and the sending module is used for sending the road rescue work node to network side equipment.
The road rescue work node determination apparatus 200 can implement each process of the method embodiment of fig. 1 and achieve the same beneficial effects; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides an electronic device. Referring to fig. 3, the electronic device may include a processor 301, a memory 302, and a program 3021 stored on the memory 302 and executable on the processor 301. When executed by the processor 301, the program 3021 can implement any step of the method embodiment shown in fig. 1 and achieve the same advantages; the description is not repeated here.
Those skilled in the art will appreciate that all or part of the steps of the method according to the above embodiments may be implemented by hardware associated with program instructions, and the program may be stored in a readable medium. An embodiment of the present invention further provides a readable storage medium, where a computer program is stored on the readable storage medium, and when the computer program is executed by a processor, any step in the method embodiment corresponding to fig. 1 may be implemented, and the same technical effect may be achieved, and in order to avoid repetition, details are not repeated here.
The storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A road rescue work node determination method is characterized by comprising the following steps:
acquiring images acquired by a plurality of cameras, wherein the plurality of cameras are positioned at different positions on the rescue vehicle;
inputting the images into a scene classification recognition model, and outputting a road rescue work node, wherein the scene classification recognition model is obtained by training on a plurality of pre-acquired image sets, and the road rescue work node comprises a node of leaving the parking point of a rescue work station, a node of arriving at a high-speed rescue site, a node of starting rescue trailer work, a node of ending rescue work and leaving the rescue site, or a node of returning to the parking point of the rescue work station;
the plurality of cameras comprise a first camera, a second camera and a third camera, the first camera is positioned inside a cab of the rescue vehicle, the second camera is positioned on a chassis of the rescue vehicle, and the third camera is positioned on a body of the rescue vehicle;
the inputting the image into a scene classification recognition model and outputting a road rescue work node comprises:
inputting at least one of a first image collected by the first camera, a second image collected by the second camera and a third image collected by the third camera into the scene classification recognition model, and outputting a road rescue work node;
the inputting at least one of the first image collected by the first camera, the second image collected by the second camera and the third image collected by the third camera into the scene classification recognition model and outputting a road rescue work node includes:
inputting at least one of a first image collected by the first camera, a second image collected by the second camera and a third image collected by the third camera into the scene classification recognition model, and outputting a road rescue work node according to a target rule, wherein the number of the first image, the second image and the third image is multiple;
wherein the target rule comprises:
and under the condition that at least some first target images are included in the plurality of first images and the speed of the rescue vehicle is greater than a first threshold, determining the road rescue work node as the node of leaving the parking point of the rescue work station, wherein the display content of the first target images comprises a driver in the cab of the rescue vehicle.
2. The method of claim 1, wherein the target rule further comprises at least one of:
determining the road rescue work node as the node of arriving at the high-speed rescue site under the condition that the plurality of first images comprise the first target image and a second target image and the speed of the rescue vehicle is equal to the first threshold, wherein the display content of the second target image does not comprise a driver in the cab of the rescue vehicle, and the time corresponding to the first target image is before the time corresponding to the second target image;
determining the road rescue work node as the node of starting rescue trailer work under the condition that a third target image and a fourth target image are included in the plurality of second images and/or a fifth target image is included in the plurality of third images, wherein the display content of the third target image shows the trailer structure of the rescue vehicle at a first position, the display content of the fourth target image shows the trailer structure of the rescue vehicle at a second position, and the display content of the fifth target image comprises at least one of a rescued vehicle and a rescue worker;
under the condition that the plurality of second images comprise the third target image, the fourth target image and a sixth target image, determining the road rescue work node as the node of ending rescue work and leaving the rescue site, wherein the display content of the fourth target image shows that the trailer structure of the rescue vehicle stays at the second position for longer than a first time length, and the display content of the sixth target image shows a rescued vehicle located on the rescue vehicle;
and determining the road rescue work node as the node of returning to the parking point of the rescue work station under the condition that the plurality of first images comprise a seventh target image and an eighth target image, the plurality of third images comprise a ninth target image, and the speed of the rescue vehicle is equal to the first threshold, wherein the display content of the seventh target image comprises a driver in the cab of the rescue vehicle, the display content of the eighth target image does not comprise the driver in the cab of the rescue vehicle, the time corresponding to the seventh target image is before the time corresponding to the eighth target image, and the display content of the ninth target image shows no rescued vehicle on the rescue vehicle.
3. The method of claim 1 or 2, wherein the scene classification recognition model is an EfficientNet convolutional neural network model.
4. The method of claim 3, wherein prior to acquiring the images captured by the plurality of cameras, the method further comprises:
acquiring the plurality of image sets by the plurality of cameras;
inputting the image sets into an initial model for iterative training to obtain the scene classification recognition model, wherein the initial model is an EfficientNet-B2 convolutional neural network model.
5. The method of claim 4, wherein said acquiring the plurality of image sets by the plurality of cameras comprises:
acquiring a plurality of shot images through the plurality of cameras;
the plurality of captured images are classified into the plurality of image sets by an image classification technique.
6. The method of claim 1, further comprising:
and sending the road rescue work node to network side equipment.
7. A road rescue work node determination device, comprising:
the device comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring images acquired by a plurality of cameras, and the cameras are positioned at different positions on the rescue vehicle;
the output module is configured to input the images into a scene classification recognition model and output a road rescue work node, wherein the scene classification recognition model is obtained by training on a plurality of pre-acquired image sets, and the road rescue work node comprises a node of leaving the parking point of the rescue work station, a node of arriving at the high-speed rescue site, a node of starting rescue trailer work, a node of ending rescue work and leaving the rescue site, or a node of returning to the parking point of the rescue work station;
the plurality of cameras comprise a first camera, a second camera and a third camera, the first camera is positioned inside a cab of the rescue vehicle, the second camera is positioned on a chassis of the rescue vehicle, and the third camera is positioned on a body of the rescue vehicle;
the output module is further configured to input at least one of a first image acquired by the first camera, a second image acquired by the second camera, and a third image acquired by the third camera into the scene classification recognition model, and output a road rescue work node;
the output module is further configured to input at least one of a first image acquired by the first camera, a second image acquired by the second camera, and a third image acquired by the third camera into the scene classification recognition model, and output a road rescue work node according to a target rule, where the number of the first image, the number of the second image, and the number of the third image are all multiple;
wherein the target rule comprises:
and under the condition that at least some first target images are included in the plurality of first images and the speed of the rescue vehicle is greater than a first threshold, determining the road rescue work node as the node of leaving the parking point of the rescue work station, wherein the display content of the first target images comprises a driver in the cab of the rescue vehicle.
8. The apparatus of claim 7, wherein the target rule further comprises at least one of:
determining the road rescue work node as the node of arriving at the high-speed rescue site under the condition that the plurality of first images comprise the first target image and a second target image and the speed of the rescue vehicle is equal to the first threshold, wherein the display content of the second target image does not comprise a driver in the cab of the rescue vehicle, and the time corresponding to the first target image is before the time corresponding to the second target image;
determining the road rescue work node as the node of starting rescue trailer work under the condition that a third target image and a fourth target image are included in the plurality of second images and/or a fifth target image is included in the plurality of third images, wherein the display content of the third target image shows the trailer structure of the rescue vehicle at a first position, the display content of the fourth target image shows the trailer structure of the rescue vehicle at a second position, and the display content of the fifth target image comprises at least one of a rescued vehicle and a rescue worker;
under the condition that the plurality of second images comprise the third target image, the fourth target image and a sixth target image, determining the road rescue work node as the node of ending rescue work and leaving the rescue site, wherein the display content of the fourth target image shows that the trailer structure of the rescue vehicle stays at the second position for longer than a first time length, and the display content of the sixth target image shows a rescued vehicle located on the rescue vehicle;
and determining the road rescue work node as the node of returning to the parking point of the rescue work station under the condition that the plurality of first images comprise a seventh target image and an eighth target image, the plurality of third images comprise a ninth target image, and the speed of the rescue vehicle is equal to the first threshold, wherein the display content of the seventh target image comprises a driver in the cab of the rescue vehicle, the display content of the eighth target image does not comprise the driver in the cab of the rescue vehicle, the time corresponding to the seventh target image is before the time corresponding to the eighth target image, and the display content of the ninth target image shows no rescued vehicle on the rescue vehicle.
9. An electronic device, comprising: a transceiver, a memory, a processor, and a program stored on the memory and executable on the processor; wherein the processor, when reading the program in the memory, implements the steps in the road rescue work node determination method according to any one of claims 1 to 6.
10. A readable storage medium storing a program, wherein the program, when executed by a processor, implements the steps in the road rescue work node determination method according to any one of claims 1 to 6.
CN202111267060.6A 2021-10-29 2021-10-29 Road rescue work node determination method and device and related equipment Active CN113705549B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111267060.6A CN113705549B (en) 2021-10-29 2021-10-29 Road rescue work node determination method and device and related equipment
PCT/CN2022/126059 WO2023071874A1 (en) 2021-10-29 2022-10-19 Roadside assistance working node determining method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111267060.6A CN113705549B (en) 2021-10-29 2021-10-29 Road rescue work node determination method and device and related equipment

Publications (2)

Publication Number Publication Date
CN113705549A CN113705549A (en) 2021-11-26
CN113705549B true CN113705549B (en) 2022-02-11

Family

ID=78647443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111267060.6A Active CN113705549B (en) 2021-10-29 2021-10-29 Road rescue work node determination method and device and related equipment

Country Status (2)

Country Link
CN (1) CN113705549B (en)
WO (1) WO2023071874A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705549B (en) * 2021-10-29 2022-02-11 中移(上海)信息通信科技有限公司 Road rescue work node determination method and device and related equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9053588B1 (en) * 2014-03-13 2015-06-09 Allstate Insurance Company Roadside assistance management
CN109334591A (en) * 2018-11-28 2019-02-15 奇瑞汽车股份有限公司 Control method, device and the storage medium of intelligent automobile
CN110249280A (en) * 2017-04-28 2019-09-17 深圳市元征科技股份有限公司 Automatic Pilot roadside assistance method, working truck and control centre
CN210093392U (en) * 2019-09-23 2020-02-18 东南大学 Heavy rescue vehicle operation process recording and remote monitoring system
CN111985449A (en) * 2020-09-03 2020-11-24 深圳壹账通智能科技有限公司 Rescue scene image identification method, device, equipment and computer medium
CN112818725A (en) * 2019-11-15 2021-05-18 中移智行网络科技有限公司 Rescue vehicle operation identification method and device, storage medium and computer equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10445817B2 (en) * 2017-10-16 2019-10-15 Allstate Insurance Company Geotagging location data
CN111523579B (en) * 2020-04-14 2022-05-03 燕山大学 Vehicle type recognition method and system based on improved deep learning
CN111563494B (en) * 2020-07-16 2020-10-27 平安国际智慧城市科技股份有限公司 Behavior identification method and device based on target detection and computer equipment
CN112381020A (en) * 2020-11-20 2021-02-19 深圳市银星智能科技股份有限公司 Video scene identification method and system and electronic equipment
CN113705549B (en) * 2021-10-29 2022-02-11 中移(上海)信息通信科技有限公司 Road rescue work node determination method and device and related equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Karuna C. G. et al.; "Agent based Assistance System with Ubiquitous Data Mining for Road Safety"; 2009 International Conference on Intelligent Agent & Multi-Agent Systems; 2009-09-01; pp. 1-2 *

Also Published As

Publication number Publication date
WO2023071874A1 (en) 2023-05-04
CN113705549A (en) 2021-11-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant