WO2023071874A1 - Roadside assistance working node determining method and apparatus, electronic device, and storage medium - Google Patents

Roadside assistance working node determining method and apparatus, electronic device, and storage medium Download PDF

Info

Publication number
WO2023071874A1
WO2023071874A1 · PCT/CN2022/126059 · CN2022126059W
Authority
WO
WIPO (PCT)
Prior art keywords
image
rescue
target image
node
vehicle
Prior art date
Application number
PCT/CN2022/126059
Other languages
French (fr)
Chinese (zh)
Inventor
洪子梦
施媛媛
Original Assignee
中移(上海)信息通信科技有限公司
中移智行网络科技有限公司
中国移动通信集团有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中移(上海)信息通信科技有限公司, 中移智行网络科技有限公司, 中国移动通信集团有限公司 filed Critical 中移(上海)信息通信科技有限公司
Publication of WO2023071874A1 publication Critical patent/WO2023071874A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Definitions

  • The embodiments of the present application relate to the field of communications technology, and in particular to a method and apparatus for determining a road rescue work node, an electronic device, and a storage medium.
  • Currently, the road rescue work node is determined by sensors such as door magnets or power take-offs, which leads to large errors in the determination result of the current road rescue work node.
  • Embodiments of the present application provide a method and device for determining a road rescue work node, an electronic device, and a storage medium, which can improve the accuracy of a determination result of a current road rescue work node.
  • the embodiment of the present application provides a method for determining road rescue work nodes, including:
  • acquiring images collected by multiple cameras, wherein the multiple cameras are located at different positions on the rescue vehicle; and inputting the images into a scene classification recognition model and outputting the road rescue work node, wherein the scene classification recognition model is obtained by training on a plurality of pre-acquired image sets.
  • the embodiment of the present application also provides a road rescue work node determination device, including:
  • An acquisition module configured to acquire images collected by multiple cameras, wherein the multiple cameras are located at different positions on the rescue vehicle;
  • An output module configured to input the images into the scene classification recognition model and output the road rescue work node, wherein the scene classification recognition model is obtained by training on a plurality of pre-acquired image sets.
  • The embodiment of the present application also provides an electronic device, including: a transceiver, a memory, a processor, and a program stored in the memory and executable on the processor; the processor is configured to read the program in the memory to implement the steps of the method described in the aforementioned first aspect.
  • the embodiment of the present application further provides a readable storage medium for storing a program, and when the program is executed by a processor, the steps in the method described in the aforementioned first aspect are implemented.
  • In the embodiments of the present application, images collected by multiple cameras located at different positions on the rescue vehicle are acquired, and the images are input into a scene classification recognition model that outputs the road rescue work node.
  • Because the scene classification recognition model is obtained by training on a plurality of pre-acquired image sets that include the various images collected by the cameras on the rescue vehicle, the output of the model is more accurate. This reduces the error in the output road rescue work node, that is, it improves the accuracy of the output result.
  • Fig. 1 is a flow chart of a method for determining a road rescue work node provided by an embodiment of the present application
  • FIG. 2 is an effect diagram of a method for determining a road rescue work node provided by an embodiment of the present application
  • Fig. 3 is a schematic structural diagram of a device for determining a road rescue work node provided by an embodiment of the present application
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Fig. 1 is a flow chart of a method for determining a road rescue work node provided by the embodiment of the present application, as shown in Fig. 1, including the following steps:
  • Step 101: Acquire images collected by multiple cameras.
  • the device for determining a road rescue work node acquires images collected by multiple cameras, where the multiple cameras are located at different positions on the rescue vehicle.
  • the device for determining the road rescue working node may be a terminal with corresponding processing functions on the rescue vehicle.
  • The number and installation positions of the cameras are not limited here. For example, cameras may be installed on at least one of: the chassis of the rescue vehicle, the inner wall of the cab, the outer wall of the cab, and the body of the rescue vehicle. The body may include a first side, a second side, and a third side, where the first side and the third side are opposite sides, one end of each being connected to the cab, and the second side is connected to the other ends of the first side and the third side; that is, the second side and the cab are opposite sides.
  • the camera can collect images in real time, and the camera can also collect images at regular intervals.
  • The camera may also collect images only when a preset condition is met, and the preset condition can include at least one of the following: a collection instruction is received, the rescue vehicle is located in a target area (the target area includes the station of the rescue work station or a highway), or the rescue vehicle is started.
  • Step 102: Input the image into the scene classification recognition model and output the road rescue work node, wherein the scene classification recognition model is obtained by training on a plurality of pre-acquired image sets.
  • The device for determining the road rescue work node inputs the image into the scene classification recognition model and outputs the road rescue work node.
  • The scene classification recognition model is obtained by training on the plurality of pre-acquired image sets, and is then used to recognize the images collected by the multiple cameras so as to output the road rescue work node. The road rescue work node includes: the node of leaving the station of the rescue work station, the node of arriving at the high-speed rescue site, the node of starting the rescue trailer work, the node of ending the rescue work and leaving the rescue site, and the node of returning to the station of the rescue work station. This reduces the error in the output road rescue work node, that is, it improves the accuracy of the output result.
  • the structure of the scene classification recognition model is not limited here.
  • The scene classification recognition model may include an EfficientNet convolutional neural network model. Because the EfficientNet model offers both high speed and high precision, training is faster and training time is shortened, and its higher recognition accuracy improves the accuracy of the rescue work node determination result.
  • The EfficientNet family includes a variety of network structures with different parameters, for example the eight structures EfficientNet-B0 to EfficientNet-B7, which differ in training speed and recognition accuracy.
  • Each step in this embodiment can be applied to the vehicle-mounted terminal on the rescue vehicle. The vehicle-mounted terminal can be called an algorithm box and performs localized recognition; the scene classification recognition model can be deployed in the algorithm box as an offline recognizer.
  • the network side device can be a rescue monitoring platform, and the vehicle terminal and the rescue monitoring platform can communicate through the network.
  • The above steps of collecting images, acquiring images, and outputting the road rescue work node can all be realized offline on the vehicle-mounted terminal; only the resulting road rescue work node is uploaded to the network-side device. There is thus no need to communicate with the network-side device before outputting the road rescue work node, which reduces the consumption of computing resources and the power consumption of each device.
  • Before acquiring the images collected by the multiple cameras, the method further includes: collecting multiple image sets through the multiple cameras, and inputting the image sets into an initial model for iterative training to obtain the scene classification recognition model.
  • the initial model is an EfficientNet-B2 convolutional neural network model.
  • This embodiment adopts the EfficientNet-B2 convolutional neural network model; its width coefficient is 1.1, its depth coefficient is 1.2, its input image resolution is 260x260, and its dropout rate is 0.3.
  • The inference speed of the above EfficientNet-B2 model can be less than 100 ms per frame. In this embodiment, the value 0.3 is also used as a confidence threshold: if the probability of the road rescue work node recognized for a frame is greater than 0.3 (i.e. 30%), the result is saved; results with a probability less than or equal to 0.3 are not saved.
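For orientation, the relation between the B2 coefficients above and the B0 baseline can be sketched with the compound-scaling rounding helpers used by common EfficientNet implementations. This is a minimal illustration, not this patent's code; the B0 and B7 rows are taken from the published EfficientNet family, while the B2 row matches the figures given in the text.

```python
import math

# (width coefficient, depth coefficient, input resolution, dropout) per variant.
EFFICIENTNET_PARAMS = {
    "b0": (1.0, 1.0, 224, 0.2),
    "b2": (1.1, 1.2, 260, 0.3),  # the variant used in this embodiment
    "b7": (2.0, 3.1, 600, 0.5),
}

def round_filters(filters, width_coeff, divisor=8):
    """Scale a baseline channel count by the width coefficient, snapped to a multiple of divisor."""
    filters *= width_coeff
    new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:  # never round down by more than 10%
        new_filters += divisor
    return int(new_filters)

def round_repeats(repeats, depth_coeff):
    """Scale a baseline block-repeat count by the depth coefficient."""
    return int(math.ceil(depth_coeff * repeats))

width, depth, resolution, dropout = EFFICIENTNET_PARAMS["b2"]
print(round_filters(32, width))  # -> 32 (B0's 32-channel stem, widened for B2)
print(round_repeats(4, depth))   # -> 5  (a 4-repeat B0 stage, deepened for B2)
```

Note how the 10%-guard keeps small channel counts unchanged at width 1.1, which is why B2's stem stays at 32 channels.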
  • The road rescue work node with the highest probability for a frame can be determined as the recognition result of that frame. For example, if one candidate for a frame has probability 0.5 and another has probability 0.7, where the 0.5 candidate is the node of leaving the station of the rescue work station and the 0.7 candidate is the node of arriving at the high-speed rescue site, then the road rescue work node of that frame is the node of arriving at the high-speed rescue site.
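Interpreting the per-frame selection rule described above (scores above 0.3 are kept, then the maximum wins), a minimal sketch might look like this; the node labels are illustrative stand-ins, not identifiers from the patent:

```python
def select_node(probs, keep_threshold=0.3):
    """Keep only per-node probabilities above the threshold, then take the arg-max.

    probs maps a work-node label to the model's score for one frame.
    Returns None when no score clears the threshold (the frame is discarded).
    """
    kept = {node: p for node, p in probs.items() if p > keep_threshold}
    if not kept:
        return None
    return max(kept, key=kept.get)

# The worked example from the text: candidates at 0.5 and 0.7 -> the 0.7 node wins.
frame_probs = {"leave_workstation": 0.5, "arrive_highway_site": 0.7}
print(select_node(frame_probs))  # -> arrive_highway_site
```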
  • The recognition result of the image collected by each camera needs to correspond to that camera's installation position; if it obviously does not, the result is unreliable. For example, if the third image collected by the third camera shows the trailer structure in the second position, then because the third camera is located on the vehicle body while the trailer structure can only be captured by the second camera located on the chassis, it can be judged that this recognition result is unreliable.
  • Because the initial model is the EfficientNet-B2 convolutional neural network model, the network converges faster during training, which improves training speed and reduces the time spent on training.
  • the images included in each image set can all belong to the same road rescue work node, so that the training speed of the model can be further improved.
  • the initial model may also adopt a convolutional neural network model with other parameters.
  • multiple image sets are collected through multiple cameras, including:
  • Multiple captured images are acquired through the multiple cameras.
  • a plurality of captured images are classified into a plurality of image sets by an image classification technique.
  • The state of the rescue vehicle's components in each image set can belong to the same state. Because the EfficientNet-B2 convolutional neural network model performs image classification well, the model can be considered to provide the image classification technology itself; since the initial model is the EfficientNet-B2 model, it can directly classify the captured images into the multiple image sets without setting up a separate classification model, which further improves the training speed and reduces the cost of computing resources.
  • image classification technology performs feature operations on the entire image, which can greatly reduce the workload of the model in the sample processing stage.
  • That is, the scene classification recognition model may itself include the image classification technology.
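The grouping step can be sketched independently of the classifier. Here the `classify` callback stands in for the model's prediction, and the file names and labels are purely illustrative:

```python
from collections import defaultdict

def build_image_sets(captured_images, classify):
    """Group captured frames into per-label image sets using a classifier callback."""
    image_sets = defaultdict(list)
    for image in captured_images:
        image_sets[classify(image)].append(image)
    return dict(image_sets)

# Toy stand-in classifier: label frames by a prefix in their (hypothetical) name.
frames = ["bottom_001.jpg", "bottom_002.jpg", "inside_001.jpg"]
sets = build_image_sets(frames, classify=lambda name: name.split("_")[0])
print(sorted(sets))         # -> ['bottom', 'inside']
print(len(sets["bottom"]))  # -> 2
```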
  • The plurality of cameras include a first camera, a second camera, and a third camera: the first camera is located inside the cab of the rescue vehicle, the second camera is located on the chassis of the rescue vehicle, and the third camera is located on the body of the rescue vehicle.
  • the image is input into the scene classification recognition model, and the road rescue work node is output, including:
  • At least one of the first image collected by the first camera, the second image collected by the second camera, and the third image collected by the third camera is input into the scene classification recognition model; the first image, the second image, and the third image are detected and analyzed, and the road rescue work node is output.
  • the number of the first image, the second image and the third image is multiple.
  • The first camera 101 can be located on the inner wall of the cab 104; the second camera 102 can be located in the middle of the chassis 105, or at the end of the chassis 105 away from the cab 104; and the third camera 103 can be located above the cab 104.
  • Because at least one of the first image, the second image, and the third image may be input into the scene classification recognition model, the model can combine images collected from different locations to determine the road rescue work node. This improves the accuracy of the determination result and reduces its error.
  • the accuracy of the determination result of the road rescue work node is relatively high.
  • the camera can also be installed in other locations, and the scene classification and recognition model can also determine the road rescue work node according to the images collected by the cameras in other locations, which will not be repeated here, but can refer to the relevant principles of the above-mentioned embodiment.
  • the first image, the second image and the third image are detected and analyzed, and the road rescue work node is output, including one of the following:
  • If the scene classification recognition model detects that a plurality of first images include the first target image, and the vehicle speed acquired in real time is greater than a first threshold, the node of leaving the station of the rescue work station is output. The display content of the first target image includes the first target object scene, and the vehicle speed is the speed of the rescue vehicle.
  • The first target object scene is a scene in which there is a driver in the cab of the rescue vehicle.
  • If the scene classification recognition model detects that the plurality of first images include the first target image and the second target image, and the vehicle speed acquired in real time is equal to the first threshold, the node of arriving at the high-speed rescue site is output. The display content of the second target image does not include the first target object scene, and the acquisition moment corresponding to the first target image is before that of the second target image.
  • If the scene classification recognition model detects at least one of: the plurality of second images including the third target image and the fourth target image, or the plurality of third images including the fifth target image, the node of starting the rescue trailer work is output. The display content of the third target image includes the trailer structure of the rescue vehicle in the first position, the display content of the fourth target image includes the trailer structure in the second position, and the display content of the fifth target image includes at least one of the vehicle to be rescued and the second target object; the second target object may include a rescue worker.
  • If the scene classification recognition model detects that the plurality of second images include the third target image, the fourth target image, and the sixth target image, the node of ending the rescue work and leaving the rescue site is output. The number of fourth target images is greater than a number threshold, and the display content of the sixth target image includes the rescued vehicle located on the rescue vehicle; when the number of fourth target images is greater than the number threshold, it indicates that the trailer structure has remained in the second position for longer than a first duration.
  • If the scene classification recognition model detects that the plurality of first images include the seventh target image and the eighth target image, that the plurality of third images include the ninth target image, and that the vehicle speed acquired in real time is equal to the first threshold, the node of returning to the station of the rescue work station is output. The display content of the seventh target image includes the first target object scene, the display content of the eighth target image does not include the first target object scene, and the acquisition moment corresponding to the seventh target image is before that of the eighth target image; the display content of the ninth target image shows that there is no rescued vehicle on the rescue vehicle.
  • In other words: the first target image indicates that there is a driver inside the cab; the value of the first threshold can be 0 km/h; the second target image indicates that there is no driver in the cab; the third target image indicates that the trailer structure is in its initial position (i.e. the first position is the initial position); the fourth target image indicates that the trailer structure has left the initial position (i.e. the second position differs from the first position, indicating that a trailer operation is in progress); and the fifth target image includes at least one of the rescued vehicle and a rescue worker. When the trailer structure remains in the second position for longer than the first duration, it can be accurately determined that the trailer structure is indeed performing towing work at that time.
  • In this way, the road rescue work nodes can be determined accurately and quickly according to the corresponding criteria, which improves both the accuracy and the speed of the determination result.
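Under the simplifying assumption that each camera stream has already been reduced to a chronological list of frame labels, the five judgment rules above can be sketched as one decision cascade. All label strings, thresholds, and node names here are illustrative stand-ins for the patent's "target images", not its identifiers:

```python
# Illustrative frame labels per camera (lists are in chronological order).
CAB_PERSON, CAB_EMPTY = "inside-person", "inside-unmanned"
CHASSIS_HOME, CHASSIS_MOVED, CHASSIS_LOADED = "bottom-home", "bottom-moved", "bottom-loaded"
BODY_SCENE, BODY_EMPTY = "right-start", "right-empty"

FIRST_THRESHOLD = 0   # km/h, the "first threshold" in the text
NUMBER_THRESHOLD = 5  # frames; stands in for the "number threshold"

def determine_node(cab, chassis, body, speed):
    """Map per-camera frame-label lists plus vehicle speed to a work node.

    Rules are checked from the last node back to the first so the most
    advanced matching stage wins; returns None when no rule fires.
    """
    driver_then_empty = (CAB_PERSON in cab and CAB_EMPTY in cab
                         and cab.index(CAB_PERSON) < cab.index(CAB_EMPTY))
    # Node 5: driver seen then cab empty, empty trailer board seen, vehicle stopped.
    if driver_then_empty and BODY_EMPTY in body and speed == FIRST_THRESHOLD:
        return "return_to_workstation"
    # Node 4: trailer stayed out of its home position long enough, loaded frame seen.
    if (CHASSIS_HOME in chassis and CHASSIS_LOADED in chassis
            and chassis.count(CHASSIS_MOVED) > NUMBER_THRESHOLD):
        return "end_rescue_leave_site"
    # Node 3: trailer starts moving, or rescued vehicle / workers seen on the body side.
    if (CHASSIS_HOME in chassis and CHASSIS_MOVED in chassis) or BODY_SCENE in body:
        return "start_towing"
    # Node 2: driver seen, then cab empty, while the vehicle is stationary.
    if driver_then_empty and speed == FIRST_THRESHOLD:
        return "arrive_highway_site"
    # Node 1: driver in the cab and the vehicle moving.
    if CAB_PERSON in cab and speed > FIRST_THRESHOLD:
        return "leave_workstation"
    return None

print(determine_node([CAB_PERSON] * 3, [], [], speed=40))       # -> leave_workstation
print(determine_node([CAB_PERSON, CAB_EMPTY], [], [], speed=0))  # -> arrive_highway_site
```

A real deployment would also need the multi-frame stability checks and per-camera plausibility checks described in the text; this cascade only illustrates how the five conditions order themselves.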
  • the method further includes: sending the road rescue working node to the network side device.
  • the network side device can determine the road rescue work node where each rescue vehicle is located, so as to facilitate the deployment of rescue vehicles, so as to improve the management of rescue vehicles and the deployment efficiency of rescue vehicles.
  • The time node corresponding to each road rescue work node can also be recorded, and the work node and its time node can be uploaded to the network-side device together, thereby enhancing the monitoring effect and verification efficiency for each rescue vehicle.
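The upload of a work node together with its time node might be serialized as below. The field names and the JSON format are assumptions for illustration only; the text specifies just that both values are sent to the network-side device together.

```python
import json
import time

def node_report(work_node, time_node=None):
    """Build the payload uploaded to the network-side monitoring platform (field names assumed)."""
    if time_node is None:
        time_node = int(time.time())  # record the moment the node was recognized
    return json.dumps({"work_node": work_node, "time_node": time_node})

print(node_report("leave_workstation", time_node=1700000000))
# -> {"work_node": "leave_workstation", "time_node": 1700000000}
```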
  • The judgment conditions for the node of leaving the station of the rescue work station are: the vehicle has been ignited and powered on, the vehicle speed is greater than 0 km/h (i.e. the first threshold), and the in-cab camera (i.e. the first camera) detects "inside - person" information (i.e. the information of the first target image) for multiple consecutive frames. The conclusion that the rescue vehicle has left the station of the rescue work station is then given and the time node is recorded.
  • The judgment conditions for the node of arriving at the high-speed rescue site are: the vehicle speed is at 0 km/h, and the picture information from the in-cab camera changes from the detection result "inside - person" (i.e. the information of the first target image) to "inside - unmanned" (i.e. the information of the second target image). The conclusion that the rescue vehicle has arrived at the high-speed rescue site is then given and the time node is recorded.
  • The judgment conditions for the node of starting the rescue trailer work are: the bottom camera (i.e. the second camera) detects "bottom - start operation" (i.e. the information of the fourth target image); when the actual rescue starts, the picture acquired by the bottom camera usually shows that the trailer board or trailer arm (i.e. the trailer structure) of the rescue vehicle is moving (that is, it includes the third target image and the fourth target image, from which it is determined whether the trailer structure is moving). Alternatively, the right camera (i.e. the third camera) detects "right - start operation"; the picture acquired by the right camera usually includes the rescued vehicle parked behind and rescue workers wearing work uniforms (i.e. the information of the fifth target image). Stable detection of these two label categories over multiple consecutive frames determines the node of starting the rescue trailer work, and the time node is recorded.
  • The judgment conditions for the node of ending the rescue work and leaving the rescue site are: the bottom camera successively detects "bottom - working" and "bottom - end operation" information. The working picture acquired by the bottom camera usually shows that the trailer board or trailer arm of the rescue vehicle has left its original fixed scene (i.e. this can be determined by combining the third target image and the fourth target image), and the end-operation picture shows the rescued vehicle fixed on the rescue vehicle (i.e. the display content of the sixth target image). Stable detection of these two label categories determines the node of ending the rescue work and leaving the rescue site, and the time node is recorded.
  • The judgment conditions for the node of returning to the station of the rescue work station are: the previous four work nodes have been recognized, the speed of the rescue vehicle has dropped to a standstill, the offline recognizer in the artificial intelligence (AI) box detects that the in-cab camera information changes from "inside - person" (i.e. the information of the seventh target image) to "inside - unmanned" (i.e. the information of the eighth target image), and the right camera detects a "right - empty board" picture (i.e. the information of the ninth target image).
  • The image sets can be described as follows:
  • the images of the rescue work images collected by the camera at the bottom of the rescue vehicle are classified into: “bottom-start operation”, “bottom-working”, “bottom-end operation”, “bottom-vehicle suspension”.
  • the images of the rescue work images collected by the camera on the right side of the rescue vehicle are classified into: “right side - empty board”, “right side - start operation”, “right side - end operation”.
  • the rescue work images collected by the internal camera of the rescue vehicle (i.e., the first camera) are classified into: "inside - person", "inside - unmanned".
  • a total of 9 categories of image sets were sorted out.
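Collecting the nine categories above into one mapping (the two cab-camera labels are taken from the judgment conditions earlier, since the sentence listing them is truncated in the source):

```python
# Nine image-set categories, keyed by camera position.
IMAGE_SET_LABELS = {
    "bottom": ["bottom - start operation", "bottom - working",
               "bottom - end operation", "bottom - vehicle suspension"],
    "right":  ["right - empty board", "right - start operation", "right - end operation"],
    "inside": ["inside - person", "inside - unmanned"],
}

total = sum(len(labels) for labels in IMAGE_SET_LABELS.values())
print(total)  # -> 9, matching the nine categories sorted out in the text
```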
  • the initial model is trained through the above image set, so that the final trained scene classification recognition model can quickly and accurately determine the road rescue work node where the rescue vehicle is currently located by identifying the state of each component of the rescue vehicle.
  • FIG. 3 is one of the structural diagrams of a device for determining a road rescue work node provided by an embodiment of the present application.
  • the road rescue work node determining device 200 includes:
  • The obtaining module 201 is configured to obtain images collected by multiple cameras, wherein the multiple cameras are located at different positions on the rescue vehicle;
  • the output module 202 is configured to input the image into the scene classification and recognition model, and output the road rescue work node, wherein the scene classification and recognition model is trained through a plurality of pre-acquired image sets.
  • the plurality of cameras include a first camera, a second camera and a third camera, the first camera is located inside the cab of the rescue vehicle, the second camera is located on the chassis of the rescue vehicle, and the third camera is located on the on the body;
  • The output module 202 is configured to input at least one of the first image collected by the first camera, the second image collected by the second camera, and the third image collected by the third camera into the scene classification recognition model, and to detect and analyze the first image, the second image, and the third image and output the road rescue work node; the number of first images, second images, and third images is multiple.
  • the road rescue work node includes: a node leaving the rescue work station station, a node arriving at the high-speed rescue site, a node starting the rescue trailer work, a node leaving the rescue site after the rescue work, or a node returning to the station station of the rescue work station.
  • The output module 202 is configured to output the node of leaving the station of the rescue work station if the scene classification recognition model detects that the plurality of first images include the first target image and the vehicle speed acquired in real time is greater than the first threshold, wherein the display content of the first target image includes the first target object scene;
  • The output module 202 is configured to output the node of arriving at the high-speed rescue site if the scene classification recognition model detects that the plurality of first images include the first target image and the second target image, and the vehicle speed acquired in real time is equal to the first threshold, wherein the display content of the second target image does not include the first target object scene, and the acquisition time corresponding to the first target image is before that of the second target image;
  • The output module 202 is configured to output the node of starting the rescue trailer work if the scene classification recognition model detects at least one of: the plurality of second images including the third target image and the fourth target image, or the plurality of third images including the fifth target image, wherein the display content of the third target image includes the trailer structure of the rescue vehicle in the first position, the display content of the fourth target image includes the trailer structure in the second position, and the display content of the fifth target image includes at least one of the rescued vehicle and the second target object;
  • The output module 202 is configured to output the node of ending the rescue work and leaving the rescue site if the scene classification recognition model detects that the plurality of second images include the third target image, the fourth target image, and the sixth target image, wherein the number of fourth target images is greater than the number threshold, and the display content of the sixth target image includes the rescued vehicle located on the rescue vehicle;
  • The output module 202 is configured to output the node of returning to the station of the rescue work station if the scene classification recognition model detects that the plurality of first images include the seventh target image and the eighth target image, that the plurality of third images include the ninth target image, and that the vehicle speed acquired in real time is equal to the first threshold, wherein the display content of the seventh target image includes the first target object scene, the display content of the eighth target image does not include the first target object scene, the acquisition time corresponding to the seventh target image is before that of the eighth target image, and the display content of the ninth target image shows that there is no rescued vehicle on the rescue vehicle.
  • the scene classification recognition model includes: an EfficientNet convolutional neural network model.
  • the road rescue work node determination device 200 also includes:
  • a collection module configured to collect multiple image sets through multiple cameras
  • the training module is configured to input multiple image sets into the initial model for iterative training to obtain a scene classification and recognition model, wherein the initial model is an EfficientNet-B2 convolutional neural network model.
  • the acquisition module includes:
  • the collection sub-module is configured to collect multiple captured images through multiple cameras
  • the classification sub-module is configured to classify the multiple captured images into multiple image sets through image classification technology.
  • the road rescue work node determination device 200 also includes:
  • the sending module is configured to send the road rescue work node to the network side device.
  • the road rescue work node determination device 200 can realize the various processes of the method embodiment in FIG. 1 in the embodiment of the present application, and achieve the same beneficial effect. In order to avoid repetition, details are not repeated here.
  • the embodiment of the present application also provides an electronic device.
  • the electronic device may include a processor 301 , a memory 302 and a program 3021 stored in the memory 302 and executable on the processor 301 .
  • when the program 3021 is executed by the processor 301, any step in the method embodiment corresponding to FIG. 1 can be implemented and the same beneficial effects can be achieved, so details are not repeated here.
  • the storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
  • the images collected by multiple cameras are acquired, wherein the multiple cameras are located at different positions on the rescue vehicle; the images are input into the scene classification and recognition model, and the road rescue work node is output, wherein the scene classification and recognition model is obtained by training on a plurality of pre-acquired image sets.
  • because the scene classification and recognition model is trained on pre-acquired image sets that include the various images collected by the cameras on the rescue vehicle, the output of the model is more accurate, which reduces the error of the road rescue work node output, that is, improves the accuracy of the road rescue work node output.

Abstract

The present invention provides a roadside assistance working node determining method and apparatus, an electronic device, and a storage medium. The roadside assistance working node determining method comprises: obtaining images collected by a plurality of cameras, wherein the plurality of cameras are located at different positions on a rescue vehicle; and inputting the images into a scene classification and identification model, and outputting a roadside assistance working node. Therefore, the accuracy of an output result of a roadside assistance working node is improved.

Description

Road Rescue Work Node Determination Method and Apparatus, Electronic Device, and Storage Medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, Chinese patent application No. 202111267060.6 filed on October 29, 2021, the entire content of which is incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of communication technologies, and in particular to a road rescue work node determination method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of the automobile market, the incidence of road traffic accidents has risen accordingly, and the automobile road rescue industry has emerged to meet this need. As the road rescue industry has been optimized and developed, a series of norms and requirements have appeared for the management of road rescue personnel, standard process control, and related issues. However, road rescue work nodes are currently determined through sensors such as door magnets or power take-offs, which leads to large errors in the determination results.
Summary
The embodiments of the present application provide a road rescue work node determination method and apparatus, an electronic device, and a storage medium, which can improve the accuracy of the determination result of the current road rescue work node.
To solve the above problems, the present application is implemented as follows.
An embodiment of the present application provides a road rescue work node determination method, including:
acquiring images collected by multiple cameras, wherein the multiple cameras are located at different positions on a rescue vehicle; and
inputting the images into a scene classification and recognition model, and outputting a road rescue work node, wherein the scene classification and recognition model is obtained by training on a plurality of pre-acquired image sets.
An embodiment of the present application further provides a road rescue work node determination apparatus, including:
an acquisition module configured to acquire images collected by multiple cameras, wherein the multiple cameras are located at different positions on a rescue vehicle; and
an output module configured to input the images into a scene classification and recognition model and output a road rescue work node, wherein the scene classification and recognition model is obtained by training on a plurality of pre-acquired image sets.
An embodiment of the present application further provides an electronic device, including: a transceiver, a memory, a processor, and a program stored on the memory and executable on the processor; the processor is configured to read the program in the memory to implement the steps of the method according to the aforementioned first aspect.
An embodiment of the present application further provides a readable storage medium for storing a program, and when the program is executed by a processor, the steps of the method according to the aforementioned first aspect are implemented.
In the embodiments of the present application, images collected by multiple cameras are acquired, wherein the multiple cameras are located at different positions on a rescue vehicle; the images are input into a scene classification and recognition model, and a road rescue work node is output, wherein the scene classification and recognition model is obtained by training on a plurality of pre-acquired image sets. In this way, the scene classification and recognition model is trained on pre-acquired image sets and then used to recognize the images collected by the multiple cameras so as to output the road rescue work node. Because the image sets include the various images collected by the cameras on the rescue vehicle, the output of the scene classification and recognition model is more accurate, which reduces the error of the road rescue work node output, that is, improves the accuracy of the road rescue work node output.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a road rescue work node determination method provided by an embodiment of the present application;
FIG. 2 is an effect diagram of a road rescue work node determination method provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a road rescue work node determination apparatus provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The terms "first", "second", and the like in the embodiments of the present application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. Furthermore, the terms "including" and "having", as well as any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the expressly listed steps or units, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device. In addition, "and/or" in this application denotes at least one of the connected objects; for example, "A and/or B and/or C" covers the seven cases of A alone, B alone, C alone, both A and B, both B and C, both A and C, and all of A, B, and C.
Referring to FIG. 1, FIG. 1 is a flowchart of a road rescue work node determination method provided by an embodiment of the present application. As shown in FIG. 1, the method includes the following steps.
Step 101: acquire images collected by multiple cameras.
In this embodiment of the present application, the road rescue work node determination apparatus acquires images collected by multiple cameras, where the multiple cameras are located at different positions on a rescue vehicle.
The road rescue work node determination apparatus may be a terminal with corresponding processing functions on the rescue vehicle. The number and installation positions of the cameras are not limited here. For example, cameras may be installed on the chassis of the rescue vehicle, on the inner and outer walls of the cab, and on at least one side of the vehicle body (the body may include a first side, a second side, and a third side, where the first side and the third side are opposite sides, one end of the first side and one end of the third side are each connected to the cab, and the second side is connected to the other end of the first side and the other end of the third side, so that the second side is opposite the cab); all of these cameras may be used to collect images.
The cameras may collect images in real time, or only at fixed intervals. Alternatively, a camera may collect images only when preset conditions are met, and the preset conditions may include at least one of the following: a collection instruction is received; the rescue vehicle is located in a target area (the target area includes a rescue station or a highway); the rescue vehicle is in a started state.
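The capture-gating rule above ("at least one of the preset conditions holds") can be sketched as a simple predicate. This is an illustrative assumption for clarity only; the function and parameter names are not from the application:

```python
def should_capture(received_capture_command: bool,
                   in_target_area: bool,
                   vehicle_started: bool) -> bool:
    """Capture an image when at least one preset condition is met:
    a collection instruction was received, the rescue vehicle is in the
    target area (rescue station or highway), or the vehicle is started."""
    return received_capture_command or in_target_area or vehicle_started
```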
Step 102: input the images into a scene classification and recognition model and output a road rescue work node, where the scene classification and recognition model is obtained by training on a plurality of pre-acquired image sets.
In this embodiment of the present application, the apparatus inputs the images into the scene classification and recognition model and outputs the road rescue work node.
In this embodiment, the scene classification and recognition model is trained on pre-acquired image sets and then used to recognize the images collected by the multiple cameras so as to output the road rescue work node. The road rescue work node includes the node of leaving the rescue station, the node of arriving at the highway rescue scene, the node of starting the rescue towing work, the node of ending the rescue work and leaving the rescue scene, or the node of returning to the rescue station. This reduces the error of the road rescue work node output, that is, improves the accuracy of the road rescue work node output.
It should be noted that the structure of the scene classification and recognition model is not limited here. For example, the scene classification and recognition model may include an EfficientNet convolutional neural network model. Because EfficientNet models offer both high speed and high accuracy, the model trains quickly, which shortens the training time, and its high recognition accuracy improves the accuracy of the road rescue work node determination results.
The EfficientNet family includes network structures with different parameters; for example, it includes eight variants, EfficientNet-B0 through EfficientNet-B7, and the variants differ in training speed and recognition accuracy.
It should be noted that the steps in this embodiment may be applied to a vehicle-mounted terminal on the rescue vehicle. The vehicle-mounted terminal may be regarded as an algorithm box used by the rescue vehicle for localized recognition, and an offline scene classification recognizer carrying the scene classification and recognition model may be deployed in the algorithm box. The network side device may be a rescue monitoring platform, and the vehicle-mounted terminal and the rescue monitoring platform may communicate over a network.
It should be noted that the steps of collecting images, acquiring images, and outputting the road rescue work node can all be performed on the vehicle-mounted terminal, that is, offline on the vehicle-mounted terminal; only after the road rescue work node is output is it uploaded to the network side device. In this way, no communication with the network side device is needed before the road rescue work node is output, which reduces the consumption of computing resources and the power consumption of each device.
In this embodiment of the present application, before acquiring the images collected by the multiple cameras, the method further includes:
collecting multiple image sets through the multiple cameras; and
inputting the multiple image sets into an initial model for iterative training to obtain the scene classification and recognition model, where the initial model is an EfficientNet-B2 convolutional neural network model. That is, this embodiment adopts the EfficientNet-B2 variant, whose width coefficient is 1.1, depth coefficient is 1.2, input image resolution is 260×260, and dropout rate is 0.3.
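For reference, EfficientNet derives the B2 variant from the B0 baseline by compound scaling with the coefficients stated above (width 1.1, depth 1.2). A minimal sketch of the standard scaling rules, assuming the usual rounding conventions of the EfficientNet family:

```python
import math


def round_filters(filters: int, width_coeff: float, divisor: int = 8) -> int:
    """Scale a channel count by the width coefficient, rounding to a
    multiple of `divisor` and never rounding down by more than 10%
    (the standard EfficientNet rule)."""
    scaled = filters * width_coeff
    new_filters = max(divisor, int(scaled + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * scaled:
        new_filters += divisor
    return int(new_filters)


def round_repeats(repeats: int, depth_coeff: float) -> int:
    """Scale a block's repeat count by the depth coefficient, rounding up."""
    return int(math.ceil(depth_coeff * repeats))
```

For example, with B2's coefficients a 32-channel B0 stem stays at 32 channels, while a block repeated twice in B0 is repeated three times in B2.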
In addition, the inference speed of the above EfficientNet-B2 convolutional neural network model may be less than 100 ms per frame. A dropout rate of 0.3 here means that the model's road rescue work node recognition result for a frame is saved only when its probability is greater than 0.3 (that is, 30%); results with a probability less than or equal to 0.3 are not saved.
It should be noted that the class with the highest probability among a frame's road rescue work node recognition results may be determined as the recognition result for that frame. For example, if one probability for a frame is 0.5 and another is 0.7, where 0.5 corresponds to the node of leaving the rescue station and 0.7 corresponds to the node of arriving at the highway rescue scene, then the road rescue work node for that frame is the node of arriving at the highway rescue scene.
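The per-frame decision described above (discard results at or below the 0.3 cutoff, otherwise keep the most likely class) can be sketched as follows. The helper and class names are illustrative assumptions, not terminology from the application:

```python
def classify_frame(class_probs: dict, threshold: float = 0.3):
    """Return the most likely road rescue work node for one frame,
    or None when no class probability exceeds the save threshold."""
    node, prob = max(class_probs.items(), key=lambda kv: kv[1])
    return node if prob > threshold else None
```

With the example from the text, `classify_frame({"leave_station": 0.5, "arrive_scene": 0.7})` yields `"arrive_scene"`.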
It should be noted that the recognition result for each camera's images needs to correspond to that camera's installation position; if a recognition result obviously does not correspond to the installation position, the result is unreliable. For example, if a third image collected by the third camera shows the trailer structure in the second position, the result can be judged unreliable, because the third camera is located on the vehicle body while the trailer structure can only be captured by the second camera on the chassis.
In this implementation, because the initial model is an EfficientNet-B2 convolutional neural network model, which converges quickly during training, the training speed is improved and the training time is reduced. Moreover, because the initial model is trained on multiple image sets and the images in each image set can all belong to the same road rescue work node, the training speed can be further improved.
It should be noted that the initial model may also adopt a convolutional neural network model with other parameters.
As an optional implementation, collecting multiple image sets through the multiple cameras includes:
collecting multiple captured images through the multiple cameras; and
classifying the multiple captured images into multiple image sets through an image classification technique.
The states of the rescue vehicle's components shown in the images of each image set may all belong to the same state. Because the EfficientNet-B2 convolutional neural network model performs well at image classification, the model can be regarded as carrying the image classification technique itself, and the initial model adopts the EfficientNet-B2 convolutional neural network model.
In this way, the EfficientNet-B2 convolutional neural network model can classify the images directly to obtain the multiple image sets, without setting up a separate model to classify the captured images, which further improves the training speed and reduces the consumption of computing resources. At the same time, the image classification technique computes features over the whole image, which greatly reduces the model's workload in the sample processing stage.
Of course, other models carrying an image classification technique may also be used to classify the captured images.
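Conceptually, the classification sub-step partitions the captured images into one image set per class. A minimal sketch, where `label_fn` is a stand-in assumption for whatever classifier (e.g. the EfficientNet-B2 model) assigns a label to each image:

```python
from collections import defaultdict


def build_image_sets(captured_images, label_fn):
    """Group captured images into image sets, one set per class label
    produced by `label_fn`."""
    image_sets = defaultdict(list)
    for image in captured_images:
        image_sets[label_fn(image)].append(image)
    return dict(image_sets)
```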
In this embodiment of the present application, the multiple cameras include a first camera, a second camera, and a third camera; the first camera is located inside the cab of the rescue vehicle, the second camera is located on the chassis of the rescue vehicle, and the third camera is located on the vehicle body.
In this embodiment of the present application, inputting the images into the scene classification and recognition model and outputting the road rescue work node includes:
inputting at least one of a first image collected by the first camera, a second image collected by the second camera, and a third image collected by the third camera into the scene classification and recognition model, detecting and analyzing the first image, the second image, and the third image, and outputting the road rescue work node. There are multiple first images, multiple second images, and multiple third images.
For example, with reference to FIG. 2, the first camera 101 may be located on the inner wall of the cab 104, the second camera 102 may be located in the middle of the chassis 105 or at the end of the chassis 105 away from the cab 104, and the third camera 103 may be located above the cab 104.
In this implementation, at least one of the first image, the second image, and the third image may be input into the scene classification and recognition model, so that the model can combine images collected from different positions to determine the road rescue work node, thereby improving the accuracy of the determination result and reducing its error.
Meanwhile, because the images captured by the first, second, and third cameras can reflect the rescue vehicle's current node comprehensively and accurately, the accuracy of the determination result of the road rescue work node is high.
Of course, the cameras may also be installed at other positions, and the scene classification and recognition model may also determine the road rescue work node from images collected by cameras at other positions; details are not repeated here, and reference may be made to the principles of the foregoing implementation.
In this embodiment of the present application, detecting and analyzing the first image, the second image, and the third image and outputting the road rescue work node includes one of the following:
if the scene classification and recognition model detects that the multiple first images include some first target images and the vehicle speed acquired in real time is greater than a first threshold, outputting the node of leaving the rescue station, where the display content of a first target image includes a first target object scene, the vehicle speed is the speed of the rescue vehicle, and the first target object scene is a scene image in which there is a driver in the cab of the rescue vehicle;
if the scene classification and recognition model detects that the multiple first images include a first target image and a second target image and the vehicle speed acquired in real time is equal to the first threshold, outputting the node of arriving at the highway rescue scene, where the display content of the second target image does not include the first target object scene and the acquisition time of the first target image is before the acquisition time of the second target image;
if the scene classification and recognition model detects at least one of the following: the multiple second images include a third target image and a fourth target image, or the multiple third images include a fifth target image, outputting the node of starting the rescue towing work, where the display content of the third target image includes the trailer structure of the rescue vehicle in a first position, the display content of the fourth target image includes the trailer structure in a second position, the display content of the fifth target image includes at least one of the vehicle to be rescued and a second target object, and the second target object may include a rescue worker;
if the scene classification and recognition model detects that the multiple second images include the third target image, the fourth target image, and a sixth target image, outputting the node of ending the rescue work and leaving the rescue scene, where the number of fourth target images is greater than a number threshold, the display content of the sixth target image includes the vehicle to be rescued being on the rescue vehicle, and a number of fourth target images greater than the threshold indicates that the trailer structure has been in the second position for longer than a first duration;
if the scene classification and recognition model detects that the multiple first images include a seventh target image and an eighth target image, the multiple third images include a ninth target image, and the vehicle speed acquired in real time is equal to the first threshold, outputting the node of returning to the rescue station, where the display content of the seventh target image includes the first target object scene, the display content of the eighth target image does not include the first target object scene, the acquisition time of the seventh target image is before the acquisition time of the eighth target image, and the display content of the ninth target image shows that no vehicle to be rescued is on the rescue vehicle.
Here, the first target image indicates that there is a driver inside the cab; the first threshold may be 0 km/h; the second target image indicates that there is no driver in the cab; the third target image indicates that the trailer structure is in the initial position (that is, the first position is the initial position); the fourth target image indicates that the trailer structure has left the initial position (that is, the second position differs from the first position, indicating that a towing operation is in progress); the fifth target image includes at least one of the vehicle to be rescued and a rescue worker; and when the trailer structure has been in the second position for longer than the first duration, it can be accurately determined that the trailer structure is indeed performing the towing work.
In this embodiment, since different roadside assistance work nodes have different judgment criteria, each node can be determined accurately and quickly according to its corresponding criterion, which improves both the accuracy and the speed of the node determination result.
In the embodiments of the present application, the method further includes: sending the roadside assistance work node to a network-side device. In this way, the network-side device can determine the work node at which each rescue vehicle is located, which facilitates dispatching the rescue vehicles, improves their management, and increases dispatching efficiency.
It should be noted that, while determining the roadside assistance work node, the time node corresponding to that work node can also be recorded, and the work node and its corresponding time node can be uploaded to the network-side device together, thereby enhancing the monitoring and auditing efficiency for each rescue vehicle.
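One minimal way to pair each work node with its time node for upload is a small JSON payload. The field names and format here are assumptions for illustration; the patent does not specify the upload message structure:

```python
import json
import time

def make_node_report(vehicle_id, node_name, timestamp=None):
    """Build a hypothetical upload payload pairing a work node with its time node.

    Field names ("vehicle_id", "work_node", "time_node") are illustrative only.
    """
    return json.dumps({
        "vehicle_id": vehicle_id,
        "work_node": node_name,
        # Record the moment the node was determined if no timestamp is given.
        "time_node": timestamp if timestamp is not None else int(time.time()),
    })

report = make_node_report("rescue-001", "leave_station", timestamp=1698570000)
print(report)
```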
The above implementations are illustrated below with a specific example.
The judgment condition for the leaving-the-rescue-station node is: the vehicle has been started and powered on, the vehicle speed is greater than 0 km/h (i.e., the first threshold), and the in-cab camera (i.e., the first camera) detects the "interior - occupied" label (i.e., the information of the first target image) over multiple consecutive frames. The conclusion is then given that the rescue vehicle has left the rescue station (i.e., it is at the leaving-the-rescue-station node), and the time node is recorded.
The judgment condition for the arriving-at-the-highway-rescue-scene node is: the vehicle is stationary at 0 km/h, and the detection result from the in-cab camera changes from "interior - occupied" (i.e., the information of the first target image) to "interior - unoccupied" (i.e., the information of the second target image). The conclusion is then given that the rescue vehicle has arrived at the highway rescue scene, and the time node is recorded.
The judgment condition for the starting-towing-work node is: the bottom camera (i.e., the second camera) detects "bottom - starting work" (i.e., the information of the fourth target image) — when an actual rescue begins, the picture captured by the bottom camera usually shows the trailer board or trailer arm (i.e., the trailer structure) of the rescue vehicle in motion (i.e., the frames include both the third target image and the fourth target image, from which it is determined whether the trailer structure is moving); or the right-side camera (i.e., the third camera) detects "right - starting work" — during actual rescue work, the picture captured by the right-side camera usually contains the rescued vehicle parked behind and rescue workers wearing work uniforms (i.e., the information of the fifth target image). Stable detection of these two label categories over multiple consecutive frames is used to determine the starting-towing-work node, and the time node is recorded.
The judgment condition for the finishing-rescue-work-and-leaving-the-scene node is: the bottom camera successively detects the "bottom - working" and "bottom - work finished" labels. During actual rescue work, the in-progress picture captured by the bottom camera usually shows the trailer board or trailer arm having left its original fixed position (i.e., combining the third target image and the fourth target image determines whether this is the case), and the work-finished picture captured by the bottom camera shows the rescued vehicle placed and secured on the rescue vehicle (i.e., the display content of the sixth target image). Stable detection of these two label categories determines the finishing-rescue-work-and-leaving-the-scene node, and the time node is recorded.
The judgment condition for the returning-to-the-rescue-station node is: the preceding four work nodes have been recognized, the speed of the rescue vehicle has decreased to a standstill, the offline recognizer of the artificial intelligence (AI) box observes the in-cab camera's detection result change from "interior - occupied" (i.e., the information of the seventh target image) to "interior - unoccupied" (i.e., the information of the eighth target image), and the right-side camera detects the "right - empty board" picture (i.e., the information of the ninth target image) over multiple consecutive frames. The conclusion is then given that the rescue vehicle has returned to the rescue station, and the time node is recorded.
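The five judgment conditions above amount to a small sequential state machine over camera labels and vehicle speed. The sketch below is a simplified, non-authoritative rendering: it consumes one label per camera per step instead of the multi-frame stable detection the embodiment requires, and all label strings and names are assumptions:

```python
class NodeTracker:
    """Hypothetical sequential tracker for the five work nodes described above.

    Each update() call takes the current speed and the latest label from each
    camera; a node fires only when the previous node has already been reached.
    """
    ORDER = ["leave_station", "arrive_scene", "start_towing",
             "finish_work", "return_station"]

    def __init__(self):
        self.reached = []

    def update(self, speed_kmh, cab=None, bottom=None, right=None):
        stage = len(self.reached)
        fired = (
            (stage == 0 and speed_kmh > 0 and cab == "interior-occupied") or
            (stage == 1 and speed_kmh == 0 and cab == "interior-unoccupied") or
            (stage == 2 and (bottom == "bottom-start" or right == "right-start")) or
            (stage == 3 and bottom == "bottom-finished") or
            (stage == 4 and speed_kmh == 0 and right == "right-empty-board")
        )
        if fired:
            self.reached.append(self.ORDER[stage])
        return list(self.reached)

tracker = NodeTracker()
tracker.update(40, cab="interior-occupied")        # leaves the station
tracker.update(0, cab="interior-unoccupied")       # arrives at the scene
tracker.update(0, bottom="bottom-start")           # towing work starts
tracker.update(0, bottom="bottom-finished")        # towing work finishes
print(tracker.update(0, right="right-empty-board"))  # all five nodes reached
```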
It should be noted that the image sets can be described as follows:
The rescue-work pictures collected by the camera at the bottom of the rescue vehicle (i.e., the second camera) are classified into: "bottom - starting work", "bottom - working", "bottom - work finished", and "bottom - vehicle suspended". The rescue-work pictures collected by the camera on the right side of the rescue vehicle (i.e., the third camera) are classified into: "right - empty board", "right - starting work", and "right - work finished". The rescue-work pictures collected by the camera inside the rescue vehicle (i.e., the first camera) are classified into: "interior - occupied" and "interior - unoccupied". A total of nine categories of image sets were thus compiled. The initial model is trained on these image sets, so that the resulting scene classification and recognition model can quickly and accurately determine the roadside assistance work node at which the rescue vehicle is currently located by recognizing the state of each part of the rescue vehicle.
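The nine-category taxonomy can be written down directly, and a small helper can then bucket labeled captures into per-category image sets for training. The English label strings are renderings of the patent's Chinese tags, and the helper itself is an illustrative assumption:

```python
from collections import defaultdict

# Nine label categories, grouped by camera, as enumerated above.
# (English label strings are assumed renderings of the original Chinese tags.)
CATEGORIES = {
    "bottom": ["bottom-start", "bottom-working",
               "bottom-finished", "bottom-vehicle-suspended"],
    "right": ["right-empty-board", "right-start", "right-finished"],
    "interior": ["interior-occupied", "interior-unoccupied"],
}

def build_image_sets(labeled_images):
    """labeled_images: iterable of (image_path, label); returns label -> [paths]."""
    valid = {label for labels in CATEGORIES.values() for label in labels}
    sets = defaultdict(list)
    for path, label in labeled_images:
        if label not in valid:
            raise ValueError(f"unknown label: {label}")
        sets[label].append(path)
    return dict(sets)

print(sum(len(v) for v in CATEGORIES.values()))  # → 9 categories in total
```

A directory-per-class layout produced this way is the conventional input for fine-tuning an image classifier such as the EfficientNet-B2 model mentioned later.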
Referring to FIG. 3, FIG. 3 is one of the structural diagrams of the roadside assistance work node determining apparatus provided by an embodiment of the present application. As shown in FIG. 3, the roadside assistance work node determining apparatus 200 includes:
an obtaining module 201, configured to obtain images collected by multiple cameras, where the multiple cameras are located at different positions on a rescue vehicle; and
an output module 202, configured to input the images into a scene classification and recognition model and output a roadside assistance work node, where the scene classification and recognition model is obtained by training on multiple pre-acquired image sets.
In the embodiments of the present application, the multiple cameras include a first camera, a second camera, and a third camera; the first camera is located inside the cab of the rescue vehicle, the second camera is located on the chassis of the rescue vehicle, and the third camera is located on the body of the rescue vehicle.
The output module 202 is configured to input at least one of first images collected by the first camera, second images collected by the second camera, and third images collected by the third camera into the scene classification and recognition model, detect and analyze the first images, the second images, and the third images, and output the roadside assistance work node, where the first images, the second images, and the third images are each multiple in number.
Here, the roadside assistance work node includes: a leaving-the-rescue-station node, an arriving-at-the-highway-rescue-scene node, a starting-towing-work node, a finishing-rescue-work-and-leaving-the-scene node, or a returning-to-the-rescue-station node.
In the embodiments of the present application, the output module 202 is configured to output the leaving-the-rescue-station node if the scene classification and recognition model detects that the multiple first images include at least some first target images and the vehicle speed acquired in real time is greater than the first threshold, where the display content of the first target image includes a first target object scene.
In the embodiments of the present application, the output module 202 is configured to output the arriving-at-the-highway-rescue-scene node if the scene classification and recognition model detects that the multiple first images include the first target image and a second target image and the vehicle speed acquired in real time is equal to the first threshold, where the display content of the second target image does not include the first target object scene, and the acquisition moment corresponding to the first target image precedes the acquisition moment corresponding to the second target image.
In the embodiments of the present application, the output module 202 is configured to output the starting-towing-work node if the scene classification and recognition model detects at least one of the case in which the multiple second images include a third target image and a fourth target image and the case in which the multiple third images include a fifth target image, where the display content of the third target image shows the trailer structure of the rescue vehicle at a first position, the display content of the fourth target image shows the trailer structure at a second position, and the display content of the fifth target image includes at least one of the rescued vehicle and a second target object.
In the embodiments of the present application, the output module 202 is configured to output the finishing-rescue-work-and-leaving-the-scene node if the scene classification and recognition model detects that the multiple second images include the third target image, the fourth target image, and a sixth target image, where the number of fourth target images is greater than a number threshold, and the display content of the sixth target image shows the rescued vehicle located on the rescue vehicle.
In the embodiments of the present application, the output module 202 is configured to output the returning-to-the-rescue-station node if the scene classification and recognition model detects that the multiple first images include a seventh target image and an eighth target image, the multiple third images include a ninth target image, and the vehicle speed acquired in real time is equal to the first threshold, where the display content of the seventh target image includes the first target object scene, the display content of the eighth target image does not include the first target object scene, the acquisition moment corresponding to the seventh target image precedes the acquisition moment corresponding to the eighth target image, and the display content of the ninth target image shows that no rescued vehicle is carried on the rescue vehicle.
In the embodiments of the present application, the scene classification and recognition model includes an EfficientNet convolutional neural network model.
In the embodiments of the present application, the roadside assistance work node determining apparatus 200 further includes:
a collection module, configured to collect the multiple image sets through the multiple cameras; and
a training module, configured to input the multiple image sets into an initial model for iterative training to obtain the scene classification and recognition model, where the initial model is an EfficientNet-B2 convolutional neural network model.
In the embodiments of the present application, the collection module includes:
a collection sub-module, configured to collect multiple captured images through the multiple cameras; and
a classification sub-module, configured to classify the multiple captured images into the multiple image sets through an image classification technique.
In the embodiments of the present application, the roadside assistance work node determining apparatus 200 further includes:
a sending module, configured to send the roadside assistance work node to a network-side device.
The roadside assistance work node determining apparatus 200 can implement each process of the method embodiment of FIG. 1 in the embodiments of the present application and achieve the same beneficial effects; to avoid repetition, details are not repeated here.
An embodiment of the present application further provides an electronic device. Referring to FIG. 4, the electronic device may include a processor 301, a memory 302, and a program 3021 stored in the memory 302 and executable on the processor 301. When the program 3021 is executed by the processor 301, any step of the method embodiment corresponding to FIG. 1 can be implemented with the same beneficial effects, which are not repeated here.
Those of ordinary skill in the art will understand that all or part of the steps of the methods of the above embodiments can be completed by hardware instructed by a program, and the program can be stored in a readable medium. An embodiment of the present application further provides a readable storage medium storing a computer program; when executed by a processor, the computer program can implement any step of the method embodiment corresponding to FIG. 1 and achieve the same technical effects, which are not repeated here to avoid repetition.
The storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are preferred implementations of the embodiments of the present application. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present application, and such improvements and refinements shall also fall within the protection scope of the present application.
Industrial Applicability
In the embodiments of the present application, images collected by multiple cameras located at different positions on a rescue vehicle are obtained; the images are input into a scene classification and recognition model, which outputs a roadside assistance work node, where the scene classification and recognition model is obtained by training on multiple pre-acquired image sets. Because the scene classification model is trained on pre-acquired image sets covering the various images captured by the cameras on the rescue vehicle, the model's output is more accurate, which reduces the error of the roadside assistance work node output, that is, improves the accuracy of the roadside assistance work node determination result.

Claims (10)

  1. A method for determining a roadside assistance work node, comprising:
    obtaining images collected by a plurality of cameras, wherein the plurality of cameras are located at different positions on a rescue vehicle; and
    inputting the images into a scene classification and recognition model, and outputting a roadside assistance work node, wherein the scene classification and recognition model is obtained by training on a plurality of pre-acquired image sets.
  2. The method according to claim 1, wherein the plurality of cameras comprise a first camera, a second camera, and a third camera, the first camera is located inside a cab of the rescue vehicle, the second camera is located on a chassis of the rescue vehicle, and the third camera is located on a body of the rescue vehicle;
    the inputting the images into a scene classification and recognition model and outputting a roadside assistance work node comprises:
    inputting at least one of first images collected by the first camera, second images collected by the second camera, and third images collected by the third camera into the scene classification and recognition model, detecting and analyzing the first images, the second images, and the third images, and outputting the roadside assistance work node, wherein the first images, the second images, and the third images are each plural in number;
    wherein the roadside assistance work node comprises: a leaving-the-rescue-station node, an arriving-at-the-highway-rescue-scene node, a starting-towing-work node, a finishing-rescue-work-and-leaving-the-scene node, or a returning-to-the-rescue-station node.
  3. The method according to claim 2, wherein the detecting and analyzing the first images, the second images, and the third images and outputting the roadside assistance work node comprises one of the following:
    if the scene classification and recognition model detects that the plurality of first images include some first target images and a vehicle speed acquired in real time is greater than a first threshold, outputting the leaving-the-rescue-station node, wherein display content of the first target image includes a first target object scene;
    if the scene classification and recognition model detects that the plurality of first images include the first target image and a second target image and the vehicle speed acquired in real time is equal to the first threshold, outputting the arriving-at-the-highway-rescue-scene node, wherein display content of the second target image does not include the first target object scene, and an acquisition moment corresponding to the first target image precedes an acquisition moment corresponding to the second target image;
    if the scene classification and recognition model detects at least one of a case in which the plurality of second images include a third target image and a fourth target image and a case in which the plurality of third images include a fifth target image, outputting the starting-towing-work node, wherein display content of the third target image shows a trailer structure of the rescue vehicle at a first position, display content of the fourth target image shows the trailer structure at a second position, and display content of the fifth target image includes at least one of a rescued vehicle and a second target object;
    if the scene classification and recognition model detects that the plurality of second images include the third target image, the fourth target image, and a sixth target image, outputting the finishing-rescue-work-and-leaving-the-scene node, wherein the number of fourth target images is greater than a number threshold, and display content of the sixth target image shows the rescued vehicle located on the rescue vehicle; or
    if the scene classification and recognition model detects that the plurality of first images include a seventh target image and an eighth target image, the plurality of third images include a ninth target image, and the vehicle speed acquired in real time is equal to the first threshold, outputting the returning-to-the-rescue-station node, wherein display content of the seventh target image includes the first target object scene, display content of the eighth target image does not include the first target object scene, an acquisition moment corresponding to the seventh target image precedes an acquisition moment corresponding to the eighth target image, and display content of the ninth target image shows that no rescued vehicle is carried on the rescue vehicle.
  4. The method according to claim 1, wherein the scene classification and recognition model comprises an EfficientNet convolutional neural network model.
  5. The method according to claim 4, wherein before the obtaining images collected by a plurality of cameras, the method further comprises:
    collecting the plurality of image sets through the plurality of cameras; and
    inputting the plurality of image sets into an initial model for iterative training to obtain the scene classification and recognition model, wherein the initial model is an EfficientNet-B2 convolutional neural network model.
  6. The method according to claim 5, wherein the collecting the plurality of image sets through the plurality of cameras comprises:
    collecting a plurality of captured images through the plurality of cameras; and
    classifying the plurality of captured images into the plurality of image sets through an image classification technique.
  7. The method according to claim 1, wherein the method further comprises:
    sending the roadside assistance work node to a network-side device.
  8. An apparatus for determining a roadside assistance work node, comprising:
    an obtaining module, configured to obtain images collected by a plurality of cameras, wherein the plurality of cameras are located at different positions on a rescue vehicle; and
    an output module, configured to input the images into a scene classification and recognition model and output a roadside assistance work node, wherein the scene classification and recognition model is obtained by training on a plurality of pre-acquired image sets.
  9. An electronic device, comprising: a transceiver, a memory, a processor, and a program stored on the memory and executable on the processor, the processor being configured to read the program in the memory to implement the steps in the method for determining a roadside assistance work node according to any one of claims 1 to 7.
  10. A readable storage medium storing a program, wherein when the program is executed by a processor, the steps in the method for determining a roadside assistance work node according to any one of claims 1 to 7 are implemented.
PCT/CN2022/126059 2021-10-29 2022-10-19 Roadside assistance working node determining method and apparatus, electronic device, and storage medium WO2023071874A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111267060.6 2021-10-29
CN202111267060.6A CN113705549B (en) 2021-10-29 2021-10-29 Road rescue work node determination method and device and related equipment

Publications (1)

Publication Number Publication Date
WO2023071874A1 (en)

Family

ID=78647443

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/126059 WO2023071874A1 (en) 2021-10-29 2022-10-19 Roadside assistance working node determining method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN113705549B (en)
WO (1) WO2023071874A1 (en)



Also Published As

Publication number Publication date
CN113705549B (en) 2022-02-11
CN113705549A (en) 2021-11-26


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE