CN113762123A - Method for detecting driver using mobile phone and computer readable medium - Google Patents

Method for detecting driver using mobile phone and computer readable medium Download PDF

Info

Publication number
CN113762123A
Authority
CN
China
Prior art keywords
mobile phone
driver
driving
data
vehicle dynamics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111009414.7A
Other languages
Chinese (zh)
Other versions
CN113762123B (en)
Inventor
徐荣娇
王雪松
朱晓晖
庄一帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
China Pacific Property Insurance Co Ltd
Original Assignee
Tongji University
China Pacific Property Insurance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University, China Pacific Property Insurance Co Ltd filed Critical Tongji University
Priority to CN202111009414.7A priority Critical patent/CN113762123B/en
Publication of CN113762123A publication Critical patent/CN113762123A/en
Application granted granted Critical
Publication of CN113762123B publication Critical patent/CN113762123B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a method for detecting a driver's use of a mobile phone and a computer readable medium, wherein the detection method comprises the following steps: step 1: collecting driver video data and vehicle dynamics data for mobile phone use while driving from naturalistic driving research, dividing the driving segments involving mobile phone use into different categories, and adding labels; step 2: constructing a mobile phone use detection model comprising a deep residual network (ResNet) sub-model and a long short-term memory (LSTM) sub-model, and training the mobile phone use detection model; step 3: acquiring video data and vehicle dynamics data of the driver to be detected, inputting them into the trained mobile phone use detection model, and obtaining the driver's mobile phone use state. Compared with the prior art, the method can effectively monitor the driver's mobile phone use state while driving.

Description

Method for detecting driver using mobile phone and computer readable medium
Technical Field
The invention relates to the technical field of traffic safety, and in particular to a method for detecting a driver's mobile phone use based on multi-source data fusion and deep learning, and a computer readable medium.
Background
Using a mobile phone while driving is a form of distracted driving. Previous research has shown that mobile phone use while driving destabilizes vehicle operation and increases driving risk. Handheld mobile phone use can raise crash risk to more than three times the baseline: browsing and reading on the phone carries 2.7 times the risk, reaching for a handheld phone 4.8 times, and making a call up to 12.2 times.
Mobile phone use while driving therefore poses a major road traffic safety hazard. According to statistics from the U.S. National Highway Traffic Safety Administration, traffic accidents caused by distracted driving in the United States in 2018 resulted in 2,841 deaths, of which mobile phone use while driving accounted for as much as 14%.
Mobile phone use while driving is difficult to eradicate through legislation, hard to supervise, and lacks effective and reliable equipment-based detection. The prior art therefore does not provide an effective method for detecting a driver's use of a mobile phone.
Disclosure of Invention
The present invention aims to overcome the above drawbacks of the prior art and to provide a method for detecting a driver's use of a mobile phone, and a computer readable medium, that effectively monitor the driver's mobile phone use state while driving.
The purpose of the invention can be realized by the following technical scheme:
a method for detecting that a driver uses a mobile phone comprises the following steps:
step 1: collecting video data of a driver and vehicle dynamics data of a mobile phone used for driving in natural driving research, dividing mobile phone fragments used for driving into different categories and adding labels;
step 2: constructing a detection model for using the mobile phone, wherein the detection model comprises a deep residual error network ResNet sub-model and a long-short term memory network LSTM sub-model, and training the detection model for using the mobile phone;
and step 3: and acquiring video data and vehicle dynamics data of a driver to be detected, inputting the video data and the vehicle dynamics data into a trained mobile phone detection model, and acquiring the use state of the mobile phone of the driver.
Preferably, step 1 specifically comprises:
Step 1-1: extracting driving segments involving mobile phone use while driving, and classifying them by mobile phone use state category;
Step 1-2: splitting the driver face video into per-frame pictures and adding mobile phone use state category labels to the pictures;
Step 1-3: acquiring a vehicle dynamics feature set V and constructing a sample data set D;
Step 1-4: filling missing values by linear interpolation and screening the driving segments.
More preferably, the mobile phone use states in step 1-1 include holding the mobile phone, clicking or sliding, putting down the mobile phone, hands-free calling, browsing and reading, dialing, typing, talking with the mobile phone in hand, picking up the mobile phone, and attentive driving.
More preferably, the vehicle dynamics feature set V and the sample data set D of step 1-3 are respectively:
V = {v1, v2, v3, …, vn}
D = {v1, v2, v3, …, vn, y}
where vn is a characteristic variable, n is the number of characteristic variables, and y is the prediction target, namely the mobile phone use state.
More preferably, the characteristic variables of the vehicle dynamics feature set V include vehicle speed, speed standard deviation, lateral acceleration standard deviation, longitudinal acceleration standard deviation, lane deviation and lane deviation standard deviation.
More preferably, the driving segments in step 1-4 are screened by deleting driving segments in which the proportion of missing values exceeds 85% and driving segments in which the vehicle speed is below 8 km/h for more than 85% of the time.
Preferably, the mobile phone use detection model comprises:
a deep residual network ResNet sub-model, used to classify the driver video data and output the probability features of each mobile phone use category; and
a long short-term memory network LSTM sub-model, used to identify the fused data combining the probability features output by the video recognition with the vehicle dynamics features, and to output the mobile phone use state category while driving.
Preferably, the training of the mobile phone use detection model specifically comprises:
Step 2-1: extracting each frame from each mobile phone use state segment acquired by the image sensor in the naturalistic driving research;
Step 2-2: performing data augmentation on the obtained driver face images so that the numbers of pictures for the different mobile phone use categories remain balanced;
step 2-3: classifying the image data by using a ResNet sub-model, and outputting the probability characteristic U ═ U { U } of the using state of each type of driving mobile phone1,u2,u3,...,umM is the number of the using states of the mobile phone;
step 2-4: and fusing the probability characteristics of the driving use mobile phone output by the ResNet sub-model and the vehicle dynamics characteristics in the time dimension to form a new fusion data set E, wherein the structure of the new fusion data set E is E ═ { v }1,v2,v3,...,vn,u1,u2,u3,...,um,y};
Step 2-5: and segmenting the fused data into segments, inputting the segments into an LSTM submodel, and obtaining the state of the driver using the mobile phone in a driving mode.
More preferably, the LSTM sub-model is constructed as follows:
first, an input layer containing n + m variables is constructed and fed into a bidirectional LSTM layer with n + m neurons, followed by an attention layer with n + m neurons; two Dense layers are then connected, and the classification result of the model, namely the driver's mobile phone use state, is output through a Softmax activation function.
A computer readable medium, wherein any one of the above methods for detecting a driver's use of a mobile phone is stored in the computer readable medium.
Compared with the prior art, the invention has the following beneficial effects:
The driver's mobile phone use state while driving is monitored effectively: the proposed detection method is a general algorithm that detects the driver's mobile phone use state while driving from driver facial image data and vehicle dynamics data collected in naturalistic driving research, combined with traffic engineering theory, and is of practical significance for subsequently building high-precision models for detecting mobile phone use while driving.
Drawings
Fig. 1 is a schematic flow chart of a method for detecting that a driver uses a mobile phone according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
A method for detecting a driver's use of a mobile phone is disclosed; its flow is shown in FIG. 1 and comprises the following steps:
Step 1: collecting driver video data and vehicle dynamics data for mobile phone use while driving from naturalistic driving research, dividing the driving segments involving mobile phone use into different categories, and adding labels;
Step 1-1: extracting driving segments involving mobile phone use while driving, and classifying them by mobile phone use state category;
The mobile phone use states and their corresponding label numbers are shown in Table 1 and comprise holding the mobile phone, clicking or sliding, putting down the mobile phone, hands-free calling, browsing and reading, dialing, typing, talking with the mobile phone in hand, picking up the mobile phone, and attentive driving;
TABLE 1  Driver mobile phone use states
No.  Mobile phone use state category
1    Holding the mobile phone
2    Clicking or sliding
3    Putting down the mobile phone
4    Hands-free calling
5    Browsing and reading
6    Dialing
7    Typing
8    Talking with the mobile phone in hand
9    Picking up the mobile phone
10   Attentive driving
Step 1-2: splitting each frame of the driver face video into pictures, and adding mobile phone use state category labels to the pictures;
step 1-3: acquiring a vehicle dynamics characteristic set V and constructing a sample data set D;
The vehicle dynamics feature set V and the sample data set D are respectively:
V = {v1, v2, v3, …, vn}
D = {v1, v2, v3, …, vn, y}
where vn is a characteristic variable, n is the number of characteristic variables, and y is the prediction target, namely the mobile phone use state;
The vehicle dynamics characteristic variables are shown in Table 2 and include vehicle speed, speed standard deviation, lateral acceleration, lateral acceleration standard deviation, longitudinal acceleration, longitudinal acceleration standard deviation, lane offset, and lane offset standard deviation;
TABLE 2  Vehicle dynamics variables
No.  Vehicle dynamics variable
1    Vehicle speed
2    Speed standard deviation
3    Lateral acceleration
4    Lateral acceleration standard deviation
5    Longitudinal acceleration
6    Longitudinal acceleration standard deviation
7    Lane offset
8    Lane offset standard deviation
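The eight variables of Table 2 can be assembled from the raw, synchronized sensor streams. A minimal sketch of that derivation follows, assuming a pandas DataFrame of samples; the column names ('speed', 'lat_acc', 'lon_acc', 'lane_offset') and the rolling-window length used for the standard deviations are illustrative assumptions, not specified in the text.

```python
import pandas as pd

def build_dynamics_features(df: pd.DataFrame, window: int = 10) -> pd.DataFrame:
    """Assemble the Table 2 feature set V from raw, synchronized sensor samples.

    The column names 'speed', 'lat_acc', 'lon_acc', 'lane_offset' are assumed;
    window is the number of samples over which standard deviations are taken.
    """
    out = pd.DataFrame(index=df.index)
    for col in ["speed", "lat_acc", "lon_acc", "lane_offset"]:
        out[col] = df[col]                                                 # raw variable
        out[col + "_std"] = df[col].rolling(window, min_periods=1).std()   # its std. dev.
    return out  # columns correspond to v1..v8 of Table 2
```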
Step 1-4: filling missing values by using a linear interpolation method, and screening driving segments;
the screening method comprises the following steps: deleting the driving segments with missing values of more than 85% and time of less than 8km/h of more than 85%;
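A minimal sketch of this cleaning step is given below, assuming each driving segment is a pandas DataFrame with a 'speed' column in km/h; the thresholds (85% and 8 km/h) come from the text, everything else is an illustrative assumption.

```python
import pandas as pd

def clean_segments(segments):
    """Fill gaps by linear interpolation and drop unusable driving segments."""
    kept = []
    for seg in segments:
        missing_ratio = seg.isna().to_numpy().mean()      # share of missing cells
        low_speed_ratio = (seg["speed"] < 8.0).mean()     # share of samples below 8 km/h
        if missing_ratio > 0.85 or low_speed_ratio > 0.85:
            continue                                      # discard this driving segment
        kept.append(seg.interpolate(method="linear"))     # fill gaps by linear interpolation
    return kept
```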
Step 2: constructing a mobile phone use detection model comprising a deep residual network ResNet sub-model and a long short-term memory network LSTM sub-model, and training the mobile phone use detection model;
The mobile phone use detection model comprises:
a deep residual network ResNet sub-model, used to classify the driver video data and output the probability features of each mobile phone use category;
a long short-term memory network LSTM sub-model, used to identify the fused data combining the probability features output by the video recognition with the vehicle dynamics features, and to output the mobile phone use state category while driving;
The mobile phone use detection model is trained as follows:
step 2-1: taking out each frame in each driving mobile phone use state fragment acquired by a natural driving research image sensor;
step 2-2: according to the obtained face image of the driver, data enhancement is carried out, so that the number of pictures used by different types of mobile phones is kept balanced;
step 2-3: classifying the image data by using a ResNet sub-model, and outputting the probability characteristic U ═ U { U } of the using state of each type of driving mobile phone1,u2,u3,...,umM is the number of the using states of the mobile phone;
step 2-4: and fusing the probability characteristics of the driving use mobile phone output by the ResNet sub-model and the vehicle dynamics characteristics in the time dimension to form a new fusion data set E, wherein the structure of the new fusion data set E is E ═ { v }1,v2,v3,...,vn,u1,u2,u3,...,um,y};
Step 2-5: dividing the fused data into 10-second segments, inputting an LSTM submodel, and obtaining the state of the driver driving the mobile phone;
The LSTM sub-model is constructed as follows:
first, an input layer containing n + m variables is constructed and fed into a bidirectional LSTM layer with n + m neurons, followed by an attention layer with n + m neurons; two Dense layers are then connected, and the classification result of the model, namely the driver's mobile phone use state, is output through a Softmax activation function.
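A minimal Keras sketch of this construction is given below. The temporal attention step (a per-time-step score, softmax-normalized and used for a weighted sum) and the activation and optimizer choices are assumptions; the patent only specifies the layer sequence and the n + m widths.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lstm_submodel(n_plus_m: int, num_states: int = 10,
                        window: int = 100) -> tf.keras.Model:
    """Bidirectional LSTM -> attention -> two Dense layers with Softmax output."""
    inputs = layers.Input(shape=(window, n_plus_m))
    seq = layers.Bidirectional(layers.LSTM(n_plus_m, return_sequences=True))(inputs)
    # Simple temporal attention: one score per time step, softmax over time,
    # then a weighted sum of the LSTM outputs.
    scores = layers.Dense(1, activation="tanh")(seq)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Dot(axes=1)([weights, seq])   # (batch, 1, 2 * n_plus_m)
    context = layers.Flatten()(context)
    x = layers.Dense(n_plus_m, activation="relu")(context)        # first Dense layer
    outputs = layers.Dense(num_states, activation="softmax")(x)   # second Dense, Softmax
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# With n = 8 dynamics variables (Table 2) and m = 10 use states (Table 1), n + m = 18.
lstm = build_lstm_submodel(n_plus_m=18)
```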
And step 3: as shown in fig. 1, whether the camera and the sensor work normally is judged at first, if so, the vehicle-mounted camera is used for acquiring video data of a driver, the vehicle sensor is used for acquiring vehicle dynamics data, and the data set mainly comprises: and inputting the face video data and the vehicle dynamics data of the driver into the trained mobile phone detection model to obtain the use state of the mobile phone of the driver.
The embodiment also relates to a computer readable medium, in which any one of the above mobile phone use detection methods is stored.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for detecting a driver's use of a mobile phone, characterized by comprising the following steps:
step 1: collecting driver video data and vehicle dynamics data for mobile phone use while driving from naturalistic driving research, dividing the driving segments involving mobile phone use into different categories, and adding labels;
step 2: constructing a mobile phone use detection model comprising a deep residual network ResNet sub-model and a long short-term memory network LSTM sub-model, and training the mobile phone use detection model;
step 3: acquiring video data and vehicle dynamics data of the driver to be detected, inputting them into the trained mobile phone use detection model, and obtaining the driver's mobile phone use state.
2. The method for detecting a driver's use of a mobile phone as claimed in claim 1, wherein step 1 specifically comprises:
step 1-1: extracting driving segments involving mobile phone use while driving, and classifying them by mobile phone use state category;
step 1-2: splitting the driver face video into per-frame pictures and adding mobile phone use state category labels to the pictures;
step 1-3: acquiring a vehicle dynamics feature set V and constructing a sample data set D;
step 1-4: filling missing values by linear interpolation and screening the driving segments.
3. The method as claimed in claim 2, wherein the mobile phone use states in step 1-1 include holding the mobile phone, clicking or sliding, putting down the mobile phone, hands-free calling, browsing and reading, dialing, typing, talking with the mobile phone in hand, picking up the mobile phone, and attentive driving.
4. The method for detecting a driver's use of a mobile phone as claimed in claim 2, wherein the vehicle dynamics feature set V and the sample data set D in step 1-3 are respectively:
V = {v1, v2, v3, …, vn}
D = {v1, v2, v3, …, vn, y}
where vn is a characteristic variable, n is the number of characteristic variables, and y is the prediction target, namely the mobile phone use state.
5. The method as claimed in claim 4, wherein the characteristic variables of the vehicle dynamics feature set V include vehicle speed, speed standard deviation, lateral acceleration standard deviation, longitudinal acceleration standard deviation, lane offset and lane offset standard deviation.
6. The method for detecting a driver's use of a mobile phone as claimed in claim 2, wherein the driving segments in step 1-4 are screened by deleting driving segments in which the proportion of missing values exceeds 85% and driving segments in which the vehicle speed is below 8 km/h for more than 85% of the time.
7. The method as claimed in claim 1, wherein the mobile phone use detection model comprises:
a deep residual network ResNet sub-model, used to classify the driver video data and output the probability features of each mobile phone use category; and
a long short-term memory network LSTM sub-model, used to identify the fused data combining the probability features output by the video recognition with the vehicle dynamics features, and to output the mobile phone use state category while driving.
8. The method for detecting a driver's use of a mobile phone as claimed in claim 1, wherein the mobile phone use detection model is trained as follows:
step 2-1: extracting each frame from each mobile phone use state segment acquired by the image sensor in the naturalistic driving research;
step 2-2: performing data augmentation on the obtained driver face images so that the numbers of pictures for the different mobile phone use categories remain balanced;
step 2-3: classifying the image data by using a ResNet sub-model, and outputting the probability characteristic U ═ U { U } of the using state of each type of driving mobile phone1,u2,u3,...,umM is the number of the using states of the mobile phone;
step 2-4: and fusing the probability characteristics of the driving use mobile phone output by the ResNet sub-model and the vehicle dynamics characteristics in the time dimension to form a new fusion data set E, wherein the structure of the new fusion data set E is E ═ { v }1,v2,v3,...,vn,u1,u2,u3,...,um,y};
Step 2-5: and segmenting the fused data into segments, inputting the segments into an LSTM submodel, and obtaining the state of the driver using the mobile phone in a driving mode.
9. The method for detecting a driver's use of a mobile phone as claimed in claim 8, wherein the LSTM sub-model is constructed as follows:
first, an input layer containing n + m variables is constructed and fed into a bidirectional LSTM layer with n + m neurons, followed by an attention layer with n + m neurons; two Dense layers are then connected, and the classification result of the model, namely the driver's mobile phone use state, is output through a Softmax activation function.
10. A computer-readable medium, wherein the method for detecting the use of a mobile phone by a driver as claimed in any one of claims 1 to 9 is stored in the computer-readable medium.
CN202111009414.7A 2021-08-31 2021-08-31 Method for detecting driver using mobile phone and computer readable medium Active CN113762123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111009414.7A CN113762123B (en) 2021-08-31 2021-08-31 Method for detecting driver using mobile phone and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111009414.7A CN113762123B (en) 2021-08-31 2021-08-31 Method for detecting driver using mobile phone and computer readable medium

Publications (2)

Publication Number Publication Date
CN113762123A true CN113762123A (en) 2021-12-07
CN113762123B CN113762123B (en) 2022-11-18

Family

ID=78792010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111009414.7A Active CN113762123B (en) 2021-08-31 2021-08-31 Method for detecting driver using mobile phone and computer readable medium

Country Status (1)

Country Link
CN (1) CN113762123B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574817A (en) * 2014-12-25 2015-04-29 清华大学苏州汽车研究院(吴江) Machine vision-based fatigue driving pre-warning system suitable for smart phone
US20160292510A1 (en) * 2015-03-31 2016-10-06 Zepp Labs, Inc. Detect sports video highlights for mobile computing devices
CN109165607A (en) * 2018-08-29 2019-01-08 浙江工业大学 A kind of hand-held phone detection method of the driver based on deep learning
CN111738037A (en) * 2019-03-25 2020-10-02 广州汽车集团股份有限公司 Automatic driving method and system and vehicle
CN110143202A (en) * 2019-04-09 2019-08-20 南京交通职业技术学院 A kind of dangerous driving identification and method for early warning and system
CN110781873A (en) * 2019-12-31 2020-02-11 南斗六星系统集成有限公司 Driver fatigue grade identification method based on bimodal feature fusion
CN111738337A (en) * 2020-06-23 2020-10-02 吉林大学 Driver distraction state detection and identification method in mixed traffic environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN D. S. ET AL.: "Safety-oriented Speed Guidance of Urban Expressway Under Model Predictive Control", International Journal of Simulation Modelling *
WANG Xuesong et al.: "Calibration and Validation of Car-Following Models for Chinese Drivers on Urban Expressways Based on Naturalistic Driving Data", China Journal of Highway and Transport *

Also Published As

Publication number Publication date
CN113762123B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
CN111274881A (en) Driving safety monitoring method and device, computer equipment and storage medium
CN111310562B (en) Vehicle driving risk management and control method based on artificial intelligence and related equipment thereof
CN106250513B (en) Event modeling-based event personalized classification method and system
CN111242015B (en) Method for predicting driving dangerous scene based on motion profile semantic graph
CN106650660A (en) Vehicle type recognition method and terminal
Choi et al. Driver drowsiness detection based on multimodal using fusion of visual-feature and bio-signal
CN113495959B (en) Financial public opinion identification method and system based on text data
CN114067143A (en) Vehicle weight recognition method based on dual sub-networks
CN105117096A (en) Image identification based anti-tracking method and apparatus
CN104156717A (en) Method for recognizing rule breaking of phoning of driver during driving based on image processing technology
CN108846387B (en) Traffic police gesture recognition method and device
Vaegae et al. Design of an Efficient Distracted Driver Detection System: Deep Learning Approaches
CN113762123B (en) Method for detecting driver using mobile phone and computer readable medium
CN113283272A (en) Real-time image information prompting method and device for road congestion and electronic equipment
CN113469023A (en) Method, device, equipment and storage medium for determining alertness
CN114782936B (en) Behavior detection method based on improved yolov5s network
CN116385185A (en) Vehicle risk assessment auxiliary method, device, computer equipment and storage medium
Chen et al. Traffic travel pattern recognition based on sparse Global Positioning System trajectory data
CN113920780A (en) Cloud and mist collaborative personalized forward collision risk early warning method based on federal learning
CN112329566A (en) Visual perception system for accurately perceiving head movements of motor vehicle driver
Sun et al. Context awareness-based accident prevention during mobile phone use
CN113537132B (en) Visual fatigue detection method based on double-current convolutional neural network
Rakesh et al. Machine Learning and Internet of Things-based Driver Safety and Support System
CN116127366B (en) Emotion recognition method, system and medium based on TWS earphone
CN110717035A (en) Accident rapid processing method, system and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant