CN116469086A - Driving behavior detection method and device based on artificial intelligence - Google Patents


Info

Publication number
CN116469086A
Authority
CN
China
Prior art keywords
point cloud
cloud data
behavior detection
personnel
driving behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310554001.XA
Other languages
Chinese (zh)
Inventor
贺舒庭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuo Zhensizhong Guangzhou Technology Co ltd
Original Assignee
Zhuo Zhensizhong Guangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuo Zhensizhong Guangzhou Technology Co ltd filed Critical Zhuo Zhensizhong Guangzhou Technology Co ltd
Priority to CN202310554001.XA priority Critical patent/CN116469086A/en
Publication of CN116469086A publication Critical patent/CN116469086A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481 - Constructional features, e.g. arrangements of optical elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

The embodiments of the present application belong to the technical field of traffic safety and relate to an artificial-intelligence-based driving behavior detection method comprising the following steps: emitting detection laser into the in-vehicle environment through a measuring device disposed at a preset position inside the vehicle; obtaining initial point cloud data according to reflected laser received by the measuring device, the reflected laser being generated by objects in the in-vehicle environment reflecting the detection laser, wherein the initial point cloud data carries a detection type identifier determined based on the position of the measuring device; preprocessing the initial point cloud data to obtain point cloud data; and performing, on the point cloud data, the driving behavior detection corresponding to the detection type identifier to obtain a driving behavior detection result. The application further provides an artificial-intelligence-based driving behavior detection apparatus, a computer device, and a storage medium. The accuracy of artificial-intelligence-based driving behavior detection is thereby improved.

Description

Driving behavior detection method and device based on artificial intelligence
Technical Field
The application relates to the technical field of traffic safety, in particular to a driving behavior detection method, device, computer equipment and storage medium based on artificial intelligence.
Background
As society advances, automobiles have become increasingly popular. Traveling by car gives people greater freedom of choice and more convenience. However, a driver may exhibit dangerous behaviors while driving, such as an irregular pose or engaging in other activities behind the wheel, which create potential safety hazards. It is therefore important to detect driving behavior while the vehicle is in use.
Traditional driving behavior detection is simple: for example, a sensor detects whether a person in the vehicle has fastened a seat belt, or images of the driver are collected and subjected to image recognition to detect irregular driving behaviors. However, besides interference from the driver's individual appearance, factors such as illumination and occlusion can also disturb the image and affect the recognition result, so the accuracy of driving behavior detection is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide an artificial-intelligence-based driving behavior detection method, apparatus, computer device, and storage medium, so as to solve the problem of low driving behavior detection accuracy.
In order to solve the above technical problems, the embodiments of the present application provide a driving behavior detection method based on artificial intelligence, which adopts the following technical scheme:
Transmitting detection laser to the environment in the vehicle through a measuring device arranged at a preset position in the vehicle;
obtaining initial point cloud data according to reflected laser received by the measuring device, wherein the reflected laser is generated by objects in the in-vehicle environment reflecting the detection laser, the initial point cloud data is provided with a detection type identifier, and the detection type identifier is determined based on the position of the measuring device;
preprocessing the initial point cloud data to obtain point cloud data;
and detecting the driving behavior corresponding to the detection type identifier on the point cloud data to obtain a driving behavior detection result.
Further, the step of emitting the detection laser to the environment in the vehicle through the measuring device disposed at the preset position in the vehicle includes:
acquiring the current state of the vehicle;
when the current state is a static state, transmitting detection laser to the environment of the passenger area in the vehicle through a first measuring device arranged at least one preset position in the vehicle;
and when the current state is a driving state, transmitting detection laser to the driving area environment in the vehicle through a second measuring device arranged at least one preset position in the vehicle.
Further, the step of preprocessing the initial point cloud data to obtain point cloud data includes:
Clustering the initial point cloud data to obtain first point cloud data;
and filtering the first point cloud data according to preset space coordinate information to obtain point cloud data.
Further, the step of detecting the driving behavior corresponding to the detection type identifier on the point cloud data to obtain a driving behavior detection result includes:
inputting the point cloud data into a personnel feature extraction network to obtain personnel feature information in the point cloud data, wherein the personnel feature information comprises the number of identified target personnel, the personnel point cloud data of each target personnel and corresponding key points;
when the detection type identifier is a quantity behavior detection identifier, performing quantity behavior detection according to the personnel characteristic information to obtain a quantity behavior detection result;
when the detection type identifier is an operation behavior detection identifier, performing operation behavior detection according to the personnel characteristic information to obtain an operation behavior detection result;
and determining the quantity behavior detection result or the operation behavior detection result as a driving behavior detection result.
Further, the step of performing quantity behavior detection according to the personnel characteristic information to obtain a quantity behavior detection result includes:
acquiring the number of personnel in the personnel characteristic information and the personnel point cloud data of each target person;
acquiring a preset total number threshold and a personnel density threshold;
comparing the number of personnel with the total number threshold to perform first quantity behavior detection, obtaining a first quantity behavior detection result;
performing second quantity behavior detection according to the personnel point cloud data of each target person and the personnel density threshold, obtaining a second quantity behavior detection result;
and generating the quantity behavior detection result based on the first quantity behavior detection result and the second quantity behavior detection result.
Further, the step of performing operation behavior detection according to the personnel characteristic information to obtain an operation behavior detection result includes:
acquiring key points in personnel point cloud data corresponding to a driver from the personnel characteristic information;
acquiring preset standard key points;
according to the key points and the standard key points, calculating the pose offset value of the driver;
and generating an operation behavior detection result based on the pose offset value.
Further, after the step of performing driving behavior detection corresponding to the detection type identifier on the point cloud data to obtain a driving behavior detection result, the method further includes:
When the driving behavior abnormality is determined to exist according to the driving behavior detection result, driving behavior warning information is generated according to the driving behavior detection result;
and broadcasting the driving behavior warning information to carry out driving reminding.
In order to solve the technical problem, the embodiment of the application also provides a driving behavior detection device based on artificial intelligence, which adopts the following technical scheme:
the laser emission module is used for emitting detection laser to the environment in the vehicle through a measuring device arranged at a preset position in the vehicle;
the initial generation module is used for obtaining initial point cloud data according to the reflected laser received by the measuring device, wherein the reflected laser is generated by objects in the in-vehicle environment reflecting the detection laser, the initial point cloud data is provided with a detection type identifier, and the detection type identifier is determined based on the position of the measuring device;
the preprocessing module is used for preprocessing the initial point cloud data to obtain point cloud data;
and the behavior detection module is used for detecting the driving behavior corresponding to the detection type identifier on the point cloud data to obtain a driving behavior detection result.
To solve the above technical problem, the embodiments of the present application further provide a computer device, where the computer device includes a memory and a processor, where the memory stores computer readable instructions, and the processor executes the computer readable instructions to implement the steps of the driving behavior detection method based on artificial intelligence as described above.
To solve the above technical problem, the embodiments of the present application further provide a computer readable storage medium, where computer readable instructions are stored on the computer readable storage medium, and the computer readable instructions implement the steps of the driving behavior detection method based on artificial intelligence as described above when being executed by a processor.
Compared with the prior art, the embodiments of the present application mainly have the following beneficial effects: detection laser is emitted into the in-vehicle environment through a measuring device disposed at a preset position inside the vehicle, and initial point cloud data is obtained from the received reflected laser. Depending on the position of the measuring device, the initial point cloud data may focus on passengers or on the driver, and because the detection emphasis differs for each, a detection type identifier is added to the initial point cloud data according to the position of the measuring device. The initial point cloud data is preprocessed so that valid, useful data points are retained, yielding the point cloud data. Driving behavior detection focused on either passengers or the driver is then performed on the point cloud data according to its detection type identifier, producing a driving behavior detection result. Because the detection targeted at passengers or the driver is performed through point cloud data, and point cloud data is generated by laser and is not easily disturbed by external factors, the accuracy of driving behavior detection is improved.
Drawings
For a clearer description of the solution in the present application, a brief description will be given below of the drawings that are needed in the description of the embodiments of the present application, it being obvious that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a flow chart of one embodiment of an artificial intelligence based driving behavior detection method according to the present application;
FIG. 2 is a flow chart of one embodiment of step S103 in FIG. 1;
FIG. 3 is a flow chart of one embodiment of step S104 in FIG. 1;
FIG. 4 is a flow chart of one embodiment of step S1042 in FIG. 3;
FIG. 5 is a flow chart of one embodiment of step S1043 of FIG. 3;
FIG. 6 is a schematic structural diagram of one embodiment of an artificial intelligence based driving behavior detection apparatus according to the present application;
FIG. 7 is a schematic structural diagram of one embodiment of a computer device according to the present application.
Description of the embodiments
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to better understand the technical solutions of the present application, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings.
It should be noted that, the driving behavior detection method provided in the embodiment of the present application is generally executed by a terminal, and accordingly, the driving behavior detection device is generally disposed in the terminal. The terminal may be a computer device in a car, with an operating system installed. The computer device may be, but is not limited to, various industrial computers, personal computers, and notebook computers. The computer device may further comprise measuring means, which may be a radar, for acquiring data points to obtain point cloud data.
FIG. 1 illustrates a flow chart of one embodiment of an artificial intelligence based driving behavior detection method according to the present application. The driving behavior detection method based on artificial intelligence comprises the following steps:
step S101, transmitting detection laser to the in-vehicle environment through a measuring device disposed at a preset position in the vehicle.
Specifically, a measuring device is disposed at a preset position inside the automobile. The measuring device may be a radar, for example a lidar, which emits detection laser into the in-vehicle environment to detect characteristic quantities such as the position and speed of target objects (obstacles).
Further, the step S101 may include: acquiring the current state of the vehicle; when the current state is a static state, transmitting detection laser to the environment of the passenger area in the vehicle through a first measuring device arranged at least one preset position in the vehicle; and when the current state is a driving state, transmitting detection laser to the driving area environment in the vehicle through a second measuring device arranged at least one preset position in the vehicle.
Specifically, a terminal in the automobile controls the measuring device. The terminal can acquire the current state of the vehicle through sensors; the current state is either a stationary state or a driving state. It can be understood that a vehicle that is at rest and not being displaced is in the stationary state, and a vehicle that is traveling is in the driving state.
The way the detection laser is emitted depends on the current state of the vehicle. When the vehicle is in the stationary state, detection laser is emitted to the passenger area environment in the vehicle through the first measuring devices disposed in the vehicle. The passenger area environment refers to the area where passengers can sit or stand, such as the seats in a car or the aisle of a bus. Because the passenger area environment is large, there is at least one first measuring device, and usually several, distributed at different preset positions; for example, the first measuring devices may be mounted on the ceiling of the passenger compartment.
Transmitting detection laser to the driving area environment in the vehicle through a second measuring device arranged in the vehicle when the vehicle is in a driving state; the driving area environment refers to an area environment where a driver is located; the number of second measuring devices is at least one, and when a plurality of second measuring devices are arranged, the second measuring devices are also distributed at different preset positions.
The first measuring device and the second measuring device are generally identical in hardware structure; different measuring devices are simply used to emit detection laser into different in-vehicle environments depending on the current state of the vehicle. It will be appreciated that the first measuring device is mainly used to collect passenger-related point cloud data and the second measuring device is mainly used to collect driver-related point cloud data. By emitting detection laser into the preset in-vehicle area through the measuring device according to the current state of the vehicle and collecting the corresponding point cloud data, targeted driving behavior detection can be performed for passengers and for the driver.
In this embodiment, the current state of the vehicle is obtained, when the vehicle is in a stationary state, the first measuring device emits detection laser to the environment of the passenger area in the vehicle, and when the vehicle is in a driving state, the second measuring device emits detection laser to the environment of the driving area in the vehicle, so as to collect the point cloud data related to the passenger or the driver, thereby performing targeted driving behavior detection for the passenger and the driver.
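The state-dependent choice between the first and second measuring devices can be sketched as follows. This is a minimal illustration; the enum values, device identifiers, and function signature are assumptions for the example, not part of the patent.

```python
from enum import Enum

class VehicleState(Enum):
    STATIONARY = "stationary"
    DRIVING = "driving"

def select_devices(state, first_devices, second_devices):
    """Pick which lidar units to fire based on the vehicle's current state."""
    if state is VehicleState.STATIONARY:
        return first_devices   # scan the passenger area environment
    return second_devices      # scan the driving area environment

# Illustrative device names only.
active = select_devices(VehicleState.DRIVING, ["cabin_top_1", "cabin_top_2"], ["dash_1"])
```

In this sketch the devices are plain identifiers; a real terminal would trigger the corresponding lidar hardware.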
Step S102, obtaining initial point cloud data according to the reflected laser received by the measuring device, wherein the reflected laser is generated by objects in the in-vehicle environment reflecting the detection laser, the initial point cloud data is provided with a detection type identifier, and the detection type identifier is determined based on the position of the measuring device.
Specifically, the in-vehicle environment contains obstacle objects such as in-vehicle personnel (passengers or the driver), seats, the vehicle body, and so on. When the detection laser strikes an obstacle it is reflected, producing reflected laser; each reflected return received by the measuring device yields a data point, and the set of all data points collected in one acquisition forms the initial point cloud data.
The terminal adds a detection type identifier to the initial point cloud data, and the detection type identifier is determined according to the position of the measuring device. According to the position of the measuring device, if the measuring device is used for collecting point cloud data of the passenger area environment in the vehicle, adding quantity behavior detection identification to the initial point cloud data; and if the measuring device is used for collecting point cloud data of the driving area environment in the vehicle, adding an operation behavior detection identifier to the initial point cloud data.
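The tagging step can be sketched as below. The mapping table, identifier strings, and function name are hypothetical; the real association between a device's position and the area it observes would be configured per vehicle.

```python
# Hypothetical table mapping each device's mounting position to the
# in-vehicle area it observes.
DEVICE_AREA = {"cabin_top_1": "passenger_area", "dash_1": "driving_area"}

def tag_point_cloud(points, device_id):
    """Attach the detection type identifier implied by the device's position:
    passenger-area devices yield quantity behavior detection, driving-area
    devices yield operation behavior detection."""
    area = DEVICE_AREA[device_id]
    tag = "quantity_behavior" if area == "passenger_area" else "operation_behavior"
    return {"points": points, "detection_type": tag}
```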
Step S103, preprocessing the initial point cloud data to obtain point cloud data.
Specifically, after the terminal obtains the initial point cloud data through the measuring device, preprocessing is performed on the initial point cloud data, wherein the preprocessing can be to identify and remove sparse features or noise points in the initial point cloud data, remove useless data points and obtain the point cloud data.
Further, as shown in fig. 2, the step S103 may include:
step S1031, performing clustering processing on the initial point cloud data to obtain first point cloud data.
Specifically, the data points in the initial point cloud data may come from multiple obstacles and may contain noise points. Noise points can be removed with the density-based DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm, which improves the accuracy of the point cloud data and determines which obstacle each data point corresponds to, yielding the first point cloud data. DBSCAN defines a cluster as the largest set of density-connected points; it divides regions of sufficiently high density into clusters and can find clusters of arbitrary shape in a noisy spatial database.
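A simplified stand-in for this clustering step is shown below: it keeps only DBSCAN core points, i.e. points with at least `min_pts` neighbours (themselves included) within radius `eps`, and discards the rest as noise. A full implementation, e.g. `sklearn.cluster.DBSCAN`, also retains border points and assigns cluster labels; the `eps` and `min_pts` values here are illustrative.

```python
import numpy as np

def remove_noise(points, eps=0.1, min_pts=5):
    """Drop points that fail the DBSCAN core-point criterion:
    fewer than `min_pts` points within distance `eps`."""
    pts = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances between all points.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    neighbour_counts = (d <= eps).sum(axis=1)  # self counts as a neighbour
    return pts[neighbour_counts >= min_pts]
```

The O(n^2) distance matrix is fine for a sketch; a production system would use a spatial index.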
Step S1032, filtering the first point cloud data according to the preset space coordinate information to obtain the point cloud data.
Specifically, each data point in the point cloud carries spatial coordinates that represent the spatial position of the obstacle that reflected the detection laser. A car contains fixed objects, such as the vehicle body and hand rails, whose data points are stationary and can be known in advance. The terminal records the spatial positions of these fixed obstacles in advance to obtain the spatial coordinate information.
After the first point cloud data is obtained, it is evaluated against the spatial coordinate information to determine which data points come from fixed-position obstacles. These obstacles are of no use for driving behavior detection and only increase the amount of data to be processed, so their data points are filtered out to obtain the point cloud data.
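The coordinate-based filtering might look like the sketch below; the tolerance value, function name, and the idea of matching points against recorded fixed-object coordinates within a radius are assumptions about one plausible realization.

```python
import numpy as np

def filter_fixed_objects(points, fixed_coords, tol=0.05):
    """Drop data points lying within `tol` of any pre-recorded fixed
    obstacle (vehicle body, hand rails, seat frames, ...)."""
    pts = np.asarray(points, dtype=float)
    fixed = np.asarray(fixed_coords, dtype=float)
    # Distance from every point to every recorded fixed coordinate.
    d = np.linalg.norm(pts[:, None, :] - fixed[None, :, :], axis=-1)
    keep = (d > tol).all(axis=1)  # keep points far from all fixed objects
    return pts[keep]
```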
In this embodiment, clustering is performed on the initial point cloud data to remove noise points, so as to obtain first point cloud data; and determining and filtering data points from the fixed-position obstacle in the first point cloud data according to the preset space coordinate information, so that additional data processing is avoided, and the subsequent data processing efficiency is improved.
Step S104, driving behavior detection corresponding to the detection type identification is carried out on the point cloud data, and a driving behavior detection result is obtained.
Specifically, the point cloud data may be passenger-related or driver-related, and the two can be distinguished by the detection type identifier.
The emphasis of the detection also differs between passengers and the driver: for passenger-related point cloud data the detection may be whether the vehicle is overloaded, while for driver-related point cloud data it may be whether the driver's driving operation is normal. Therefore, the driving behavior detection corresponding to the detection type identifier is performed on the point cloud data to obtain the driving behavior detection result.
In this embodiment, a measuring device disposed at a preset position in the vehicle emits detection laser into the in-vehicle environment, and initial point cloud data is obtained from the received reflected laser. Depending on the position of the measuring device, the initial point cloud data may focus on passengers or on the driver, and because the detection emphasis differs for each, a detection type identifier is added to the initial point cloud data according to the position of the measuring device. The initial point cloud data is preprocessed so that valid, useful data points are retained, yielding the point cloud data. Driving behavior detection focused on either passengers or the driver is then performed on the point cloud data according to its detection type identifier, producing a driving behavior detection result. Because the detection targeted at passengers or the driver is performed through point cloud data, and point cloud data is generated by laser and is not easily disturbed by external factors, the accuracy of driving behavior detection is improved.
Further, as shown in fig. 3, the step S104 may include:
step S1041, inputting the point cloud data into a personnel feature extraction network to obtain personnel feature information in the point cloud data, wherein the personnel feature information comprises the number of identified target personnel, the personnel point cloud data of each target personnel and the corresponding key points thereof.
Specifically, the point cloud data are input into a personnel feature extraction network, the personnel feature extraction network is built based on a neural network and trained, and feature extraction can be performed on the point cloud data so as to realize three-dimensional target detection. In one embodiment, the person feature extraction network may be built based on PointRCNN.
The personnel feature extraction network outputs the personnel feature information in the point cloud data, which includes the number of identified target personnel, the personnel point cloud data of each target person, and the corresponding key points. The network can identify the personnel in the in-vehicle environment, i.e. the target personnel, from the point cloud data, giving the number of target personnel. It can also identify which data points come from which target person; the identified data points form the outline of that person in the point cloud's virtual space and constitute the personnel point cloud data of the target person. In the present application, a plurality of key points are preset; they represent key positions on the surface of the human body, such as the center of the forehead, the centers of the eyes, and the shoulder joints, and together they form a topological structure in three-dimensional space that can describe and represent a person's pose. The personnel feature extraction network can identify these key points from the personnel point cloud data.
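The network's output interface described above can be mirrored by a simple container; the class name, field names, and array shapes below are illustrative assumptions rather than the actual network output format.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class PersonFeatures:
    """Hypothetical container for the personnel feature information:
    headcount, one point cloud per detected person, and that person's
    key points in 3-D space."""
    person_count: int
    person_clouds: List[np.ndarray]  # one (Ni, 3) array per target person
    keypoints: List[np.ndarray]      # one (K, 3) key-point array per person

# Example with one detected person and 17 assumed body key points.
features = PersonFeatures(
    person_count=1,
    person_clouds=[np.zeros((120, 3))],
    keypoints=[np.zeros((17, 3))],
)
```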
In step S1042, when the detection type identifier is a quantity behavior detection identifier, quantity behavior detection is performed according to the personnel characteristic information to obtain a quantity behavior detection result.
Specifically, the point cloud data may be passenger-related or driver-related, and the two can be distinguished by the detection type identifier. When the detection type identifier is a quantity behavior detection identifier, the point cloud data is passenger-related; when the detection type identifier is an operation behavior detection identifier, the point cloud data is driver-related.
The emphasis of detection differs between passengers and the driver. Detection for passengers generally concerns the number of passengers, that is, quantity behavior detection performed while the vehicle is stationary, yielding a quantity behavior detection result. For example, when a bus arrives at a bus stop and pauses while passengers board and alight, the measurement device emits detection laser toward the passenger area in the bus to obtain point cloud data, and the number of passengers in the bus is detected from the point cloud data.
In step S1043, when the detection type identifier is an operation behavior detection identifier, operation behavior detection is performed according to the personnel feature information, so as to obtain an operation behavior detection result.
Specifically, when the detection type identifier is the operation behavior detection identifier, the point cloud data relates to the driver. Detection for the driver generally checks whether the driver's driving operations meet the specification while the vehicle is running, that is, operation behavior detection, yielding an operation behavior detection result.
Step S1044, a quantity behavior detection result or an operation behavior detection result is determined as a driving behavior detection result.
Specifically, the quantity behavior detection result or the operation behavior detection result is taken as the driving behavior detection result acquired at the present time.
In this embodiment, the point cloud data is input into the personnel feature extraction network to obtain personnel feature information, which includes the number of identified target personnel, the personnel point cloud data of each target person, and the corresponding key points. The personnel feature information reflects the distribution and poses of persons in the vehicle; quantity behavior detection or operation behavior detection is then performed according to the detection type identifier and the personnel feature information, realizing driving behavior monitoring at different levels.
Further, as shown in fig. 4, the step S1042 may include:
Step S10421, obtaining the number of persons in the personnel feature information and the personnel point cloud data of each target person.

Specifically, the personnel feature information includes the number of persons and the personnel point cloud data of each target person; the data points in the personnel point cloud data form the outline of a passenger in the point cloud virtual space, so the position of the passenger can be determined from the personnel point cloud data.
Step S10422, obtaining a preset total threshold and a personnel density threshold.
Specifically, a preset total number threshold and a preset personnel density threshold are obtained; the total number threshold is the preset maximum number of passengers in the vehicle, and the personnel density threshold is the preset maximum passenger density in the vehicle.
In step S10423, the number of persons is compared with the total number threshold to perform the first quantity behavior detection, obtaining a first quantity behavior detection result.

Specifically, the number of persons is compared with the total number threshold to realize the first quantity behavior detection. It can be understood that the first quantity behavior detection checks whether the number of passengers in the vehicle has reached the total number threshold, thereby detecting whether the vehicle is overloaded; the first quantity behavior detection result is obtained from the comparison.
Step S10424, performing the second quantity behavior detection according to the personnel point cloud data of each target person and the personnel density threshold, to obtain a second quantity behavior detection result.

Specifically, the personnel point cloud data locates each passenger, so the distribution of passengers in the vehicle can be obtained. From the personnel point cloud data, the passenger density in a given area can be calculated and compared with the personnel density threshold, realizing the second quantity behavior detection and obtaining a second quantity behavior detection result; the second quantity behavior detection checks whether passengers are too dense in a given area. It will be appreciated that in a bus or similar scenario, overly dense passengers in one area pose a safety risk and leave the in-vehicle space underutilized (for example, passengers gathering near the door prevent subsequent passengers from boarding), so the second quantity behavior detection determines whether passengers are too dense in a given area.

It will be appreciated that the preset total number threshold and personnel density threshold are typically set below the actual capacity limits, because the risk is already high once the number of passengers or the passenger density reaches the actual limit. Setting the preset thresholds below the actual limits therefore enables early detection.
Step S10425, generating the quantity behavior detection result based on the first quantity behavior detection result and the second quantity behavior detection result.

Specifically, the first quantity behavior detection result and the second quantity behavior detection result are combined to obtain the quantity behavior detection result.
In this embodiment, the number of persons in the personnel feature information and the personnel point cloud data of each target person are obtained; the personnel point cloud data locates each passenger. The number of persons is compared with the preset total number threshold to determine whether the vehicle is overloaded, yielding the first quantity behavior detection result; based on the personnel point cloud data of each target person and the personnel density threshold, whether passengers are too dense in a given area is determined, yielding the second quantity behavior detection result.
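The two quantity behavior detections above can be sketched roughly as follows, assuming each person is localized by the centroid of their point cloud and density is estimated on a simple floor grid; the function name, parameters, and grid scheme are all illustrative assumptions, not the application's method:

```python
from collections import Counter
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]

def quantity_behavior_detection(
    persons: List[List[Point]],   # non-empty personnel point cloud per target person
    total_threshold: int,         # preset total number threshold
    density_threshold: int,       # preset personnel density threshold (persons per cell)
    cell_size: float = 1.0,       # illustrative floor-grid cell size in meters
) -> Dict[str, bool]:
    # First quantity behavior detection: compare the person count with the
    # total number threshold to check for overloading.
    overloaded = len(persons) >= total_threshold

    # Second quantity behavior detection: locate each person by the centroid
    # of their point cloud, bucket the positions into floor-grid cells, and
    # compare per-cell occupancy with the density threshold.
    cells = Counter()
    for cloud in persons:
        cx = sum(p[0] for p in cloud) / len(cloud)
        cy = sum(p[1] for p in cloud) / len(cloud)
        cells[(int(cx // cell_size), int(cy // cell_size))] += 1
    too_dense = any(n >= density_threshold for n in cells.values())

    return {"overloaded": overloaded, "too_dense": too_dense}
```

The returned flags correspond to the first and second quantity behavior detection results, which the next step combines into the overall quantity behavior detection result.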
Further, as shown in fig. 5, the step S1043 may include:
Step S10431, obtaining key points in the personnel point cloud data corresponding to the driver from the personnel feature information.

Specifically, when the detection type identifier is an operation behavior detection identifier, the point cloud data generally relates to the driver, and the driver's key points are obtained from the personnel feature information during detection.
Step S10432, obtaining preset standard key points.
Specifically, the application presets a plurality of standard key points; they are equal in number to the key points and correspond to the same surface positions of the human body.
Step S10433, calculating the pose offset value of the driver according to the key points and the standard key points.
Specifically, each key point has spatial coordinates and a timestamp, where the timestamp may be the time at which the detection laser was emitted. The key points form a topological structure of the driver's body parts in three-dimensional space, describing and representing the driver's pose during driving.

Similarly, the standard key points also form a topological structure, which describes and represents the standard pose expected of the driver during driving.

The terminal calculates the degree of difference between the topological structure formed by the key points and that formed by the standard key points to obtain the pose offset value, which reflects how far the driver's pose during driving deviates from the expected standard pose.
In one embodiment, the pose offset value comprises a first offset value and a second offset value. The driver's center position is determined from the key points, the driver's standard center position is determined from the standard key points, and the first offset value is obtained from the distance between the two center positions. For example, if the driver's sitting posture matches the standard posture but the seat position is too far forward or backward, driving operations may be impaired; this difference is estimated by the first offset value.

In driving regulations, there are often requirements for the driver's sitting posture, such as sitting upright, facing forward, and not performing actions unrelated to driving; the standard key points are determined according to these regulations. If the driver violates these regulations, the distribution of the key points will differ greatly from that of the standard key points. The key points and the standard key points correspond one to one; for example, both include a key point representing the forehead center. The distance between each corresponding pair in the point cloud virtual space is calculated from the spatial coordinates, and these distances are summed to obtain the second offset value. The first offset value and the second offset value are then combined in a weighted operation to obtain the pose offset value.
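A minimal sketch of this pose offset calculation, assuming key points and standard key points are matched by name and the two offsets are combined with illustrative weights (the weighting scheme itself is not specified in the application):

```python
import math
from typing import Dict, Tuple

Point = Tuple[float, float, float]

def pose_offset_value(
    keypoints: Dict[str, Point],           # driver's key points, keyed by body position
    standard_keypoints: Dict[str, Point],  # preset standard key points, same keys
    w_center: float = 0.5,                 # illustrative weights for the weighted operation
    w_distance: float = 0.5,
) -> float:
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(3))

    # First offset value: distance between the driver's center position and
    # the standard center position.
    first = math.dist(centroid(list(keypoints.values())),
                      centroid(list(standard_keypoints.values())))

    # Second offset value: sum of distances between corresponding key points.
    second = sum(math.dist(keypoints[name], standard_keypoints[name])
                 for name in standard_keypoints)

    # Weighted combination yields the pose offset value.
    return w_center * first + w_distance * second
```

An identical pose gives an offset of zero; any deviation in seat position or posture raises the value through the first or second term respectively.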
Step S10434, generating an operation behavior detection result based on the pose offset value.
Specifically, an offset value threshold may be preset and compared with the pose offset value; if the pose offset value is greater than or equal to the offset value threshold, the driver's current pose deviates from the standard, the pose is judged abnormal, and a potential risk exists.

In one embodiment, multiple offset value thresholds may be provided, and the degree of abnormality of the driver's pose is graded by comparing the pose offset value with each of them.
In one embodiment, the detection laser may be emitted at a preset frequency to obtain point cloud data for a plurality of frames. When the driver's pose is judged abnormal from the point cloud data of one frame, the point cloud data of the next several frames is acquired and likewise checked for pose abnormality. If the driver's pose is abnormal in a preset number of consecutive frames, it is determined that the driver exhibits dangerous driving behavior, and a corresponding driving behavior detection result is generated.
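This consecutive-frame confirmation can be sketched as follows; the frame representation and the abnormality predicate are assumptions for illustration:

```python
from typing import Callable, Iterable

def confirm_dangerous_driving(
    frames: Iterable,                       # per-frame point cloud data, any representation
    is_abnormal: Callable[[object], bool],  # e.g. pose offset value >= threshold
    required_consecutive: int = 3,          # illustrative preset number of frames
) -> bool:
    """Report dangerous driving only when the pose is abnormal in a preset
    number of consecutive frames, filtering out one-off detections."""
    streak = 0
    for frame in frames:
        if is_abnormal(frame):
            streak += 1
            if streak >= required_consecutive:
                return True
        else:
            streak = 0  # a normal frame resets the count
    return False
```

Requiring several consecutive abnormal frames trades a short detection delay for robustness against single-frame noise.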
In this embodiment, key points in the personnel point cloud data corresponding to the driver are obtained from the personnel feature information; the key points describe the driver's pose during driving. Preset standard key points describing the expected driver pose are obtained. The driver's pose offset value is then calculated from the key points and the standard key points; it represents how far the driver's pose deviates from the standard pose, so the driver's driving operation can be evaluated and an operation behavior detection result generated.
Further, after step S104, the method may further include: when abnormal driving behavior is determined to exist according to the driving behavior detection result, generating driving behavior warning information according to the driving behavior detection result; and broadcasting the driving behavior warning information as a driving reminder.

Specifically, the driving behavior detection result may indicate whether the current vehicle has abnormal driving behavior, including whether passengers in the vehicle are overloaded, whether passengers are too dense in a given area, and whether the driver's pose is abnormal or dangerous driving behavior occurs during driving. The terminal can generate corresponding driving behavior warning information according to the driving behavior detection result and broadcast it, for example, to remind the driver that the vehicle is full and boarding should stop; to remind passengers to avoid gathering and move toward less crowded areas of the vehicle; or to alert the driver to concentrate on driving.
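A hypothetical mapping from a driving behavior detection result to broadcastable warning messages might look like this; the result keys and message texts are illustrative only:

```python
from typing import Dict, List

def generate_warnings(result: Dict[str, bool]) -> List[str]:
    """Turn a driving behavior detection result into warning messages to
    broadcast; keys and wording are assumptions, not the application's."""
    warnings = []
    if result.get("overloaded"):
        warnings.append("The vehicle is at capacity; please stop boarding.")
    if result.get("too_dense"):
        warnings.append("Please avoid gathering and move to a less crowded area.")
    if result.get("dangerous_driving"):
        warnings.append("Driver, please concentrate on driving.")
    return warnings
```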
In this embodiment, when there is an abnormal driving behavior, driving behavior warning information is generated according to the driving behavior detection result, and the driving behavior warning information is broadcast to remind the passenger or the driver, so as to reduce the potential safety risk in the vehicle.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by computer readable instructions stored in a computer readable storage medium; when executed, the instructions may perform the steps of the method embodiments above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
With further reference to fig. 6, as an implementation of the method shown in fig. 1, the present application provides an embodiment of an artificial intelligence-based driving behavior detection apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 6, the driving behavior detection apparatus 200 based on artificial intelligence according to the present embodiment includes: a laser emitting module 201, an initial generating module 202, a preprocessing module 203 and a behavior detecting module 204, wherein:
The laser emission module 201 is configured to emit detection laser to an in-vehicle environment through a measurement device disposed at a preset position in the vehicle.
The initial generation module 202 is configured to obtain initial point cloud data according to reflected laser light received by the measurement device, where the reflected laser light is generated by object reflection detection laser light in the in-vehicle environment, and the point cloud data has a detection type identifier, and the detection type identifier is determined based on the position of the measurement device.
The preprocessing module 203 is configured to preprocess the initial point cloud data to obtain point cloud data.
The behavior detection module 204 is configured to perform driving behavior detection on the point cloud data corresponding to the detection type identifier, so as to obtain a driving behavior detection result.
In this embodiment, a measuring device arranged at a preset position in the vehicle emits detection laser into the in-vehicle environment, and initial point cloud data is obtained from the received reflected laser. Depending on the position of the measuring device, the initial point cloud data focuses on either passengers or the driver; since the detection emphasis differs between the two, a detection type identifier is added to the initial point cloud data according to the position of the measuring device. The initial point cloud data is preprocessed so that valid, useful data points are retained, yielding the point cloud data. According to the detection type identifier of the point cloud data, driving behavior detection focused on passengers or on the driver is performed to obtain a driving behavior detection result. The application thus performs targeted detection of passengers or the driver through point cloud data; the point cloud data is generated from laser and is not easily disturbed by external factors, improving the accuracy of driving behavior detection.
In some alternative implementations of the present embodiment, the laser emitting module 201 may include: the system comprises a state acquisition sub-module, a first transmitting sub-module and a second transmitting sub-module, wherein:
and the state acquisition sub-module is used for acquiring the current state of the vehicle.
And the first transmitting sub-module is used for transmitting detection laser to the environment of the passenger area in the vehicle through a first measuring device arranged at least one preset position in the vehicle when the current state is a static state.
And the second transmitting sub-module is used for transmitting detection laser to the driving area environment in the vehicle through a second measuring device arranged at least one preset position in the vehicle when the current state is the driving state.
In this embodiment, the current state of the vehicle is obtained, when the vehicle is in a stationary state, the first measuring device emits detection laser to the environment of the passenger area in the vehicle, and when the vehicle is in a driving state, the second measuring device emits detection laser to the environment of the driving area in the vehicle, so as to collect the point cloud data related to the passenger or the driver, thereby performing targeted driving behavior detection for the passenger and the driver.
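A minimal sketch of this state-dependent emission, with a hypothetical device interface (the state values and `emit` method are assumptions):

```python
from typing import List

def emit_detection_laser(current_state: str,
                         first_devices: List,    # aimed at the passenger area
                         second_devices: List):  # aimed at the driving area
    """Select measurement devices by the vehicle's current state and trigger
    their detection laser; returns the devices that fired."""
    if current_state == "stationary":
        targets = first_devices
    elif current_state == "driving":
        targets = second_devices
    else:
        targets = []  # unknown state: emit nothing
    for dev in targets:
        dev.emit()
    return targets
```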
In some alternative implementations of the present embodiment, the preprocessing module 203 may include: clustering submodule and filtering submodule, wherein:
And the clustering sub-module is used for carrying out clustering processing on the initial point cloud data to obtain first point cloud data.
And the filtering sub-module is used for filtering the first point cloud data according to the preset space coordinate information to obtain the point cloud data.
In this embodiment, clustering is performed on the initial point cloud data to remove noise points, so as to obtain first point cloud data; and determining and filtering data points from the fixed-position obstacle in the first point cloud data according to the preset space coordinate information, so that additional data processing is avoided, and the subsequent data processing efficiency is improved.
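A rough sketch of this preprocessing, using a simple neighbor-count rule in place of the unspecified clustering algorithm and axis-aligned bounding boxes for the preset spatial coordinate information of fixed obstacles; all parameters are illustrative:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float, float]
Box = Tuple[Point, Point]  # (min corner, max corner) of a fixed obstacle

def preprocess(initial: List[Point],
               fixed_boxes: List[Box],
               min_neighbors: int = 3,
               radius: float = 0.3) -> List[Point]:
    def neighbors(p: Point) -> int:
        return sum(1 for q in initial
                   if p is not q and math.dist(p, q) <= radius)

    # Clustering step: keep points with enough nearby points, removing
    # isolated noise points (first point cloud data).
    first = [p for p in initial if neighbors(p) >= min_neighbors]

    def in_box(p: Point, box: Box) -> bool:
        lo, hi = box
        return all(lo[i] <= p[i] <= hi[i] for i in range(3))

    # Filtering step: discard points inside any fixed-obstacle box.
    return [p for p in first if not any(in_box(p, b) for b in fixed_boxes)]
```

The pairwise neighbor count is O(n²) and stands in for a real clustering method (e.g. DBSCAN-style density clustering); it only illustrates the noise-removal-then-filter pipeline.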
In some alternative implementations of the present embodiment, the behavior detection module 204 may include: the device comprises a feature extraction sub-module, a quantity detection sub-module, a behavior detection sub-module and a result determination sub-module, wherein:
the feature extraction sub-module is used for inputting the point cloud data into a personnel feature extraction network to obtain personnel feature information in the point cloud data, wherein the personnel feature information comprises the number of identified target personnel, the personnel point cloud data of each target personnel and the corresponding key points of the personnel point cloud data.
And the quantity detection sub-module is used for performing quantity behavior detection according to the personnel feature information when the detection type identifier is a quantity behavior detection identifier, so as to obtain a quantity behavior detection result.
And the behavior detection sub-module is used for detecting the operation behaviors according to the personnel characteristic information when the detection type identifier is the operation behavior detection identifier, so as to obtain an operation behavior detection result.
And the result determination submodule is used for determining the quantity behavior detection result or the operation behavior detection result as a driving behavior detection result.
In this embodiment, the point cloud data is input into the personnel feature extraction network to obtain personnel feature information, which includes the number of identified target personnel, the personnel point cloud data of each target person, and the corresponding key points. The personnel feature information reflects the distribution and poses of persons in the vehicle; quantity behavior detection or operation behavior detection is then performed according to the detection type identifier and the personnel feature information, realizing driving behavior monitoring at different levels.
In some optional implementations of this embodiment, the number detection sub-module may include: the device comprises an acquisition unit, a threshold acquisition unit, a first detection unit, a second detection unit and a result determination unit, wherein:
the acquisition unit is used for acquiring the number of people in the personnel characteristic information and personnel point cloud data of each target person.
The threshold value acquisition unit is used for acquiring a preset total number threshold value and a personnel density threshold value.
The first detection unit is used for comparing the number of persons with the total number threshold to perform the first quantity behavior detection and obtain a first quantity behavior detection result.

And the second detection unit is used for performing the second quantity behavior detection according to the personnel point cloud data of each target person and the personnel density threshold, to obtain a second quantity behavior detection result.

And a result determination unit for generating the quantity behavior detection result based on the first quantity behavior detection result and the second quantity behavior detection result.

In this embodiment, the number of persons in the personnel feature information and the personnel point cloud data of each target person are obtained; the personnel point cloud data locates each passenger. The number of persons is compared with the preset total number threshold to determine whether the vehicle is overloaded, yielding the first quantity behavior detection result; based on the personnel point cloud data of each target person and the personnel density threshold, whether passengers are too dense in a given area is determined, yielding the second quantity behavior detection result.
In some optional implementations of the present embodiment, the behavior detection sub-module may include: the device comprises a key point acquisition unit, a standard acquisition unit, an offset calculation unit and a detection result generation unit, wherein:
The key point acquisition unit is used for acquiring key points in the personnel point cloud data corresponding to the driver from the personnel characteristic information.
The standard acquisition unit is used for acquiring preset standard key points.
And the offset calculation unit is used for calculating the pose offset value of the driver according to the key points and the standard key points.
And the detection result generation unit is used for generating an operation behavior detection result based on the pose offset value.
In this embodiment, key points in the personnel point cloud data corresponding to the driver are obtained from the personnel feature information; the key points describe the driver's pose during driving. Preset standard key points describing the expected driver pose are obtained. The driver's pose offset value is then calculated from the key points and the standard key points; it represents how far the driver's pose deviates from the standard pose, so the driver's driving operation can be evaluated and an operation behavior detection result generated.
In some optional implementations of the present embodiment, the artificial intelligence based driving behavior detection apparatus 200 may further include: the information generation module and the information broadcasting module, wherein:
and the information generation module is used for generating driving behavior warning information according to the driving behavior detection result when the driving behavior abnormality exists according to the driving behavior detection result.
And the information broadcasting module is used for broadcasting driving behavior warning information so as to carry out driving reminding.
In this embodiment, when there is an abnormal driving behavior, driving behavior warning information is generated according to the driving behavior detection result, and the driving behavior warning information is broadcast to remind the passenger or the driver, so as to reduce the potential safety risk in the vehicle.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 7, fig. 7 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 3 comprises a memory 31, a processor 32, and a network interface 33 communicatively connected to each other via a system bus. It should be noted that only the computer device 3 with components 31-33 is shown in the figure, but it should be understood that not all of the illustrated components need be implemented, and more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices, and so on.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 31 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 31 may be an internal storage unit of the computer device 3, such as a hard disk or memory of the computer device 3. In other embodiments, the memory 31 may also be an external storage device of the computer device 3, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the computer device 3. Of course, the memory 31 may also comprise both an internal storage unit and an external storage device of the computer device 3. In this embodiment, the memory 31 is generally used to store the operating system and various application software installed on the computer device 3, such as the computer readable instructions of the artificial intelligence-based driving behavior detection method. Further, the memory 31 may be used to temporarily store various types of data that have been output or are to be output.
The processor 32 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 32 is typically used to control the overall operation of the computer device 3. In this embodiment, the processor 32 is configured to execute computer readable instructions stored in the memory 31 or process data, such as executing computer readable instructions of the driving behavior detection method based on artificial intelligence.
The network interface 33 may comprise a wireless network interface or a wired network interface, which network interface 33 is typically used for establishing a communication connection between the computer device 3 and other electronic devices.
The computer device provided in the present embodiment may perform the above-described driving behavior detection method based on artificial intelligence. The driving behavior detection method based on artificial intelligence herein may be the driving behavior detection method based on artificial intelligence of the above-described respective embodiments.
In this embodiment, a measuring device arranged at a preset position in the vehicle emits detection laser into the in-vehicle environment, and initial point cloud data is obtained from the received reflected laser. Depending on the position of the measuring device, the initial point cloud data focuses on either passengers or the driver; since the detection emphasis differs between the two, a detection type identifier is added to the initial point cloud data according to the position of the measuring device. The initial point cloud data is preprocessed so that valid, useful data points are retained, yielding the point cloud data. According to the detection type identifier of the point cloud data, artificial intelligence-based driving behavior detection focused on passengers or on the driver is performed to obtain an artificial intelligence-based driving behavior detection result. The application thus performs targeted detection of passengers or the driver through point cloud data; the point cloud data is generated from laser and is not easily disturbed by external factors, improving the accuracy of artificial intelligence-based driving behavior detection.
The present application also provides another embodiment, namely, a computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the artificial intelligence-based driving behavior detection method as described above.
In this embodiment, a measuring device arranged at a preset position in the vehicle emits detection laser into the in-vehicle environment, and initial point cloud data is obtained from the received reflected laser. Depending on the position of the measuring device, the initial point cloud data focuses on either passengers or the driver; since the detection emphasis differs between the two, a detection type identifier is added to the initial point cloud data according to the position of the measuring device. The initial point cloud data is preprocessed so that valid, useful data points are retained, yielding the point cloud data. According to the detection type identifier of the point cloud data, artificial intelligence-based driving behavior detection focused on passengers or on the driver is performed to obtain an artificial intelligence-based driving behavior detection result. The application thus performs targeted detection of passengers or the driver through point cloud data; the point cloud data is generated from laser and is not easily disturbed by external factors, improving the accuracy of artificial intelligence-based driving behavior detection.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or alternatively entirely in hardware; in many cases the former is preferred. Based on this understanding, the technical solution of the present application, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and comprising several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments described above are only some, not all, of the embodiments of the present application; the drawings show preferred embodiments but do not limit the patent scope of the application. This application may be embodied in many different forms; these embodiments are provided so that the disclosure will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. Any equivalent structure made using the specification and drawings of the application, whether applied directly or indirectly in other related technical fields, likewise falls within the protection scope of the application.

Claims (10)

1. A driving behavior detection method based on artificial intelligence, characterized by comprising the following steps:
transmitting detection laser to the environment in the vehicle through a measuring device arranged at a preset position in the vehicle;
obtaining initial point cloud data according to reflected laser received by the measuring device, wherein the reflected laser is generated by an object in the in-vehicle environment reflecting the detection laser, the initial point cloud data is provided with a detection type identifier, and the detection type identifier is determined based on the position of the measuring device;
preprocessing the initial point cloud data to obtain point cloud data;
and performing driving behavior detection corresponding to the detection type identifier on the point cloud data to obtain a driving behavior detection result.
2. The driving behavior detection method based on artificial intelligence according to claim 1, wherein the step of emitting the detection laser to the in-vehicle environment through the measuring device disposed at the in-vehicle preset position comprises:
acquiring the current state of the vehicle;
when the current state is a static state, transmitting detection laser to the passenger area environment in the vehicle through a first measuring device arranged at at least one preset position in the vehicle;
and when the current state is a driving state, transmitting detection laser to the driving area environment in the vehicle through a second measuring device arranged at at least one preset position in the vehicle.
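The state-based device selection in claim 2 can be sketched as a simple dispatch. The state strings and device handles here are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch of claim 2: a static vehicle scans the passenger area
# with the first measuring device(s); a moving vehicle scans the driving
# area with the second measuring device(s).

def select_measuring_devices(current_state, first_devices, second_devices):
    if current_state == "static":
        return first_devices   # passenger-area emitters
    if current_state == "driving":
        return second_devices  # driving-area emitters
    raise ValueError(f"unknown vehicle state: {current_state}")

print(select_measuring_devices("static", ["rear_lidar"], ["front_lidar"]))  # → ['rear_lidar']
```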
3. The driving behavior detection method based on artificial intelligence according to claim 1, wherein the step of preprocessing the initial point cloud data to obtain point cloud data comprises:
clustering the initial point cloud data to obtain first point cloud data;
and filtering the first point cloud data according to preset space coordinate information to obtain point cloud data.
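Claim 3's two preprocessing stages can be illustrated with a toy clustering pass followed by a spatial filter. The distance-based clustering and the coordinate bounds below are assumptions; the patent does not fix a particular algorithm:

```python
import math

# Illustrative sketch of claim 3: cluster the initial point cloud to
# discard sparse noise, then filter by preset space coordinates.

def cluster(points, eps=0.5, min_pts=2):
    # Keep points that have at least (min_pts - 1) neighbours within
    # eps; isolated returns are treated as noise and dropped.
    kept = []
    for p in points:
        neighbours = sum(
            1 for q in points
            if p is not q and math.dist(p, q) <= eps
        )
        if neighbours >= min_pts - 1:
            kept.append(p)
    return kept

def filter_by_space(points, bounds):
    # Drop points outside the preset cabin coordinate box.
    (x0, x1), (y0, y1), (z0, z1) = bounds
    return [
        (x, y, z) for (x, y, z) in points
        if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1
    ]

raw = [(0.1, 0.1, 1.0), (0.2, 0.1, 1.0), (4.0, 4.0, 4.0)]
first = cluster(raw)  # the isolated point (4, 4, 4) is removed
cloud = filter_by_space(first, ((-2, 2), (-2, 2), (0, 2)))
print(len(cloud))  # → 2
```

In practice a library clusterer (e.g. a DBSCAN-style algorithm) would replace the toy `cluster` function.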
4. The driving behavior detection method based on artificial intelligence according to claim 1, wherein the step of performing driving behavior detection corresponding to the detection type identifier on the point cloud data to obtain a driving behavior detection result includes:
inputting the point cloud data into a personnel feature extraction network to obtain personnel feature information in the point cloud data, wherein the personnel feature information comprises the number of identified target personnel, the personnel point cloud data of each target personnel and corresponding key points;
when the detection type identifier is a quantity behavior detection identifier, performing quantity behavior detection according to the personnel characteristic information to obtain a quantity behavior detection result;
when the detection type identifier is an operation behavior detection identifier, performing operation behavior detection according to the personnel characteristic information to obtain an operation behavior detection result;
and determining the quantity behavior detection result or the operation behavior detection result as a driving behavior detection result.
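Claim 4's dispatch on the detection type identifier can be sketched as follows. The feature-extraction step is stubbed with toy data, since the actual personnel feature extraction network is not specified here; all names are illustrative:

```python
# Hedged sketch of claim 4: extract personnel features, then branch on
# the detection type identifier to a quantity or operation check.

def extract_personnel_features(point_cloud):
    # Stand-in for the personnel feature extraction network: returns the
    # number of target personnel, per-person point clouds, and key points.
    return {
        "num_people": 2,
        "person_clouds": [[(0.1, 0.2, 1.0)], [(0.9, 0.2, 1.0)]],
        "key_points": [[(0.1, 0.5, 1.2)], [(0.9, 0.5, 1.2)]],
    }

def driving_behavior_detection(point_cloud, detection_type_id):
    features = extract_personnel_features(point_cloud)
    if detection_type_id == "quantity":
        # e.g. head-count / density checks on the features
        return ("quantity_result", features["num_people"])
    if detection_type_id == "operation":
        # e.g. pose checks on the driver's key points
        return ("operation_result", features["key_points"][0])
    raise ValueError(f"unknown detection type identifier: {detection_type_id}")

print(driving_behavior_detection([], "quantity"))  # → ('quantity_result', 2)
```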
5. The driving behavior detection method based on artificial intelligence according to claim 4, wherein the step of performing quantity behavior detection according to the personnel characteristic information to obtain a quantity behavior detection result comprises:
acquiring the number of people in the personnel characteristic information and personnel point cloud data of each target person;
acquiring a preset total number threshold and a personnel density threshold;
performing first quantity behavior detection by comparing the number of personnel with the total number threshold, to obtain a first quantity behavior detection result;
performing second quantity behavior detection according to the personnel point cloud data of each target person and the personnel density threshold, to obtain a second quantity behavior detection result;
and generating a quantity behavior detection result based on the first quantity behavior detection result and the second quantity behavior detection result.
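The two-stage quantity check in claim 5 can be sketched as below. The thresholds and the density measure (point count per person cloud) are assumptions for illustration only:

```python
# Hedged sketch of claim 5: first check total head-count, then check
# per-person point cloud density, and combine the two results.

def first_quantity_check(num_people, total_threshold):
    # Compare the recognised head-count against the allowed total.
    return "over_capacity" if num_people > total_threshold else "ok"

def second_quantity_check(person_clouds, density_threshold):
    # Flag any person whose point cloud is denser than expected,
    # e.g. two people occupying one seat region.
    for points in person_clouds:
        if len(points) > density_threshold:
            return "density_exceeded"
    return "ok"

def quantity_behavior_result(num_people, person_clouds,
                             total_threshold=5, density_threshold=100):
    first = first_quantity_check(num_people, total_threshold)
    second = second_quantity_check(person_clouds, density_threshold)
    return "abnormal" if first != "ok" or second != "ok" else "normal"

print(quantity_behavior_result(6, [[0] * 10]))  # → abnormal
```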
6. The driving behavior detection method based on artificial intelligence according to claim 4, wherein the step of performing operation behavior detection according to the personnel characteristic information to obtain an operation behavior detection result comprises:
acquiring key points in the personnel point cloud data corresponding to the driver from the personnel characteristic information;
acquiring preset standard key points;
according to the key points and the standard key points, calculating the pose offset value of the driver;
and generating an operation behavior detection result based on the pose offset value.
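Claim 6's pose-offset computation can be illustrated by comparing detected key points against preset standard key points. The mean-distance formulation and the tolerance value are assumed choices, not taken from the patent:

```python
import math

# Sketch of claim 6: derive a pose offset value from the driver's
# detected key points versus preset standard key points, then judge
# the operation behavior from that offset.

def pose_offset(key_points, standard_points):
    # Mean Euclidean distance between corresponding key points.
    if len(key_points) != len(standard_points):
        raise ValueError("key point lists must align")
    total = sum(math.dist(k, s) for k, s in zip(key_points, standard_points))
    return total / len(key_points)

def operation_behavior_result(offset, tolerance=0.3):
    # A pose drifting beyond tolerance suggests abnormal operation,
    # e.g. leaning away from the steering wheel.
    return "abnormal" if offset > tolerance else "normal"

detected = [(0.0, 0.5, 1.0), (0.2, 0.4, 1.1)]
standard = [(0.0, 0.5, 1.0), (0.0, 0.4, 1.1)]
off = pose_offset(detected, standard)  # distances 0.0 and 0.2 → mean 0.1
print(operation_behavior_result(off))  # → normal
```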
7. The driving behavior detection method based on artificial intelligence according to claim 1, further comprising, after the step of performing driving behavior detection corresponding to the detection type identifier on the point cloud data to obtain a driving behavior detection result:
when it is determined according to the driving behavior detection result that a driving behavior abnormality exists, generating driving behavior warning information according to the driving behavior detection result;
and broadcasting the driving behavior warning information to carry out driving reminding.
8. An artificial intelligence-based driving behavior detection device, comprising:
the laser emission module is used for emitting detection laser to the environment in the vehicle through a measuring device arranged at a preset position in the vehicle;
the initial generation module is used for obtaining initial point cloud data according to the reflected laser received by the measuring device, wherein the reflected laser is generated by an object in the in-vehicle environment reflecting the detection laser, the initial point cloud data is provided with a detection type identifier, and the detection type identifier is determined based on the position of the measuring device;
the preprocessing module is used for preprocessing the initial point cloud data to obtain point cloud data;
and the behavior detection module is used for detecting the driving behavior corresponding to the detection type identifier on the point cloud data to obtain a driving behavior detection result.
9. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, implement the steps of the artificial intelligence based driving behavior detection method according to any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, implement the steps of the artificial intelligence based driving behavior detection method according to any one of claims 1 to 7.
CN202310554001.XA 2023-05-17 2023-05-17 Driving behavior detection method and device based on artificial intelligence Pending CN116469086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310554001.XA CN116469086A (en) 2023-05-17 2023-05-17 Driving behavior detection method and device based on artificial intelligence


Publications (1)

Publication Number Publication Date
CN116469086A true CN116469086A (en) 2023-07-21

Family

ID=87184471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310554001.XA Pending CN116469086A (en) 2023-05-17 2023-05-17 Driving behavior detection method and device based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN116469086A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108919218A (en) * 2018-06-07 2018-11-30 北京邮电大学 A kind of contactless number of people in car and the method and device of position judgement
CN109435689A (en) * 2018-09-25 2019-03-08 北京小米移动软件有限公司 Export method, apparatus, vehicle and the readable storage medium storing program for executing of prompt information
CN111353471A (en) * 2020-03-17 2020-06-30 北京百度网讯科技有限公司 Safe driving monitoring method, device, equipment and readable storage medium
CN111968338A (en) * 2020-07-23 2020-11-20 南京邮电大学 Driving behavior analysis, recognition and warning system based on deep learning and recognition method thereof
US20230039738A1 (en) * 2021-07-28 2023-02-09 Here Global B.V. Method and apparatus for assessing traffic impact caused by individual driving behaviors
CN115760827A (en) * 2022-11-29 2023-03-07 北京百度网讯科技有限公司 Point cloud data detection method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
汪洋浪: "基于三维激光雷达与摄像头信息融合的激进型驾驶行为检测研究", 《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》, pages 035 - 910 *

Similar Documents

Publication Publication Date Title
CN106951847B (en) Obstacle detection method, apparatus, device and storage medium
CN111062240B (en) Monitoring method and device for automobile driving safety, computer equipment and storage medium
CN111489588B (en) Vehicle driving risk early warning method and device, equipment and storage medium
US11043005B2 (en) Lidar-based multi-person pose estimation
CN111274881A (en) Driving safety monitoring method and device, computer equipment and storage medium
US20220270376A1 (en) Deterioration diagnosis device, deterioration diagnosis system, deterioration diagnosis method, and storage medium for storing program
CN110309735A (en) Exception detecting method, device, server and storage medium
GB2573738A (en) Driving monitoring
CN110895662A (en) Vehicle overload alarm method and device, electronic equipment and storage medium
US20230206652A1 (en) Systems and methods for utilizing models to detect dangerous tracks for vehicles
CN114782897A (en) Dangerous behavior detection method and system based on machine vision and deep learning
CN110544312A (en) Video display method and device in virtual scene, electronic equipment and storage device
CN110533094B (en) Evaluation method and system for driver
CN114973211A (en) Object identification method, device, equipment and storage medium
CN116469086A (en) Driving behavior detection method and device based on artificial intelligence
CN115329347A (en) Prediction method, device and storage medium based on car networking vulnerability data
CN114701934A (en) Security control method, device and system for elevator, cloud platform and storage medium
CN112700138A (en) Method, device and system for road traffic risk management
CN111524389A (en) Vehicle driving method and device
CN112233420B (en) Fault diagnosis method and device for intelligent traffic control system
CN115620248B (en) Camera calling method and system based on traffic monitoring
CN116721556B (en) Vehicle management and control method, system, equipment and medium
CN115240406B (en) Road congestion management method and device, computer readable medium and electronic equipment
CN115424211B (en) Civilized dog raising terminal operation method and device based on big data and terminal
Yun et al. A Before-and-After Study of a Collision Risk Detecting and Warning System on Local Roads

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination