CN114898342A - Method for detecting call receiving and making of non-motor vehicle driver in driving - Google Patents
- Publication number
- CN114898342A CN114898342A CN202210831651.XA CN202210831651A CN114898342A CN 114898342 A CN114898342 A CN 114898342A CN 202210831651 A CN202210831651 A CN 202210831651A CN 114898342 A CN114898342 A CN 114898342A
- Authority
- CN
- China
- Prior art keywords
- target
- motor vehicle
- frame
- target frame
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a method for detecting whether a non-motor vehicle driver is making or receiving a phone call while driving, belonging to the technical field of non-motor vehicle violation detection. The system comprises a scene acquisition module, a non-motor vehicle detection module, a driving behavior construction module, an analysis module and a data transmission module. The scene acquisition module acquires video of the scene; the non-motor vehicle detection module obtains target frames for non-motor vehicles and pedestrians, matches them, and produces a minimum adjacent rectangular target frame; the driving behavior construction module analyzes the similarity of the driver's head and hand trajectories and builds an abnormal-posture discrimination model; the analysis module determines whether the non-motor vehicle driver is making a call; the data transmission module uploads the analysis result output by the analysis module to the cloud. The method solves the problems that traditional manual inspection is time-consuming, labor-intensive and costly, achieves intelligent, real-time and effective road monitoring, and provides an effective basis for management decisions.
Description
Technical Field
The application relates to detecting phone use by non-motor vehicle drivers, and in particular to a method for detecting whether a non-motor vehicle driver is making or receiving a call while driving, belonging to the technical field of non-motor vehicle violation detection.
Background
When riding a non-motor vehicle, using an electronic device diverts the driver's attention and delays reactions, and gripping the handlebar with only one hand reduces control and the ability to respond to emergencies. If an emergency occurs ahead, a serious traffic accident may result, endangering lives and causing heavy economic loss. Accurately identifying a non-motor vehicle driver's illegal use of an electronic device in a monitoring scene is therefore important: it helps traffic police stop and punish the offender in time, and reduces the likelihood of road safety accidents.
Traditionally, monitoring of phone use by non-motor vehicle drivers relies on professionals such as traffic police performing regular patrols, which is inefficient and consumes substantial manpower and material resources. If cloud computing is used instead, uploading large amounts of video stream data to a cloud platform for centralized processing greatly increases network and storage pressure, is costly, and offers limited reliability. Therefore, edge computing and deep learning are used to perform image recognition and analysis on the camera's real-time video stream, and an Internet of Things communication channel rapidly reports the detection results to the platform. This completes the detection work effectively and solves the problems of traditional manual inspection, which is time-consuming, labor-intensive, inefficient, and costly.
Disclosure of Invention
The following presents a simplified summary of the invention in order to provide a basic understanding of some of its aspects. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to identify key or critical elements of the invention, nor to limit its scope. Its sole purpose is to present some concepts in simplified form as a prelude to the more detailed description discussed later.
In view of the above, in order to solve the technical problems of high detection cost and low efficiency in the prior art, the invention provides a detection method for a non-motor vehicle driver to make and receive calls during driving.
The first scheme is as follows: the detection system for the call receiving and making of the non-motor vehicle driver in the driving process comprises a scene acquisition module, a non-motor vehicle detection module, a driving behavior construction module, an analysis module and a data transmission module;
the scene acquisition module is used for acquiring video information of a scene;
the non-motor vehicle detection module is used for acquiring target frames of non-motor vehicles and pedestrians, matching the non-motor vehicles and the pedestrians and acquiring a minimum adjacent rectangular target frame;
the driving behavior construction module is used for analyzing the similarity of the head and hand tracks of a driver and creating an abnormal posture judgment model;
the analysis module is used for analyzing whether a non-motor vehicle driver makes a call or not;
the data transmission module is used for transmitting the analysis result output by the analysis module to the cloud.
Scheme two: a detection method for phone use by a non-motor vehicle driver while driving, implemented with the detection system of scheme one, comprising the following steps:
s1, scene monitoring data are obtained, and a detection area is obtained based on the monitoring data;
s2, acquiring target frames of the non-motor vehicles and the pedestrians based on the detection area, and matching the non-motor vehicles and the pedestrians to obtain a minimum adjacent rectangular target frame;
s3, establishing a riding state discrimination model and an abnormal posture discrimination model;
S4, constructing a target sequence based on the positional relation between the minimum adjacent rectangular target frame and the detection area, judging whether the person is in a riding state through the riding state discrimination model, judging suspected call behavior through the abnormal posture discrimination model, and judging whether the call behavior occurs continuously according to the Fréchet distance;
and S5, for a target determined to be making a call, forming message information, and sending the message data and picture data to the cloud through an HTTPS POST request.
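As an illustration of S5, the message and picture could be packaged as a JSON body for the HTTPS POST. The schema below (field names and base64 picture encoding) is an assumption, since the patent does not specify one:

```python
import base64
import json

def build_violation_message(target_id: str, timestamp: str, jpeg_bytes: bytes) -> bytes:
    """Package the analysis result (S5) as a JSON body for an HTTPS POST.
    All field names here are illustrative; the patent does not fix a schema."""
    payload = {
        "target_id": target_id,          # unique per-day target serial number
        "event": "phone_use_while_riding",
        "timestamp": timestamp,
        "picture": base64.b64encode(jpeg_bytes).decode("ascii"),
    }
    return json.dumps(payload).encode("utf-8")

body = build_violation_message("A1B2C3D4", "2022-07-15T08:30:00", b"\xff\xd8fake-jpeg")
# This body would then be sent to the cloud platform endpoint with an HTTPS
# POST, e.g. via urllib.request or a similar HTTP client.
```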
Preferably, in step S2, the target frames of the non-motor vehicle and the pedestrian are obtained based on the detection area, and the non-motor vehicle and the pedestrian are matched to obtain the minimum adjacent rectangular target frame; the method comprises the following steps:
s21, acquiring a target frame of the non-motor vehicle by utilizing a target detection algorithm yolov3 based on deep learning for each frame of the monitored image, acquiring the coordinates of the central point of the target frame of the non-motor vehicle, and generating a unique label for the target frame of the non-motor vehicle;
s22, acquiring a pedestrian target frame for each frame of the monitored image by using a target detection algorithm yolov3 based on deep learning, calculating the coordinates of the center point of the pedestrian target frame, and generating a unique label for the pedestrian target frame;
s23, for each non-motor vehicle target frame, calculating the distance between the center point of the pedestrian target frame and the center point of the non-motor vehicle target frame, and drawing a circle by taking the distance as a radius and taking the center point of the non-motor vehicle target frame as an origin;
S24, keeping the pedestrian target frames whose circle radius is smaller than a set threshold, retaining at most 3 of them;
S25, for each candidate circle, calculating the proportion of the circle's area covered by the overlapping region with the pedestrian target frame, and taking the pedestrian target frame with the largest proportion; if more than one pedestrian target frame attains the largest proportion, taking the one with the smallest circle radius; specifically:
s251, establishing a new reference system, taking the central point of the non-motor vehicle target frame as an origin, and projecting the coordinates of the monitoring area into the reference system:
s252, determining the value range of the random point;
S253, randomly generating a number of points within that range, counting how many fall inside the rectangular area, and computing the percentage;
s26, framing the pedestrian target frame and the non-motor vehicle target frame in the S25 by using a minimum adjacent rectangle to form a minimum adjacent rectangle target frame, and calculating the center point coordinate of the minimum adjacent rectangle; generating unique labels for the minimum adjacent rectangular target frame, the pedestrian target frame and the non-motor vehicle target frame;
s27, comparing the coordinates of the central point of the minimum adjacent rectangular target frame with the coordinates of the detection area in the S26, judging whether the central point of the minimum adjacent rectangular target frame is in the detection area, if not, stopping tracking the minimum adjacent rectangular target frame, and if so, tracking and analyzing the behavior of the minimum adjacent rectangular target frame; simultaneously, using a target tracking algorithm to associate the same adjacent rectangle in the continuous frame target detection images, and assigning a unique target sequence number to each adjacent rectangle until the target frame of the adjacent rectangle disappears or the adjacent rectangle leaves the detection area;
s28, if the non-motor vehicle enters the detection area again, a new target serial number should be allocated, wherein the target serial number can be formed by randomly combining 8 or more digits or letters, and each new target serial number is at least guaranteed to be unique in the current day;
S29, if the non-motor vehicle target frame cannot be matched with any "person" target frame, no operation is performed on it; if exactly one "person" target frame lies within the set distance threshold, the minimum adjacent rectangle is generated directly from that "person" target frame and the non-motor vehicle target frame.
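A minimal sketch of the center-distance matching in S21-S25, in plain Python. It simplifies the area-ratio tie-break to a nearest-candidate rule, and all function names are illustrative:

```python
import math

def match_pedestrian(vehicle_center, pedestrian_centers, max_radius):
    """Match one pedestrian box to a non-motor-vehicle box (sketch of S23-S25).
    Distances from the vehicle box center to each pedestrian box center are
    computed; candidates farther than max_radius are discarded, and the
    nearest remaining candidate is returned (a simplification of the
    area-ratio tie-break described in the patent)."""
    vx, vy = vehicle_center
    best_idx, best_dist = None, None
    for i, (px, py) in enumerate(pedestrian_centers):
        d = math.hypot(px - vx, py - vy)
        if d <= max_radius and (best_dist is None or d < best_dist):
            best_idx, best_dist = i, d
    return best_idx

def enclosing_rect(box_a, box_b):
    """Minimum adjacent (axis-aligned enclosing) rectangle of two
    (x1, y1, x2, y2) boxes, as formed in S26."""
    return (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))
```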
Preferably, the step S3 of creating the riding state discrimination model and the abnormal posture discrimination model includes the following steps:
S31, selecting a fixed point in the image as a reference point and establishing a coordinate system with the reference point as the origin; the fixed point is chosen at a corner of the detection area's edge;
s32, establishing a riding state discrimination model, which comprises a CNN characteristic extraction network, an LSTM time sequence modeling network and an FC driving state analysis network;
the CNN feature extraction network extracts features from each frame of the target monitoring image in the image sequence; after extraction, the spatial features of each frame of the target-sequence image are transformed into the data form accepted by the LSTM temporal modeling network;
each LSTM unit of the LSTM temporal modeling network receives one frame's spatial features output by the CNN as input, together with the output of the previous LSTM unit; after internal processing it outputs a set of cell states, thereby building the temporal correlation of the non-motor vehicle features. The cell states output by all LSTM units are concatenated and then fed into the fully connected FC driving-state analysis network;
the output layer of the FC driving-state analysis network has two neurons, representing "pushing" and "riding" respectively. If the neuron representing "riding" is activated with a score above a set threshold t, the output is "riding"; otherwise "pushing" is output;
and S33, creating an abnormal posture distinguishing model.
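The CNN-LSTM-FC pipeline of S32 can be sketched as follows. This is a minimal PyTorch illustration: all layer sizes are illustrative (the patent fixes none), and it simplifies the patent's design by feeding only the last LSTM output to the FC head instead of the concatenation of all cell states:

```python
import torch
import torch.nn as nn

class RidingStateNet(nn.Module):
    """Sketch of the riding-state discrimination model: a per-frame CNN
    feature extractor, an LSTM over the frame sequence, and a fully
    connected head with two outputs ("pushing" vs. "riding")."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(                  # shared by every frame
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)             # scores for ["pushing", "riding"]

    def forward(self, clip):                       # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(feats)                  # temporal modelling
        return self.fc(seq[:, -1])                 # simplification: last step only

model = RidingStateNet()
scores = model(torch.zeros(1, 5, 3, 32, 32))       # 5-frame dummy clip
is_riding = scores.softmax(-1)[0, 1].item() > 0.5  # threshold t from the patent
```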
Preferably, the creating an abnormal posture discrimination model in S33 includes the following steps:
s331, obtaining a non-motor vehicle driver image as a model training sample;
S332, converting the non-motor vehicle driver image data set into a coordinate data set of hand joint points and head joint points through the Lightweight-OpenPose model; selecting the samples in which all hand and head joint points are successfully recognized as the data set for the abnormal-posture criterion; and calculating the arm bending angle θ and the hand-ear distance d;
S333, acquiring driver images with behavior annotations from the Kaggle platform driver-posture data set, and executing step S332 to obtain the arm bending angle and hand-ear distance of each driver image as the initial parameter data set;
S334, estimating, through the Gaussian mixture model EM algorithm, the distribution functions of the arm bending angle θ and the hand-ear distance d under the normal and abnormal postures;
and S335, judging a frame as suspected phone use if its probability under the abnormal posture is greater than that under the normal posture; if the proportion of suspected frames in the image sequence is greater than a set threshold, the driver is judged to be suspected of making or receiving a call.
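The two posture features of S332 can be computed from 2-D joint coordinates as follows. This is a sketch; the joint layout is assumed to match what a pose model such as Lightweight-OpenPose returns:

```python
import math

def arm_bend_angle(shoulder, elbow, wrist):
    """Angle (degrees) at the elbow between the upper arm and the forearm,
    computed from 2-D joint coordinates (x, y)."""
    ux, uy = shoulder[0] - elbow[0], shoulder[1] - elbow[1]
    vx, vy = wrist[0] - elbow[0], wrist[1] - elbow[1]
    cos_a = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def hand_ear_distance(wrist, ear):
    """Euclidean distance between the wrist and ear joints."""
    return math.hypot(wrist[0] - ear[0], wrist[1] - ear[1])
```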
Preferably, the method in S334 for estimating, with the Gaussian mixture model EM algorithm, the distribution functions of the arm bending angle θ and the hand-ear distance d under the normal and abnormal postures comprises the following steps:
S3341, setting the Gaussian distribution parameter set of the normal posture as Θ_n = {μ_θ^n, σ_θ^n, μ_d^n, σ_d^n} and that of the abnormal posture as Θ_a = {μ_θ^a, σ_θ^a, μ_d^a, σ_d^a}, where θ denotes the arm bending angle and d the hand-ear distance;
S3342, calculating initial values of the parameters from the initial parameter data set: divide it, according to the behavior annotations, into normal-posture data and abnormal-posture data; let the data set labeled as the normal posture be D_n and the data set labeled as the call-receiving state be D_a. Take the means of θ and d over all data in D_n as the normal-posture initial parameters μ_θ^n and μ_d^n, and the corresponding variances as σ_θ^n and σ_d^n; obtain the abnormal-posture initial parameters μ_θ^a, σ_θ^a, μ_d^a and σ_d^a from D_a in the same way. Here N(x; μ, σ) denotes a Gaussian distribution; μ_θ^n and σ_θ^n are the mean and variance of the arm bending angle over the normal-posture data used for the initial values, and μ_d^n and σ_d^n are the mean and variance of the hand-ear distance;
S3343, initializing the normal-posture data set S_n and the abnormal-posture data set S_a as empty; for each sample x = (θ, d) in the initial parameter data set, calculating the joint distribution probability under the normal and abnormal postures:
normal posture: p_n(x) = N(θ; μ_θ^n, σ_θ^n) · N(d; μ_d^n, σ_d^n)
abnormal posture: p_a(x) = N(θ; μ_θ^a, σ_θ^a) · N(d; μ_d^a, σ_d^a)
if p_n(x) > p_a(x), the sample is considered to belong to the normal posture and is added to S_n; otherwise it is added to S_a;
S3345, repeating S3342-S3343 with S_n and S_a in place of D_n and D_a until the joint distribution probabilities generated in S3343, and hence the Gaussian parameters of θ and d under the normal and abnormal postures, no longer change between iterations, then exiting the iteration.
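The iterative estimation of S3341-S3345 can be sketched in plain Python as a two-class, per-feature Gaussian re-assignment loop. This mirrors the hard assignment described above and is a simplification of full EM, which would use soft responsibilities:

```python
import math

def gauss_pdf(x, mu, var):
    """Density of a univariate Gaussian N(x; mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_class(samples):
    """Mean/variance of (angle, distance) samples for one posture class."""
    n = len(samples)
    mus = [sum(s[i] for s in samples) / n for i in (0, 1)]
    vars_ = [max(sum((s[i] - mus[i]) ** 2 for s in samples) / n, 1e-6) for i in (0, 1)]
    return mus, vars_

def em_classify(data, normal_init, abnormal_init, max_iter=20):
    """Iteratively split samples into normal/abnormal posture: each sample
    joins the class whose joint Gaussian density over (arm angle, hand-ear
    distance) is larger, then the per-class parameters are re-estimated,
    until the split stops changing."""
    normal, abnormal = list(normal_init), list(abnormal_init)
    labels = None
    for _ in range(max_iter):
        (mun, varn), (mua, vara) = fit_class(normal), fit_class(abnormal)
        new = []
        for theta, d in data:
            pn = gauss_pdf(theta, mun[0], varn[0]) * gauss_pdf(d, mun[1], varn[1])
            pa = gauss_pdf(theta, mua[0], vara[0]) * gauss_pdf(d, mua[1], vara[1])
            new.append("normal" if pn > pa else "abnormal")
        if new == labels:
            break
        labels = new
        normal = normal_init + [s for s, l in zip(data, labels) if l == "normal"]
        abnormal = abnormal_init + [s for s, l in zip(data, labels) if l == "abnormal"]
    return labels
```

Here a straight arm far from the ear (large θ, large d) should land in the normal class, and a bent arm with the hand at the ear in the abnormal one.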
Preferably, the method for judging whether the person is in a riding state through the riding state discrimination model is as follows:
S41, for each non-motor-vehicle/person target to be identified, sequentially extracting from each frame of the target monitoring image the target frame carrying the unique target serial number, i.e., the minimum adjacent rectangular target frame, scaling the extracted frames to the same size, and constructing an image sequence;
s42, analyzing the motion state of the target image sequence to be detected, and judging whether the target is in a riding state or not;
the method for judging the suspected call receiving and making behavior through the abnormal posture judgment model comprises the following steps:
S43, inputting each frame of an image sequence whose driving state is "riding" into the Lightweight-OpenPose model to obtain the coordinates of the head and hand joint points; for each frame, calculating the joint probability of the driver's arm bending angle and hand-ear distance under the abnormal and normal postures; judging a frame as suspected phone use if its probability under the abnormal posture is greater; and judging that the driver is suspected of making or receiving a call when the proportion of suspected frames in the image sequence exceeds a set threshold;
the method for judging whether the call behavior occurs continuously according to the Fréchet distance is as follows:
S44, for an image sequence judged as suspected phone use, computing the coordinate sequences of the head H, the left wrist L and the right wrist R from the coordinates of each frame's center in the global coordinate system and the coordinates of each joint returned by the OpenPose model in the image coordinate system;
S45, calculating the Fréchet distances between H and L and between H and R with a dynamic programming algorithm; taking path A as the head motion trajectory and path B as the hand motion trajectory, the Fréchet distance of the two trajectories is the minimum, over all monotone traversals of the two paths, of the maximum distance between paired points: F(A, B) = min over traversals of max_i dist(A_i, B_i);
and S46, if the smaller of the Fréchet distances of the coordinate pairs (H, L) and (H, R) is below a set threshold, judging that the target exhibits continuous phone-use behavior.
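The Fréchet distance of S45 is commonly computed over discrete point sequences with the Eiter-Mannila dynamic programme; a sketch:

```python
import math

def discrete_frechet(path_a, path_b):
    """Discrete Frechet distance between two 2-D point sequences: the
    smallest, over all monotone couplings, of the maximum pointwise
    distance (Eiter-Mannila dynamic programme)."""
    n, m = len(path_a), len(path_b)
    d = lambda i, j: math.hypot(path_a[i][0] - path_b[j][0],
                                path_a[i][1] - path_b[j][1])
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                ca[i][j] = d(0, 0)
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d(0, j))
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d(i, 0))
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]),
                               d(i, j))
    return ca[n - 1][m - 1]

# A head track H and a wrist track L that stay close together give a small
# distance, suggesting a sustained hand-to-ear posture.
head = [(0, 0), (1, 0), (2, 1)]
wrist = [(0, 1), (1, 1), (2, 2)]
```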
Preferably, the step S42 of analyzing the motion state of the target image sequence to be detected to determine whether the target is in the riding state includes the following steps:
S421, taking the vector from the reference origin to the center of the detection frame in each frame of the target sequence and computing each vector's slope; when a slope is infinite (a vertical vector), it is replaced with a fixed stand-in value, yielding a slope array;
s422, if the range of the slope array is greater than a set threshold, the driver is considered to be in a motion state;
s423, inputting the image sequence into a riding state judging model, wherein each frame of target monitoring image shares the same CNN network to obtain a riding state judging result of each frame;
s424, if the riding state judging model judges that the number of the riding frames exceeds the preset frame number threshold, the non-motor vehicle is in a riding driving state.
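The slope-range motion check (S421-S422) and the frame-voting rule (S424) are simple enough to sketch directly; the thresholds and the stand-in value for vertical vectors are illustrative:

```python
def slope_array(origin, centers, vertical_value=1e6):
    """Slope of the vector from the reference origin to each detection-box
    centre (S421); a vertical vector gets the fixed stand-in value."""
    ox, oy = origin
    slopes = []
    for cx, cy in centers:
        dx = cx - ox
        slopes.append(vertical_value if dx == 0 else (cy - oy) / dx)
    return slopes

def is_moving(slopes, range_threshold):
    """S422: the driver is considered moving if the slope range (max minus
    min) exceeds the set threshold."""
    return (max(slopes) - min(slopes)) > range_threshold

def is_riding(per_frame_labels, min_riding_frames):
    """S424: the sequence counts as riding if enough frames were judged
    'riding' by the per-frame model."""
    return per_frame_labels.count("riding") >= min_riding_frames
```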
Scheme three: an electronic device, comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the method of scheme two are implemented.
Scheme four: a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method of scheme two.
The invention has the following beneficial effects: a high-precision non-motor-vehicle lane recognition model and non-motor vehicle and pedestrian recognition models are trained with the deep-learning YOLOv3 object detection algorithm; the OpenPose human-posture analysis model and a suspected-phone-use judgment method based on the Gaussian mixture model EM algorithm are used; and the detection results are output in structured form and transmitted effectively over HTTPS. This solves the problems that traditional manual inspection is time-consuming, labor-intensive and costly, achieves intelligent, real-time and effective road monitoring, and provides an effective basis for management decisions.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a schematic flow chart of the method of the present invention;
FIG. 3 is a schematic diagram of reference point selection according to the present invention;
FIG. 4 is a schematic view of the joint distribution;
FIG. 5 is a diagram showing the results of the Fréchet distance calculation.
Detailed Description
In order to make the technical solutions and advantages of the embodiments of the present application more apparent, the following further detailed description of the exemplary embodiments of the present application with reference to the accompanying drawings makes it clear that the described embodiments are only a part of the embodiments of the present application, and are not exhaustive of all embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
the scene acquisition module is used for acquiring video information of a scene;
the scene acquisition module executes the method:
accessing the camera video stream: the camera is connected to a network interface of the edge computing gateway with an RJ45 Ethernet cable; in the system software of the edge computing gateway, the real-time video stream collected by the camera is accessed via its RTSP stream address.
Video decoding: the original video is decoded into single-frame pictures in a unified RGB format.
Video preprocessing: color space conversion and image filtering/denoising are applied to each single-frame picture to improve image quality, which facilitates obtaining the detection area and the further processing by subsequent modules.
Acquiring the detection area: the area can be selected manually, or determined by algorithmic adaptive detection analysis. For the latter, the non-motor vehicle lane in the scene is identified and located at pixel level by a semantic segmentation algorithm; from the segmentation result, a minimum enclosing rotated rectangle that completely frames the non-motor vehicle lane region, i.e., the lane's identification frame, is generated and used as the detection area. Since the camera is generally fixed and the lane's position rarely changes, the detection area need be determined only once at the start of monitoring and can then be used throughout. After the detection area is obtained, it is given a unique label.
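As a stand-in for the filtering/denoising step, here is a stdlib-only 3x3 median filter on a grayscale frame. A real deployment would use an optimized library such as OpenCV; this sketch is purely illustrative:

```python
from statistics import median

def median_filter3(img):
    """3x3 median filter on a grayscale frame given as a list of rows.
    Border pixels are left unchanged; interior pixels are replaced by the
    median of their 3x3 neighbourhood, suppressing salt-and-pepper noise."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out
```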
The non-motor vehicle detection module is used for acquiring target frames of non-motor vehicles and pedestrians, matching the non-motor vehicles and the pedestrians and acquiring a minimum adjacent rectangular target frame;
the driving behavior construction module is used for analyzing the similarity of the head and hand tracks of a driver and creating an abnormal posture judgment model;
the analysis module is used for analyzing whether a non-motor vehicle driver makes a call or not;
the data transmission module is used for transmitting the analysis result output by the analysis module to the cloud.
Example 2, described with reference to FIGS. 2 to 5: a method for detecting whether a non-motor vehicle driver is making or receiving a call while driving, comprising the following steps:
s1, scene monitoring data are obtained, and a detection area is obtained based on the monitoring data, and the method comprises the following steps:
S11, accessing the camera video stream: the camera is connected to a network interface of the edge computing gateway with an RJ45 Ethernet cable; in the system software of the edge computing gateway, the real-time video stream collected by the camera is accessed via its RTSP stream address;
s12, decoding the original video into a single-frame picture in a uniform RGB format;
and S13, performing color space conversion and image filtering/denoising on the single-frame picture to improve image quality, which facilitates obtaining the detection area and the further processing by subsequent modules.
S14, acquiring the detection region: either by manual selection or by algorithmic adaptive detection analysis.
To determine the region by algorithmic adaptive detection analysis, the non-motor vehicle lane in the scene is identified and located at pixel level by a semantic segmentation algorithm; from the segmentation result, a minimum enclosing rotated rectangle that completely frames the non-motor vehicle lane region, i.e., the lane's identification frame, is generated and used as the detection area. Since the camera is generally fixed and the lane's position rarely changes, the detection area need be determined only once at the start of monitoring and can then be used throughout.
Specifically, after the detection region is obtained, it is given a unique label.
S2, acquiring target frames of the non-motor vehicles and the pedestrians based on the detection area, and matching the non-motor vehicles and the pedestrians to obtain a minimum adjacent rectangular target frame; including S21-S25:
s21, acquiring a target frame of the non-motor vehicle by utilizing a target detection algorithm yolov3 based on deep learning for each frame of the monitored image, acquiring the coordinates of the central point of the target frame of the non-motor vehicle, and generating a unique label for the target frame of the non-motor vehicle;
s22, acquiring a pedestrian target frame for each frame of the monitored image by using a target detection algorithm yolov3 based on deep learning, calculating the coordinates of the center point of the pedestrian target frame, and generating a unique label for the pedestrian target frame;
s23, for each non-motor vehicle target frame, calculating the distance between the center point of the pedestrian target frame and the center point of the non-motor vehicle target frame, and drawing a circle by taking the distance as a radius and taking the center point of the non-motor vehicle target frame as an origin;
S24, keeping the pedestrian target frames whose circle radius is smaller than a set threshold, retaining at most 3 of them;
S25, for each candidate circle, calculating the proportion of the circle's area covered by the overlapping region with the pedestrian target frame, and taking the pedestrian target frame with the largest proportion; if more than one pedestrian target frame attains the largest proportion, taking the one with the smallest circle radius; specifically, steps S251-S253:
s251, establishing a new reference system, taking the central point of the non-motor vehicle target frame as an origin, and projecting the coordinates of the monitoring area into the reference system:
s252, determining the value range of the random point;
S253, randomly generating a number of points within that range, counting how many fall inside the rectangular area, and computing the percentage;
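The random-point counting of S251-S253 is a Monte Carlo estimate of the circle/rectangle overlap fraction; a sketch, with the sample count and seed chosen for illustration:

```python
import random

def overlap_fraction(circle_center, radius, rect, n_points=20000, seed=0):
    """Estimate the fraction of the circle's area covered by an axis-aligned
    pedestrian box (x1, y1, x2, y2): sample points uniformly inside the
    circle by rejection sampling and count how many land in the rectangle."""
    random.seed(seed)
    cx, cy = circle_center
    x1, y1, x2, y2 = rect
    inside = 0
    for _ in range(n_points):
        while True:  # rejection-sample a point uniformly inside the circle
            px = random.uniform(cx - radius, cx + radius)
            py = random.uniform(cy - radius, cy + radius)
            if (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2:
                break
        if x1 <= px <= x2 and y1 <= py <= y2:
            inside += 1
    return inside / n_points
```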
s26, framing the pedestrian target frame and the non-motor vehicle target frame in the S25 by using a minimum adjacent rectangle to form a minimum adjacent rectangle target frame, and calculating the center point coordinate of the minimum adjacent rectangle; generating unique labels for the minimum adjacent rectangular target frame, the pedestrian target frame and the non-motor vehicle target frame;
s27, comparing the coordinate of the central point of the minimum adjacent rectangular target frame with the coordinate of the detection area in the S26, judging whether the central point of the minimum adjacent rectangular target frame is in the detection area, if not, stopping tracking the minimum adjacent rectangular target frame, and if so, tracking and analyzing behaviors of the minimum adjacent rectangular target frame; simultaneously, using a target tracking algorithm to associate the same adjacent rectangle in the continuous frame target detection images, and assigning a unique target sequence number to each adjacent rectangle until the target frame of the adjacent rectangle disappears or the adjacent rectangle leaves the detection area;
s28, if the non-motor vehicle enters the detection area again, a new target serial number should be allocated, wherein the target serial number can be formed by randomly combining 8 or more digits or letters, and each new target serial number is at least guaranteed to be unique in the current day;
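A serial number generator satisfying S28 could look like the following sketch; the date-prefix scheme and per-process collision set are illustrative choices, since the patent only requires 8 or more random digits/letters and uniqueness within the current day.

```python
import random
import string
from datetime import date

_ISSUED = set()  # serials already issued (per-process memory of the day)

def new_target_serial(length=8):
    """Generate a target serial number for S28: today's date plus at
    least `length` random alphanumeric characters; collisions with
    previously issued serials are re-drawn, so each serial is unique
    within the current day."""
    while True:
        suffix = "".join(random.choices(string.ascii_uppercase + string.digits,
                                        k=max(length, 8)))
        serial = f"{date.today():%Y%m%d}-{suffix}"
        if serial not in _ISSUED:
            _ISSUED.add(serial)
            return serial
```

Embedding the date makes day-level uniqueness checkable without persisting serials across days.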
S29, if the non-motor vehicle target frame cannot be matched with any "person" target frame, no operation is performed on it; if exactly one "person" target frame lies within the set distance threshold, the minimum adjacent rectangle is generated directly from that "person" target frame and the non-motor vehicle target frame.
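The minimum adjacent rectangle of S26/S29 is simply the smallest axis-aligned rectangle enclosing the two matched frames. A minimal sketch (box layout `(x_min, y_min, x_max, y_max)` is an assumed convention):

```python
def min_adjacent_rect(box_a, box_b):
    """Smallest axis-aligned rectangle enclosing two target frames,
    plus its center point, as used when fusing the matched pedestrian
    and non-motor vehicle frames in S26."""
    x_min = min(box_a[0], box_b[0])
    y_min = min(box_a[1], box_b[1])
    x_max = max(box_a[2], box_b[2])
    y_max = max(box_a[3], box_b[3])
    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    return (x_min, y_min, x_max, y_max), center
```

The center point returned here is what S27 compares against the detection-area coordinates to decide whether tracking continues.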
S3, establishing a riding state discrimination model and an abnormal posture discrimination model, wherein the specific method comprises the following steps of S31-S33:
S31, selecting a fixed point in the image as a reference point and establishing a coordinate system with the reference point as origin; the fixed point is taken at a corner of the edge of the detection area. See the reference point selection schematic of FIG. 2, where Li denotes the left center point of the detection target frame, Hi the top center point, and Ri the right center point;
s32, establishing a riding state discrimination model, which comprises a CNN characteristic extraction network, an LSTM time sequence modeling network and an FC driving state analysis network;
The CNN feature extraction network extracts features from each frame of the target monitoring image in the image sequence; after extraction, the spatial features of each frame are transformed into a data form accepted by the LSTM temporal modeling network;
Each LSTM unit of the LSTM temporal modeling network receives the spatial features of one frame output by the CNN as input, together with the output of the previous LSTM unit; internal processing yields a set of cell states, building up the temporal correlation of the non-motor vehicle features step by step. The cell states output by all LSTM units are concatenated and fed into the fully connected FC driving state analysis network;
The output layer of the FC driving state analysis network has two neurons, representing "pushing" and "riding" respectively; if the "riding" neuron is activated and its score exceeds a set threshold t, the output is "riding", otherwise "pushing";
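The two-neuron decision rule at the end of the model can be sketched as below. Softmax normalization of the two activations is an assumption (the patent only speaks of an activated neuron whose score exceeds t), as is the default threshold value.

```python
import math

def driving_state(logits, t=0.6):
    """Map the FC output layer's two activations to 'pushing'/'riding'.

    logits = (pushing_score, riding_score). A softmax turns the raw
    activations into scores in (0, 1); 'riding' is emitted only when the
    riding neuron wins AND its score exceeds the threshold t, per S32."""
    exp = [math.exp(v) for v in logits]
    total = sum(exp)
    p_push, p_ride = exp[0] / total, exp[1] / total
    return "riding" if p_ride > p_push and p_ride > t else "pushing"
```

Note the asymmetry: a narrow win for the riding neuron that stays below t still yields "pushing", which biases the system against false riding detections.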
s33, creating an abnormal posture discrimination model, including S331-S335:
S331, capturing about ten thousand images of non-motor vehicle drivers with the device camera as model training samples;
S332, converting the non-motor vehicle driver image data set into a coordinate data set of hand joint points (joint points No. 2-7) and head joint points (joint points No. 14-17) through a Lightweight-OpenPose model, see the joint point distribution schematic of FIG. 4; selecting only the samples in which all hand and head joint points are successfully recognized as the data set for the abnormal posture criterion; and calculating the arm bending angle α and the hand-ear distance d;
S333, using a driver posture data set of the Kaggle platform (Driver Behavior Annotation Data) containing 103,282 driver images with behavior annotations, and executing step S332 on it to obtain the arm bending angle and hand-ear distance of each driver image as the initial parameter data set;
Let the data set captured with the device camera be X = {(α_i, d_i) | i = 1, …, N}. Assume that under the normal posture and under the abnormal posture the driver's arm bending angle α and hand-ear distance d each obey a Gaussian distribution; then the α and d values in X each follow a mixture of two Gaussian distributions.
S334, estimating, with the EM algorithm for Gaussian mixture models, the distribution functions of the arm bending angle α and the hand-ear distance d under the normal posture and the abnormal posture:
S3341, setting the Gaussian distribution parameter set of the normal posture to Θ1 = {μ_α1, σ²_α1, μ_d1, σ²_d1} and the Gaussian distribution parameter set of the abnormal posture to Θ2 = {μ_α2, σ²_α2, μ_d2, σ²_d2};
S3342, calculating initial parameter values using the initial parameter data set: divide the initial parameter data set into normal-posture data and abnormal-posture data according to the behavior annotations; let the data set labeled as normal posture be D1 and the data set labeled as the call receiving/making (abnormal) state be D2. Take the means of the α and d values in D1 as the normal-posture initial parameters μ_α1 and μ_d1, and their variances as σ²_α1 and σ²_d1; obtain the abnormal-posture initial parameters μ_α2, μ_d2, σ²_α2 and σ²_d2 from D2 in the same way. Here N(μ, σ²) denotes a Gaussian distribution; μ_α1 and σ²_α1 are the mean and variance of the arm bending angle α over the normal-posture data used for initialization, and μ_d1 and σ²_d1 are the mean and variance of the hand-ear distance d;
Constructing the probability distributions from random initial values leads to long iteration cycles, accuracy that is hard to guarantee, and poor generality; calculating the initial parameter values from a larger annotated data set effectively avoids these problems.
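The S3342 initialization from a labeled data set is plain per-feature mean/variance estimation. A minimal sketch (the dictionary keys are illustrative names, not from the patent):

```python
from statistics import mean, pvariance

def initial_params(samples):
    """Initial Gaussian parameters for one posture class (S3342).

    samples = [(alpha, d), ...] all carrying the same behavior label;
    returns the per-feature mean and population variance used as the
    EM starting point."""
    alphas = [a for a, _ in samples]
    dists = [d for _, d in samples]
    return {
        "mu_alpha": mean(alphas), "var_alpha": pvariance(alphas),
        "mu_d": mean(dists), "var_d": pvariance(dists),
    }
```

Running this once on the normal-posture subset D1 and once on the abnormal-posture subset D2 of the annotated Kaggle data yields Θ1 and Θ2.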
S3343, initializing the normal-posture data set C1 and the abnormal-posture data set C2 to empty; for each sample (α_i, d_i) in the data set X collected in the application scene, calculating the joint distribution probability under each posture (the two features treated as independent):
Normal posture: p1(α_i, d_i) = N(α_i; μ_α1, σ²_α1) · N(d_i; μ_d1, σ²_d1)
Abnormal posture: p2(α_i, d_i) = N(α_i; μ_α2, σ²_α2) · N(d_i; μ_d2, σ²_d2)
If p1(α_i, d_i) ≥ p2(α_i, d_i), the sample is considered to belong to the normal posture and is added to C1; otherwise it is added to C2;
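The classification step above can be sketched as follows, using the parameter dictionaries of the S3342 sketch; treating α and d as independent within each class is the factorization stated in the joint probabilities above.

```python
import math

def gauss_pdf(x, mu, var):
    """Density of the Gaussian N(x; mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(sample, params_normal, params_abnormal):
    """S3343 assignment step: compare the joint probabilities of
    (alpha, d) under the normal- and abnormal-posture Gaussians and
    assign the sample to the more likely class."""
    a, d = sample
    def joint(p):
        return (gauss_pdf(a, p["mu_alpha"], p["var_alpha"])
                * gauss_pdf(d, p["mu_d"], p["var_d"]))
    return "normal" if joint(params_normal) >= joint(params_abnormal) else "abnormal"
```

The parameter values below are invented purely for illustration (an extended arm and large hand-ear distance for normal riding, a bent arm held near the ear for phone use).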
S3345, repeatedly executing S3343-S3344 until the C1 and C2 produced by an iteration yield the same joint distribution probabilities as in the previous iteration, i.e. until the Gaussian distribution parameters Θ1 and Θ2 of the normal and abnormal postures have converged; the iteration then exits;
In the above, the initial parameter values are obtained from the prior annotated data set in S3342; S3343 classifies the data collected in the actual application scene by comparing joint distribution probabilities, and S3344 fine-tunes and updates the parameter values with the two classified data sets. Iterating S3343 and S3344 yields the parameter values best suited to the target scene.
S335, if the joint probability under the abnormal posture is greater than that under the normal posture, the frame is judged as suspected call receiving/making; when the proportion of suspected frames in the image sequence exceeds a set threshold, the driver is judged to be receiving or making a call.
S4, constructing a target sequence based on the position relation between the minimum adjacent rectangle target frame and the detection area, judging whether the person is in the riding state with the riding state discrimination model, judging suspected call receiving/making behavior with the abnormal posture discrimination model, and judging whether the call behavior occurs continuously according to the Fréchet distance, comprising the following steps:
The method for judging whether the person is in the riding state with the riding state discrimination model comprises the following steps:
S41, sequentially extracting the target frame with a unique target sequence number, i.e. the minimum adjacent rectangle target frame label, and the corresponding target frame (the minimum adjacent rectangle) in each frame of the target monitoring image, scaling the frames to the same size, and constructing an image sequence for the non-motor-vehicle-person target to be identified; if the image sequence is longer than 20 frames, 20 frames are randomly sampled and kept in temporal order.
S42, analyzing the motion state of the target image sequence to be detected, and judging whether the target is in a riding state or not, wherein the method comprises the following steps:
S421, taking, for each frame, the vector from the origin of the target sequence to the center of the detection frame and calculating its slope; when the slope is infinite, it is set to a designated special value, yielding a slope array;
s422, if the range of the slope array is greater than a set threshold, the driver is considered to be in a motion state;
s423, inputting the image sequence into a riding state judging model, wherein each frame of target monitoring image shares the same CNN network to obtain a riding state judging result of each frame;
s424, if the riding state judging model judges that the number of the riding frames exceeds a preset frame number threshold, the non-motor vehicle is in a riding driving state;
the method for judging the suspected call receiving and making behavior through the abnormal posture judgment model comprises the following steps:
S43, inputting each frame of the target monitoring image in an image sequence whose driving state is "riding" into the Lightweight-OpenPose model to obtain the coordinates of the head and hand joint points (joint points No. 2-7 and No. 14-17); for each frame, calculating the joint probability distributions of the driver's arm bending angle and hand-ear distance under the abnormal and normal postures; if the probability under the abnormal posture is greater, the frame is judged as suspected call receiving/making, and when the proportion of suspected frames in the image sequence exceeds a set threshold, the driver is judged to be suspected of receiving or making a call;
The method for judging whether the call behavior occurs continuously according to the Fréchet distance comprises the following steps:
S44, for an image sequence judged as suspected call receiving/making, calculating the coordinate sequences of the head H, left wrist L and right wrist R from the coordinates of each frame's center in the global coordinate system and the joint coordinates returned by the OpenPose model in the image coordinate system;
S45, calculating the Fréchet distances between H and L and between H and R with a dynamic programming algorithm: let path A be the head motion track and path B the hand motion track; the Fréchet distance of the two tracks is the minimum, over all traversals, of the maximum distance between them (see the Fréchet distance calculation schematic of FIG. 5). The calculation proceeds as follows:
S451, let Fd[i][j] be the Fréchet distance between the first i frames of track A and the first j frames of track B;
s452, initializing Fd [1] [1] as the Euclidean distance between two points of the first frame;
The state transition equation: Fd[i][j] = max( min(Fd[i-1][j], Fd[i-1][j-1], Fd[i][j-1]), dist(A_i, B_j) ), where dist(A_i, B_j) is the Euclidean distance between the i-th point of A and the j-th point of B;
S453, since the coordinate sequences have 20 frames in total, Fd[20][20] is returned as the result of the algorithm.
S46, if the smaller of the Fréchet distances of the coordinate sequence pairs H-L and H-R is below a set threshold, judging that the target exhibits continuous call receiving/making behavior;
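The S451-S453 dynamic program is the standard discrete Fréchet distance; a self-contained sketch (0-based indices instead of the patent's 1-based Fd[1][1]):

```python
import math

def frechet_distance(track_a, track_b):
    """Discrete Fréchet distance between two 2-D trajectories via the
    dynamic program of S451-S453: Fd[i][j] covers the first i+1 points
    of A and the first j+1 points of B."""
    n, m = len(track_a), len(track_b)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    fd = [[0.0] * m for _ in range(n)]
    fd[0][0] = dist(track_a[0], track_b[0])        # S452 initialization
    for i in range(1, n):
        fd[i][0] = max(fd[i - 1][0], dist(track_a[i], track_b[0]))
    for j in range(1, m):
        fd[0][j] = max(fd[0][j - 1], dist(track_a[0], track_b[j]))
    for i in range(1, n):                          # S453 transition
        for j in range(1, m):
            fd[i][j] = max(min(fd[i - 1][j], fd[i - 1][j - 1], fd[i][j - 1]),
                           dist(track_a[i], track_b[j]))
    return fd[n - 1][m - 1]
```

Head and wrist tracks that move in lockstep (as when a phone is held to the ear) yield a small Fréchet distance, which is exactly the continuity cue S46 thresholds.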
S5, determining that the target is receiving/making a call, forming message information, and sending the message data and picture data to the cloud via an HTTPS POST request.
S51, if the output result is 'call receiving and making during driving', message information is formed; the message information should include at least the date and time of day, the serial number of the non-motor vehicle object, the code of the edge computing gateway device, and location information.
S52, sending the message data and picture data to the cloud via an HTTPS POST request: the edge computing gateway acts as the HTTPS client sending the data, and the cloud acts as the HTTPS server receiving and storing it. If sending from the edge computing gateway fails, a cache mechanism stores the unsent data locally and resends it later.
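S51-S52 can be sketched as below. The field names, URL handling, and the injected `post_fn` callable are all assumptions for illustration; a real deployment would pass an actual HTTPS POST (e.g. via the `requests` library) as `post_fn`.

```python
import json
from datetime import datetime

def build_message(target_serial, gateway_code, location):
    """Assemble the S51 message payload. Field names are illustrative;
    the patent only mandates date/time, the non-motor vehicle target
    serial number, the edge gateway device code, and location info."""
    return {
        "event": "call_while_driving",
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "target_serial": target_serial,
        "gateway_code": gateway_code,
        "location": location,
    }

def send_with_cache(msg, post_fn, cache):
    """S52 sketch: try to deliver the JSON body via post_fn; on failure
    (any OSError, e.g. a dropped connection) append the body to the
    local cache list for a later retry."""
    body = json.dumps(msg)
    try:
        post_fn(body)
        return True
    except OSError:
        cache.append(body)
        return False
```

Separating delivery (`post_fn`) from caching keeps the retry logic testable without a network.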
Embodiment 3: the computer device of the present invention may be a device comprising a processor, a memory and the like, for example a single-chip microcomputer containing a central processing unit. The processor implements the steps of the above method for detecting call receiving/making by a driving non-motor vehicle driver when executing the computer program stored in the memory.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created during use of the device (such as audio data, a phonebook, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Embodiment 4, computer-readable storage Medium embodiment
The computer-readable storage medium of the present invention may be any form of storage medium readable by the processor of a computer device, including but not limited to non-volatile memory, ferroelectric memory, etc., on which a computer program is stored; when the computer program stored in the medium is read and executed by the processor of a computer device, the steps of the above method for detecting call receiving/making by a driving non-motor vehicle driver can be implemented.
The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content included in the computer-readable medium may be increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunications signals.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.
Claims (8)
1. A detection method for the call receiving and making of a driving non-motor vehicle driver is characterized by comprising the following steps:
s1, scene monitoring data are obtained, and a detection area is obtained based on the monitoring data;
s2, acquiring target frames of the non-motor vehicles and the pedestrians based on the detection area, and matching the non-motor vehicles and the pedestrians to obtain a minimum adjacent rectangular target frame;
s3, establishing a riding state discrimination model and an abnormal posture discrimination model, wherein the riding state discrimination model and the abnormal posture discrimination model comprise S31-S33:
S31, selecting a fixed point in the image as a reference point and establishing a coordinate system with the reference point as origin; the fixed point is taken at a corner of the edge of the detection area;
s32, establishing a riding state discrimination model, which comprises a CNN characteristic extraction network, an LSTM time sequence modeling network and an FC driving state analysis network;
The CNN feature extraction network extracts features from each frame of the target monitoring image in the image sequence; after extraction, the spatial features of each frame are transformed into a data form accepted by the LSTM temporal modeling network;
Each LSTM unit of the LSTM temporal modeling network receives the spatial features of one frame output by the CNN as input, together with the output of the previous LSTM unit; internal processing yields a set of cell states, building up the temporal correlation of the non-motor vehicle features step by step. The cell states output by all LSTM units are concatenated and fed into the fully connected FC driving state analysis network;
The output layer of the FC driving state analysis network has two neurons, representing "pushing" and "riding" respectively; if the "riding" neuron is activated and its score exceeds a set threshold t, the output is "riding", otherwise "pushing";
s33, creating an abnormal posture discrimination model, including S331-S335:
s331, obtaining a non-motor vehicle driver image as a model training sample;
S332, converting the non-motor vehicle driver image data set into a coordinate data set of hand joint points and head joint points through a Lightweight-OpenPose model; selecting only the samples in which all hand and head joint points are successfully recognized as the data set for the abnormal posture criterion; and calculating the arm bending angle α and the hand-ear distance d;
S333, acquiring a driver image with a behavior annotation by using a Kaggle platform driver posture data set, and executing the step S332 to acquire an arm bending angle and a hand-ear distance of the driver image as an initial parameter data set;
S334, estimating, with the EM algorithm for Gaussian mixture models, the distribution functions of the arm bending angle α and the hand-ear distance d under the normal posture and the abnormal posture;
S335, if the joint probability under the abnormal posture is greater than that under the normal posture, the frame is judged as suspected call receiving/making; when the proportion of suspected frames in the image sequence exceeds a set threshold, the driver is judged to be receiving or making a call;
S4, constructing a target sequence based on the position relation between the minimum adjacent rectangle target frame and the detection area, judging whether the person is in the riding state with the riding state discrimination model, judging suspected call receiving/making behavior with the abnormal posture discrimination model, and judging whether the call behavior occurs continuously according to the Fréchet distance;
S5, determining that the target is receiving/making a call, forming message information, and sending the message data and picture data to the cloud via an HTTPS POST request.
2. The method as claimed in claim 1, wherein the step S2 is performed by obtaining target frames of the non-motor vehicle and the pedestrian based on the detected area, and matching the non-motor vehicle and the pedestrian to obtain a minimum adjacent rectangular target frame; the method comprises the following steps:
s21, acquiring a target frame of the non-motor vehicle by utilizing a target detection algorithm yolov3 based on deep learning for each frame of the monitored image, acquiring the coordinates of the central point of the target frame of the non-motor vehicle, and generating a unique label for the target frame of the non-motor vehicle;
s22, acquiring a pedestrian target frame for each frame of the monitored image by using a target detection algorithm yolov3 based on deep learning, calculating the coordinates of the center point of the pedestrian target frame, and generating a unique label for the pedestrian target frame;
s23, for each non-motor vehicle target frame, calculating the distance between the center point of the pedestrian target frame and the center point of the non-motor vehicle target frame, and drawing a circle by taking the distance as a radius and taking the center point of the non-motor vehicle target frame as an origin;
s24, taking pedestrian target frames with the radius smaller than a set threshold, wherein the number of the pedestrian target frames is not more than 3;
S25, for each candidate, calculating the proportion of the circle's area covered by the overlap between the pedestrian target frame and the circle, and taking the pedestrian target frame with the largest proportion; if more than one frame attains the largest proportion, taking the one with the smallest circle radius; this specifically comprises steps S251-S253:
s251, establishing a new reference system, taking the central point of the non-motor vehicle target frame as an origin, and projecting the coordinates of the monitoring area into the reference system:
s252, determining the value range of the random point;
S253, randomly generating a number of points, counting how many fall inside the rectangular area, and taking the resulting percentage as the overlap proportion;
s26, framing the pedestrian target frame and the non-motor vehicle target frame in the S25 by using a minimum adjacent rectangle to form a minimum adjacent rectangle target frame, and calculating the center point coordinate of the minimum adjacent rectangle; generating unique labels for the minimum adjacent rectangular target frame, the pedestrian target frame and the non-motor vehicle target frame;
s27, comparing the coordinate of the central point of the minimum adjacent rectangular target frame with the coordinate of the detection area in the S26, judging whether the central point of the minimum adjacent rectangular target frame is in the detection area, if not, stopping tracking the minimum adjacent rectangular target frame, and if so, tracking and analyzing behaviors of the minimum adjacent rectangular target frame; simultaneously, using a target tracking algorithm to associate the same adjacent rectangle in the continuous frame target detection images, and assigning a unique target sequence number to each adjacent rectangle until the target frame of the adjacent rectangle disappears or the adjacent rectangle leaves the detection area;
s28, if the non-motor vehicle enters the detection area again, a new target serial number should be allocated, wherein the target serial number can be formed by randomly combining 8 or more digits or letters, and each new target serial number is at least guaranteed to be unique in the current day;
S29, if the non-motor vehicle target frame cannot be matched with any "person" target frame, no operation is performed on it; if exactly one "person" target frame lies within the set distance threshold, the minimum adjacent rectangle is generated directly from that "person" target frame and the non-motor vehicle target frame.
3. The method for detecting call receiving/making by a driving non-motor vehicle driver as claimed in claim 2, wherein estimating, with the Gaussian mixture model EM algorithm in S334, the distribution functions of the arm bending angle α and the hand-ear distance d under the normal posture and the abnormal posture comprises the following steps:
S3341, setting the Gaussian distribution parameter set of the normal posture to Θ1 = {μ_α1, σ²_α1, μ_d1, σ²_d1} and the Gaussian distribution parameter set of the abnormal posture to Θ2 = {μ_α2, σ²_α2, μ_d2, σ²_d2};
S3342, calculating initial parameter values using the initial parameter data set: divide the initial parameter data set into normal-posture data and abnormal-posture data according to the behavior annotations; let the data set labeled as normal posture be D1 and the data set labeled as the call receiving/making (abnormal) state be D2. Take the means of the α and d values in D1 as the normal-posture initial parameters μ_α1 and μ_d1, and their variances as σ²_α1 and σ²_d1; obtain the abnormal-posture initial parameters μ_α2, μ_d2, σ²_α2 and σ²_d2 from D2 in the same way. Here N(μ, σ²) denotes a Gaussian distribution; μ_α1 and σ²_α1 are the mean and variance of the arm bending angle α over the normal-posture data used for initialization, and μ_d1 and σ²_d1 are the mean and variance of the hand-ear distance d;
S3343, initializing the normal-posture data set C1 and the abnormal-posture data set C2 to empty; for each sample (α_i, d_i) in the data set X collected in the application scene, calculating the joint distribution probability under each posture (the two features treated as independent):
Normal posture: p1(α_i, d_i) = N(α_i; μ_α1, σ²_α1) · N(d_i; μ_d1, σ²_d1)
Abnormal posture: p2(α_i, d_i) = N(α_i; μ_α2, σ²_α2) · N(d_i; μ_d2, σ²_d2)
If p1(α_i, d_i) ≥ p2(α_i, d_i), the sample is considered to belong to the normal posture and is added to C1; otherwise it is added to C2;
S3345, repeatedly executing S3343-S3344 until the C1 and C2 produced by an iteration yield the same joint distribution probabilities as in the previous iteration, i.e. until the Gaussian distribution parameters Θ1 and Θ2 of the normal and abnormal postures have converged, and exiting the iteration.
4. The method for detecting whether a driver of a non-motor vehicle makes or receives a call according to claim 3, wherein the method for judging whether the person is in the riding state by the riding state comprises the following steps:
s41, sequentially extracting a target frame with a unique target sequence number, namely a minimum adjacent rectangular target frame label, and a target frame in each frame of target monitoring image, namely a minimum adjacent rectangle, zooming the target frames to the same size, and constructing an image sequence for the non-motor vehicle-human target to be identified;
s42, analyzing the motion state of the target image sequence to be detected, and judging whether the target is in a riding state or not;
the method for judging the suspected call receiving and making behavior through the abnormal posture judgment model comprises the following steps:
S43, inputting each frame of the target monitoring image in an image sequence whose driving state is "riding" into a Lightweight-OpenPose model to obtain the coordinates of the head and hand joint points; for each frame, calculating the joint probability distributions of the driver's arm bending angle and hand-ear distance under the abnormal and normal postures; if the probability under the abnormal posture is greater, the frame is judged as suspected call receiving/making, and when the proportion of suspected frames in the image sequence exceeds a set threshold, the driver is judged to be suspected of receiving or making a call;
The method for judging whether the call behavior occurs continuously according to the Fréchet distance comprises the following steps:
S44, for an image sequence judged as suspected call receiving/making, calculating the coordinate sequences of the head H, left wrist L and right wrist R from the coordinates of each frame's center in the global coordinate system and the joint coordinates returned by the OpenPose model in the image coordinate system;
S45, calculating the Fréchet distances between H and L and between H and R with a dynamic programming algorithm: let path A be the head motion track and path B the hand motion track; the Fréchet distance of the two tracks is the minimum, over all traversals, of the maximum distance between them:
S46, if the smaller of the Fréchet distances of the coordinate sequence pairs H-L and H-R is below a set threshold, judging that the target exhibits continuous call receiving/making behavior.
5. The method according to claim 4, wherein said step of S42 comprises analyzing the motion state of the image sequence of the target to be detected to determine whether the target is in the riding state, and comprises the steps of:
S421, taking, for each frame, the vector from the origin of the target sequence to the center of the detection frame and calculating its slope; when the slope is infinite, it is set to a designated special value, yielding a slope array;
s422, if the range of the slope array is greater than a set threshold, the driver is considered to be in a motion state;
s423, inputting the image sequence into a riding state judging model, wherein each frame of target monitoring image shares the same CNN network to obtain a riding state judging result of each frame;
s424, if the riding state judging model judges that the number of the riding frames exceeds the preset frame number threshold, the non-motor vehicle is in a riding driving state.
6. The detection system for the call receiving and making of the non-motor vehicle driver in driving is characterized in that the system for realizing the detection method of any one of claims 1 to 5 comprises a scene acquisition module, a non-motor vehicle detection module, a driving behavior construction module, an analysis module and a data transmission module;
the scene acquisition module is used for acquiring video information of a scene;
the non-motor vehicle detection module is used for acquiring target frames of non-motor vehicles and pedestrians, matching the non-motor vehicles and the pedestrians and acquiring a minimum adjacent rectangular target frame;
the driving behavior construction module is used for analyzing the similarity of the head and hand tracks of a driver and creating an abnormal posture judgment model;
the analysis module is used for analyzing whether a non-motor vehicle driver makes a call or not;
the data transmission module is used for transmitting the analysis result output by the analysis module to the cloud.
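The five-module system of claim 6 could be wired together as in the following sketch (every class and method name here is a hypothetical illustration of the module boundaries, not the patented implementation; the detection, behavior and upload logic are stubbed out):

```python
class SceneAcquisitionModule:
    """Acquires video frames of the monitored scene."""
    def frames(self, video):
        return list(video)  # stand-in for camera/stream decoding

class NonMotorVehicleDetectionModule:
    """Detects non-motor vehicles and pedestrians, matches them, and
    returns the minimum adjacent rectangular target frame."""
    def detect(self, frame):
        return {"target_frame": frame.get("bbox")}

class DrivingBehaviorConstructionModule:
    """Analyzes head/hand track similarity (e.g. via Fréchet distance)."""
    def analyze(self, target):
        return {"abnormal_posture": False}  # stubbed judgment

class AnalysisModule:
    """Decides whether the non-motor vehicle driver is making a call."""
    def judge(self, behavior):
        return behavior["abnormal_posture"]

class DataTransmissionModule:
    """Transmits the analysis result to the cloud (stubbed here)."""
    def upload(self, result):
        return {"uploaded": result}

def run_pipeline(video):
    scene = SceneAcquisitionModule()
    detector = NonMotorVehicleDetectionModule()
    builder = DrivingBehaviorConstructionModule()
    analyzer = AnalysisModule()
    uplink = DataTransmissionModule()
    results = []
    for frame in scene.frames(video):
        target = detector.detect(frame)
        behavior = builder.analyze(target)
        results.append(uplink.upload(analyzer.judge(behavior)))
    return results
```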
7. An electronic device comprising a memory storing a computer program and a processor, wherein the processor, when executing the computer program, implements the method for detecting call receiving and making by a non-motor vehicle driver in driving as claimed in any one of claims 1 to 5.
8. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method for detecting call receiving and making by a non-motor vehicle driver in driving as claimed in any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210831651.XA CN114898342B (en) | 2022-07-15 | 2022-07-15 | Method for detecting call receiving and making of non-motor vehicle driver in driving |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114898342A true CN114898342A (en) | 2022-08-12 |
CN114898342B CN114898342B (en) | 2022-11-25 |
Family
ID=82730211
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210831651.XA Active CN114898342B (en) | 2022-07-15 | 2022-07-15 | Method for detecting call receiving and making of non-motor vehicle driver in driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114898342B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105956568A (en) * | 2016-05-11 | 2016-09-21 | 东华大学 | Abnormal behavior detecting and early warning method based on monitored object identification |
CN106960568A (en) * | 2015-12-17 | 2017-07-18 | 国际商业机器公司 | Produce method, medium and system based on probabilistic traffic congestion index |
CN109032355A (en) * | 2018-07-27 | 2018-12-18 | 济南大学 | Various gestures correspond to the flexible mapping interactive algorithm of same interactive command |
CN109711320A (en) * | 2018-12-24 | 2019-05-03 | 兴唐通信科技有限公司 | A kind of operator on duty's unlawful practice detection method and system |
CN109902562A (en) * | 2019-01-16 | 2019-06-18 | 重庆邮电大学 | A kind of driver's exception attitude monitoring method based on intensified learning |
CN110008857A (en) * | 2019-03-21 | 2019-07-12 | 浙江工业大学 | A kind of human action matching methods of marking based on artis |
CN111461020A (en) * | 2020-04-01 | 2020-07-28 | 浙江大华技术股份有限公司 | Method and device for identifying behaviors of insecure mobile phone and related storage medium |
CN111666818A (en) * | 2020-05-09 | 2020-09-15 | 大连理工大学 | Driver abnormal posture detection method |
US20210166033A1 (en) * | 2019-12-02 | 2021-06-03 | Accenture Global Solutions Limited | Multi-modal object detection system with 5g array |
CN113158914A (en) * | 2021-04-25 | 2021-07-23 | 胡勇 | Intelligent evaluation method for dance action posture, rhythm and expression |
CN113378649A (en) * | 2021-05-19 | 2021-09-10 | 北京建筑大学 | Identity, position and action recognition method, system, electronic equipment and storage medium |
CN114076631A (en) * | 2020-08-11 | 2022-02-22 | 华为技术有限公司 | Overload vehicle identification method, system and equipment |
CN114332776A (en) * | 2022-03-07 | 2022-04-12 | 深圳市城市交通规划设计研究中心股份有限公司 | Non-motor vehicle occupant pedestrian lane detection method, system, device and storage medium |
CN114511080A (en) * | 2021-12-29 | 2022-05-17 | 武汉中海庭数据技术有限公司 | Model construction method and device and abnormal track point real-time detection method |
Non-Patent Citations (7)
Title |
---|
CHANG W J等: "A pose estimation-based fall detection methodology using artificial intelligence edge computing", 《IEEE ACCESS》 * |
CHEN K等: "Pedestrian Trajectory Prediction in Heterogeneous Traffic Using Pose Keypoints-Based Convolutional Encoder-Decoder Network", 《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》 * |
LU X等: "Real-time stage-wise object tracking in traffic scenes: an online tracker selection method via deep reinforcement learning", 《NEURAL COMPUTING AND APPLICATIONS》 * |
ZABALA U等: "Quantitative analysis of robot gesticulation behavior", 《AUTONOMOUS ROBOTS》 * |
YANG, ZHENG et al.: "Edge Computing Technology for Real-Time Video Stream Analysis", 《SCIENTIA SINICA INFORMATIONIS》 *
SU, LINA et al.: "Research on Vehicle Lane-Changing Behavior Recognition Based on Radar Data", 《JOURNAL OF BEIJING JIAOTONG UNIVERSITY》 *
CHEN, HUI: "Research on Driver Abnormal Posture Recognition Based on Convolutional Neural Networks", 《CHINA MASTER'S THESES FULL-TEXT DATABASE (ENGINEERING SCIENCE AND TECHNOLOGY II)》 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115311608A (en) * | 2022-10-11 | 2022-11-08 | 之江实验室 | Method and device for multi-task multi-target association tracking |
CN115512315A (en) * | 2022-11-01 | 2022-12-23 | 深圳市城市交通规划设计研究中心股份有限公司 | Non-motor vehicle child riding detection method, electronic device and storage medium |
CN116110006A (en) * | 2023-04-13 | 2023-05-12 | 武汉商学院 | Scenic spot tourist abnormal behavior identification method for intelligent tourism system |
CN116110006B (en) * | 2023-04-13 | 2023-06-20 | 武汉商学院 | Scenic spot tourist abnormal behavior identification method for intelligent tourism system |
CN116168350A (en) * | 2023-04-26 | 2023-05-26 | 四川路桥华东建设有限责任公司 | Intelligent monitoring method and device for realizing constructor illegal behaviors based on Internet of things |
CN116168350B (en) * | 2023-04-26 | 2023-06-27 | 四川路桥华东建设有限责任公司 | Intelligent monitoring method and device for realizing constructor illegal behaviors based on Internet of things |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114898342B (en) | Method for detecting call receiving and making of non-motor vehicle driver in driving | |
US20190304102A1 (en) | Memory efficient blob based object classification in video analytics | |
CN110795595B (en) | Video structured storage method, device, equipment and medium based on edge calculation | |
WO2021051601A1 (en) | Method and system for selecting detection box using mask r-cnn, and electronic device and storage medium | |
CN111222513B (en) | License plate number recognition method and device, electronic equipment and storage medium | |
CN111783749A (en) | Face detection method and device, electronic equipment and storage medium | |
CN114332776B (en) | Non-motor vehicle occupant pedestrian lane detection method, system, device and storage medium | |
CN112861575A (en) | Pedestrian structuring method, device, equipment and storage medium | |
CN111401196A (en) | Method, computer device and computer readable storage medium for self-adaptive face clustering in limited space | |
CN112381132A (en) | Target object tracking method and system based on fusion of multiple cameras | |
CN114639042A (en) | Video target detection algorithm based on improved CenterNet backbone network | |
CN113780243A (en) | Training method, device and equipment of pedestrian image recognition model and storage medium | |
CN115131634A (en) | Image recognition method, device, equipment, storage medium and computer program product | |
CN110942456A (en) | Tampered image detection method, device, equipment and storage medium | |
CN114359618A (en) | Training method of neural network model, electronic equipment and computer program product | |
CN115512315B (en) | Non-motor vehicle child riding detection method, electronic equipment and storage medium | |
CN116824641A (en) | Gesture classification method, device, equipment and computer storage medium | |
EP4332910A1 (en) | Behavior detection method, electronic device, and computer readable storage medium | |
CN116778415A (en) | Crowd counting network model for unmanned aerial vehicle and counting method | |
CN111091056A (en) | Method and device for identifying sunglasses in image, electronic equipment and storage medium | |
CN115953744A (en) | Vehicle identification tracking method based on deep learning | |
CN116129484A (en) | Method, device, electronic equipment and storage medium for model training and living body detection | |
CN114612907A (en) | License plate recognition method and device | |
CN114445787A (en) | Non-motor vehicle weight recognition method and related equipment | |
CN111401424A (en) | Target detection method, device and electronic system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||