WO2022142786A1 - Driving behavior recognition method, and device and storage medium - Google Patents

Driving behavior recognition method, and device and storage medium

Info

Publication number
WO2022142786A1
Authority
WO
WIPO (PCT)
Prior art keywords
driving behavior
target
driver
joint point
distance
Prior art date
Application number
PCT/CN2021/130751
Other languages
French (fr)
Chinese (zh)
Inventor
慕晨
王春利
郭红星
周金星
孙靖
陈茹梦
周萍萍
赵璐
张勇
朱明明
Original Assignee
中兴通讯股份有限公司
长安大学
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司, 长安大学
Publication of WO2022142786A1 publication Critical patent/WO2022142786A1/en

Classifications

    • G06F18/24 Pattern recognition: Classification techniques
    • G06T7/246 Image analysis: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/579 Image analysis: Depth or shape recovery from multiple images from motion
    • G06V10/26 Image preprocessing: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/764 Recognition or understanding using pattern recognition or machine learning: using classification, e.g. of video objects
    • G06V20/59 Scenes; Scene-specific elements: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V40/20 Recognition of biometric, human-related or animal-related patterns: Movements or behaviour, e.g. gesture recognition
    • G06T2207/10028 Image acquisition modality: Range image; Depth image; 3D point clouds
    • G06T2207/30196 Subject of image: Human being; Person

Definitions

  • the present application relates to the technical field of image processing, and in particular, to a driving behavior recognition method, device and storage medium.
  • the present application provides a driving behavior recognition method, device and storage medium.
  • an embodiment of the present application provides a driving behavior recognition method. The method includes: acquiring multiple frames of skeletal images corresponding to the current posture of a driver; determining action feature parameters corresponding to the driver according to the multiple frames of skeletal images; and determining the current driving behavior state of the driver according to the action feature parameters.
  • an embodiment of the present application further provides a driving behavior recognition device. The driving behavior recognition device includes a processor and a memory; the memory is used for storing a program; and the processor is used for executing the program and implementing the driving behavior recognition method described above when the program is executed.
  • an embodiment of the present application further provides a storage medium for readable storage. The storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the driving behavior recognition method described above.
  • FIG. 1 is a schematic diagram of a driving behavior recognition system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of part segmentation of a human body region provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a driving behavior recognition device provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a driving behavior recognition method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a skeleton image provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a sub-step of determining action characteristic parameters corresponding to a driver provided by an embodiment of the present application
  • FIG. 7 is a schematic flowchart of a sub-step of determining a target distance and a target angle provided by an embodiment of the present application
  • FIG. 8 is a schematic diagram of a target distance provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a target angle provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a state in which a driver is in a standard driving posture according to an embodiment of the present application.
  • Embodiments of the present application provide a driving behavior recognition method, device, system, and storage medium.
  • the driving behavior recognition method can be applied to a driving behavior recognition device. By recognizing the current driving behavior state of the driver according to the skeleton image, the influence of external environmental factors can be avoided, and the accuracy of detecting the driving behavior state of the driver can be improved.
  • the driving behavior recognition device may include a server or a terminal.
  • the server may be an independent server or a server cluster;
  • the terminal may be an electronic device such as a smart phone, a tablet computer, a notebook computer, and a desktop computer.
  • FIG. 1 is a schematic diagram of a driving behavior recognition system provided by an embodiment of the present application.
  • the driving behavior recognition system includes a driving behavior recognition device 10 , an image acquisition device 20 and a target device 30 .
  • the driving behavior recognition device 10 may be connected to the image acquisition device 20 by wired or wireless communication, and the driving behavior recognition device 10 may be connected to the target device 30 by wireless communication.
  • the image acquisition device 20 is configured to acquire a depth image including a driver, generate a skeleton image, and send the skeleton image to the driving behavior recognition device 10 .
  • the driving behavior recognition device 10 is used to recognize and process the skeleton image sent by the image acquisition device 20 to determine the current driving behavior state corresponding to the driver; the driving behavior recognition device 10 can also send an early warning message to the target device 30 according to the current driving behavior state .
  • the target device 30 is configured to receive the early warning message sent by the driving behavior recognition device 10, and issue an alarm according to the early warning message.
  • the image collection device 20 may collect multiple frames of depth images including drivers, and perform human body recognition, body part recognition, and bone joint point positioning on each frame of depth image to obtain multiple frames of bone images.
  • the driving behavior recognition device 10 can receive the multi-frame skeleton images sent by the image acquisition device 20, and determine the action feature parameters corresponding to the driver according to the multi-frame skeleton images; then, determine the current driving behavior state of the driver according to the action feature parameters.
  • image acquisition device 20 may include a camera or a sensing device for acquiring depth images.
  • the image capturing device 20 may be a somatosensory sensor.
  • a somatosensory sensor may be used to acquire a depth image including a driver.
  • the somatosensory sensor may include a depth camera, a color camera, and a light source emitter, and can acquire depth images, color images, and three-dimensional data information of a target object.
  • the working principle of obtaining depth information is as follows: light emitted by the light source emitter is projected into the real scene. Because the emitted light is deformed differently by the varying surface shapes of target objects, the reflected light can be collected and encoded to obtain the distance between each pixel in the scene and the depth camera, and then the position and depth information of the target object can be obtained.
  • the somatosensory sensor can perform human body recognition on the depth image. For example, the background and the person in the depth image are segmented according to a preset segmentation strategy to determine the body region or body contour information corresponding to the driver, so that the resulting depth image includes the body region or body contour information corresponding to the driver. Then, the somatosensory sensor performs human body part recognition on the depth image obtained by the human body recognition.
  • FIG. 2 is a schematic diagram of part segmentation of a human body region provided by an embodiment of the present application.
  • the body region in the depth image is segmented to obtain multiple part images, such as head, arm, leg, limb, and torso images;
  • the somatosensory sensor then locates the skeletal joint points in the depth image according to the body parts, and obtains the skeleton image.
  • the identified body parts are added to the virtual skeleton model, and adjusted according to the position information of the body parts to obtain a skeleton image including multiple joint points.
  • skeletal images may include, but are not limited to, joint points such as head joint points, neck joint points, hand joint points, elbow joint points, wrist joint points, shoulder joint points, and spine joint points.
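The joint set above can be sketched as a simple per-frame data structure. This is an illustrative assumption about the layout (joint names mapped to (x, y, z) tuples), not the sensor SDK's actual format:

```python
# Hypothetical per-frame skeleton representation: joint name -> (x, y, z).
# The joint list follows the joints named in the text; it is an assumption,
# not the somatosensory sensor's real schema.
SKELETON_JOINTS = [
    "head", "neck", "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow", "left_wrist", "right_wrist",
    "left_hand", "right_hand", "spine",
]

def make_frame(coords):
    """Build one skeleton frame from a dict of joint -> (x, y, z)."""
    missing = [j for j in SKELETON_JOINTS if j not in coords]
    if missing:
        raise ValueError(f"missing joints: {missing}")
    return {j: tuple(coords[j]) for j in SKELETON_JOINTS}
```

A sequence of such frames would then stand in for the "multiple frames of skeletal images" processed in the steps below.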
  • FIG. 3 is a schematic structural diagram of a driving behavior recognition device 10 provided by an embodiment of the present application.
  • the driving behavior recognition device 10 may include a processor 11 and a memory 12, wherein the processor 11 and the memory 12 may be connected by a bus, such as an I2C (Inter-integrated Circuit) bus or any other suitable bus.
  • the memory 12 may include a non-volatile storage medium and an internal memory.
  • the nonvolatile storage medium can store operating systems and computer programs.
  • the computer program includes program instructions that, when executed, can cause the processor to execute any driving behavior recognition method.
  • the processor 11 is used to provide computing and control capabilities, and support the operation of the entire driving behavior recognition device 10 .
  • the processor 11 is configured to run a computer program stored in the memory 12, and implement the following steps when executing the computer program:
  • when the processor 11 realizes the acquisition of the multiple frames of skeleton images corresponding to the current posture of the driver, the processor 11 is configured to realize:
  • the action feature parameters include the target distance between the driver's head and wrist, the target included angle between the forearm and the upper arm, and the action duration of the driver's current posture. When the processor 11 determines the action feature parameters corresponding to the driver according to the multiple frames of skeleton images, the processor 11 is configured to realize:
  • the target distance, the target included angle and the action duration corresponding to the driver are determined according to the multiple frames of the skeleton images.
  • when the processor 11 determines the current driving behavior state of the driver according to the action feature parameters corresponding to the driver, the processor 11 is configured to realize:
  • the current driving behavior state of the driver is determined according to the target distance, the target included angle and the action duration.
  • before determining the target distance, the target included angle, and the action duration corresponding to the driver according to the multiple frames of skeleton images, the processor 11 is further configured to realize:
  • smoothing is performed on the initial multiple frames of the skeleton images to obtain the smoothed multiple frames of the skeleton images.
  • when the processor 11 determines the target distance, the target included angle, and the action duration corresponding to the driver according to the multiple frames of skeleton images, the processor 11 is configured to realize:
  • determine the consecutive multiple frames of skeletal images in which the target included angle satisfies a preset condition, and determine the action duration according to the frame number and frame rate corresponding to those consecutive multiple frames of skeletal images.
  • the skeleton image includes a head joint point, a neck joint point, an elbow joint point, a wrist joint point, a shoulder joint point, and a spine joint point;
  • a three-dimensional space coordinate system is established according to the spine joint point and the neck joint point, and the head joint point coordinates, elbow joint point coordinates, wrist joint point coordinates, and shoulder joint point coordinates in the three-dimensional space coordinate system are determined respectively; the target distance is determined according to the head joint point coordinates and the wrist joint point coordinates; and the target included angle is determined according to the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates.
  • when determining the target included angle according to the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates, the processor 11 is configured to realize:
  • before determining the current driving behavior state of the driver according to the target distance, the target included angle, and the action duration, the processor 11 is further configured to realize:
  • acquire the reference distance between the head and the wrist and the reference included angle between the forearm and the upper arm when the driver is in a standard driving posture; when the reference distance is within a preset distance range value, determine the reference distance to be the standard distance; when the reference included angle is within a preset first included-angle range value, determine the reference included angle to be the standard included angle.
  • the current driving behavior state includes a dangerous driving behavior state.
  • the dangerous driving behavior state includes at least one of smoking while driving, making a phone call while driving, making a sharp turn, taking both hands off the steering wheel, and picking up objects.
  • when determining the current driving behavior state of the driver according to the target distance, the target included angle, and the action duration, the processor 11 is configured to realize:
  • if the target distance is less than the standard distance and less than a preset distance threshold, and the action duration is greater than or equal to a first time threshold, it is determined according to the target included angle whether the current driving behavior state is smoking while driving or making a phone call while driving, where the preset distance threshold is less than the standard distance; if the target distance is greater than the standard distance, it is determined according to the target distance, the standard distance, the target included angle, the standard included angle, and the action duration whether the current driving behavior state is one of making a sharp turn, taking both hands off the steering wheel, and picking up objects.
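The branching above can be sketched as a small rule function. All concrete values (the thresholds and the angle range used to separate phoning from smoking) are hypothetical placeholders, not values taken from the patent:

```python
def classify_behavior(distance, angle, duration,
                      standard_distance, distance_threshold,
                      first_time_threshold, phone_angle_range=(10.0, 60.0)):
    """Rule-based sketch of the decision logic described in the text.

    distance/angle/duration are the target distance, target included angle,
    and action duration; the threshold parameters and phone_angle_range are
    illustrative assumptions, not values from the patent.
    """
    if (distance < standard_distance and distance < distance_threshold
            and duration >= first_time_threshold):
        # Close head-wrist distance held long enough: phone call vs smoking
        # is distinguished by the elbow angle (range here is hypothetical).
        lo, hi = phone_angle_range
        if lo <= angle <= hi:
            return "phone call while driving"
        return "smoking while driving"
    if distance > standard_distance and duration >= first_time_threshold:
        return "other dangerous behavior (sharp turn / hands off wheel / picking up object)"
    return "normal driving"
```

In a full implementation this second branch would further separate sharp turns, hands off the wheel, and picking up objects using the standard distance and standard included angle, as the text describes.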
  • the processor 11 is further configured to:
  • if the current driving behavior state is a dangerous driving behavior state, an early warning is performed according to the dangerous driving behavior state.
  • performing an early warning according to the dangerous driving behavior state includes sending an early warning message to a target device within a preset range and/or displaying the current vehicle early warning state on a navigation map.
  • the target device includes a vehicle and a mobile terminal carried by a user.
  • the processor 11 may be a central processing unit (Central Processing Unit, CPU), and the processor may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • FIG. 4 is a schematic flowchart of a driving behavior recognition method provided by an embodiment of the present application.
  • the driving behavior recognition method can be applied to a driving behavior recognition device to recognize the current driving behavior state of the driver according to skeleton images, which can avoid the influence of external environmental factors and improve the accuracy of detecting the driver's driving behavior state.
  • the driving behavior recognition method includes steps S10 to S30.
  • Step S10 acquiring multiple frames of skeleton images corresponding to the driver's current posture.
  • the driving behavior identification method provided by the embodiment of the present application can be applied to scenarios such as detecting whether a driver is driving safely, simulated driving in a virtual game, and an athlete's posture training.
  • the embodiments of the present application will be described in detail by taking the detection of whether the driver drives safely as an example.
  • acquiring multiple frames of skeletal images corresponding to the current posture of the driver may include: acquiring multiple frames of skeletal images sent by an image acquisition device, where the multiple frames of skeletal images are generated by the image acquisition device by performing human body recognition, human body part recognition, and skeletal joint point localization on multiple frames of depth images that include the driver.
  • the image acquisition device may be a somatosensory sensor.
  • the skeleton images corresponding to the driver may be acquired through the somatosensory sensor.
  • somatosensory sensors can be installed in the cab of a car to monitor the cab. For example, after the car is started, the somatosensory sensor captures depth images that include the driver.
  • the somatosensory sensor can perform human body recognition, human body part recognition and bone joint point positioning on the depth image through the built-in software development kit, and obtain multi-frame bone images corresponding to the driver's current posture.
  • the somatosensory sensor supports a half-body mode and a full-body mode, where the shooting distance corresponding to the half-body mode is 0.4 to 3 meters, and the shooting distance corresponding to the full-body mode is 0.8 to 4 meters.
  • the somatosensory sensor can be set to the half-body mode.
  • acquiring the depth images may further include: performing format conversion on the depth data in the collected depth images according to a preset data format to obtain format-converted depth images.
  • the preset data format may be Mat format.
  • the depth data corresponding to the acquired depth image is format-converted according to the Mat format, so that the depth data corresponding to the format-converted depth image is in the Mat format. Then, human body recognition, human body part recognition, and skeletal joint point positioning are performed on the format-converted depth image.
  • the driving behavior recognition device may receive multiple frames of skeletal images sent by the somatosensory sensor.
  • FIG. 5 is a schematic diagram of a skeleton image provided by an embodiment of the present application.
  • the skeleton image includes the skeleton information corresponding to the driver.
  • the skeleton information may include different joint points and connection relationships between the joint points, and the like.
  • the joints may include head joints, neck joints, hand joints, elbow joints, wrist joints, shoulder joints, spine joints, and the like.
  • the dynamic actions of the driver can be detected and recognized, and the accuracy of recognizing the current driving behavior of the driver can be improved.
  • Step S20 Determine action characteristic parameters corresponding to the driver according to the multiple frames of the skeleton images.
  • the action feature parameters may include the target distance between the driver's head and wrist, the target included angle between the forearm and the upper arm, and the action duration of the driver's current posture.
  • the target distance and the target angle can be determined by the positional relationship corresponding to the joint points in the skeleton image; the action duration refers to the time the driver maintains the current posture.
  • by determining the action feature parameters from skeleton images, the accuracy of identifying the action feature parameters corresponding to the driver can be improved, and privacy leakage can be avoided.
  • determining the action feature parameter corresponding to the driver according to the multi-frame skeleton images may include: determining the target distance, target angle and action duration corresponding to the driver according to the multi-frame skeleton images.
  • FIG. 6 is a schematic flowchart of a sub-step of determining action characteristic parameters corresponding to a driver provided by an embodiment of the present application, which may specifically include steps S201 to S203 .
  • Step S201 performing smoothing processing on the initial multiple frames of the skeleton images according to a preset smoothing processing strategy to obtain the smoothed multiple frames of the skeleton images.
  • step S201 is a step performed before determining the target distance, target angle and action duration corresponding to the driver according to the multi-frame skeleton images.
  • the computer vision platform needs to detect and recognize the skeleton images in order to determine the current driving behavior state of the driver. The detection and recognition process places high real-time requirements on the driver's action feature parameters; if the skeleton images are not smoothed, the computer vision platform may jitter or even crash.
  • the initial multiple frames of skeleton images refer to the skeleton images sent by the somatosensory sensor. Smoothing is also called filtering; by smoothing the skeleton images, the noise or distortion in the skeleton images can be reduced, and the recognition efficiency can also be improved.
  • the preset smoothing processing strategy may include, but is not limited to, a mean filtering algorithm, a median filtering algorithm, a Gaussian filtering algorithm, a bilateral filtering algorithm, and the like.
  • the initial multi-frame skeleton image is smoothed to obtain the smoothed multi-frame skeleton image.
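The smoothing step can be illustrated with a minimal sliding-window mean filter over per-joint coordinates, one of the strategies listed above. The window size and the frames-as-dicts layout are assumptions for the sketch:

```python
def smooth_frames(frames, window=3):
    """Sliding-window mean filter over per-joint 3-D coordinates.

    `frames` is a list of dicts mapping joint name -> (x, y, z); the window
    is centred on each frame and truncated at the sequence boundaries.
    """
    smoothed = []
    for i in range(len(frames)):
        lo = max(0, i - window // 2)
        hi = min(len(frames), i + window // 2 + 1)
        out = {}
        for joint in frames[i]:
            pts = [frames[k][joint] for k in range(lo, hi)]
            n = len(pts)
            # average each coordinate axis independently over the window
            out[joint] = tuple(sum(p[d] for p in pts) / n for d in range(3))
        smoothed.append(out)
    return smoothed
```

A median or Gaussian filter (also named in the text) would follow the same per-joint, per-axis pattern with a different aggregation.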
  • Step S202 Extract the joint point information in each frame of the smoothed skeleton images, and determine the target distance and the target included angle corresponding to the driver according to the joint point information.
  • determining the target distance, target angle and action duration corresponding to the driver according to the multiple frames of skeleton images includes steps S202 and S203.
  • FIG. 7 is a schematic flowchart of a sub-step of determining a target distance and a target angle corresponding to a driver according to an embodiment of the present application.
  • Step S202 may include steps S2021 to S2023 .
  • Step S2021 Establish a three-dimensional space coordinate system according to the spine joint point and the neck joint point, and determine, respectively, the head joint point coordinates, the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates in the three-dimensional space coordinate system.
  • the wrist joint point can be replaced with the hand joint point.
  • the connection line between the spine joint point and the neck joint point can be used as the Z axis of the three-dimensional space coordinate system, the spine joint point can be used as the origin, and the X axis and the Y axis are determined based on the Z axis, so as to establish the three-dimensional space coordinate system.
  • the head joint point coordinates, the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates are then determined in the three-dimensional space coordinate system.
  • the coordinates corresponding to each joint point may be determined according to the projection of each joint point on the X axis, the Y axis, and the Z axis; the specific process of determining the coordinates of each joint point is not repeated here.
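Establishing the body-centred coordinate system can be sketched as follows, with the spine joint as the origin and the spine-to-neck direction as the Z axis as described above. Since the text leaves the X/Y construction open, the fixed reference direction used here to complete the basis is an added assumption:

```python
import math

def _norm(v):
    return math.sqrt(sum(c * c for c in v))

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _unit(v):
    n = _norm(v)
    return tuple(c / n for c in v)

def body_coords(point, spine, neck, reference=(1.0, 0.0, 0.0)):
    """Express `point` in a body-centred frame: origin at the spine joint,
    Z axis along spine->neck, X and Y completed to an orthonormal basis.

    The `reference` direction fixing the X axis is an assumption, not from
    the patent; it must not be parallel to the spine->neck direction.
    """
    z = _unit(_sub(neck, spine))
    x = _unit(_cross(reference, z))   # orthogonal to Z by construction
    y = _cross(z, x)                  # completes the right-handed basis
    d = _sub(point, spine)
    return (_dot(d, x), _dot(d, y), _dot(d, z))
```

Joint coordinates returned by `body_coords` correspond to the per-axis projections mentioned above.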
  • Step S2022 Determine the target distance according to the head joint point coordinates and the wrist joint point coordinates.
  • the target distance refers to the distance between the head joint point and the wrist joint point.
  • FIG. 8 is a schematic diagram of a target distance provided by an embodiment of the present application.
  • the head joint point coordinates can be represented as (x₁, y₁, z₁); the wrist joint point coordinates can be represented as (x₂, y₂, z₂); and the target distance can be represented as B.
  • the target distance B between the head joint point coordinates and the wrist joint point coordinates can be calculated by the Euclidean distance formula:

    B = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²)
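A minimal sketch of this Euclidean distance computation between the head and wrist joint coordinates:

```python
import math

def target_distance(head, wrist):
    """Euclidean distance B between head (x1, y1, z1) and wrist (x2, y2, z2)."""
    (x1, y1, z1), (x2, y2, z2) = head, wrist
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)
```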
  • Step S2023 Determine the target included angle according to the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates.
  • FIG. 9 is a schematic diagram of a target angle provided by an embodiment of the present application.
  • determining the target included angle according to the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates may include: determining the first vector between the elbow joint point and the shoulder joint point according to the elbow joint point coordinates and the shoulder joint point coordinates; determining the second vector between the elbow joint point and the wrist joint point according to the elbow joint point coordinates and the wrist joint point coordinates; and determining the target included angle from the first vector and the second vector based on a preset vector dot-product formula.
  • the preset vector dot-product formula is:

    cos⟨a, b⟩ = (a · b) / (|a| |b|)

    where ⟨a, b⟩ represents the angle between vector a and vector b.
  • the first vector a of the line connecting the elbow joint point and the shoulder joint point can be determined according to the elbow joint point coordinates and the shoulder joint point coordinates; the second vector b of the line connecting the elbow joint point and the wrist joint point can be determined according to the elbow joint point coordinates and the wrist joint point coordinates.
  • the target included angle may be denoted D and can be obtained as:

    D = arccos((a · b) / (|a| |b|))

    where a is the first vector and b is the second vector.
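A minimal sketch of the elbow-angle computation via the dot product, using vectors from the elbow to the shoulder and from the elbow to the wrist:

```python
import math

def target_angle(shoulder, elbow, wrist):
    """Angle D (degrees) at the elbow between the elbow->shoulder and
    elbow->wrist vectors, computed with the vector dot-product formula."""
    v1 = tuple(s - e for s, e in zip(shoulder, elbow))
    v2 = tuple(w - e for w, e in zip(wrist, elbow))
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # clamp guards against floating-point drift slightly outside [-1, 1]
    cos_d = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_d))
```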
  • the calculation is simple, and the target distance and target angle corresponding to the driver's current posture can be accurately determined.
  • Step S203 Determine the consecutive multiple frames of skeletal images in which the corresponding target distance and target included angle satisfy a preset condition, and determine the action duration according to the frame number and frame rate corresponding to those consecutive multiple frames of skeletal images.
  • the preset condition means that the target distance satisfies a set distance condition and the target included angle satisfies a set angle condition. It can be understood that when the target distance corresponding to the driver's current posture satisfies the set distance condition and the target included angle satisfies the set angle condition, recording of the action duration corresponding to the driver's current posture is started; when the action duration meets a set time threshold, the current driving behavior state of the driver is determined according to the current posture.
  • the action duration can be determined by using consecutive multiple frames of skeleton images.
  • the action duration is determined according to the frame number and frame rate corresponding to the consecutive multiple frames of skeleton images:

    frame rate = number of frames / time

    where the unit of frame rate is frames per second.
  • the frame rate can be set according to the actual situation, and the specific value is not limited here.
  • the action duration can be denoted as T.
  • the action duration T can be determined to be 0.2 seconds.
  • the action duration can be determined according to the frame number and frame rate corresponding to the consecutive multiple frames of skeleton images, so as to realize real-time monitoring of the driver's current posture.
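The duration computation can be sketched as counting the longest run of consecutive frames whose distance/angle condition holds and dividing by the frame rate. Representing the per-frame condition as a precomputed boolean list is an assumption for the sketch:

```python
def action_duration(flags, frame_rate):
    """Duration in seconds of the longest run of consecutive frames whose
    distance/angle condition holds.

    `flags` is a per-frame boolean list (True = condition satisfied);
    `frame_rate` is in frames per second, so duration = frames / frame rate.
    """
    best = run = 0
    for ok in flags:
        run = run + 1 if ok else 0
        best = max(best, run)
    return best / frame_rate
```

For instance, a run of frames satisfying the condition for one fifth of a second yields the 0.2-second duration T mentioned above.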
  • Step S30 Determine the current driving behavior state of the driver according to the action characteristic parameter.
  • the method may further include: acquiring the reference distance between the driver's head and wrist and the reference included angle between the forearm and the upper arm when the driver is in a standard driving posture state.
  • FIG. 10 is a schematic diagram of a state in which a driver is in a standard driving posture according to an embodiment of the present application.
  • the driver may be prompted to maintain a standard driving posture. For example, prompt the driver to hold the steering wheel with both hands, lean on the seat with their upper body, and so on. Then, a skeleton image of the driver's standard driving posture state is obtained, and the reference distance between the driver's head and the wrist and the reference angle between the forearm and the rear arm are determined according to the skeleton image.
  • the reference distance may be determined from the distance between the head joint point coordinates and the wrist joint point coordinates.
  • the reference angle between the forearm and the upper arm may be determined according to the elbow joint point coordinates, the wrist joint point coordinates and the shoulder joint point coordinates.
  • for the specific process, refer to the detailed description of the above embodiments; it is not repeated here.
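As a sketch of the computation referred to above, the head-to-wrist distance and the elbow included angle can be derived from 3D joint coordinates in a skeleton frame. The function names and the use of (x, y, z) tuples are assumptions for illustration, not the application's own API:

```python
import math

def joint_distance(p, q):
    """Euclidean distance between two joint points given as (x, y, z)."""
    return math.dist(p, q)

def elbow_angle(shoulder, elbow, wrist):
    """Angle (degrees) at the elbow between the elbow->shoulder and
    elbow->wrist vectors, i.e. the forearm/upper-arm included angle."""
    u = [s - e for s, e in zip(shoulder, elbow)]
    v = [w - e for w, e in zip(wrist, elbow)]
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.hypot(*u) * math.hypot(*v)
    # clamp to [-1, 1] to guard against floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

A fully extended arm (shoulder, elbow and wrist collinear) yields an angle near 180°, while a tightly bent arm yields a small angle.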
  • when the reference distance is within the preset distance range value, the reference distance is determined as the standard distance; when the reference included angle is within the preset first included angle range value, the reference included angle is determined as the standard included angle.
  • the preset distance range value and the preset first included angle range value can be set according to the actual situation, and the specific value is not limited here.
  • the distance between the wrist and the head and the joint angle may be measured for a preset number of test personnel.
  • with the testers in the standard driving posture, the distance between the wrist and the head is measured to obtain the distance range value (41.5 cm-60.2 cm); the angle between the line connecting the elbow and the shoulder and the line connecting the elbow and the wrist is measured to obtain the first included angle range value (100.5°-167.8°).
  • when the reference distance is within the distance range value (41.5 cm-60.2 cm), the reference distance may be determined to be the standard distance.
  • when the reference included angle is within the first included angle range value (100.5°-167.8°), the reference included angle may be determined to be the standard included angle.
  • when the driver's reference distance is within the distance range value and the reference included angle is within the first included angle range value, the driver's current posture is standard and safe, so the reference distance can be determined as the standard distance and the reference included angle as the standard included angle.
  • the standard distance may be denoted as A, and the standard included angle may be denoted as C.
  • in this way, the standard distance and the standard included angle can be determined.
  • the target distance can be compared with the standard distance and the target angle can be compared with the standard angle, thereby improving the accuracy of identifying the current driving behavior state.
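The calibration step above (accepting the measured reference values as standards only when they fall inside the population ranges) can be sketched as follows; the function name and the inclusive treatment of the range endpoints are assumptions:

```python
def calibrate(ref_distance_cm: float, ref_angle_deg: float):
    """Return (standard_distance, standard_angle) if the reference values
    fall within the measured ranges (41.5-60.2 cm, 100.5-167.8 degrees);
    otherwise return None to signal that re-measurement is needed."""
    if 41.5 <= ref_distance_cm <= 60.2 and 100.5 <= ref_angle_deg <= 167.8:
        return ref_distance_cm, ref_angle_deg
    return None

print(calibrate(50.0, 120.0))  # (50.0, 120.0)
print(calibrate(70.0, 120.0))  # None
```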
  • determining the current driving behavior state of the driver according to the action characteristic parameters may include: determining the current driving behavior state of the driver according to the target distance, the target angle and the action duration.
  • the current driving behavior state may include a dangerous driving behavior state, wherein the dangerous driving behavior state includes at least one of smoking while driving, making a phone call while driving, turning sharply, taking hands off the steering wheel, and picking up objects.
  • the recognition accuracy can be improved.
  • determining the current driving behavior state of the driver according to the target distance, the target angle and the action duration may include: if the target distance is less than the standard distance and less than a preset distance threshold, and the action duration is greater than or equal to the first time threshold, determining, according to the target angle, whether the current driving behavior state is smoking while driving or making a phone call while driving, wherein the preset distance threshold is smaller than the standard distance.
  • when the target distance B is less than the standard distance A, the driver's current driving behavior state may be smoking or talking on the phone while driving; when the target distance B is greater than or equal to the standard distance A, the driver's current driving behavior state may be one of sharp turning, taking hands off the steering wheel, or picking up objects.
  • if the target distance B is less than the standard distance A and less than the preset distance threshold, and the action duration T is greater than or equal to the first time threshold, it is determined, according to the target angle D, whether the current driving behavior state is smoking while driving or making a phone call while driving.
  • the preset distance threshold and the first time threshold may be set according to actual conditions, and the specific values are not limited herein.
  • the preset distance threshold may be 0.1m
  • the first time threshold may be 0.8s, wherein the standard distance A is greater than the preset distance threshold of 0.1m.
  • if the target distance B is smaller than the standard distance A and smaller than the preset distance threshold of 0.1 m, and the action duration T is greater than or equal to the first time threshold of 0.8 s, it is determined, according to the target angle D, whether the current driving behavior state is smoking while driving or making a phone call while driving.
  • the target distance may include a first target distance corresponding to the driver's left hand and head, and a second target distance corresponding to the driver's right hand and head.
  • the target angle may include a first target included angle corresponding to the driver's left arm and a second target included angle corresponding to the driver's right arm.
  • the target distance B may include a first target distance B1 and a second target distance B2;
  • the target included angle D may include a first target included angle D1 and a second target included angle D2.
  • when the first target angle or the second target angle is within a preset second included angle range value, it is determined that the current driving behavior state is making a phone call while driving.
  • the preset second included angle range value can be set according to the actual situation, which is not uniquely limited here, for example, the second included angle range value is [0°, 5°).
  • for example, when the first target angle D1 or the second target angle D2 is within [0°, 5°), it may be determined that the current driving behavior state is making a phone call while driving.
  • when the first target angle or the second target angle is within a preset third included angle range value, and the first target angle or the second target angle increases and decreases multiple times within a preset duration, it is determined that the current driving behavior state is smoking while driving.
  • the preset third included angle range value may be set according to actual conditions, and is not uniquely limited here.
  • the third included angle range value is [5°, 10°].
  • the preset duration can be set according to the actual situation, and there is no unique limitation here.
  • the preset duration can be 10s.
  • when the first target angle D1 or the second target angle D2 is within [5°, 10°], and D1 or D2 increases and decreases multiple times within the preset duration of 10 s, or the first target distance B1 or the second target distance B2 increases and decreases multiple times, it may be determined that the current driving behavior state is smoking while driving.
  • for example, the first target angle D1 or the second target angle D2 increases from [5°, 10°] to the standard angle C and then decreases from the standard angle C back to [5°, 10°]; or, the first target distance B1 or the second target distance B2 increases from the distance threshold of 0.1 m to the standard distance A and then decreases from the standard distance A back to 0.1 m.
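The phone-call/smoking discrimination described above can be sketched as follows. The function name is a placeholder, the angle ranges follow the example values in this section ([0°, 5°) for a phone call, [5°, 10°] for smoking), and the repeated raise/lower motion is simplified to a pre-computed boolean flag:

```python
def classify_near_head(angle_deg: float, oscillates: bool) -> str:
    """Classify a posture whose target distance is already below both the
    standard distance and the 0.1 m threshold for at least 0.8 s."""
    if 0.0 <= angle_deg < 5.0:
        # second included angle range: hand held still against the ear
        return "phone call while driving"
    if 5.0 <= angle_deg <= 10.0 and oscillates:
        # third range plus repeated increase/decrease of angle or distance
        return "smoking while driving"
    return "undetermined"

print(classify_near_head(3.0, False))  # phone call while driving
print(classify_near_head(7.0, True))   # smoking while driving
```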
  • determining the current driving behavior state of the driver according to the target distance, the target angle and the action duration may include: if the target distance is greater than the standard distance, determining, according to the target distance, the standard distance, the target angle, the standard angle and the action duration, whether the current driving behavior state is one of sharp turning, taking hands off the steering wheel, or picking up objects.
  • when it is determined that the target distance B is greater than the standard distance A, it can be determined, based on the target distance B, the standard distance A, the target angle D, the standard angle C and the action duration T, whether the current driving behavior state is one of sharp turning, taking hands off the steering wheel, or picking up objects.
  • the current driving behavior state is determined to be a sharp turn.
  • the second time threshold may be set according to the actual situation, which is not uniquely limited here.
  • the second time threshold may be 1s.
  • the preset angle threshold may be set according to the actual situation, and the specific value is not limited herein.
  • the preset angle threshold may be 180°.
  • the current driving behavior state of the driver may be determined to be a sharp turn.
  • when both the first target distance and the second target distance are greater than the standard distance, both the first target angle and the second target angle are greater than the standard angle, and the action duration is greater than or equal to the first time threshold, it can be determined that the current driving behavior state of the driver is that both hands are off the steering wheel.
  • for example, when the first target distance B1 and the second target distance B2 are both greater than the standard distance A, the first target angle D1 and the second target angle D2 are both greater than the standard angle C, and the action duration T is greater than or equal to the first time threshold of 0.8 s, it can be determined that both hands are off the steering wheel.
  • when the first target distance or the second target distance is greater than the standard distance, the first target angle or the second target angle is equal to the preset angle threshold, and the action duration is greater than or equal to the first time threshold, it can be determined that the current driving behavior state of the driver is picking up objects.
  • for example, when the first target angle D1 or the second target angle D2 is equal to the preset angle threshold of 180° and the action duration T is greater than or equal to the first time threshold of 0.8 s, it can be determined that the current driving behavior state of the driver is picking up objects.
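The branch above (target distance greater than the standard distance) can be sketched similarly. The per-hand arguments, the 180° angle threshold and the 0.8 s / 1 s time thresholds follow the example values in this section; the exact sharp-turn condition is an assumption, since it is not fully spelled out here:

```python
def classify_far_from_head(b1, b2, d1, d2, duration,
                           std_dist, std_angle,
                           angle_thresh=180.0, t1=0.8, t2=1.0):
    """Classify postures where the target distance exceeds the standard
    distance, per the example thresholds in this section."""
    # both hands far from head, both arms extended past the standard angle
    if (b1 > std_dist and b2 > std_dist
            and d1 > std_angle and d2 > std_angle and duration >= t1):
        return "hands off the steering wheel"
    # one arm fully straightened (angle at the 180-degree threshold)
    if ((b1 > std_dist or b2 > std_dist)
            and (d1 == angle_thresh or d2 == angle_thresh) and duration >= t1):
        return "picking up objects"
    # fallback: sustained off-standard distance (assumed condition)
    if (b1 > std_dist or b2 > std_dist) and duration >= t2:
        return "sharp turn"
    return "undetermined"
```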
  • the method further includes: if the current driving behavior state is a dangerous driving behavior state, performing an early warning according to the dangerous driving behavior state, including sending an early warning message to a target device within a preset range and/or displaying the current vehicle's warning status on the navigation map, wherein the target device includes a vehicle and a mobile terminal carried by a user.
  • the mobile terminal may include, but is not limited to, electronic devices such as smart phones, tablet computers, notebook computers, personal digital assistants, and wearable devices.
  • when the current driving behavior state is at least one of smoking while driving, making a phone call while driving, turning sharply, taking hands off the steering wheel, or picking up objects, an early warning message may be sent to target devices within a preset range.
  • the preset range can be set according to the actual situation, and the specific value is not limited here.
  • the preset range can be 20 meters or 50 meters.
  • an early warning message is sent to a target device within a preset range.
  • a communication connection can be established with the vehicle and with a mobile terminal carried by a user based on communication methods such as 4G, 5G, Bluetooth, ZigBee and Wi-Fi, and the early warning message is sent to the mobile terminal.
  • the methods of sending the warning message may include but are not limited to text messages, phone calls, WeChat, emails, and the like.
  • the mobile terminal can give an alarm by means of light, sound and vibration to remind the user to pay attention to driving safety.
  • the current vehicle warning status is displayed on the navigation map.
  • navigation map refers to the electronic map being used by vehicles or pedestrians at the current time.
  • the warning status of the current vehicle may be marked on the navigation map and updated in real time. For example, the location of the current vehicle can be highlighted to remind other vehicles and pedestrians to pay attention to safety.
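A minimal sketch of the early-warning dispatch described above; the device list, the range filter and the returned message are illustrative assumptions standing in for the actual SMS/app-push/map-update channels:

```python
def broadcast_warning(state: str, devices, preset_range_m: float = 50.0):
    """Send a warning about dangerous state `state` to every target device
    (vehicle or mobile terminal) within `preset_range_m` meters."""
    message = f"Warning: nearby driver in dangerous state: {state}"
    notified = []
    for device in devices:
        if device["distance_m"] <= preset_range_m:
            notified.append(device["id"])  # stand-in for the real delivery channel
    return message, notified

msg, ids = broadcast_warning("sharp turn",
                             [{"id": "car-1", "distance_m": 20},
                              {"id": "phone-2", "distance_m": 80}])
print(ids)  # ['car-1']
```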
  • the driving behavior recognition method, device, system and storage medium provided by the above embodiments detect and recognize the driver's dynamic actions by receiving the multi-frame skeleton images sent by the somatosensory sensor and determining the action feature parameters corresponding to the driver from those images; since skeleton images are not easily disturbed by the external environment and can be obtained in scenes without lighting, this improves the accuracy of recognizing the driver's current driving behavior state.
  • computation with the head, wrist, elbow and shoulder joint point coordinates is simple and can accurately determine the target distance and target angle corresponding to the driver's current posture; by determining the consecutive frames of skeleton images whose corresponding target distance and target angle satisfy the preset condition, the action duration can be determined from the number of frames and the frame rate of those frames, enabling real-time monitoring of the driver's current posture.
  • by determining, from the skeleton image of the standard driving posture, the reference distance between the driver's head and wrist and the reference angle between the forearm and the upper arm, the standard distance and standard angle can be established.
  • the target distance can then be compared with the standard distance and the target angle with the standard angle, improving the accuracy of identifying the current driving behavior state; comprehensively identifying the driver's current driving behavior state according to the target distance, the target angle and the action duration further improves the recognition accuracy; and when the current driving behavior state is determined to be a dangerous driving behavior state, sending an early warning message to target devices within a preset range and/or displaying the current vehicle's warning status on the navigation map can reduce traffic accidents.
  • the embodiments of the present application further provide a storage medium for readable storage; the storage medium stores a program, the program includes program instructions, and a processor executes the program instructions to implement any of the driving behavior recognition methods provided by the embodiments of the present application.
  • the program is loaded by the processor to perform the steps of the driving behavior recognition method described above.
  • the storage medium may be an internal storage unit of the driving behavior recognition device described in the foregoing embodiments, such as a hard disk or a memory of the driving behavior recognition device.
  • the storage medium may also be an external storage device of the driving behavior recognition device, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital card (Secure Digital Card, SD Card) or a flash card (Flash Card) equipped on the driving behavior recognition device.
  • the embodiments of the present application disclose a driving behavior recognition method, device, and storage medium.
  • a driving behavior recognition method: by acquiring multiple frames of skeleton images corresponding to the driver's current posture and determining the action feature parameters corresponding to the driver according to the multiple frames of skeleton images, the influence of external environmental factors can be avoided and the action feature parameters can be determined more accurately; by determining the current driving behavior state of the driver according to the action feature parameters, the accuracy of detecting the driver's driving behavior state can be improved.
  • the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components.
  • some or all physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit.
  • such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules or other data.
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • as is well known to those of ordinary skill in the art, communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Abstract

A driving behavior recognition method, and a device and a storage medium. The driving behavior recognition method comprises: acquiring a plurality of frames of skeleton images corresponding to a driver (S10); according to the plurality of frames of skeleton images, determining action feature parameters corresponding to the driver (S20); and determining the current driving behavior state of the driver according to the action feature parameters (S30).

Description

Driving behavior recognition method, device and storage medium

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based on, and claims priority to, Chinese patent application No. 202011631342.5, filed on December 30, 2020, the entire content of which is incorporated herein by reference.
TECHNICAL FIELD

The present application relates to the technical field of image processing, and in particular, to a driving behavior recognition method, device and storage medium.
BACKGROUND

With continued economic development, automobiles have become a common means of transportation, accompanied by a large number of traffic accidents. Analysis of these accidents shows that dangerous driver behavior has become the main cause of traffic accidents. Standardizing driver behavior is therefore critical, and the demand for driver behavior detection keeps growing. Existing methods for detecting driver behavior generally collect and recognize images, but such methods are often affected by environmental factors such as lighting, so their recognition accuracy is low.

Therefore, how to improve the accuracy of recognizing the driving behavior of the driver has become an urgent problem to be solved.
SUMMARY OF THE INVENTION

The present application provides a driving behavior recognition method, device and storage medium.

In a first aspect, an embodiment of the present application provides a driving behavior recognition method, the method including: acquiring multiple frames of skeleton images corresponding to the current posture of a driver; determining action feature parameters corresponding to the driver according to the multiple frames of skeleton images; and determining the current driving behavior state of the driver according to the action feature parameters.

In a second aspect, an embodiment of the present application further provides a driving behavior recognition device, the driving behavior recognition device including a processor and a memory; the memory is configured to store a program; the processor is configured to execute the program and, when executing the program, implement the driving behavior recognition method described above.

In a third aspect, an embodiment of the present application further provides a storage medium for readable storage, the storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the driving behavior recognition method described above.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not limiting of the present application.
BRIEF DESCRIPTION OF THE DRAWINGS

In order to explain the technical solutions of the embodiments of the present application more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic diagram of a driving behavior recognition system provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of part segmentation of a human body region provided by an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a driving behavior recognition device provided by an embodiment of the present application;

FIG. 4 is a schematic flowchart of a driving behavior recognition method provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of a skeleton image provided by an embodiment of the present application;

FIG. 6 is a schematic flowchart of sub-steps of determining action feature parameters corresponding to a driver provided by an embodiment of the present application;

FIG. 7 is a schematic flowchart of sub-steps of determining a target distance and a target angle provided by an embodiment of the present application;

FIG. 8 is a schematic diagram of a target distance provided by an embodiment of the present application;

FIG. 9 is a schematic diagram of a target angle provided by an embodiment of the present application;

FIG. 10 is a schematic diagram of a driver in a standard driving posture according to an embodiment of the present application.
DETAILED DESCRIPTION

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.

The flowcharts shown in the figures are only illustrative and do not necessarily include all contents and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, combined or partially merged, so the actual execution order may change according to the actual situation.

It should be understood that the terms used in the specification of the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms unless the context clearly dictates otherwise.

It should also be understood that the term "and/or" used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.

It should be understood that the specific embodiments described herein are only used to explain the present application and are not intended to limit it.

In the following description, suffixes such as "module", "component" or "unit" used to represent elements are used only to facilitate the description of the present application and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
Embodiments of the present application provide a driving behavior recognition method, device, system and storage medium. The driving behavior recognition method can be applied to a driving behavior recognition device. By recognizing the driver's current driving behavior state from skeleton images, the influence of external environmental factors can be avoided, improving the accuracy of detecting the driver's driving behavior state.

In some examples, the driving behavior recognition device may include a server or a terminal. The server may be an independent server or a server cluster; the terminal may be an electronic device such as a smartphone, tablet computer, notebook computer or desktop computer.

Referring to FIG. 1, FIG. 1 is a schematic diagram of a driving behavior recognition system provided by an embodiment of the present application. The driving behavior recognition system includes a driving behavior recognition device 10, an image acquisition apparatus 20 and a target device 30.

The driving behavior recognition device 10 may be connected with the image acquisition apparatus 20 by wired or wireless communication, and with the target device 30 by wireless communication. The image acquisition apparatus 20 is configured to acquire a depth image including the driver, generate a skeleton image, and send the skeleton image to the driving behavior recognition device 10. The driving behavior recognition device 10 is configured to recognize and process the skeleton image sent by the image acquisition apparatus 20 to determine the driver's current driving behavior state; the driving behavior recognition device 10 may also send an early warning message to the target device 30 according to the current driving behavior state. The target device 30 is configured to receive the early warning message sent by the driving behavior recognition device 10 and issue an alarm according to the early warning message.

In some embodiments, the image acquisition apparatus 20 may acquire multiple frames of depth images including the driver, and perform human body recognition, body part recognition and bone joint point positioning on each frame of depth image to obtain multiple frames of skeleton images. The driving behavior recognition device 10 may receive the multiple frames of skeleton images sent by the image acquisition apparatus 20 and determine the action feature parameters corresponding to the driver according to the multiple frames of skeleton images; then, it determines the driver's current driving behavior state according to the action feature parameters.

In some examples, the image acquisition apparatus 20 may include a camera or a sensing device for acquiring depth images. For example, the image acquisition apparatus 20 may be a somatosensory sensor. In the embodiments of the present application, a somatosensory sensor may be used to acquire a depth image including the driver.

It should be noted that the somatosensory sensor may include a depth camera, a color camera and a light source emitter, and can acquire depth information of a target object, such as a depth image, a color image and three-dimensional data information. In some examples, the working principle of acquiring depth information is as follows: the light emitted by the light source emitter is projected into the real scene; since the emitted light changes with the surface shape of the target object, collecting and encoding this light yields the distance difference between each pixel in the scene and the depth camera, from which the position and depth information of the target object are obtained.
在一些实施例中,体感器在采集包括驾驶人员的深度图像之后,可以对深度图像进行人体识别。例如,根据预设的分割策略对深度图像中的背景和人物进行分割,以确定驾驶人员对应的人体区域或人体轮廓信息,得到的深度图像包括驾驶人员对应的人体区域或人体轮廓信息。然后,体感器对人体识别得到的深度图像进行人体部位识别。In some embodiments, after collecting the depth image including the driver, the somatosensory can perform human body recognition on the depth image. For example, the background and people in the depth image are segmented according to a preset segmentation strategy to determine the body region or body contour information corresponding to the driver, and the obtained depth image includes the body region or body contour information corresponding to the driver. Then, the body sensor performs human body part recognition on the depth image obtained by the human body recognition.
请参阅图2，图2是本申请实施例提供的一种对人体区域进行部位分割的示意图。如图2所示，对深度图像中的人体区域进行部位分割，得到多个部位图像，例如头、手臂、腿、四肢以及躯干等部位图像；对多个部位图像进行特征值分类匹配，以确定各部位图像对应的人体部位。最后，体感器根据人体部位对深度图像进行骨骼关节点定位，得到骨骼图像。在一些示例中，将识别出的人体部位添加至虚拟的骨骼模型中，并根据人体部位的位置信息进行调整，得到包括多个关节点的骨骼图像。Please refer to FIG. 2 . FIG. 2 is a schematic diagram of part segmentation of a human body region provided by an embodiment of the present application. As shown in FIG. 2, the human body region in the depth image is segmented into parts to obtain multiple part images, such as images of the head, arms, legs, limbs and torso; feature-value classification and matching is then performed on the multiple part images to determine the human body part corresponding to each part image. Finally, the somatosensory sensor locates the skeletal joint points in the depth image according to the human body parts to obtain a skeleton image. In some examples, the identified human body parts are added to a virtual skeleton model and adjusted according to the position information of the body parts to obtain a skeleton image including multiple joint points.
在一些示例中,骨骼图像可以包括但不限于:头部关节点、颈部关节点、手部关节点、胳膊肘关节点、手腕关节点、肩部关节点以及脊柱关节点等关节点。In some examples, skeletal images may include, but are not limited to, joint points such as head joint points, neck joint points, hand joint points, elbow joint points, wrist joint points, shoulder joint points, and spine joint points.
请参阅图3,图3是本申请的实施例提供的一种驾驶行为识别设备10的结构示意图。驾驶行为识别设备10可以包括处理器11和存储器12,其中,所述处理器11和所述存储器12可以通过总线连接,该总线比如为I2C(Inter-integrated Circuit)总线等任意适用的总线。Please refer to FIG. 3 , which is a schematic structural diagram of a driving behavior recognition device 10 provided by an embodiment of the present application. The driving behavior recognition device 10 may include a processor 11 and a memory 12, wherein the processor 11 and the memory 12 may be connected by a bus, such as an I2C (Inter-integrated Circuit) bus or any other suitable bus.
其中,所述存储器12可以包括非易失性存储介质和内存储器。非易失性存储介质可存储操作系统和计算机程序。该计算机程序包括程序指令,该程序指令被执行时,可使得处理器执行任意一种驾驶行为识别方法。Wherein, the memory 12 may include a non-volatile storage medium and an internal memory. The nonvolatile storage medium can store operating systems and computer programs. The computer program includes program instructions that, when executed, can cause the processor to execute any driving behavior recognition method.
其中,所述处理器11用于提供计算和控制能力,支撑整个驾驶行为识别设备10的运行。Wherein, the processor 11 is used to provide computing and control capabilities, and support the operation of the entire driving behavior recognition device 10 .
在一实施例中,处理器11用于运行存储在存储器12中的计算机程序,并在执行计算机程序时实现如下步骤:In one embodiment, the processor 11 is configured to run a computer program stored in the memory 12, and implement the following steps when executing the computer program:
获取驾驶人员当前姿态对应的多帧骨骼图像;根据多帧所述骨骼图像确定所述驾驶人员对应的动作特征参数;根据所述动作特征参数,确定所述驾驶人员的当前驾驶行为状态。Obtaining multiple frames of skeleton images corresponding to the current posture of the driver; determining action feature parameters corresponding to the driver according to the multiple frames of the skeleton images; determining the current driving behavior state of the driver according to the action feature parameters.
在一个实施例中，处理器11在实现获取驾驶人员当前姿态对应的多帧骨骼图像时，用于实现：In one embodiment, when acquiring multiple frames of skeleton images corresponding to the current posture of the driver, the processor 11 is configured to:
获取图像采集装置发送的多帧所述骨骼图像，其中，多帧所述骨骼图像为所述图像采集装置对包含所述驾驶人员的多帧深度图像进行人体识别、人体部位识别以及骨骼关节点定位生成。Acquiring multiple frames of the skeleton images sent by an image acquisition device, wherein the multiple frames of skeleton images are generated by the image acquisition device performing human body recognition, human body part recognition and skeletal joint point positioning on multiple frames of depth images containing the driver.
在一个实施例中，所述动作特征参数包括所述驾驶人员的头部与手腕之间的目标距离、前臂和后臂之间的目标夹角以及所述驾驶人员当前姿态的动作持续时间；处理器11在实现根据多帧所述骨骼图像确定所述驾驶人员对应的动作特征参数时，用于实现：In one embodiment, the action feature parameters include the target distance between the driver's head and wrist, the target angle between the forearm and the rear arm, and the action duration of the driver's current posture; when determining the action feature parameters corresponding to the driver according to the multiple frames of skeleton images, the processor 11 is configured to:
根据多帧所述骨骼图像确定所述驾驶人员对应的所述目标距离、所述目标夹角以及所述动作持续时间。The target distance, the target included angle and the action duration corresponding to the driver are determined according to the multiple frames of the skeleton images.
在一个实施例中,处理器11在实现根据所述驾驶人员对应的动作特征参数,确定所述驾驶人员的当前驾驶行为状态时,用于实现:In one embodiment, when the processor 11 determines the current driving behavior state of the driver according to the action characteristic parameter corresponding to the driver, the processor 11 is configured to:
根据所述目标距离、所述目标夹角以及所述动作持续时间,确定所述驾驶人员的所述当前驾驶行为状态。The current driving behavior state of the driver is determined according to the target distance, the target included angle and the action duration.
在一个实施例中,处理器11在实现根据多帧所述骨骼图像确定所述驾驶人员对应的所述目标距离、所述目标夹角以及所述动作持续时间之前,还用于实现:In one embodiment, before determining the target distance, the target angle and the action duration corresponding to the driver according to the multiple frames of the skeleton images, the processor 11 is further configured to:
根据预设的平滑处理策略,对初始的多帧所述骨骼图像进行平滑处理,得到平滑处理后的多帧所述骨骼图像。According to a preset smoothing processing strategy, smoothing is performed on the initial multiple frames of the skeleton images to obtain the smoothed multiple frames of the skeleton images.
在一个实施例中,处理器11在实现根据多帧所述骨骼图像确定所述驾驶人员对应的所述目标距离、所述目标夹角以及所述动作持续时间时,用于实现:In one embodiment, when the processor 11 determines the target distance, the target angle and the action duration corresponding to the driver according to the multiple frames of the skeleton images, the processor 11 is configured to:
提取平滑处理后的每帧所述骨骼图像中的关节点信息，根据所述关节点信息确定所述驾驶人员对应的所述目标距离和所述目标夹角；确定对应的所述目标距离与所述目标夹角满足预设条件的连续多帧所述骨骼图像，根据连续多帧所述骨骼图像对应的帧数和帧率，确定所述动作持续时间。Extracting joint point information from each smoothed frame of the skeleton images, and determining the target distance and the target angle corresponding to the driver according to the joint point information; determining consecutive multiple frames of the skeleton images whose corresponding target distance and target angle satisfy a preset condition, and determining the action duration according to the number of frames and the frame rate corresponding to the consecutive multiple frames of skeleton images.
在一个实施例中，所述骨骼图像包括头部关节点、颈部关节点、胳膊肘关节点、手腕关节点、肩部关节点以及脊柱关节点；处理器11在实现根据所述关节点信息确定所述驾驶人员对应的所述目标距离和所述目标夹角时，用于实现：In one embodiment, the skeleton image includes a head joint point, a neck joint point, an elbow joint point, a wrist joint point, a shoulder joint point and a spine joint point; when determining the target distance and the target angle corresponding to the driver according to the joint point information, the processor 11 is configured to:
根据所述脊柱关节点与所述颈部关节点建立三维空间坐标系，分别确定头部关节点、胳膊肘关节点、手腕关节点、肩部关节点在所述三维空间坐标系中的头部关节点坐标、胳膊肘关节点坐标、手腕关节点坐标以及肩部关节点坐标；根据所述头部关节点坐标和所述手腕关节点坐标，确定所述目标距离；根据所述胳膊肘关节点坐标、所述手腕关节点坐标以及所述肩部关节点坐标，确定所述目标夹角。Establishing a three-dimensional space coordinate system according to the spine joint point and the neck joint point, and respectively determining the head joint point coordinates, elbow joint point coordinates, wrist joint point coordinates and shoulder joint point coordinates of the head joint point, elbow joint point, wrist joint point and shoulder joint point in the three-dimensional space coordinate system; determining the target distance according to the head joint point coordinates and the wrist joint point coordinates; and determining the target angle according to the elbow joint point coordinates, the wrist joint point coordinates and the shoulder joint point coordinates.
在一个实施例中,处理器11在实现根据所述胳膊肘关节点坐标、所述手腕关节点坐标以及所述肩部关节点坐标,确定所述目标夹角时,用于实现:In one embodiment, when determining the target angle according to the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates, the processor 11 is configured to:
根据所述胳膊肘关节点坐标与所述肩部关节点坐标，确定所述胳膊肘关节点与所述肩部关节点之间的第一向量；根据所述胳膊肘关节点坐标与所述手腕关节点坐标，确定所述胳膊肘关节点与所述手腕关节点之间的第二向量；基于预设的向量点积公式，根据所述第一向量与所述第二向量，确定所述目标夹角。Determining a first vector between the elbow joint point and the shoulder joint point according to the elbow joint point coordinates and the shoulder joint point coordinates; determining a second vector between the elbow joint point and the wrist joint point according to the elbow joint point coordinates and the wrist joint point coordinates; and determining the target angle from the first vector and the second vector based on a preset vector dot product formula.
在一个实施例中,处理器11在实现根据所述目标距离、所述目标夹角以及所述动作持续时间,确定所述驾驶人员的所述当前驾驶行为状态之前,还用于实现:In one embodiment, before determining the current driving behavior state of the driver according to the target distance, the target angle and the action duration, the processor 11 is further configured to:
获取所述驾驶人员处于标准驾驶姿势状态时的头部与手腕之间的参照距离以及前臂与后臂之间的参照夹角；当所述参照距离处于预设的距离范围值中，确定所述参照距离为标准距离；当所述参照夹角处于预设的第一夹角范围值中，确定所述参照夹角为标准夹角。Obtaining the reference distance between the head and the wrist and the reference angle between the forearm and the rear arm when the driver is in a standard driving posture; when the reference distance falls within a preset distance range, determining the reference distance as the standard distance; and when the reference angle falls within a preset first angle range, determining the reference angle as the standard angle.
在一个实施例中，所述当前驾驶行为状态包括危险驾驶行为状态，所述危险驾驶行为状态包括开车抽烟、开车打电话、急转弯、双手脱离方向盘、捡拾物品中至少一种；处理器11在根据所述目标距离、所述目标夹角以及所述动作持续时间，确定所述驾驶人员的所述当前驾驶行为状态时，用于实现：In one embodiment, the current driving behavior state includes a dangerous driving behavior state, and the dangerous driving behavior state includes at least one of smoking while driving, making a phone call while driving, making a sharp turn, taking both hands off the steering wheel, and picking up an item; when determining the current driving behavior state of the driver according to the target distance, the target angle and the action duration, the processor 11 is configured to:
若所述目标距离小于所述标准距离并小于预设的距离阈值，且所述动作持续时间大于或等于第一时间阈值，则根据所述目标夹角，确定所述当前驾驶行为状态是否为开车抽烟或开车打电话，其中，预设的所述距离阈值小于所述标准距离；若所述目标距离大于所述标准距离，则根据所述目标距离、所述标准距离、所述目标夹角、所述标准夹角以及所述动作持续时间，确定所述当前驾驶行为状态是否为急转弯、双手脱离方向盘、捡拾物品中的一种。If the target distance is less than the standard distance and less than a preset distance threshold, and the action duration is greater than or equal to a first time threshold, determining, according to the target angle, whether the current driving behavior state is smoking while driving or making a phone call while driving, wherein the preset distance threshold is less than the standard distance; if the target distance is greater than the standard distance, determining, according to the target distance, the standard distance, the target angle, the standard angle and the action duration, whether the current driving behavior state is one of making a sharp turn, taking both hands off the steering wheel, and picking up an item.
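The two-branch decision described above can be sketched in a few lines. The function name, the string labels, and the 60-degree split between smoking and phoning are illustrative assumptions, not values from the embodiment:

```python
def classify_driving_state(distance, angle, duration,
                           std_distance, dist_threshold, t1):
    """Sketch of the two-branch decision.

    `dist_threshold` is the preset distance threshold (smaller than the
    standard distance) and `t1` is the first time threshold; the 60-degree
    angle split is a hypothetical choice for illustration only.
    """
    if distance < std_distance and distance < dist_threshold and duration >= t1:
        # Hand held close to the head: distinguish the two behaviors by the
        # forearm/rear-arm angle (direction of this test is an assumption).
        return "smoking" if angle < 60 else "phoning"
    if distance > std_distance:
        # Further comparison against the standard distance/angle and the
        # duration would separate sharp turn, hands off wheel, and pick-up.
        return "sharp_turn_or_hands_off_or_pickup"
    return "normal"
```

A deployment would tune the thresholds per driver, since the standard distance and standard angle are calibrated from each driver's standard posture.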
在一个实施例中,处理器11在实现确定所述驾驶人员的当前驾驶行为状态之后,还用于实现:In one embodiment, after determining the current driving behavior state of the driver, the processor 11 is further configured to:
若所述当前驾驶行为状态为危险驾驶行为状态,则根据所述危险驾驶行为状态进行预警,包括向预设范围内的目标设备发送预警消息和/或在导航地图上显示当前车辆的预警状态,其中,所述目标设备包括车辆、用户携带的移动终端。If the current driving behavior state is a dangerous driving behavior state, performing an early warning according to the dangerous driving behavior state, including sending an early warning message to a target device within a preset range and/or displaying the current vehicle early warning state on a navigation map, Wherein, the target device includes a vehicle and a mobile terminal carried by a user.
其中，所述处理器11可以是中央处理单元（Central Processing Unit，CPU），该处理器还可以是其他通用处理器、数字信号处理器（Digital Signal Processor，DSP）、专用集成电路（application specific integrated circuit，ASIC）、现场可编程门阵列（Field-Programmable Gate Array，FPGA）或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。The processor 11 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or may be any conventional processor.
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and features in the embodiments may be combined with each other without conflict.
如图4所示，图4是本申请的实施例提供的一种驾驶行为识别方法的示意性流程图。该驾驶行为识别方法可以应用于驾驶行为识别设备中，实现根据骨骼图像识别驾驶人员的当前驾驶行为状态，可以避免外界环境因素的影响，提高了检测驾驶人员的驾驶行为状态的准确度。该驾驶行为识别方法包括步骤S10至步骤S30。As shown in FIG. 4, FIG. 4 is a schematic flowchart of a driving behavior recognition method provided by an embodiment of the present application. The driving behavior recognition method can be applied to a driving behavior recognition device to recognize the current driving behavior state of the driver from skeleton images, which avoids the influence of external environmental factors and improves the accuracy of detecting the driver's driving behavior state. The driving behavior recognition method includes steps S10 to S30.
步骤S10、获取驾驶人员当前姿态对应的多帧骨骼图像。Step S10 , acquiring multiple frames of skeleton images corresponding to the driver's current posture.
需要说明的是,本申请实施例提供的驾驶行为识别方法,可以应用于检测驾驶人员是否安全驾驶、虚拟游戏中的模拟驾驶以及运动员姿态训练等场景中。本申请实施例将以检测驾驶人员是否安全驾驶为例进行详细说明。It should be noted that the driving behavior identification method provided by the embodiment of the present application can be applied to scenarios such as detecting whether a driver is driving safely, simulated driving in a virtual game, and an athlete's posture training. The embodiments of the present application will be described in detail by taking the detection of whether the driver drives safely as an example.
在一些实施例中，获取驾驶人员当前姿态对应的多帧骨骼图像，可以包括：获取图像采集装置发送的多帧骨骼图像，其中，多帧骨骼图像为图像采集装置对包含驾驶人员的多帧深度图像进行人体识别、人体部位识别以及骨骼关节点定位生成。In some embodiments, acquiring multiple frames of skeleton images corresponding to the current posture of the driver may include: acquiring multiple frames of skeleton images sent by an image acquisition device, where the multiple frames of skeleton images are generated by the image acquisition device performing human body recognition, human body part recognition and skeletal joint point positioning on multiple frames of depth images containing the driver.
其中,图像采集装置可以是体感器。Wherein, the image acquisition device may be a somatosensory sensor.
需要说明的是，在本申请实施例中，可以通过体感器获取驾驶人员对应的骨骼图像。在一些示例中，体感器可以安装在汽车驾驶室内，对驾驶室进行监控。例如，可以在汽车启动后，体感器采集包含驾驶人员的深度图像。体感器可以通过内置的软件开发工具包对深度图像进行人体识别、人体部位识别以及骨骼关节点定位等处理，得到驾驶人员当前姿态对应的多帧骨骼图像。It should be noted that, in the embodiment of the present application, the skeleton images corresponding to the driver may be acquired through the somatosensory sensor. In some examples, the somatosensory sensor can be installed in the cab of a car to monitor the cab. For example, after the car is started, the somatosensory sensor captures depth images containing the driver. Through its built-in software development kit, the somatosensory sensor can perform human body recognition, human body part recognition and skeletal joint point positioning on the depth images to obtain multiple frames of skeleton images corresponding to the driver's current posture.
在一些示例中，体感器包括半身模式和全身模式；其中，半身模式对应的拍摄距离为0.4米至3米，全身模式对应的拍摄距离为0.8米至4米。在驾驶室内，由于驾驶人员的动作大部分是在上半身进行的，因此可以设定体感器为半身模式。In some examples, the somatosensory sensor includes a half-body mode and a full-body mode, where the shooting distance corresponding to the half-body mode is 0.4 to 3 meters, and the shooting distance corresponding to the full-body mode is 0.8 to 4 meters. In the cab, since most of the driver's movements are performed with the upper body, the somatosensory sensor can be set to the half-body mode.
在一些实施例中，体感器在采集包含驾驶人员的深度图像之后，还可以包括：根据预设的数据格式，对采集的深度图像中的深度数据进行格式转换，得到格式转换后的深度图像。In some embodiments, after the somatosensory sensor collects the depth image containing the driver, the method may further include: performing format conversion on the depth data in the collected depth image according to a preset data format to obtain a format-converted depth image.
需要说明的是,为了使深度图像有更好的显示效果,需要对采集的深度图像中的深度数据进行格式转换。It should be noted that, in order to make the depth image have a better display effect, it is necessary to perform format conversion on the depth data in the acquired depth image.
在一些示例中,预设的数据格式可以是Mat格式。例如,根据Mat格式对采集的深度图像对应的深度数据进行格式转换,格式转换后的深度图像对应的深度数据为Mat格式。然后,对格式转换后的深度图像进行人体识别、人体部位识别以及骨骼关节点定位等处理。In some examples, the preset data format may be Mat format. For example, the depth data corresponding to the acquired depth image is format-converted according to the Mat format, and the depth data corresponding to the format-converted depth image is in the Mat format. Then, the depth image after format conversion is processed such as human body recognition, human body part recognition and bone joint point positioning.
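As an illustration of the kind of conversion involved in preparing depth data for display (the actual Mat conversion is performed through the sensor SDK; the value mapping below, including the 4000 mm default range, is a hypothetical sketch, not part of the embodiment):

```python
def depth_to_display(depth_mm, max_depth_mm=4000):
    """Map raw depth values in millimetres to 0-255 display intensities.

    Values are clipped to [0, max_depth_mm] and scaled linearly, which is
    the usual first step before packing the data into an image matrix.
    """
    out = []
    for d in depth_mm:
        d = min(max(d, 0), max_depth_mm)      # clip out-of-range readings
        out.append(int(d / max_depth_mm * 255))
    return out
```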
在一些示例中,驾驶行为识别设备可以接收体感器发送的多帧骨骼图像。In some examples, the driving behavior recognition device may receive multiple frames of skeletal images sent by the somatosensory sensor.
请参阅图5,图5是本申请实施例提供的一种骨骼图像的示意图。如图5所示,骨骼图像包括驾驶人员对应的骨骼信息。其中,骨骼信息可以包括不同的关节点和各关节点之间的连接关系等。在一些示例中,关节点可以包括头部关节点、颈部关节点、手部关节点、胳膊肘关节点、手腕关节点、肩部关节点以及脊柱关节点等等。Please refer to FIG. 5 , which is a schematic diagram of a skeleton image provided by an embodiment of the present application. As shown in Fig. 5, the skeleton image includes the skeleton information corresponding to the driver. The skeleton information may include different joint points and connection relationships between the joint points, and the like. In some examples, the joints may include head joints, neck joints, hand joints, elbow joints, wrist joints, shoulder joints, spine joints, and the like.
通过接收体感器发送的多帧骨骼图像,可以实现对驾驶人员的动态动作进行检测和识别,提高了识别驾驶人员的当前驾驶行为状态的准确性。By receiving the multi-frame skeleton images sent by the somatosensory sensor, the dynamic actions of the driver can be detected and recognized, and the accuracy of recognizing the current driving behavior of the driver can be improved.
步骤S20、根据多帧所述骨骼图像确定所述驾驶人员对应的动作特征参数。Step S20: Determine action characteristic parameters corresponding to the driver according to the multiple frames of the skeleton images.
需要说明的是,在本申请实施例中,动作特征参数可以包括驾驶人员的头部与手腕之间的目标距离、前臂和后臂之间的目标夹角以及驾驶人员当前姿态的动作持续时间。It should be noted that, in this embodiment of the present application, the action feature parameters may include the target distance between the driver's head and the wrist, the target angle between the forearm and the rear arm, and the action duration of the driver's current posture.
其中,目标距离与目标夹角可以通过骨骼图像中的关节点对应的位置关系来确定;动作持续时间是指驾驶人员保持当前姿态的时间。Among them, the target distance and the target angle can be determined by the positional relationship corresponding to the joint points in the skeleton image; the action duration refers to the time the driver maintains the current posture.
通过根据多帧骨骼图像确定驾驶人员对应的动作特征参数，由于骨骼图像不易受外界环境的干扰，而且可以在无灯光场景下实现，因此可以提高识别驾驶人员对应的动作特征参数的准确度，还可以避免隐私泄露。By determining the driver's action feature parameters from multiple frames of skeleton images, and because skeleton images are not easily disturbed by the external environment and can be captured in an unlit scene, the accuracy of identifying the driver's action feature parameters can be improved and privacy leakage can be avoided.
在一些实施例中,根据多帧骨骼图像确定驾驶人员对应的动作特征参数,可以包括:根据多帧骨骼图像确定驾驶人员对应的目标距离、目标夹角以及动作持续时间。In some embodiments, determining the action feature parameter corresponding to the driver according to the multi-frame skeleton images may include: determining the target distance, target angle and action duration corresponding to the driver according to the multi-frame skeleton images.
请参阅图6,图6是本申请实施例提供的一种确定驾驶人员对应的动作特征参数的子步骤的示意性流程图,具体可以包括步骤S201至步骤S203。Please refer to FIG. 6 . FIG. 6 is a schematic flowchart of a sub-step of determining action characteristic parameters corresponding to a driver provided by an embodiment of the present application, which may specifically include steps S201 to S203 .
步骤S201、根据预设的平滑处理策略,对初始的多帧所述骨骼图像进行平滑处理,得到平滑处理后的多帧所述骨骼图像。Step S201 , performing smoothing processing on the initial multiple frames of the skeleton images according to a preset smoothing processing strategy to obtain the smoothed multiple frames of the skeleton images.
需要说明的是,步骤S201是根据多帧骨骼图像确定驾驶人员对应的目标距离、目标夹角以及动作持续时间之前执行的步骤。It should be noted that step S201 is a step performed before determining the target distance, target angle and action duration corresponding to the driver according to the multi-frame skeleton images.
需要说明的是，在本申请实施例中，需要计算机视觉平台中对骨骼图像进行检测和识别，以确定驾驶人员的当前驾驶行为状态。在检测和识别过程中，对驾驶人员的动作特征参数有较高的实时性要求，若不对骨骼图像进行平滑处理，可能导致计算机视觉平台的抖动甚至崩溃。It should be noted that, in the embodiment of the present application, the skeleton images need to be detected and recognized on a computer vision platform to determine the current driving behavior state of the driver. During detection and recognition there are high real-time requirements on the driver's action feature parameters; if the skeleton images are not smoothed, the computer vision platform may jitter or even crash.
可以理解的是,初始的多帧骨骼图像是指接收到体感器发送的骨骼图像。平滑处理也叫滤波处理。通过对骨骼图像进行平滑处理,不仅可以减少骨骼图像中的噪声或者失真,而且还可以提高识别效率。It can be understood that the initial multi-frame skeleton image refers to the skeleton image sent by the somatosensory sensor. Smoothing is also called filtering. By smoothing the skeleton image, not only the noise or distortion in the skeleton image can be reduced, but also the recognition efficiency can be improved.
在一些示例中,预设的平滑处理策略可以包括但不限于均值滤波算法、中值滤波算法、高斯滤波算法以及双边滤波算法等等。In some examples, the preset smoothing processing strategy may include, but is not limited to, a mean filtering algorithm, a median filtering algorithm, a Gaussian filtering algorithm, a bilateral filtering algorithm, and the like.
例如,根据均值滤波算法,对初始的多帧骨骼图像进行平滑处理,得到平滑处理后的多帧骨骼图像。For example, according to the mean filtering algorithm, the initial multi-frame skeleton image is smoothed to obtain the smoothed multi-frame skeleton image.
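The mean-filter smoothing can be illustrated with a minimal sketch: a hypothetical one-dimensional moving average over a single joint-coordinate sequence (a real implementation would filter every coordinate channel of every joint point across frames):

```python
def mean_filter(values, window=3):
    """Moving-average (mean filter) over a 1-D sequence of joint coordinates.

    Edge positions average only the neighbors that exist, so the output has
    the same length as the input.
    """
    n = len(values)
    out = []
    for i in range(n):
        lo = max(0, i - window // 2)          # left edge of the window
        hi = min(n, i + window // 2 + 1)      # right edge (exclusive)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```

Median, Gaussian, or bilateral filters mentioned above follow the same per-frame pattern with a different kernel.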
步骤S202、提取平滑处理后的每帧所述骨骼图像中的关节点信息,根据所述关节点信息确定所述驾驶人员对应的所述目标距离和所述目标夹角。Step S202: Extract joint point information in each frame of the skeleton image after smoothing, and determine the target distance and the target angle corresponding to the driver according to the joint point information.
需要说明的是,根据多帧骨骼图像确定驾驶人员对应的目标距离、目标夹角以及动作持续时间,包括步骤S202和步骤S203。It should be noted that determining the target distance, target angle and action duration corresponding to the driver according to the multiple frames of skeleton images includes steps S202 and S203.
请参阅图7,图7是本申请实施例提供的一种确定驾驶人员对应的目标距离和目标夹角的子步骤的示意性流程图,步骤S202可以包括步骤S2021至步骤S2023。Please refer to FIG. 7 . FIG. 7 is a schematic flowchart of a sub-step of determining a target distance and a target angle corresponding to a driver according to an embodiment of the present application. Step S202 may include steps S2021 to S2023 .
步骤S2021、根据所述脊柱关节点与所述颈部关节点建立三维空间坐标系，分别确定头部关节点、胳膊肘关节点、手腕关节点、肩部关节点在所述三维空间坐标系中的头部关节点坐标、胳膊肘关节点坐标、手腕关节点坐标以及肩部关节点坐标。Step S2021: Establish a three-dimensional space coordinate system according to the spine joint point and the neck joint point, and respectively determine the head joint point coordinates, elbow joint point coordinates, wrist joint point coordinates and shoulder joint point coordinates of the head joint point, elbow joint point, wrist joint point and shoulder joint point in the three-dimensional space coordinate system.
其中,手腕关节点可以替换成手部关节点。Among them, the wrist joint point can be replaced with the hand joint point.
在一些示例中，可以将脊柱关节点与颈部关节点之间的连线作为三维空间坐标系的Z轴，将脊柱关节点作为原点，再基于Z轴确定X轴和Y轴，从而建立得到三维空间坐标系。In some examples, the line connecting the spine joint point and the neck joint point can be used as the Z axis of the three-dimensional space coordinate system, the spine joint point can be used as the origin, and the X axis and Y axis are then determined based on the Z axis, thereby establishing the three-dimensional space coordinate system.
在一些实施方式中，可以根据各关节点与原点之间的距离，以及与X轴、Y轴、Z轴之间的夹角，分别确定头部关节点、胳膊肘关节点、手腕关节点、肩部关节点在三维空间坐标系中的头部关节点坐标、胳膊肘关节点坐标、手腕关节点坐标以及肩部关节点坐标。In some implementations, the head joint point coordinates, elbow joint point coordinates, wrist joint point coordinates and shoulder joint point coordinates of the head joint point, elbow joint point, wrist joint point and shoulder joint point in the three-dimensional space coordinate system can be respectively determined according to the distance between each joint point and the origin and the angles between each joint point and the X, Y and Z axes.
在另一些实施方式中,可以根据各关节点在X轴、Y轴、Z轴上的投影,确定各关节点对应的坐标。具体确定各关节点的坐标的过程,在此不作赘述。In other embodiments, the coordinates corresponding to each joint point may be determined according to the projection of each joint point on the X axis, the Y axis, and the Z axis. The specific process of determining the coordinates of each joint point will not be repeated here.
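The construction of the coordinate system described above (spine-to-neck line as the Z axis, spine joint as the origin) can be sketched as follows; the choice of reference vector used to fix the X and Y axes is an illustrative assumption, since any vector not parallel to Z would do:

```python
import math

def normalize(v):
    """Scale a 3-D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    """Cross product of two 3-D vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def build_frame(spine, neck):
    """Build an orthonormal frame: spine joint as origin, spine->neck as Z."""
    z = normalize([n - s for n, s in zip(neck, spine)])
    # Pick a reference axis that is guaranteed not to be parallel to Z.
    ref = [1.0, 0.0, 0.0] if abs(z[0]) < 0.9 else [0.0, 1.0, 0.0]
    x = normalize(cross(ref, z))
    y = cross(z, x)                      # completes the right-handed frame
    return spine, x, y, z
```

Joint coordinates in this frame are then obtained by projecting each joint's position onto the X, Y and Z axes.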
步骤S2022、根据所述头部关节点坐标和所述手腕关节点坐标,确定所述目标距离。Step S2022: Determine the target distance according to the head joint point coordinates and the wrist joint point coordinates.
需要说明的是,目标距离是指头部关节点与手腕关节点之间的距离。请参阅图8,图8是本申请实施例提供的一种目标距离的示意图。It should be noted that the target distance refers to the distance between the head joint point and the wrist joint point. Please refer to FIG. 8 , which is a schematic diagram of a target distance provided by an embodiment of the present application.
在一些示例中，头部关节点坐标可以表示为 $(x_1, y_1, z_1)$；手腕关节点坐标可以表示为 $(x_2, y_2, z_2)$；目标距离可以表示为B。In some examples, the head joint point coordinates can be expressed as $(x_1, y_1, z_1)$, the wrist joint point coordinates can be expressed as $(x_2, y_2, z_2)$, and the target distance can be expressed as B.

在本申请实施例中，可以通过欧式距离公式计算头部关节点坐标和手腕关节点坐标之间的目标距离B，如下所示：In this embodiment of the present application, the target distance B between the head joint point coordinates and the wrist joint point coordinates can be calculated by the Euclidean distance formula, as follows:

$$B = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}$$
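The Euclidean distance computation can be sketched directly; the helper name and the sample coordinates below are hypothetical:

```python
import math

def euclidean_distance(p1, p2):
    """Distance B between two joint points given as (x, y, z) tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Hypothetical head and wrist joint coordinates in the 3D coordinate system.
head = (0.0, 0.1, 0.6)
wrist = (0.3, 0.5, 0.2)
B = euclidean_distance(head, wrist)  # target distance between head and wrist
```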
步骤S2023、根据所述胳膊肘关节点坐标、所述手腕关节点坐标以及所述肩部关节点坐标，确定所述目标夹角。Step S2023: Determine the target angle according to the elbow joint point coordinates, the wrist joint point coordinates and the shoulder joint point coordinates.
需要说明的是,目标夹角是指胳膊肘关节点与手腕关节点的连线和胳膊肘关节点与肩部关节点的连线之间的角度。请参阅图9,图9是本申请实施例提供的一种目标夹角的示意图。It should be noted that the target included angle refers to the angle between the line connecting the elbow joint point and the wrist joint point and the line connecting the elbow joint point and the shoulder joint point. Please refer to FIG. 9. FIG. 9 is a schematic diagram of a target angle provided by an embodiment of the present application.
在一些实施例中，根据胳膊肘关节点坐标、手腕关节点坐标以及肩部关节点坐标，确定目标夹角，可以包括：根据胳膊肘关节点坐标与肩部关节点坐标，确定胳膊肘关节点与肩部关节点之间的第一向量；根据胳膊肘关节点坐标与手腕关节点坐标，确定胳膊肘关节点与手腕关节点之间的第二向量；基于预设的向量点积公式，根据第一向量与第二向量，确定目标夹角。In some embodiments, determining the target angle according to the elbow joint point coordinates, the wrist joint point coordinates and the shoulder joint point coordinates may include: determining a first vector between the elbow joint point and the shoulder joint point according to the elbow joint point coordinates and the shoulder joint point coordinates; determining a second vector between the elbow joint point and the wrist joint point according to the elbow joint point coordinates and the wrist joint point coordinates; and determining the target angle from the first vector and the second vector based on a preset vector dot product formula.
在一些示例中，预设的向量点积公式为：In some examples, the preset vector dot product formula is:

$$\vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}|\cos\theta$$

式中，$\vec{a}$ 和 $\vec{b}$ 表示两个向量；θ表示向量 $\vec{a}$ 与向量 $\vec{b}$ 之间的夹角。In the formula, $\vec{a}$ and $\vec{b}$ represent two vectors, and θ represents the angle between vector $\vec{a}$ and vector $\vec{b}$.

在一些示例中，可以根据胳膊肘关节点坐标与肩部关节点坐标，确定胳膊肘关节点与肩部关节点之间连线的第一向量 $\vec{v_1}$；根据胳膊肘关节点坐标与手腕关节点坐标，确定胳膊肘关节点与手腕关节点之间连线的第二向量 $\vec{v_2}$。In some examples, the first vector $\vec{v_1}$ of the line connecting the elbow joint point and the shoulder joint point can be determined according to the elbow joint point coordinates and the shoulder joint point coordinates, and the second vector $\vec{v_2}$ of the line connecting the elbow joint point and the wrist joint point can be determined according to the elbow joint point coordinates and the wrist joint point coordinates.

在一些示例中，目标夹角可以表示为D。基于上述向量点积公式，根据第一向量 $\vec{v_1}$ 与第二向量 $\vec{v_2}$，可以得到目标夹角D对应的余弦值：In some examples, the target angle can be denoted as D. Based on the above vector dot product formula, the cosine of the target angle D can be obtained from the first vector $\vec{v_1}$ and the second vector $\vec{v_2}$:

$$\cos D = \frac{\vec{v_1} \cdot \vec{v_2}}{|\vec{v_1}|\,|\vec{v_2}|}$$

进而可以得到目标夹角D为：The target angle D can then be obtained as:

$$D = \arccos\frac{\vec{v_1} \cdot \vec{v_2}}{|\vec{v_1}|\,|\vec{v_2}|}$$
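The dot-product angle computation described above can be sketched as follows, with joint coordinates as (x, y, z) tuples; the function name is illustrative:

```python
import math

def target_angle(shoulder, elbow, wrist):
    """Angle D (degrees) at the elbow, between the elbow->shoulder vector
    and the elbow->wrist vector, via the vector dot product formula."""
    v1 = [s - e for s, e in zip(shoulder, elbow)]   # first vector
    v2 = [w - e for w, e in zip(wrist, elbow)]      # second vector
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = (math.sqrt(sum(a * a for a in v1)) *
            math.sqrt(sum(b * b for b in v2)))
    return math.degrees(math.acos(dot / norm))
```

A fully extended arm gives D close to 180 degrees, while a hand raised to the head gives a small D, which is what the behavior classification relies on.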
通过根据头部关节点坐标、胳膊肘关节点坐标、手腕关节点坐标以及肩部关节点坐标，计算简单，可以准确地确定驾驶人员的当前姿态对应的目标距离和目标夹角。By using the head joint point coordinates, elbow joint point coordinates, wrist joint point coordinates and shoulder joint point coordinates, the calculation is simple, and the target distance and target angle corresponding to the driver's current posture can be determined accurately.
步骤S203、确定对应的所述目标距离与所述目标夹角满足预设条件的连续多帧所述骨骼图像，根据连续多帧所述骨骼图像对应的帧数和帧率，确定所述动作持续时间。Step S203: Determine consecutive multiple frames of the skeleton images whose corresponding target distance and target angle satisfy a preset condition, and determine the action duration according to the number of frames and the frame rate corresponding to the consecutive multiple frames of skeleton images.
需要说明的是，预设条件是指目标距离满足设定的距离条件和目标夹角满足设定的夹角条件。可以理解的是，当驾驶人员的当前姿态对应的目标距离满足设定的距离条件和目标夹角满足设定的夹角条件时，开始记录驾驶人员的当前姿态对应的动作持续时间；当动作持续时间满足设定的时间阈值时，根据当前姿态确定驾驶人员的当前驾驶行为状态。It should be noted that the preset condition means that the target distance satisfies a set distance condition and the target angle satisfies a set angle condition. It can be understood that when the target distance corresponding to the driver's current posture satisfies the set distance condition and the target angle satisfies the set angle condition, recording of the action duration corresponding to the driver's current posture starts; when the action duration meets a set time threshold, the current driving behavior state of the driver is determined according to the current posture.
在本申请实施例中,可以通过连续多帧骨骼图像来确定动作持续时间。In this embodiment of the present application, the action duration can be determined by using consecutive multiple frames of skeleton images.
在一些实施例中，当确定对应的目标距离与目标夹角满足预设条件的连续多帧骨骼图像时，根据连续多帧骨骼图像对应的帧数和帧率，确定动作持续时间。In some embodiments, when consecutive multiple frames of skeleton images whose corresponding target distance and target angle satisfy the preset condition are determined, the action duration is determined according to the number of frames and the frame rate corresponding to the consecutive multiple frames of skeleton images.
在一些示例中,帧率(Frame rate)=帧数(Frames)/时间(Time),帧率的单位为帧每秒。其中,帧率可以根据实际情况设定,具体数值在此不作限定。动作持续时间可以表示为T。In some examples, Frame rate=Frames/Time, and the unit of frame rate is frames per second. The frame rate can be set according to the actual situation, and the specific value is not limited here. The action duration can be denoted as T.
例如,若满足预设条件的连续多帧骨骼图像对应的帧数为10帧,帧率为50帧每秒,则可以确定动作持续时间T为0.2秒。For example, if the number of frames corresponding to the consecutive multi-frame skeleton images that meet the preset conditions is 10 frames, and the frame rate is 50 frames per second, the action duration T can be determined to be 0.2 seconds.
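The relation frame rate = frames / time above can be rearranged and sketched directly:

```python
def action_duration(frame_count, frame_rate):
    """Duration T (seconds) of the consecutive frames that satisfy the
    preset condition: T = frames / frame rate."""
    return frame_count / frame_rate

T = action_duration(10, 50)  # 10 qualifying frames at 50 fps -> 0.2 s
```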
通过确定对应的目标距离与目标夹角满足预设条件的连续多帧骨骼图像,可以根据连续多帧骨骼图像对应的帧数和帧率,确定动作持续时间,实现对驾驶人员的当前姿态进行实时监控。By determining the continuous multi-frame skeleton images whose corresponding target distance and target angle meet the preset conditions, the action duration can be determined according to the frame number and frame rate corresponding to the continuous multi-frame skeleton images, so as to realize the real-time monitoring of the driver's current posture. monitor.
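As an illustration, the frames-to-duration relationship described above can be sketched as follows. This is a minimal example, not part of the embodiments; the function name is hypothetical.

```python
def action_duration(num_matching_frames: int, frame_rate: float) -> float:
    """Duration T (seconds) of a posture held over consecutive matching frames.

    Implements T = frames / frame_rate from the embodiment above.
    """
    if frame_rate <= 0:
        raise ValueError("frame rate must be positive")
    return num_matching_frames / frame_rate

# The example from the text: 10 consecutive frames at 50 fps gives T = 0.2 s
assert action_duration(10, 50) == 0.2
```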
Step S30: Determine the current driving behavior state of the driver according to the action characteristic parameters.
In some embodiments, before determining the current driving behavior state of the driver according to the target distance, the target angle, and the action duration, the method may further include: acquiring a reference distance between the head and the wrist of the driver, and a reference angle between the forearm and the upper arm, when the driver is in a standard driving posture.
It should be noted that, because different drivers differ in body shape, the standard distance and standard angle need to be set according to each driver's actual body shape, so that the target distance and target angle can be compared with the standard distance and standard angle respectively, thereby improving the accuracy of identifying the current driving behavior state.
Please refer to FIG. 10, which is a schematic diagram of a driver in a standard driving posture according to an embodiment of the present application. As shown in FIG. 10, before the current driving behavior state of the driver is identified, the driver may be prompted to maintain a standard driving posture, for example, to hold the steering wheel with both hands and keep the upper body against the seat. A skeleton image of the driver in the standard driving posture is then acquired, and the reference distance between the driver's head and wrist and the reference angle between the forearm and the upper arm are determined from the skeleton image.
In some examples, the reference distance may be determined from the distance between the head joint point coordinates and the wrist joint point coordinates.
In some examples, the reference angle between the forearm and the upper arm may be determined from the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates; for the specific process, refer to the detailed description in the foregoing embodiments, which is not repeated here.
In some examples, when the reference distance falls within a preset distance range, the reference distance is determined as the standard distance; when the reference angle falls within a preset first angle range, the reference angle is determined as the standard angle.
The preset distance range and the preset first angle range can be set according to the actual situation, and their specific values are not limited here.
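A minimal sketch of how the head-to-wrist distance and the elbow angle could be computed from 3D joint point coordinates. The vector-based angle formula is a standard approach assumed here for illustration; the function names are hypothetical and not part of the embodiments.

```python
import math

def joint_distance(p, q):
    """Euclidean distance between two 3D joint points, e.g. head and wrist."""
    return math.dist(p, q)

def arm_angle(shoulder, elbow, wrist):
    """Angle in degrees at the elbow between the upper arm (elbow -> shoulder)
    and the forearm (elbow -> wrist), via the dot-product formula."""
    u = [s - e for s, e in zip(shoulder, elbow)]
    v = [w - e for w, e in zip(wrist, elbow)]
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    cos_t = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp for float safety
    return math.degrees(math.acos(cos_t))
```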
In this embodiment of the present application, wrist-to-head distance measurements and joint angle measurements may be performed on a preset number of test subjects. With the subjects in the standard driving posture, measuring the distance between the wrist and the head yields a distance range of 41.5 cm to 60.2 cm; measuring the angle between the elbow-shoulder line and the elbow-wrist line yields a first angle range of 100.5° to 167.8°.
In some examples, when the reference distance falls within the distance range (41.5 cm to 60.2 cm), the reference distance may be determined as the standard distance. When the reference angle falls within the first angle range (100.5° to 167.8°), the reference angle may be determined as the standard angle.
It can be understood that when the driver's reference distance is within the distance range and the reference angle is within the first angle range, the driver's current posture is standard and safe; therefore, the reference distance can be taken as the standard distance and the reference angle as the standard angle.
In some examples, the standard distance may be denoted as A, and the standard angle as C.
By acquiring a skeleton image of the driver in the standard driving posture and determining from it the reference distance between the head and wrist and the reference angle between the forearm and the upper arm, the standard distance and standard angle can be established. When the current driving behavior state is subsequently determined, the target distance can be compared with the standard distance and the target angle with the standard angle, thereby improving the accuracy of identifying the current driving behavior state.
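The calibration check described above can be sketched as follows, using the range values measured in the embodiment; the function and constant names are illustrative only.

```python
DIST_RANGE = (41.5, 60.2)      # cm, measured wrist-to-head range
ANGLE_RANGE = (100.5, 167.8)   # degrees, first angle range

def calibrate(ref_distance_cm, ref_angle_deg):
    """Return (standard distance A, standard angle C) if the reference
    posture is within the standard ranges, otherwise None."""
    if not (DIST_RANGE[0] <= ref_distance_cm <= DIST_RANGE[1]):
        return None
    if not (ANGLE_RANGE[0] <= ref_angle_deg <= ANGLE_RANGE[1]):
        return None
    return ref_distance_cm, ref_angle_deg
```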
In some embodiments, determining the current driving behavior state of the driver according to the action characteristic parameters may include: determining the current driving behavior state according to the target distance, the target angle, and the action duration.
It should be noted that the current driving behavior state may include a dangerous driving behavior state, where the dangerous driving behavior state includes at least one of smoking while driving, making a phone call while driving, turning sharply, taking both hands off the steering wheel, and picking up objects.
By jointly considering the target distance, the target angle, and the action duration when identifying the driver's current driving behavior state, the recognition accuracy can be improved.
In some embodiments, determining the current driving behavior state according to the target distance, the target angle, and the action duration may include: if the target distance is less than the standard distance and less than a preset distance threshold, and the action duration is greater than or equal to a first time threshold, determining, according to the target angle, whether the current driving behavior state is smoking while driving or making a phone call while driving, where the preset distance threshold is smaller than the standard distance.
It should be noted that when the target distance B is less than the standard distance A, the driver's current driving behavior state may be smoking or making a phone call while driving; when the target distance B is greater than or equal to the standard distance A, the current driving behavior state may be one of turning sharply, taking both hands off the steering wheel, or picking up objects.
In some examples, when the target distance B is less than the standard distance A and less than the preset distance threshold, and the action duration T is greater than or equal to the first time threshold, it is determined according to the target angle D whether the current driving behavior state is smoking while driving or making a phone call while driving.
The preset distance threshold and the first time threshold can be set according to the actual situation, and their specific values are not limited here.
In some examples, the preset distance threshold may be 0.1 m and the first time threshold may be 0.8 s, where the standard distance A is greater than the preset distance threshold of 0.1 m.
In this embodiment of the present application, when the target distance B is less than the standard distance A and less than the preset distance threshold of 0.1 m, and the action duration T is greater than or equal to the first time threshold of 0.8 s, it is determined according to the target angle D whether the current driving behavior state is smoking while driving or making a phone call while driving.
It can be understood that, in this embodiment of the present application, the target distance may include a first target distance between the driver's left hand and head and a second target distance between the right hand and head, and the target angle may include a first target angle corresponding to the driver's left arm and a second target angle corresponding to the right arm.
In some examples, the target distance B may include the first target distance B1 and the second target distance B2, and the target angle D may include the first target angle D1 and the second target angle D2.
In some embodiments, when the first target angle or the second target angle falls within a preset second angle range, the current driving behavior state is determined to be making a phone call while driving.
The preset second angle range can be set according to the actual situation and is not uniquely limited here; for example, the second angle range may be [0°, 5°). In some examples, when the first target angle D1 or the second target angle D2 falls within [0°, 5°), the current driving behavior state may be determined to be making a phone call while driving.
In other embodiments, when the first target angle or the second target angle falls within a preset third angle range, and within a preset time period the first target angle or the second target angle repeatedly increases and decreases, or the first target distance or the second target distance repeatedly increases and decreases, the current driving behavior state is determined to be smoking while driving.
The preset third angle range can be set according to the actual situation and is not uniquely limited here; for example, the third angle range may be [5°, 10°]. The preset time period can likewise be set as needed; for example, it may be 10 s.
In some examples, when the first target angle D1 or the second target angle D2 falls within [5°, 10°], and within the preset time period of 10 s the angle D1 or D2 repeatedly increases and decreases, or the distance B1 or B2 repeatedly increases and decreases, the current driving behavior state may be determined to be smoking while driving.
It can be understood that when the driver is smoking, repeated movements generally occur within 10 s. For example, the first target angle D1 or the second target angle D2 increases from [5°, 10°] to the standard angle C and then decreases back to [5°, 10°]; or the first target distance B1 or the second target distance B2 increases from the distance threshold of 0.1 m to the standard distance A and then decreases back to the distance threshold of 0.1 m.
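Using the thresholds of these examples (distance threshold 0.1 m, first time threshold 0.8 s, second angle range [0°, 5°), third angle range [5°, 10°]), the phone-call/smoking branch can be sketched as follows. This is a simplified illustration, not the embodiments' implementation: the repeated increase/decrease observed over the 10 s window is reduced to a boolean flag, and all names are hypothetical.

```python
DIST_THRESHOLD_M = 0.1
FIRST_TIME_THRESHOLD_S = 0.8

def classify_near_head(target_distance, standard_distance, target_angle,
                       duration, angle_oscillating):
    """Classify behavior when a hand is near the head.

    target_angle is D1 or D2 in degrees; angle_oscillating indicates the
    repeated increase/decrease within the preset 10 s window.
    """
    if not (target_distance < standard_distance
            and target_distance < DIST_THRESHOLD_M
            and duration >= FIRST_TIME_THRESHOLD_S):
        return None
    if 0 <= target_angle < 5:                       # second angle range
        return "phone call while driving"
    if 5 <= target_angle <= 10 and angle_oscillating:  # third angle range
        return "smoking while driving"
    return None
```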
In other embodiments, determining the current driving behavior state according to the target distance, the target angle, and the action duration may include: if the target distance is greater than the standard distance, determining, according to the target distance, the standard distance, the target angle, the standard angle, and the action duration, whether the current driving behavior state is one of turning sharply, taking both hands off the steering wheel, or picking up objects.
In this embodiment of the present application, when it is determined that the target distance B is greater than the standard distance A, it can be determined from the target distance B, the standard distance A, the target angle D, the standard angle C, and the action duration T whether the current driving behavior state is one of turning sharply, taking both hands off the steering wheel, or picking up objects.
In some implementations, if the first target angle or the second target angle is greater than the standard angle and less than or equal to a preset angle threshold, and the action duration is less than or equal to a second time threshold, the current driving behavior state is determined to be turning sharply.
The second time threshold can be set according to the actual situation and is not uniquely limited here; for example, it may be 1 s. The preset angle threshold can likewise be set as needed, and its specific value is not limited here; for example, it may be 180°.
In some examples, when the target angle D is greater than the standard angle C and less than or equal to the preset angle threshold of 180°, and the action duration T is less than or equal to the second time threshold of 1 s, the driver's current driving behavior state may be determined to be turning sharply.
In other implementations, if both the first target distance and the second target distance are greater than the standard distance, both the first target angle and the second target angle are greater than the standard angle, and the action duration is greater than or equal to the first time threshold, the current driving behavior state is determined to be both hands off the steering wheel.
In some examples, when the first target distance B1 and the second target distance B2 are both greater than the standard distance A, the first target angle D1 and the second target angle D2 are both greater than the standard angle C, and the action duration T is greater than or equal to the first time threshold of 0.8 s, the driver's current driving behavior state may be determined to be both hands off the steering wheel.
In other implementations, if the first target distance or the second target distance is greater than the standard distance, the first target angle or the second target angle is equal to the preset angle threshold, and the action duration is greater than or equal to the first time threshold, the current driving behavior state is determined to be picking up objects.
In some examples, when the first target distance B1 or the second target distance B2 is greater than the standard distance A, the first target angle D1 or the second target angle D2 is equal to the preset angle threshold, and the action duration T is greater than or equal to the first time threshold of 0.8 s, the driver's current driving behavior state may be determined to be picking up objects.
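The branch for target distances greater than the standard distance A can be sketched similarly. This is an illustration only; B1/B2 and D1/D2 follow the notation of the embodiments, with the 180° angle threshold and the 1 s second time threshold taken from the examples.

```python
ANGLE_THRESHOLD_DEG = 180.0
SECOND_TIME_THRESHOLD_S = 1.0
FIRST_TIME_THRESHOLD_S = 0.8

def classify_far_from_head(b1, b2, d1, d2, A, C, duration):
    """Classify behavior when the hand-to-head distance exceeds standard A."""
    if b1 > A and b2 > A and d1 > C and d2 > C \
            and duration >= FIRST_TIME_THRESHOLD_S:
        return "both hands off the steering wheel"
    if (b1 > A or b2 > A) \
            and (d1 == ANGLE_THRESHOLD_DEG or d2 == ANGLE_THRESHOLD_DEG) \
            and duration >= FIRST_TIME_THRESHOLD_S:
        return "picking up objects"
    if (C < d1 <= ANGLE_THRESHOLD_DEG or C < d2 <= ANGLE_THRESHOLD_DEG) \
            and duration <= SECOND_TIME_THRESHOLD_S:
        return "turning sharply"
    return None
```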
In some embodiments, after the current driving behavior state of the driver is determined, the method further includes: if the current driving behavior state is a dangerous driving behavior state, issuing an early warning according to the dangerous driving behavior state, including sending an early-warning message to target devices within a preset range and/or displaying the current vehicle's warning status on a navigation map, where the target devices include vehicles and mobile terminals carried by users.
In some examples, the mobile terminals may include, but are not limited to, electronic devices such as smartphones, tablet computers, notebook computers, personal digital assistants, and wearable devices.
In some examples, if the current driving behavior state is at least one of smoking while driving, making a phone call while driving, turning sharply, taking both hands off the steering wheel, or picking up objects, an early warning is issued.
In some implementations, an early-warning message may be sent to target devices within a preset range. The preset range can be set according to the actual situation, and its specific value is not limited here; for example, it may be 20 meters or 50 meters.
For example, if the current driving behavior state is smoking while driving, an early-warning message is sent to the target devices within the preset range.
In some examples, when sending the early-warning message, a communication connection may be established with the vehicles and the users' mobile terminals via 4G, 5G, Bluetooth, Zigbee, Wi-Fi, or other communication methods, and the early-warning message is then sent to the mobile terminals. The message may be delivered by means including, but not limited to, text message, phone call, WeChat, or email. After receiving the early-warning message, a mobile terminal may alert the user by light, sound, or vibration to remind the user to pay attention to driving safety.
In some implementations, the current vehicle's warning status is displayed on a navigation map.
It should be noted that the navigation map refers to the electronic map currently being used by vehicles or pedestrians.
In some examples, the current vehicle's warning status may be marked on the navigation map and updated in real time. For example, the current vehicle's location may be highlighted to remind other vehicles and pedestrians to take care.
By sending an early-warning message to target devices within a preset range and/or displaying the current vehicle's warning status on a navigation map when the current driving behavior state is determined to be a dangerous driving behavior state, the occurrence of traffic accidents can be reduced.
The driving behavior recognition method, device, system, and storage medium provided by the above embodiments achieve the following effects. By receiving multiple frames of skeleton images sent by the somatosensory sensor, the driver's dynamic actions can be detected and recognized, improving the accuracy of identifying the driver's current driving behavior state. By determining the driver's action characteristic parameters from the multiple frames of skeleton images, and because skeleton images are not easily disturbed by the external environment and can be captured in unlit scenes, those parameters can be determined more accurately and privacy leakage can be avoided. Smoothing the skeleton images not only reduces noise and distortion but also improves recognition efficiency. Computing from the head, elbow, wrist, and shoulder joint point coordinates is simple and accurately determines the target distance and target angle corresponding to the driver's current posture. By identifying consecutive multi-frame skeleton images whose target distance and target angle satisfy the preset condition, the action duration can be determined from the number of frames and the frame rate, enabling real-time monitoring of the driver's current posture. By acquiring a skeleton image of the driver in the standard driving posture and determining from it the reference distance between the head and wrist and the reference angle between the forearm and the upper arm, the standard distance and standard angle can be established, so that the target distance and target angle can subsequently be compared with them, improving the accuracy of identifying the current driving behavior state. Jointly considering the target distance, the target angle, and the action duration further improves recognition accuracy. Finally, by sending an early-warning message to target devices within a preset range and/or displaying the current vehicle's warning status on a navigation map when the current driving behavior state is determined to be dangerous, the occurrence of traffic accidents can be reduced.
The embodiments of the present application further provide a storage medium for readable storage. The storage medium stores a program, the program includes program instructions, and a processor executes the program instructions to implement any of the driving behavior recognition methods provided by the embodiments of the present application.
For example, when loaded by the processor, the program may perform the following steps:
acquiring multiple frames of skeleton images corresponding to the driver's current posture; determining action characteristic parameters corresponding to the driver according to the multiple frames of skeleton images; and determining the current driving behavior state of the driver according to the action characteristic parameters.
The storage medium may be an internal storage unit of the driving behavior recognition device described in the foregoing embodiments, such as a hard disk or memory of the device. The storage medium may also be an external storage device of the driving behavior recognition device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital Card (SD Card), or a Flash Card equipped on the device.
The embodiments of the present application disclose a driving behavior recognition method, device, and storage medium. By acquiring multiple frames of skeleton images corresponding to the driver's current posture and determining the driver's action characteristic parameters from them, interference from external environmental factors can be avoided and the action characteristic parameters can be determined more accurately; by determining the driver's current driving behavior state according to the action characteristic parameters, the accuracy of detecting the driver's driving behavior state can be improved.
Those of ordinary skill in the art can understand that all or some of the steps of the methods disclosed above, and the functional modules/units in the systems and devices, may be implemented as software, firmware, hardware, or an appropriate combination thereof.
In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components. Some or all physical components may be implemented as software executed by a processor such as a central processing unit, a digital signal processor, or a microprocessor; or as hardware; or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on storage media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Some embodiments of the present application have been described above with reference to the accompanying drawings; they do not thereby limit the scope of the rights of the present application. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and essence of the present application shall fall within the scope of the rights of the present application.

Claims (11)

  1. A driving behavior recognition method, comprising:
    acquiring multiple frames of skeleton images corresponding to a driver's current posture;
    determining action characteristic parameters corresponding to the driver according to the multiple frames of skeleton images;
    determining a current driving behavior state of the driver according to the action characteristic parameters.
  2. The driving behavior recognition method according to claim 1, wherein the acquiring multiple frames of skeleton images corresponding to the driver's current posture comprises:
    acquiring the multiple frames of skeleton images sent by an image acquisition device, wherein the multiple frames of skeleton images are generated by the image acquisition device performing human body recognition, human body part recognition, and skeletal joint point positioning on multiple frames of depth images containing the driver.
  3. The driving behavior recognition method according to claim 1, wherein the action characteristic parameters include a target distance between the driver's head and wrist, a target angle between the forearm and the upper arm, and an action duration of the driver's current posture; the determining the action characteristic parameters corresponding to the driver according to the multiple frames of skeleton images comprises:
    determining the target distance, the target angle, and the action duration corresponding to the driver according to the multiple frames of skeleton images;
    and the determining the current driving behavior state of the driver according to the action characteristic parameters corresponding to the driver comprises:
    determining the current driving behavior state of the driver according to the target distance, the target angle, and the action duration.
  4. The driving behavior recognition method according to claim 3, wherein before determining the target distance, the target angle, and the action duration corresponding to the driver according to the multiple frames of skeleton images, the method further comprises:
    smoothing the initial multiple frames of skeleton images according to a preset smoothing strategy to obtain smoothed multiple frames of skeleton images;
    and determining the target distance, the target angle, and the action duration corresponding to the driver according to the multiple frames of skeleton images comprises:
    extracting joint point information from each smoothed frame of the skeleton images, and determining the target distance and the target angle corresponding to the driver according to the joint point information;
    determining consecutive frames of the skeleton images whose corresponding target distance and target angle satisfy a preset condition, and determining the action duration according to the number of those consecutive frames and the frame rate.
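The duration step above can be sketched as follows: count the longest run of consecutive frames whose features satisfy the preset condition, then divide by the frame rate. This is a minimal illustration, not the patent's implementation; the frame tuples, the predicate, and the 0.25 m threshold in the example are assumptions.

```python
def action_duration(frames, frame_rate, condition):
    """Longest run of consecutive frames whose (distance, angle) satisfy the
    preset condition, converted to seconds.

    `frames` is a hypothetical list of (target_distance, target_angle) tuples;
    `condition` is a predicate over one such tuple."""
    longest = run = 0
    for d, a in frames:
        run = run + 1 if condition(d, a) else 0
        longest = max(longest, run)
    # duration (s) = number of qualifying consecutive frames / frame rate (fps)
    return longest / frame_rate

# e.g. treat a head-wrist distance below 0.25 m as "hand near head"
frames = [(0.40, 90), (0.20, 40), (0.22, 38), (0.21, 42), (0.45, 95)]
print(action_duration(frames, frame_rate=30, condition=lambda d, a: d < 0.25))
```

At 30 fps, three qualifying consecutive frames correspond to 0.1 s.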
  5. The driving behavior recognition method according to claim 4, wherein the skeleton image comprises a head joint point, a neck joint point, an elbow joint point, a wrist joint point, a shoulder joint point, and a spine joint point; determining the target distance and the target angle corresponding to the driver according to the joint point information comprises:
    establishing a three-dimensional coordinate system based on the spine joint point and the neck joint point, and determining, in that coordinate system, the coordinates of the head joint point, the elbow joint point, the wrist joint point, and the shoulder joint point;
    determining the target distance according to the head joint point coordinates and the wrist joint point coordinates;
    determining the target angle according to the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates.
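The head-wrist target distance is simply the Euclidean distance between the two joint coordinates in the skeleton's 3D coordinate system. A minimal sketch (the example coordinates are illustrative, not from the patent):

```python
import math

def target_distance(head, wrist):
    """Euclidean distance between the head and wrist joint points,
    each given as an (x, y, z) coordinate tuple."""
    return math.dist(head, wrist)  # sqrt(sum((h_i - w_i)**2))

print(target_distance((0.0, 0.6, 0.0), (0.3, 0.2, 0.0)))  # 0.5
```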
  6. The driving behavior recognition method according to claim 5, wherein determining the target angle according to the elbow joint point coordinates, the wrist joint point coordinates, and the shoulder joint point coordinates comprises:
    determining a first vector from the elbow joint point to the shoulder joint point according to the elbow joint point coordinates and the shoulder joint point coordinates;
    determining a second vector from the elbow joint point to the wrist joint point according to the elbow joint point coordinates and the wrist joint point coordinates;
    determining the target angle from the first vector and the second vector based on a preset vector dot product formula.
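The dot-product step in claim 6 amounts to the standard formula cos θ = (v1 · v2) / (|v1||v2|), with the angle taken at the elbow. A minimal sketch, assuming (x, y, z) joint coordinates; the test pose is illustrative:

```python
import math

def target_angle(elbow, shoulder, wrist):
    """Angle at the elbow between the upper arm (elbow->shoulder) and the
    forearm (elbow->wrist), via the vector dot-product formula."""
    v1 = [s - e for s, e in zip(shoulder, elbow)]   # first vector
    v2 = [w - e for w, e in zip(wrist, elbow)]      # second vector
    dot = sum(a * b for a, b in zip(v1, v2))
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    # clamp to guard against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# shoulder directly above the elbow, wrist straight ahead: a right angle
print(target_angle((0, 0, 0), (0, 0.3, 0), (0.3, 0, 0)))  # 90.0
```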
  7. The driving behavior recognition method according to claim 3, wherein before determining the current driving behavior state of the driver according to the target distance, the target angle, and the action duration, the method further comprises:
    acquiring a reference distance between the head and the wrist and a reference angle between the forearm and the upper arm when the driver is in a standard driving posture;
    when the reference distance falls within a preset distance range, determining the reference distance as a standard distance;
    when the reference angle falls within a preset first angle range, determining the reference angle as a standard angle.
  8. The driving behavior recognition method according to claim 7, wherein the current driving behavior state comprises a dangerous driving behavior state, the dangerous driving behavior state comprising at least one of smoking while driving, phoning while driving, turning sharply, taking both hands off the steering wheel, and picking up items; determining the current driving behavior state of the driver according to the target distance, the target angle, and the action duration comprises:
    if the target distance is less than the standard distance and less than a preset distance threshold, and the action duration is greater than or equal to a first time threshold, determining, according to the target angle, whether the current driving behavior state is smoking while driving or phoning while driving, wherein the preset distance threshold is less than the standard distance;
    if the target distance is greater than the standard distance, determining, according to the target distance, the standard distance, the target angle, the standard angle, and the action duration, whether the current driving behavior state is one of turning sharply, taking both hands off the steering wheel, and picking up items.
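The branching in claim 8 can be sketched as simple threshold logic. This is a hedged illustration only: all concrete numbers (the 0.25 m distance threshold, the 2 s time threshold, the 45° split between phoning and smoking) and the sub-rules inside the second branch are assumptions, since the patent leaves them to preset values.

```python
def classify(dist, angle, duration, std_dist, std_angle,
             dist_thresh=0.25, t1=2.0):
    """Threshold-based sketch of the claim-8 decision logic.
    dist_thresh, t1, and the 45-degree angle split are illustrative."""
    if dist < std_dist and dist < dist_thresh and duration >= t1:
        # hand held near the head long enough: distinguish by elbow angle
        return "phoning" if angle < 45 else "smoking"
    if dist > std_dist:
        # hand far from the standard driving position
        if duration >= t1 and angle > std_angle:
            return "hands off wheel or picking up items"
        return "sharp turn"
    return "normal"

print(classify(dist=0.15, angle=30, duration=3.0, std_dist=0.45, std_angle=90))
```

With the hand 0.15 m from the head for 3 s and a tight elbow angle, the sketch reports "phoning".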
  9. The driving behavior recognition method according to any one of claims 1 to 8, wherein after determining the current driving behavior state of the driver, the method further comprises:
    if the current driving behavior state is a dangerous driving behavior state, issuing a warning according to the dangerous driving behavior state, including sending a warning message to target devices within a preset range and/or displaying the warning state of the current vehicle on a navigation map, wherein the target devices include vehicles and mobile terminals carried by users.
  10. A driving behavior recognition device, comprising:
    a memory for storing a program;
    a processor configured to execute the program and, when executing the program, implement the driving behavior recognition method according to any one of claims 1 to 9.
  11. A readable storage medium, wherein the storage medium stores one or more programs executable by one or more processors to implement the driving behavior recognition method according to any one of claims 1 to 9.
PCT/CN2021/130751 2020-12-30 2021-11-15 Driving behavior recognition method, and device and storage medium WO2022142786A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011631342.5A CN114764912A (en) 2020-12-30 2020-12-30 Driving behavior recognition method, device and storage medium
CN202011631342.5 2020-12-30

Publications (1)

Publication Number Publication Date
WO2022142786A1 true WO2022142786A1 (en) 2022-07-07

Family

ID=82259027

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/130751 WO2022142786A1 (en) 2020-12-30 2021-11-15 Driving behavior recognition method, and device and storage medium

Country Status (2)

Country Link
CN (1) CN114764912A (en)
WO (1) WO2022142786A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117409484A (en) * 2023-12-14 2024-01-16 四川汉唐云分布式存储技术有限公司 Cloud-guard-based client offence detection method, device and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471826B (en) * 2022-08-23 2024-03-26 中国航空油料集团有限公司 Method and device for judging safe driving behavior of aviation fueller and safe operation and maintenance system
CN115129162A (en) * 2022-08-29 2022-09-30 上海英立视电子有限公司 Picture event driving method and system based on human body image change
CN116965781B (en) * 2023-04-28 2024-01-05 南京晓庄学院 Method and system for monitoring vital signs and driving behaviors of driver

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180173974A1 (en) * 2016-12-16 2018-06-21 Automotive Research & Testing Center Method for detecting driving behavior and system using the same
CN109886150A (en) * 2019-01-29 2019-06-14 上海佑显科技有限公司 A kind of driving behavior recognition methods based on Kinect video camera
CN110688921A (en) * 2019-09-17 2020-01-14 东南大学 Method for detecting smoking behavior of driver based on human body action recognition technology
CN111301280A (en) * 2018-12-11 2020-06-19 北京嘀嘀无限科技发展有限公司 Dangerous state identification method and device
CN111461020A (en) * 2020-04-01 2020-07-28 浙江大华技术股份有限公司 Method and device for identifying behaviors of insecure mobile phone and related storage medium
CN111666818A (en) * 2020-05-09 2020-09-15 大连理工大学 Driver abnormal posture detection method


Also Published As

Publication number Publication date
CN114764912A (en) 2022-07-19

Similar Documents

Publication Publication Date Title
WO2022142786A1 (en) Driving behavior recognition method, and device and storage medium
CN106952303B (en) Vehicle distance detection method, device and system
JP6364049B2 (en) Vehicle contour detection method, device, storage medium and computer program based on point cloud data
US9158738B2 (en) Apparatus for monitoring vicinity of a vehicle
WO2019028798A1 (en) Method and device for monitoring driving condition, and electronic device
JP4173902B2 (en) Vehicle periphery monitoring device
CN110942474B (en) Robot target tracking method, device and storage medium
US10477155B2 (en) Driving assistance method, driving assistance device, and recording medium recording program using same
CN114041175A (en) Neural network for estimating head pose and gaze using photorealistic synthetic data
JPWO2014002534A1 (en) Object recognition device
CN111784765A (en) Object measurement method, virtual object processing method, object measurement device, virtual object processing device, medium, and electronic apparatus
US20210015376A1 (en) Electronic device and method for measuring heart rate
JP7103354B2 (en) Information processing equipment, information processing methods, and programs
CN111027506B (en) Method and device for determining sight direction, electronic equipment and storage medium
CN114267041B (en) Method and device for identifying object in scene
JP6991045B2 (en) Image processing device, control method of image processing device
JPWO2014027500A1 (en) Feature extraction method, program, and system
CN206074002U (en) A kind of driving auxiliary electronic device
CN113326800B (en) Lane line position determination method and device, vehicle-mounted terminal and storage medium
JP2011134119A (en) Vehicle periphery monitoring device
CN111583669B (en) Overspeed detection method, overspeed detection device, control equipment and storage medium
KR20180083144A (en) Method for detecting marker and an electronic device thereof
KR20130045658A (en) Vehicle distance detection method and system using vehicle shadow
JP2022544348A (en) Methods and systems for identifying objects
KR102356259B1 (en) Electronic apparatus and controlling method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913529

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.11.2023)