CN111582077A - Safety belt wearing detection method and device based on artificial intelligence software technology - Google Patents


Info

Publication number
CN111582077A
CN111582077A (application CN202010327888.5A)
Authority
CN
China
Prior art keywords: line segment, line, image, vehicle, safety belt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010327888.5A
Other languages
Chinese (zh)
Inventor
李景
林辉
潘钟声
温煦
江勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yameizhi Technology Co ltd
Original Assignee
Guangzhou Yameizhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yameizhi Technology Co ltd
Priority to CN202010327888.5A
Publication of CN111582077A
Legal status: Pending (current)

Classifications

    • G06V 20/597 — Context or environment of the image inside of a vehicle; recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06F 18/23 — Pattern recognition; analysing; clustering techniques
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/161 — Human faces, e.g. facial parts, sketches or expressions; detection; localisation; normalisation
    • G06V 40/168 — Human faces, e.g. facial parts, sketches or expressions; feature extraction; face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a safety belt wearing detection method and device, computer equipment and a storage medium, in the technical field of intelligent automobiles based on artificial intelligence software technology. The method comprises the following steps: acquiring an image of a person in the vehicle, and determining a trunk area of the person in the vehicle from the image to obtain a trunk image; determining a plurality of lines in the torso image, wherein each of the lines has a corresponding geometric feature in the torso image; clustering the lines according to the geometric feature corresponding to each line to obtain suspected safety belt edge lines; and outputting a safety belt wearing detection result of the person in the vehicle according to the suspected safety belt edge lines. By adopting the method, the accuracy of detecting whether the driver is wearing the safety belt can be improved.

Description

Safety belt wearing detection method and device based on artificial intelligence software technology
Technical Field
The application relates to the technical field of intelligent automobiles, in particular to a safety belt wearing detection method and device based on an artificial intelligence software technology, computer equipment and a storage medium.
Background
The safety belt is the most effective and direct passive safety device in a vehicle, and wearing it can effectively reduce casualties in traffic accidents. Global traffic accident reports show that improper use of safety belts is a significant driving safety hazard.
In order to remind the driver to wear the safety belt in time, the prior art often acquires an image of the driver and applies an image recognition method to the image to detect whether the driver is wearing the safety belt.
However, due to factors such as the diversity of the driver's clothing, the complexity of the surrounding environment and insufficient light inside the vehicle, the acquired driver image often contains many noise line segments and interfering line segments, which leads to low accuracy when the prior art detects whether the driver is wearing the safety belt.
Disclosure of Invention
In view of the above, it is necessary to provide a seat belt wearing detection method, a seat belt wearing detection apparatus, a computer device, and a storage medium, which can improve the accuracy of seat belt wearing detection for a driver.
A seat belt wear detection method, the method comprising:
acquiring an image of a person in the vehicle, and determining a trunk area of the person in the vehicle from the image to obtain a trunk image;
determining a plurality of lines in the torso image; wherein each of the lines has a corresponding geometric feature in the torso image;
clustering the lines according to the geometric characteristics corresponding to each line to obtain suspected safety belt edge lines;
and outputting a safety belt wearing detection result of the person in the vehicle according to the suspected safety belt edge line.
In one embodiment, when the lines are line segments, the geometric features include inclination angles of the line segments, and the clustering the lines according to the geometric features corresponding to each line to obtain suspected seat belt edge lines includes:
determining a line segment to be clustered in the trunk image according to the inclination angle of each line segment; the inclination angle of the line segment to be clustered accords with a preset angle range;
merging the line segments to be clustered according to the position relationship among the line segments to be clustered to obtain merged line segments;
and screening the merged segments according to a preset segment length threshold value to determine the suspected safety belt edge line.
In one embodiment, the geometric features further include end point coordinates of the line segments, the line segments to be clustered include a first line segment and a second line segment, and the merging the line segments to be clustered according to a position relationship between the line segments to be clustered to obtain a merged line segment includes:
when the first line segment and the second line segment are not intersected, acquiring an endpoint ordinate interval of the first line segment and acquiring an endpoint ordinate interval of the second line segment;
judging whether the first line segment and the second line segment need to be merged or not based on an interval overlapping relation between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment;
and if so, combining the first line segment and the second line segment to obtain the combined line segment.
In one embodiment, the determining whether the first line segment and the second line segment need to be merged based on an interval overlapping relationship between an endpoint ordinate interval of the first line segment and an endpoint ordinate interval of the second line segment includes:
when no intersection exists between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment, determining the closest distance between the endpoint of the first line segment and the endpoint of the second line segment as a first distance according to the endpoint coordinate of the first line segment and the endpoint coordinate of the second line segment;
when the first distance is smaller than a preset first distance threshold, judging that the first line segment and the second line segment need to be merged;
alternatively,
when the end point ordinate interval of the first line segment is partially overlapped with the end point ordinate interval of the second line segment, determining the closest distance between the end point of the first line segment and the second line segment as a second distance according to the end point coordinate of the first line segment and the end point coordinate of the second line segment;
when the second distance is smaller than a preset second distance threshold, judging that the first line segment and the second line segment need to be merged;
alternatively,
when the endpoint ordinate interval of the first line segment comprises the endpoint ordinate interval of the second line segment, determining the closest distance between the midpoint of the second line segment and the first line segment as a third distance according to the endpoint coordinate of the first line segment and the endpoint coordinate of the second line segment;
and when the third distance is smaller than a preset third distance threshold, judging that the first line segment and the second line segment need to be merged.
In one embodiment, the outputting a seat belt wearing detection result of the person in the vehicle according to the suspected seat belt edge line includes:
if the angle formed between the suspected safety belt edge lines is smaller than a preset angle threshold, acquiring a temporary threshold range, wherein the temporary threshold range is determined according to the size information of a reference object in the image of the person in the vehicle;
and when the distance between the suspected safety belt edge lines accords with the temporary threshold range, judging that the vehicle interior person wears the safety belt.
In one embodiment, before outputting the seat belt wearing detection result of the person in the vehicle according to the suspected seat belt edge line, the method further includes:
acquiring a face characteristic point in the image of the person in the vehicle;
determining face size information according to the distance information of the face characteristic points in the image of the person in the vehicle;
and determining a threshold range parameter aiming at the suspected safety belt edge line according to the face size information to obtain the temporary threshold range.
In one embodiment, the obtaining an image of a person in a vehicle, and determining a trunk area of the person in the vehicle from the image to obtain a trunk image includes:
determining a human face area of the person in the vehicle in the image of the person in the vehicle;
acquiring position information and size information of the human face area in the image of the person in the vehicle;
determining the trunk area of the person in the vehicle according to the position information and the size information;
and extracting the trunk image from the image of the person in the vehicle according to the trunk area.
A seatbelt wear detection apparatus, the apparatus comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring images of people in the vehicle, and determining the trunk area of the people in the vehicle from the images to obtain a trunk image;
a determining module for determining a plurality of lines in the torso image; wherein each of the lines has a corresponding geometric feature in the torso image;
the clustering module is used for clustering the lines according to the geometric characteristics corresponding to each line to obtain suspected safety belt edge lines;
and the output module is used for outputting the safety belt wearing detection result of the person in the vehicle according to the suspected safety belt edge line.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the above method when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the safety belt wearing detection method, device, computer equipment and storage medium, the torso image is obtained by acquiring the image of the person in the vehicle and determining the torso area of the person in the vehicle from the image; then, a plurality of lines are determined in the torso image, wherein each line has a corresponding geometric feature in the torso image; then, the lines are accurately clustered according to the geometric feature of each line to obtain suspected safety belt edge lines, and the safety belt wearing detection result for the person in the vehicle can then be accurately output according to the suspected safety belt edge lines. In this way, the many noise lines and interfering lines in the image of the person in the vehicle are prevented from affecting the accuracy of the safety belt wearing detection result, and the accuracy of detecting whether the driver is wearing the safety belt is improved.
Drawings
Fig. 1 is an application environment diagram of a seat belt wearing detection method in one embodiment;
fig. 2 is a schematic flow chart of a seat belt wear detection method according to an embodiment;
FIG. 3 is a flow diagram of a line detection in one embodiment;
FIG. 4A is a diagram illustrating a positional relationship of a first line segment according to an embodiment;
FIG. 4B is a diagram illustrating a second line segment position relationship, according to an embodiment;
FIG. 4C is a diagram illustrating a third segment position relationship, according to one embodiment;
FIG. 5 is a schematic diagram of a torso image acquisition process, in one embodiment;
fig. 6 is a schematic flow chart of a seat belt wear detection method in another embodiment;
fig. 7 is a block diagram showing a structure of a seatbelt wearing detection apparatus in one embodiment;
FIG. 8 is a system flow diagram of a seatbelt wear detection method in one embodiment;
FIG. 9 is a flow chart of a segment clustering process of a seatbelt wear detection method in an embodiment;
fig. 10 is a flow chart of a seat belt determination system for a seat belt wear detection method in one embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The safety belt wearing detection method provided by the application can be applied to the application environment shown in fig. 1. The safety belt wearing detection device 110 firstly acquires an image of a person in the vehicle, determines a trunk area of the person in the vehicle from the image, and obtains a trunk image; then, the seatbelt-wearing detection device 110 determines a plurality of lines in the torso image; wherein each line has a corresponding geometric feature in the torso image; then, the safety belt wearing detection device 110 performs clustering processing on the plurality of lines according to the geometric features corresponding to each line to obtain suspected safety belt edge lines; finally, the seat belt wearing detection device 110 outputs a seat belt wearing detection result of the person in the vehicle according to the suspected seat belt edge line. In practical applications, the seat belt wearing detection device 110 may be, but is not limited to, various vehicle-mounted terminals, personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
The safety belt wearing detection method provided by the embodiment of the application relates to the computer vision technology and other technologies in the artificial intelligence technology, and is specifically explained by the following embodiments:
the artificial intelligence technology is a comprehensive subject, and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Computer vision (CV) technology is the science of studying how to make a machine "see"; more specifically, it uses cameras and computers in place of human eyes to identify, track and measure targets, and performs further graphic processing so that the result becomes an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision research and its related theories and techniques attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
In one embodiment, as shown in fig. 2, there is provided a seat belt wearing detection method including the steps of:
and S210, acquiring an image of the person in the vehicle, and determining the trunk area of the person in the vehicle from the image to obtain a trunk image.
The vehicle occupant may refer to a driver or a passenger inside the vehicle.
The trunk image may refer to an image including a trunk portion of a person in the vehicle.
In specific implementation, when the safety belt wearing detection device 110 needs to perform safety belt wearing detection on a person in the vehicle, the safety belt wearing detection device 110 acquires an image of the person in the vehicle through a vehicle-mounted monitoring system (IVMS), and then the safety belt wearing detection device 110 determines a trunk area of the person in the vehicle from the image to obtain a trunk image.
Specifically, the seatbelt wearing detection device 110 may detect face position information or head position information of a person in the vehicle in the image through a DLIB face detection algorithm; then, the seat belt wearing detection device 110 determines the trunk area of the person in the vehicle according to the face position information or the head position information of the person in the vehicle. Finally, the seatbelt-worn detection device 110 determines a torso image of the person in the vehicle in the image from the torso region.
Step S220, determining a plurality of lines in the trunk image; wherein each line has a corresponding geometric feature in the torso image.
Wherein the line may include at least one of a linear line and a curved line. In practical applications, the straight lines may also be named line segments.
The geometric features may refer to features formed by geometric information such as size, inclination angle, and position of lines in the torso image.
In a specific implementation, after the safety belt wearing detection device 110 acquires a trunk image of a person in the vehicle, the safety belt wearing detection device 110 determines a plurality of lines in the trunk image.
Specifically, the seat belt wearing detection device 110 may use the torso image as a line search region, and then the seat belt wearing detection device 110 detects all lines of the line search region through a preset line detection algorithm and determines a geometric feature corresponding to each line.
In practical application, the safety belt wearing detection device 110 may determine a plurality of straight lines in the torso image. Specifically, the safety belt wearing detection device 110 may apply Gaussian blur to the trunk image to filter out noise interference that may exist in the trunk image, detect all edge contour information that may exist in the trunk image through Canny edge detection (an edge detection algorithm), and finally detect the geometric features corresponding to each line in the trunk image through the Hough transform or the LSD algorithm (a line segment detection algorithm). Because the edge of a safety belt is usually a straight line, determining the plurality of straight lines in the trunk image makes it convenient to directly screen out, from the plurality of straight lines, those that may be safety belt edge lines in the subsequent steps, thereby improving the accuracy of safety belt wearing recognition. To facilitate understanding by those skilled in the art, fig. 3 provides a line detection flow chart. The trunk image is sequentially subjected to image blurring, edge detection and Hough transform to obtain the line detection result for the trunk image.
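For illustration, a minimal sketch of this line-detection stage using OpenCV is given below; the blur kernel size and the Canny/Hough thresholds are assumed values for the example and are not taken from the patent.

```python
import cv2
import numpy as np

def detect_line_segments(torso_image: np.ndarray) -> np.ndarray:
    """Return candidate line segments (x1, y1, x2, y2) found in the torso image."""
    gray = cv2.cvtColor(torso_image, cv2.COLOR_BGR2GRAY)
    # Gaussian blur suppresses noise before edge detection.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Canny edge detection extracts candidate edge contours.
    edges = cv2.Canny(blurred, 50, 150)
    # The probabilistic Hough transform recovers line segments from the edge map.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=10)
    return segments.reshape(-1, 4) if segments is not None else np.empty((0, 4), dtype=int)
```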
And step S230, clustering a plurality of lines according to the geometric characteristics corresponding to each line to obtain suspected safety belt edge lines.
The suspected belt edge line may be a line that may be a belt edge.
In specific implementation, after the safety belt wearing detection device 110 determines a plurality of lines in the trunk image, the safety belt wearing detection device 110 may perform clustering processing on the plurality of lines with similar geometric features according to the geometric features corresponding to each line, so as to obtain suspected safety belt edge lines.
And S240, outputting a safety belt wearing detection result of the person in the vehicle according to the suspected safety belt edge line.
In a specific implementation, after the seat belt wearing detection device 110 determines a suspected seat belt edge line in the trunk image, the seat belt wearing detection device 110 further determines whether the suspected seat belt edge line is a line corresponding to an edge of a seat belt; if the suspected edge line of the safety belt is a line corresponding to the edge of the safety belt, the safety belt wearing detection device 110 determines that the person in the vehicle has worn the safety belt, and performs voice reminding. If the suspected edge line of the safety belt is not the line corresponding to the edge of the safety belt, the safety belt wearing detection device 110 determines that the person in the vehicle does not wear the safety belt, and performs voice reminding.
In the safety belt wearing detection method, the trunk image is obtained by acquiring the image of the person in the vehicle and determining the trunk area of the person in the vehicle from the image; then, a plurality of lines are determined in the trunk image, wherein each line has a corresponding geometric feature in the trunk image; then, the lines are accurately clustered according to the geometric feature of each line to obtain suspected safety belt edge lines, and the safety belt wearing detection result for the person in the vehicle can then be accurately output according to the suspected safety belt edge lines. In this way, the many noise lines and interfering lines in the image of the person in the vehicle are prevented from affecting the accuracy of the safety belt wearing detection result, and the accuracy of detecting whether the driver is wearing the safety belt is improved.
In another embodiment, when the lines are line segments, the geometric features include inclination angles of the line segments, and the clustering process is performed on the plurality of lines according to the geometric features corresponding to each line to obtain suspected safety belt edge lines, including: determining a line segment to be clustered in the trunk image according to the inclination angle of each line segment; the inclination angle of the line segment to be clustered accords with a preset angle range; merging the line segments to be clustered according to the position relationship among the line segments to be clustered to obtain merged line segments; and screening the combined line segments according to a preset line segment length threshold value to determine a suspected safety belt edge line.
Wherein the geometric feature comprises an inclination angle of the line segment.
Wherein, the inclination angle of the line segment may refer to the inclination angle of the line segment with respect to the reference line in the torso image. In practical applications, the reference line may refer to a horizontal line of the torso image.
The line segment to be clustered may refer to a line segment that may need to be clustered and merged.
In a specific implementation, when the line is a line segment, the seatbelt wearing detection device 110 may determine the inclination angle of each line segment relative to the horizontal line according to the coordinate information of the end point of each line segment. The safety belt wearing detection device 110 performs clustering processing on a plurality of lines according to the geometric features corresponding to each line to obtain a suspected safety belt edge line, and specifically includes: the safety belt wearing detection device 110 may screen each line segment in the trunk image according to a preset angle range and an inclination angle of each line segment in advance, so as to obtain a line segment to be clustered, which conforms to the preset angle range.
In an actual application scene, after a person in the vehicle wears the safety belt, the safety belt usually obliquely spans the trunk part of the person in the vehicle; therefore, the inclination angle of the belt edge line in the torso image often satisfies a certain inclination angle range; for example, the inclination angle of the seat belt edge line in the torso image is often between 30 ° and 80 ° or between 100 ° and 150 °. The line segments to be clustered, which accord with the preset angle range, in the plurality of line segments are determined according to the preset angle range, so that the line segments which are possibly the edge lines of the safety belt can be quickly and accurately screened out from the plurality of line segments in the trunk image, the subsequent data processing amount of clustering the line segments and judging the safety belt is reduced, and the accuracy of safety belt wearing identification is improved. Thus, in practical applications, the preset angle range may be set to 30 ° to 80 ° or 100 ° to 150 ° by those skilled in the art.
Then, the safety belt wearing detection device 110 judges, according to the position relationship among the line segments to be clustered, whether the line segments to be clustered need to be merged, so as to obtain merged line segments; finally, the safety belt wearing detection device 110 screens the merged line segments according to a preset line segment length threshold, that is, length filtering, so as to filter out line segments that are too short and thereby determine the suspected safety belt edge lines.
It should be noted that, when the seatbelt wearing detection device 110 finds only one line segment to be clustered in the preset angle range, the seatbelt wearing detection device 110 directly uses the line segment to be clustered as the merged line segment.
According to the technical scheme of the embodiment, the line segment to be clustered is determined, wherein the inclination angle of the line segments conforms to a preset angle range; merging the line segments to be clustered according to the position relationship among the line segments to be clustered to obtain merged line segments; and finally, screening the combined line segments according to a preset line segment length threshold value, and determining the suspected safety belt edge line, so that accurate filtering and clustering of the noise line segments and the interference line segments are realized, and the suspected safety belt edge line for judging whether the vehicle interior personnel wear the safety belt can be accurately obtained.
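A short sketch of the angle screening and length filtering described in this embodiment follows; it assumes inclinations are measured against the horizontal axis of the torso image in [0°, 180°), uses the 30°–80° / 100°–150° ranges mentioned above as defaults, and treats the length threshold as a caller-supplied parameter.

```python
import math

def inclination_deg(seg):
    """Inclination of segment (x1, y1, x2, y2) relative to the horizontal, in [0, 180)."""
    x1, y1, x2, y2 = seg
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def filter_by_angle(segments, ranges=((30.0, 80.0), (100.0, 150.0))):
    """Keep only the segments whose inclination falls inside a preset angle range."""
    return [s for s in segments
            if any(lo <= inclination_deg(s) <= hi for lo, hi in ranges)]

def filter_by_length(segments, min_length):
    """Drop merged segments shorter than the preset length threshold."""
    return [s for s in segments
            if math.hypot(s[2] - s[0], s[3] - s[1]) >= min_length]
```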
In another embodiment, the geometric features further include end point coordinates of line segments, the line segments to be clustered include a first line segment and a second line segment, and the line segments to be clustered are merged according to a positional relationship between the line segments to be clustered to obtain merged line segments, including: when the first line segment and the second line segment are not intersected, acquiring an endpoint ordinate interval of the first line segment and acquiring an endpoint ordinate interval of the second line segment; judging whether the first line segment and the second line segment need to be merged or not based on the interval overlapping relation between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment; and if so, combining the first line segment and the second line segment to obtain a combined line segment.
The line segments to be clustered can comprise a first line segment and a second line segment.
Wherein the geometric features comprise end point coordinates of the line segment.
The endpoint coordinates may be coordinates of the endpoint of the line segment in the torso image. In practical application, a two-dimensional coordinate system may be established in the torso image in advance, and the coordinates of the end points of the line segments in the torso image are determined based on the two-dimensional coordinate system.
The end point ordinate interval may be an interval formed by the end point ordinates of the line segment. For example, if the end point coordinates of a line segment are (2, 3) and (4, 5), the end point ordinate interval of the line segment can be represented as [3, 5].
In the specific implementation, the process of merging the line segments to be clustered according to the position relationship between the line segments to be clustered by the safety belt wearing detection device 110 to obtain the merged line segment specifically includes: the safety belt wearing detection device 110 may first select a first line segment and a second line segment in a line segment to be clustered as two line segments for determining whether clustering is required, and when the safety belt wearing detection device 110 detects that the first line segment and the second line segment are not intersected, the safety belt wearing detection device 110 respectively acquires an endpoint ordinate interval of the first line segment and acquires an endpoint ordinate interval of the second line segment.
Then, the seatbelt wearing detection device 110 determines whether the first line segment and the second line segment need to be merged based on an interval overlapping relationship between an endpoint ordinate interval of the first line segment and an endpoint ordinate interval of the second line segment; if the first line segment and the second line segment need to be combined, the safety belt wearing detection device 110 combines the first line segment and the second line segment to obtain a combined line segment. Specifically, the seatbelt wearing detection device 110 may select, as the first endpoint, an endpoint whose ordinate is the largest, from among the endpoint coordinates of the first line segment and the endpoint coordinates of the second line segment; selecting the endpoint with the smallest ordinate as a second endpoint; finally, the seatbelt-worn detecting device 110 determines the merged line segment based on the coordinates of the first endpoint and the coordinates of the second endpoint.
It should be noted that, when the safety belt wearing detection device 110 detects that the first line segment intersects with the second line segment, the safety belt wearing detection device 110 directly merges the first line segment and the second line segment to obtain a merged line segment.
According to the technical scheme of the embodiment, in the process of merging the line segments to be clustered to obtain the merged line segment according to the position relationship between the line segments to be clustered, when the first line segment is not intersected with the second line segment, the position relationship between the first line segment and the second line segment is accurately described through the interval overlapping relationship between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment, and whether the first line segment and the second line segment need to be merged or not is accurately judged, so that the merged line segment is obtained.
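The interval-overlap classification and the merged-segment construction described above might be sketched as follows; the function names and the relation labels are illustrative assumptions.

```python
def ordinate_interval(seg):
    """Interval [y_min, y_max] spanned by the endpoint ordinates of segment (x1, y1, x2, y2)."""
    _, y1, _, y2 = seg
    return min(y1, y2), max(y1, y2)

def overlap_relation(seg_a, seg_b):
    """Classify the two ordinate intervals as 'disjoint', 'partial' or 'contained'."""
    a_lo, a_hi = ordinate_interval(seg_a)
    b_lo, b_hi = ordinate_interval(seg_b)
    if a_hi < b_lo or b_hi < a_lo:
        return "disjoint"        # no intersection between the two intervals
    if (a_lo <= b_lo and b_hi <= a_hi) or (b_lo <= a_lo and a_hi <= b_hi):
        return "contained"       # one interval includes the other
    return "partial"             # the intervals partially overlap

def merge_segments(seg_a, seg_b):
    """Merged segment running from the endpoint with the largest ordinate
    to the endpoint with the smallest ordinate among the four endpoints."""
    pts = [(seg_a[0], seg_a[1]), (seg_a[2], seg_a[3]),
           (seg_b[0], seg_b[1]), (seg_b[2], seg_b[3])]
    top = max(pts, key=lambda p: p[1])
    bottom = min(pts, key=lambda p: p[1])
    return (top[0], top[1], bottom[0], bottom[1])
```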
In another embodiment, determining whether the first line segment and the second line segment need to be merged based on an interval overlapping relationship between an endpoint ordinate interval of the first line segment and an endpoint ordinate interval of the second line segment includes: when no intersection exists between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment, determining the closest distance between the endpoint of the first line segment and the endpoint of the second line segment as a first distance according to the endpoint coordinate of the first line segment and the endpoint coordinate of the second line segment; and when the first distance is smaller than a preset first distance threshold, judging that the first line segment and the second line segment need to be merged.
In specific implementation, the process of determining whether to merge the first line segment and the second line segment based on the interval overlapping relationship between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment by the seatbelt wearing detection device 110 specifically includes: when the safety belt wearing detection device 110 determines that there is no intersection between the end point ordinate interval of the first line segment and the end point ordinate interval of the second line segment, that is, the first line segment and the second line segment are in the first line segment positional relationship, the safety belt wearing detection device 110 determines the closest distance between the end point of the first line segment and the end point of the second line segment as the first distance according to the end point coordinate of the first line segment and the end point coordinate of the second line segment; then, the seatbelt wearing detection device 110 determines whether the first distance is smaller than a preset first distance threshold; if the first distance is smaller than the preset first distance threshold, the seatbelt wearing detection device 110 determines that the first line segment and the second line segment need to be merged.
To facilitate understanding by those skilled in the art, fig. 4A provides a schematic diagram of a first line segment positional relationship; as shown in fig. 4A, the distance between the end point b of the first line segment 410 and the end point a of the second line segment 420 is the first distance.
In another embodiment, when the end point ordinate interval of the first line segment and the end point ordinate interval of the second line segment partially overlap, the closest distance between the end point of the first line segment and the second line segment is determined as the second distance according to the end point coordinates of the first line segment and the end point coordinates of the second line segment; when the second distance is smaller than a preset second distance threshold, it is judged that the first line segment and the second line segment need to be merged.
in a specific implementation, when the seatbelt wearing detection device 110 determines that the end point ordinate interval of the first line segment and the end point ordinate interval of the second line segment are partially overlapped, that is, the first line segment and the second line segment are located in the position relationship of the second line segment, the seatbelt wearing detection device 110 determines the closest distance between the end point of the first line segment and the second line segment as the second distance according to the end point coordinate of the first line segment and the end point coordinate of the second line segment; then, the seatbelt wearing detection device 110 determines whether the second distance is smaller than a preset second distance threshold; if the second distance is smaller than the preset second distance threshold, the seatbelt wearing detection device 110 determines that the first line segment and the second line segment need to be merged.
To facilitate understanding by those skilled in the art, FIG. 4B provides a schematic illustration of a second line segment positional relationship; as shown in fig. 4B, the closest distance between the end point B of the first line segment 410 and the second line segment 420 is the second distance.
In another embodiment, when the endpoint ordinate interval of the first line segment includes the endpoint ordinate interval of the second line segment, determining the closest distance between the midpoint of the second line segment and the first line segment as the third distance according to the endpoint coordinate of the first line segment and the endpoint coordinate of the second line segment; and when the third distance is smaller than a preset third distance threshold, judging that the first line segment and the second line segment need to be merged.
In a specific implementation, when the seatbelt wearing detection device 110 determines that the endpoint ordinate interval of the first line segment includes the endpoint ordinate interval of the second line segment, that is, when the first line segment and the second line segment are in the third line segment positional relationship, the seatbelt wearing detection device 110 determines, according to the endpoint coordinate of the first line segment and the endpoint coordinate of the second line segment, a closest distance between the midpoint of the second line segment and the first line segment as a third distance; then, the seatbelt wearing detection device 110 determines whether the third distance is smaller than a preset third distance threshold; if the third distance is smaller than the preset third distance threshold, the seatbelt wearing detection device 110 determines that the first line segment and the second line segment need to be merged.
To facilitate understanding by those skilled in the art, FIG. 4C provides a schematic illustration of the third line segment positional relationship; as illustrated in fig. 4C, the closest distance between the midpoint e of the second line segment 420 and the first line segment 410 is taken as the third distance.
The first distance threshold, the second distance threshold, and the third distance threshold may be adaptively set according to the size information of the reference object in the image of the vehicle occupant, and are not particularly limited herein. Therefore, the condition that the fixed threshold value is not suitable due to the non-uniform image size can be avoided.
According to the technical scheme of the embodiment, when the first line segment and the second line segment are not intersected, the position relation between the first line segment and the second line segment is accurately described through the interval overlapping relation between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment and the distance between the endpoints of the line segments, and then whether the first line segment and the second line segment need to be combined or not is accurately judged, and the combined line segment is obtained.
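A sketch of the three merge tests is given below; it reuses the relation labels from the overlap_relation sketch above, and the three distance thresholds are passed in by the caller because, as noted, they are scaled to the reference-object size rather than fixed.

```python
import math

def point_distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def point_to_segment_distance(p, seg):
    """Closest distance from point p = (px, py) to segment seg = (x1, y1, x2, y2)."""
    (px, py), (x1, y1, x2, y2) = p, seg
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return point_distance(p, (x1, y1))
    t = ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return point_distance(p, (x1 + t * dx, y1 + t * dy))

def should_merge(seg_a, seg_b, relation, d1, d2, d3):
    """Apply the merge test matching the interval-overlap relation of the two segments."""
    a_pts = [(seg_a[0], seg_a[1]), (seg_a[2], seg_a[3])]
    b_pts = [(seg_b[0], seg_b[1]), (seg_b[2], seg_b[3])]
    if relation == "disjoint":
        # first distance: closest endpoint-to-endpoint distance
        return min(point_distance(p, q) for p in a_pts for q in b_pts) < d1
    if relation == "partial":
        # second distance: closest distance from an endpoint of the first segment to the second segment
        return min(point_to_segment_distance(p, seg_b) for p in a_pts) < d2
    # 'contained': third distance, from the midpoint of the contained (second) segment to the first segment
    mid_b = ((seg_b[0] + seg_b[2]) / 2.0, (seg_b[1] + seg_b[3]) / 2.0)
    return point_to_segment_distance(mid_b, seg_a) < d3
```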
In another embodiment, outputting a seat belt wearing detection result of a person in the vehicle according to the suspected seat belt edge line includes: if the angle formed between the suspected safety belt edge lines is smaller than a preset angle threshold, acquiring a temporary threshold range, wherein the temporary threshold range is determined according to the size information of a reference object in the image of the person in the vehicle; and when the distance between the suspected safety belt edge lines accords with the temporary threshold range, judging that the vehicle interior personnel wear the safety belt.
The temporary threshold range is used for judging whether the suspected safety belt edge line is the safety belt edge line.
In the specific implementation, the process of outputting the seat belt wearing detection result of the person in the vehicle according to the suspected seat belt edge line by the seat belt wearing detection device 110 specifically includes: the seatbelt-wearing detection device 110 may determine two approximately parallel suspected seatbelt edge lines among the suspected seatbelt edge lines. Specifically, the seatbelt wearing detection device 110 may determine whether an angle formed between two suspected seatbelt edge lines is smaller than a preset angle threshold; if yes, the two suspected safety belt edge lines are two approximately parallel line segments.
In practical application scenarios, the safety belt edge lines in the torso image are often in a parallel or approximately parallel relationship. When the edge lines of the safety belt are parallel, the angle formed between the straight lines where the edge lines of the safety belt are located is 0 degree; when the safety belt edge lines are approximately parallel, an angle formed between straight lines where the safety belt edge lines are located is smaller than a preset angle threshold value. Therefore, by screening the suspected safety belt edge line of which the angle formed by the two line segments is smaller than the preset angle threshold value, the line segment which is possibly the safety belt edge line can be screened out quickly and accurately from the line segment in the trunk image, and the line segment is used for accurately identifying the safety belt wearing condition of the personnel in the vehicle in the follow-up process.
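The near-parallel test can be sketched as follows; the 10° default threshold is an assumed value, not one specified in the text.

```python
import math

def angle_between_deg(seg_a, seg_b):
    """Acute angle, in degrees, between the directions of two segments."""
    a = math.atan2(seg_a[3] - seg_a[1], seg_a[2] - seg_a[0])
    b = math.atan2(seg_b[3] - seg_b[1], seg_b[2] - seg_b[0])
    diff = abs(a - b) % math.pi
    return math.degrees(min(diff, math.pi - diff))

def nearly_parallel(seg_a, seg_b, angle_threshold_deg=10.0):
    """True when the angle formed between the two suspected edge lines is below the threshold."""
    return angle_between_deg(seg_a, seg_b) < angle_threshold_deg
```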
When the angle formed between the suspected seat belt edge lines is smaller than the preset angle threshold, the seat belt wearing detection device 110 adaptively determines the temporary threshold range for the suspected seat belt edge lines according to the reference object size information in the image of the vehicle occupant. Then, the seat belt wearing detection device 110 determines whether the distance of the line segment between the suspected seat belt edge lines meets the temporary threshold range, if yes, it indicates that the seat belt edge lines exist in the trunk image, and the seat belt wearing detection device 110 determines that the vehicle interior person wears the seat belt. If not, it is indicated that the safety belt edge line does not exist in the trunk image, and the safety belt wearing detection device 110 determines that the person in the vehicle does not wear the safety belt.
In practical applications, the distance between the suspected seat belt edge lines may be the closest distance between the suspected seat belt edge lines, or may be the farthest distance between the suspected seat belt edge lines.
Certainly, in the process of acquiring the line segment distance between the suspected edges of the seat belt by the seat belt wearing detection device 110, the seat belt wearing detection device 110 may select a plurality of sampling points on one suspected edge of the seat belt, then respectively acquire the closest distance between each sampling point and another suspected edge of the seat belt, and use the average distance of the closest distances corresponding to each sampling point as the line segment distance between the suspected edges of the seat belt; therefore, the line segment distance between the suspected safety belt edge lines can be accurately calculated.
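The sampled-distance measure described in this paragraph might look like the sketch below; it reuses the point_to_segment_distance helper from the earlier sketch, and the number of sampling points is an assumption.

```python
def mean_edge_distance(seg_a, seg_b, samples=10):
    """Average of the closest distances from sample points on seg_a to seg_b."""
    x1, y1, x2, y2 = seg_a
    total = 0.0
    for i in range(samples):
        t = i / (samples - 1)
        p = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))   # point sampled along seg_a
        total += point_to_segment_distance(p, seg_b)
    return total / samples
```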
According to the technical scheme of the embodiment, when the angle formed between the suspected safety belt edge lines is smaller than the preset angle threshold, the temporary threshold range for the suspected safety belt edge lines is adaptively determined according to the size information of the reference object in the image of the people in the vehicle, and whether the people in the vehicle wear the safety belt is accurately judged according to the distance between the temporary threshold range and the suspected safety belt edge lines, so that the condition that the fixed threshold parameter is not applicable due to the non-uniform image size is avoided, and the safety belt wearing detection accuracy is improved.
In another embodiment, the method for detecting the seat belt wearing of the vehicle occupant further includes, before outputting a seat belt wearing detection result of the vehicle occupant according to the suspected seat belt edge line: acquiring a face characteristic point in an image of a person in the vehicle; determining face size information according to distance information of the face characteristic points in the image of the person in the vehicle; and determining a threshold range parameter aiming at the suspected safety belt edge line according to the face size information to obtain a temporary threshold range.
Wherein the threshold range parameter is used to generate a temporary threshold range.
In a specific implementation, when the reference object is set as a human face, the process of determining the temporary threshold range for the suspected seat belt edge line by the seat belt wearing detection device 110 according to the size information of the reference object in the image of the person in the vehicle specifically includes: the seatbelt wearing detection device 110 may input an image of a person in the vehicle to the face detection module; the human face detection module determines human face characteristic points of people in the vehicle in the images of the people in the vehicle through a human face detection algorithm; then, the distance information of the human face characteristic points in the image of the person in the vehicle is used as human face size information, and a threshold range parameter aiming at the suspected safety belt edge line is determined according to the human face size information; specifically, the seatbelt wearing detection device 110 may calculate a vertical distance L between a human eye feature point and a nose tip feature point of the person in the vehicle in the image of the person in the vehicle, and use the distance L as a threshold range parameter L for the suspected seatbelt edge line, and may further generate the temporary threshold range according to the threshold range parameter L. In practical applications, the range of temporary threshold values may be expressed as 0.7L-1.3L.
It should be noted that, when the seatbelt wearing detection device 110 determines 68 facial feature points of the person in the vehicle using the DLIB face detection algorithm, the seatbelt wearing detection device 110 may use the vertical distance between feature point No. 28 and feature point No. 34 of the 68 facial feature points as the vertical distance L between the eye feature point and the nose tip feature point of the person in the vehicle.
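A sketch of deriving the temporary threshold range from dlib's 68-point landmarks follows. The patent numbers the points from 1, so its points No. 28 and No. 34 are assumed to correspond to dlib's 0-based indices 27 (top of the nose bridge) and 33 (nose tip); the model file path and the 0.7L–1.3L band follow the examples above.

```python
import dlib

_detector = dlib.get_frontal_face_detector()
_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # model path assumed

def temporary_threshold_range(gray_image):
    """Return (0.7 * L, 1.3 * L), where L is the eye-to-nose-tip vertical distance."""
    faces = _detector(gray_image, 1)
    if not faces:
        return None
    shape = _predictor(gray_image, faces[0])
    bridge, nose_tip = shape.part(27), shape.part(33)   # assumed mapping of points No. 28 / No. 34
    L = abs(nose_tip.y - bridge.y)                      # vertical distance between the two landmarks
    return 0.7 * L, 1.3 * L
```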
According to the technical scheme of the embodiment, the face characteristic points in the image of the person in the vehicle are obtained; the threshold range parameter aiming at the suspected safety belt edge line is determined according to the distance information of the face characteristic point in the image of the person in the vehicle, so that the temporary threshold in the safety belt wearing detection process is determined by taking the face in the image of the person in the vehicle as a reference object, the condition that the fixed threshold parameter is not suitable due to the fact that the sizes of the images are not uniform is avoided, and the safety belt wearing detection accuracy is improved.
In another embodiment, acquiring an image of a person in a vehicle, determining a torso region of the person in the vehicle from the image, and obtaining a torso image includes: determining a human face area of the person in the vehicle in the image of the person in the vehicle; acquiring position information and size information of a human face area in an image of a person in the vehicle; determining the trunk area of the person in the vehicle according to the position information and the size information; and extracting a trunk image from the image of the person in the vehicle according to the trunk area.
In the concrete implementation, the safety belt wearing detection device 110 is obtaining the image of the person in the vehicle, determines the trunk area of the person in the vehicle from the image, and obtains the trunk image, and the process specifically includes: the safety belt wearing detection device 110 can determine the face area of the person in the vehicle in the image of the person in the vehicle; then, the seatbelt wearing detection device 110 acquires position information and size information of a face area in an image of a person in the vehicle; then, the safety belt wearing detection device 110 determines the trunk area of the person in the vehicle according to the position information and the size information; finally, the safety belt wearing detection device 110 extracts a trunk image from the image of the person in the vehicle according to the trunk area.
To facilitate understanding by those skilled in the art, FIG. 5 provides a schematic illustration of the torso image acquisition process; the seat belt wearing detection device 110 may input an image 510 of a person in the vehicle to the face detection module; the face detection module determines a face area of the person in the vehicle from the image of the person in the vehicle through the DLIB face detection algorithm, and then generates a face rectangular frame 520; then, the seatbelt wearing detection device 110 determines a torso rectangular frame 530 corresponding to the torso region of the vehicle occupant, based on the position information and the size information of the face rectangular frame 520. Specifically, the seatbelt-worn detecting device 110 may use the coordinates of the vertex of the lower left corner of the face rectangular frame 520 as the coordinates of the vertex of the upper left corner of the torso rectangular frame 530; take the distance from the vertex of the lower left corner of the face rectangular frame 520 to the last row of pixels of the image 510 of the person in the vehicle as the height of the torso rectangular frame 530; and determine the width w2 of the torso rectangular frame 530 according to the face size w1 of the face rectangular frame 520, where the size relationship between the face size w1 and the width w2 of the torso rectangular frame 530 may be w2 = 1.5 × w1. Finally, the seatbelt-worn detection device 110 extracts a torso image 540 from the image of the vehicle occupant according to the torso rectangular frame 530.
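The torso-region construction described for fig. 5 can be sketched as follows; the face rectangle is assumed to be given in OpenCV-style (x, y, w, h) coordinates.

```python
import numpy as np

def torso_roi(image: np.ndarray, face_rect):
    """Crop the torso region below the face box; face_rect = (x, y, w1, h1)."""
    img_h, img_w = image.shape[:2]
    x, y, w1, h1 = face_rect
    top = y + h1                        # torso box's top-left corner sits at the face box's bottom-left corner
    w2 = min(int(1.5 * w1), img_w - x)  # torso width w2 = 1.5 * w1, clipped to the image
    return image[top:img_h, x:x + w2]   # height runs down to the last pixel row
```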
According to the technical scheme of the embodiment, the human face area of the person in the vehicle is determined in the image of the person in the vehicle; acquiring position information and size information of a human face area in an image of a person in the vehicle; accurately positioning the trunk area of the person in the vehicle according to the position information and the size information; therefore, the trunk area is obtained based on the position information of the face, the trunk area can be used for accurately extracting the trunk image from the image of the person in the vehicle, and the interference of other areas in the image on the detection result is avoided.
Meanwhile, the safety belt occupies only a small part of the whole image. In order to reduce the influence of other areas of the image on safety belt detection, the search area for safety belt detection can be narrowed by locating the trunk area of the person in the vehicle.
In another embodiment, as shown in fig. 6, there is provided a seat belt wearing detection method including the following steps:
Step S602, determining the human face area of the person in the vehicle in the image of the person in the vehicle.
Step S604, acquiring the position information and the size information of the human face area in the image of the person in the vehicle.
Step S606, determining the trunk area of the person in the vehicle according to the position information and the size information.
Step S608, extracting a trunk image from the image of the person in the vehicle according to the trunk area.
Step S610, determining a plurality of line segments in the torso image; wherein each of the line segments has a corresponding geometric feature in the torso image; the geometric feature includes at least one of an inclination angle of the line segment and an endpoint coordinate of the line segment.
Step S612, determining a line segment to be clustered in the line segments according to the inclination angle of each line segment; the inclination angle of the line segment to be clustered accords with a preset angle range.
Step S614, merging the line segments to be clustered according to the position relation among the line segments to be clustered to obtain merged line segments.
Step S616, screening the combined line segments according to a preset line segment length threshold value, and determining a suspected safety belt edge line.
Step S618, if the angle formed between the suspected safety belt edge lines is smaller than a preset angle threshold, obtaining a face feature point in the image of the vehicle occupant.
Step S620, determining face size information according to the distance information of the face characteristic points in the image of the person in the vehicle.
Step S622, determining a threshold range parameter for the suspected safety belt edge line according to the face size information, and obtaining a temporary threshold range.
Step S624, when the distance between the suspected safety belt edge lines meets the temporary threshold range, determining that the vehicle occupant has worn a safety belt.
It should be noted that, for the specific limitations of the above steps, reference may be made to the specific limitations of the safety belt wearing detection method above, and details are not described herein again.
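For orientation only, the sketch below chains the earlier helper sketches (torso_roi, detect_line_segments, filter_by_angle, nearly_parallel, mean_edge_distance, temporary_threshold_range) into a rough end-to-end flow; the pairwise merging and length filtering of steps S614 and S616 are omitted for brevity, so this is a simplification rather than an implementation of the claimed method.

```python
import cv2

def seatbelt_worn(image, face_rect):
    """face_rect = (x, y, w, h); returns True if a qualifying belt-edge pair is found."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    bounds = temporary_threshold_range(gray)
    if bounds is None:
        return False                     # no face found, so no threshold range available
    lo, hi = bounds
    segments = filter_by_angle(detect_line_segments(torso_roi(image, face_rect)))
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if not nearly_parallel(segments[i], segments[j]):
                continue
            if lo <= mean_edge_distance(segments[i], segments[j]) <= hi:
                return True              # near-parallel pair at belt-like spacing
    return False
```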
It should be understood that although the steps in the flowcharts of fig. 2 and fig. 6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 6 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a seatbelt wearing detection device including:
the acquisition module 710 is configured to acquire an image of a person in the vehicle, and determine a trunk area of the person in the vehicle from the image to obtain a trunk image;
a determining module 720 for determining a plurality of lines in the torso image; wherein each of the lines has a corresponding geometric feature in the torso image;
the clustering module 730 is configured to perform clustering processing on the multiple lines according to the geometric features corresponding to each line to obtain suspected safety belt edge lines;
and an output module 740, configured to output a seat belt wearing detection result of the person in the vehicle according to the suspected seat belt edge line.
In one embodiment, the geometric features include inclination angles of the line segments, and the clustering module 730 is specifically configured to determine a line segment to be clustered in the plurality of line segments according to the inclination angle of each line segment; the inclination angle of the line segment to be clustered accords with a preset angle range; merging the line segments to be clustered according to the position relationship among the line segments to be clustered to obtain merged line segments; and screening the merged segments according to a preset segment length threshold value to determine the suspected safety belt edge line.
In one embodiment, the geometric features further include end point coordinates of the line segments, the line segments to be clustered include a first line segment and a second line segment, and the clustering module 730 is specifically configured to, when the first line segment and the second line segment are not intersected, obtain an end point ordinate interval of the first line segment, and obtain an end point ordinate interval of the second line segment; judging whether the first line segment and the second line segment need to be merged or not based on an interval overlapping relation between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment; and if so, combining the first line segment and the second line segment to obtain the combined line segment.
In one embodiment, the clustering module 730 is specifically configured to, when there is no intersection between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment, determine, as the first distance, a closest distance between the endpoint of the first line segment and the endpoint of the second line segment according to the endpoint coordinate of the first line segment and the endpoint coordinate of the second line segment; when the first distance is smaller than a preset first distance threshold, judging that the first line segment and the second line segment need to be merged;
when the end point ordinate interval of the first line segment is partially overlapped with the end point ordinate interval of the second line segment, determining the closest distance between the end point of the first line segment and the second line segment as a second distance according to the end point coordinate of the first line segment and the end point coordinate of the second line segment; when the second distance is smaller than a preset second distance threshold, judging that the first line segment and the second line segment need to be merged;
when the endpoint ordinate interval of the first line segment comprises the endpoint ordinate interval of the second line segment, determining the closest distance between the midpoint of the second line segment and the first line segment as a third distance according to the endpoint coordinate of the first line segment and the endpoint coordinate of the second line segment; and when the third distance is smaller than a preset third distance threshold, judging that the first line segment and the second line segment need to be merged.
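For readers who prefer pseudocode, the three interval cases above can be sketched as follows. This is a sketch only: the distance thresholds d1, d2 and d3 are placeholders for the preset first, second and third distance thresholds, whose concrete values the embodiment leaves to configuration, and only the case in which the first segment's interval contains the second's is written out, as in the text.

import math

def y_interval(segment):
    x1, y1, x2, y2 = segment
    return (min(y1, y2), max(y1, y2))

def point_distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def point_segment_distance(p, segment):
    """Shortest distance from point p to the segment."""
    x1, y1, x2, y2 = segment
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return point_distance(p, (x1, y1))
    t = ((p[0] - x1) * dx + (p[1] - y1) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return point_distance(p, (x1 + t * dx, y1 + t * dy))

def should_merge(seg_a, seg_b, d1=10.0, d2=5.0, d3=5.0):
    """Decide whether two non-intersecting segments should be merged."""
    (a_lo, a_hi), (b_lo, b_hi) = y_interval(seg_a), y_interval(seg_b)
    a_pts = [(seg_a[0], seg_a[1]), (seg_a[2], seg_a[3])]
    b_pts = [(seg_b[0], seg_b[1]), (seg_b[2], seg_b[3])]
    if a_hi < b_lo or b_hi < a_lo:
        # Case 1: disjoint ordinate intervals -> compare the closest
        # endpoint-to-endpoint distance with the first threshold d1.
        first = min(point_distance(p, q) for p in a_pts for q in b_pts)
        return first < d1
    if a_lo <= b_lo and b_hi <= a_hi:
        # Case 3: the first segment's interval contains the second's ->
        # compare the distance from the midpoint of seg_b to seg_a with d3.
        mid_b = ((seg_b[0] + seg_b[2]) / 2.0, (seg_b[1] + seg_b[3]) / 2.0)
        return point_segment_distance(mid_b, seg_a) < d3
    # Case 2: partially overlapping intervals -> compare the closest
    # distance from an endpoint of seg_a to seg_b with d2.
    second = min(point_segment_distance(p, seg_b) for p in a_pts)
    return second < d2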
In one embodiment, the output module 740 is specifically configured to, if the angle formed between the suspected seat belt edge lines is smaller than a preset angle threshold, acquire a temporary threshold range, wherein the temporary threshold range is determined according to size information of a reference object in the image of the vehicle occupant; and when the distance between the suspected seat belt edge lines conforms to the temporary threshold range, determine that the person in the vehicle has worn the safety belt.
In one embodiment, the reference object is a human face, and the output module 740 is specifically configured to obtain human face feature points in the image of the person in the vehicle; determining face size information according to the distance information of the face characteristic points in the image of the person in the vehicle; and determining a threshold range parameter aiming at the suspected safety belt edge line according to the face size information to obtain the temporary threshold range.
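As a sketch of how a temporary threshold range might be derived from the face and applied, the code below uses the horizontal extent of the face feature points as the face size and the distance between the midpoints of the two edge lines as their separation; both measures, the ratios 0.15 and 0.6, and the 10-degree angle threshold are illustrative assumptions, since the embodiment only requires that the range be derived from the face size information.

import math

def face_width_from_landmarks(landmarks):
    """Face size as the horizontal extent of a list of (x, y) feature points."""
    xs = [p[0] for p in landmarks]
    return max(xs) - min(xs)

def temporary_threshold_range(face_width, low_ratio=0.15, high_ratio=0.6):
    """Map the face size to a plausible belt-width range in pixels (illustrative ratios)."""
    return face_width * low_ratio, face_width * high_ratio

def line_angle_deg(segment):
    x1, y1, x2, y2 = segment
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def midpoint(segment):
    return ((segment[0] + segment[2]) / 2.0, (segment[1] + segment[3]) / 2.0)

def seatbelt_worn(edge_a, edge_b, face_width, angle_threshold_deg=10.0):
    """Judge the belt worn when the two edge lines are nearly parallel and
    their separation falls inside the temporary threshold range."""
    diff = abs(line_angle_deg(edge_a) - line_angle_deg(edge_b))
    diff = min(diff, 180.0 - diff)            # angle formed between the two lines
    if diff >= angle_threshold_deg:
        return False
    ma, mb = midpoint(edge_a), midpoint(edge_b)
    dist = math.hypot(ma[0] - mb[0], ma[1] - mb[1])   # midpoint distance as the line separation
    low, high = temporary_threshold_range(face_width)
    return low <= dist <= high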
In one embodiment, the obtaining module 710 is specifically configured to determine a face area of the person in the vehicle from the image of the person in the vehicle; acquiring position information and size information of the human face area in the image of the person in the vehicle; determining the trunk area of the person in the vehicle according to the position information and the size information; and extracting the trunk image from the image of the person in the vehicle according to the trunk area.
For the specific definition of the seat belt wearing detection device, reference may be made to the definition of the seat belt wearing detection method above, and details are not described herein again. Each module in the above seat belt wearing detection device may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
To facilitate understanding by those skilled in the art, fig. 8 provides a system flow diagram of the seat belt wearing detection method. The seat belt wearing detection device 110 receives an image of a person in the vehicle, performs face detection on the image to obtain the torso coordinates of the person, and thereby obtains the torso image of the person. The device 110 then performs line segment detection, line segment filtering and line segment clustering on the torso image to obtain suspected seat belt edge lines. Finally, the device 110 performs seat belt determination on the suspected seat belt edge lines and outputs the determination result. For the specific limitations of the above steps, reference may be made to the specific limitations of the seat belt wearing detection method above, and details are not described herein again.
To facilitate understanding by those skilled in the art, fig. 9 provides a line segment clustering flow chart of the seat belt wearing detection method. The seat belt wearing detection device 110 performs angle screening on the plurality of line segments in the torso image and eliminates the segments that do not meet the angle screening condition, and then performs intersection detection on the segments that do. When two segments intersect, that is, when a first line segment intersects a second line segment, the device 110 merges the first and second line segments directly to obtain the start and end points of the merged segment. When the first and second line segments do not intersect, the device 110 decides whether to merge them based on the distance relationship between the endpoints of the first line segment and the endpoints of the second line segment, and then obtains the start and end points of the merged segment. For the specific limitations of the above steps, reference may be made to the specific limitations of the seat belt wearing detection method above, and details are not described herein again.
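The intersecting branch of fig. 9 can be sketched as follows; the orientation-based crossing test is a standard geometric routine, and taking the merged segment's start and end points as the two farthest-apart endpoints of the pair is an assumption about how the merged line is represented, not a limitation of the method.

def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_intersect(seg_a, seg_b):
    """True if the two segments properly cross each other."""
    a1, a2 = (seg_a[0], seg_a[1]), (seg_a[2], seg_a[3])
    b1, b2 = (seg_b[0], seg_b[1]), (seg_b[2], seg_b[3])
    d1, d2 = _orient(a1, a2, b1), _orient(a1, a2, b2)
    d3, d4 = _orient(b1, b2, a1), _orient(b1, b2, a2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def merge_segments(seg_a, seg_b):
    """Merged segment: the two endpoints of the pair that are farthest apart."""
    pts = [(seg_a[0], seg_a[1]), (seg_a[2], seg_a[3]),
           (seg_b[0], seg_b[1]), (seg_b[2], seg_b[3])]
    best = max(((p, q) for p in pts for q in pts),
               key=lambda pq: (pq[0][0] - pq[1][0]) ** 2 + (pq[0][1] - pq[1][1]) ** 2)
    return (*best[0], *best[1])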
To facilitate understanding by those skilled in the art, fig. 10 provides a flow diagram of the seat belt determination in the seat belt wearing detection method. The seat belt wearing detection device 110 performs length screening on the candidate line segments and eliminates the segments that do not satisfy the length screening condition; performs angle screening on the segments that satisfy the length screening condition and eliminates the segments that do not satisfy the angle screening condition; performs distance screening on the segments that satisfy the angle screening condition and eliminates the segments that do not satisfy the distance screening condition; and takes the segments that satisfy the distance screening condition as the seat belt edge lines.
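Putting the stages of fig. 10 together, a hypothetical screening chain might look like the following; the length, angle and distance thresholds are placeholders, and the pairwise midpoint distance is the same illustrative simplification used above.

import math
from itertools import combinations

def seg_length(segment):
    x1, y1, x2, y2 = segment
    return math.hypot(x2 - x1, y2 - y1)

def seg_angle_deg(segment):
    x1, y1, x2, y2 = segment
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def seg_midpoint(segment):
    return ((segment[0] + segment[2]) / 2.0, (segment[1] + segment[3]) / 2.0)

def screen_edge_lines(segments, min_length=60.0, angle_range=(20.0, 70.0),
                      distance_range=(15.0, 80.0), max_angle_diff=10.0):
    """Length screening, then angle screening, then distance screening on pairs
    of survivors; returns the first pair accepted as belt edge lines, or None."""
    survivors = [s for s in segments if seg_length(s) >= min_length]          # length screening
    low_a, high_a = angle_range
    survivors = [s for s in survivors if low_a <= seg_angle_deg(s) <= high_a]  # angle screening
    low_d, high_d = distance_range
    for a, b in combinations(survivors, 2):                                    # distance screening
        diff = abs(seg_angle_deg(a) - seg_angle_deg(b))
        diff = min(diff, 180.0 - diff)
        ma, mb = seg_midpoint(a), seg_midpoint(b)
        dist = math.hypot(ma[0] - mb[0], ma[1] - mb[1])
        if diff < max_angle_diff and low_d <= dist <= high_d:
            return a, b
    return None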
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing geometric feature data of the line segments. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a seat belt wear detection method.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of a seatbelt wear detection method as described above. Here, the steps of a seat belt wear detection method may be the steps in a seat belt wear detection method of the above-described embodiments.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored, which, when executed by a processor, causes the processor to carry out the steps of a seat belt wear detection method as described above. Here, the steps of a seat belt wear detection method may be the steps in a seat belt wear detection method of the above-described embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above embodiments merely express several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A seat belt wear detection method, characterized in that the method comprises:
acquiring an image of a person in the vehicle, and determining a trunk area of the person in the vehicle from the image to obtain a trunk image;
determining a plurality of lines in the torso image; wherein each of the lines has a corresponding geometric feature in the torso image;
clustering the lines according to the geometric characteristics corresponding to each line to obtain suspected safety belt edge lines;
and outputting a safety belt wearing detection result of the person in the vehicle according to the suspected safety belt edge line.
2. The method according to claim 1, wherein when the lines are line segments, the geometric features include inclination angles of the line segments, and the clustering the lines according to the geometric features corresponding to each line to obtain suspected seat belt edge lines comprises:
determining a line segment to be clustered in the trunk image according to the inclination angle of each line segment; the inclination angle of the line segment to be clustered accords with a preset angle range;
merging the line segments to be clustered according to the position relationship among the line segments to be clustered to obtain merged line segments;
and screening the merged segments according to a preset segment length threshold value to determine the suspected safety belt edge line.
3. The method according to claim 2, wherein the geometric features further include end point coordinates of the line segments, the line segments to be clustered include a first line segment and a second line segment, and the merging the line segments to be clustered according to the position relationship between the line segments to be clustered to obtain a merged line segment includes:
when the first line segment and the second line segment are not intersected, acquiring an endpoint ordinate interval of the first line segment and acquiring an endpoint ordinate interval of the second line segment;
judging whether the first line segment and the second line segment need to be merged or not based on an interval overlapping relation between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment;
and if so, combining the first line segment and the second line segment to obtain the combined line segment.
4. The method of claim 3, wherein the determining whether the first line segment and the second line segment need to be merged based on an interval overlapping relation between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment comprises:
when no intersection exists between the endpoint ordinate interval of the first line segment and the endpoint ordinate interval of the second line segment, determining the closest distance between the endpoint of the first line segment and the endpoint of the second line segment as a first distance according to the endpoint coordinate of the first line segment and the endpoint coordinate of the second line segment;
when the first distance is smaller than a preset first distance threshold, judging that the first line segment and the second line segment need to be merged;
alternatively,
when the end point ordinate interval of the first line segment is partially overlapped with the end point ordinate interval of the second line segment, determining the closest distance between the end point of the first line segment and the second line segment as a second distance according to the end point coordinate of the first line segment and the end point coordinate of the second line segment;
when the second distance is smaller than a preset second distance threshold, judging that the first line segment and the second line segment need to be merged;
alternatively,
when the endpoint ordinate interval of the first line segment comprises the endpoint ordinate interval of the second line segment, determining the closest distance between the midpoint of the second line segment and the first line segment as a third distance according to the endpoint coordinate of the first line segment and the endpoint coordinate of the second line segment;
and when the third distance is smaller than a preset third distance threshold, judging that the first line segment and the second line segment need to be merged.
5. The method according to claim 1, wherein outputting the belt wearing detection result of the vehicle occupant according to the suspected belt edge line includes:
if the angle formed between the suspected safety belt edge lines is smaller than a preset angle threshold, acquiring a temporary threshold range, wherein the temporary threshold range is determined according to the size information of a reference object in the image of the person in the vehicle;
and when the distance between the suspected safety belt edge lines accords with the temporary threshold range, judging that the vehicle interior person wears the safety belt.
6. The method according to claim 5, wherein the reference object is a human face, and before outputting the seat belt wearing detection result of the vehicle occupant according to the suspected seat belt edge line, the method further comprises:
acquiring a face characteristic point in the image of the person in the vehicle;
determining face size information according to the distance information of the face characteristic points in the image of the person in the vehicle;
and determining a threshold range parameter aiming at the suspected safety belt edge line according to the face size information to obtain the temporary threshold range.
7. The method according to any one of claims 1 to 6, wherein the obtaining an image of a person in the vehicle, determining a torso region of the person in the vehicle from the image, and obtaining a torso image comprises:
determining a human face area of the person in the vehicle in the image of the person in the vehicle;
acquiring position information and size information of the human face area in the image of the person in the vehicle;
determining the trunk area of the person in the vehicle according to the position information and the size information;
and extracting the trunk image from the image of the person in the vehicle according to the trunk area.
8. A seatbelt wearing detection device, characterized in that the device comprises:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring images of people in the vehicle, and determining the trunk area of the people in the vehicle from the images to obtain a trunk image;
a determining module for determining a plurality of lines in the torso image; wherein each of the lines has a corresponding geometric feature in the torso image;
the clustering module is used for clustering the lines according to the geometric characteristics corresponding to each line to obtain suspected safety belt edge lines;
and the output module is used for outputting the safety belt wearing detection result of the person in the vehicle according to the suspected safety belt edge line.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010327888.5A 2020-04-23 2020-04-23 Safety belt wearing detection method and device based on artificial intelligence software technology Pending CN111582077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010327888.5A CN111582077A (en) 2020-04-23 2020-04-23 Safety belt wearing detection method and device based on artificial intelligence software technology

Publications (1)

Publication Number Publication Date
CN111582077A true CN111582077A (en) 2020-08-25

Family

ID=72127143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010327888.5A Pending CN111582077A (en) 2020-04-23 2020-04-23 Safety belt wearing detection method and device based on artificial intelligence software technology

Country Status (1)

Country Link
CN (1) CN111582077A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010113506A (en) * 2008-11-06 2010-05-20 Aisin Aw Co Ltd Occupant position detection device, occupant position detection method, and occupant position detection program
CN102915640A (en) * 2012-10-30 2013-02-06 武汉烽火众智数字技术有限责任公司 Safety belt detecting method based on Hough transform
CN103150556A (en) * 2013-02-20 2013-06-12 西安理工大学 Safety belt automatic detection method for monitoring road traffic
CN104112141A (en) * 2014-06-29 2014-10-22 中南大学 Method for detecting lorry safety belt hanging state based on road monitoring equipment
KR20150045235A (en) * 2013-10-18 2015-04-28 아이브스테크놀러지(주) Apparatus and method for detecting seat belt
CN107330367A (en) * 2017-05-27 2017-11-07 湖北天业云商网络科技有限公司 A kind of Safe belt detection method and system positioned based on trunk

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001344A (en) * 2020-08-31 2020-11-27 深圳市豪恩汽车电子装备股份有限公司 Motor vehicle target detection device and method
CN112232136A (en) * 2020-09-22 2021-01-15 北京紫光展锐通信技术有限公司 Vehicle safety belt detection method and device, electronic equipment and storage medium
CN112132040A (en) * 2020-09-24 2020-12-25 明见(厦门)软件开发有限公司 Vision-based safety belt real-time monitoring method, terminal equipment and storage medium
CN112132040B (en) * 2020-09-24 2024-03-15 明见(厦门)软件开发有限公司 Vision-based safety belt real-time monitoring method, terminal equipment and storage medium
CN115123141A (en) * 2022-07-14 2022-09-30 东风汽车集团股份有限公司 Vision-based passenger safety belt reminding device and method

Similar Documents

Publication Publication Date Title
EP3539054B1 (en) Neural network image processing apparatus
CN111582077A (en) Safety belt wearing detection method and device based on artificial intelligence software technology
Yan et al. A method of lane edge detection based on Canny algorithm
CN111178245A (en) Lane line detection method, lane line detection device, computer device, and storage medium
CN110889355B (en) Face recognition verification method, face recognition verification system and storage medium
US10867403B2 (en) Vehicle external recognition apparatus
US20040252863A1 (en) Stereo-vision based imminent collision detection
Kortli et al. A novel illumination-invariant lane detection system
CN110826370B (en) Method and device for identifying identity of person in vehicle, vehicle and storage medium
EP2000979A2 (en) Road video image analyzing device and road video image analyzing method
US10043279B1 (en) Robust detection and classification of body parts in a depth map
CN111461170A (en) Vehicle image detection method and device, computer equipment and storage medium
US10521659B2 (en) Image processing device, image processing method, and image processing program
CN109255802B (en) Pedestrian tracking method, device, computer equipment and storage medium
US11657592B2 (en) Systems and methods for object recognition
CN112101195B (en) Crowd density estimation method, crowd density estimation device, computer equipment and storage medium
CN115066708A (en) Point cloud data motion segmentation method and device, computer equipment and storage medium
Romera et al. A Real-Time Multi-scale Vehicle Detection and Tracking Approach for Smartphones.
CN103810696A (en) Method for detecting image of target object and device thereof
CN108052921B (en) Lane line detection method, device and terminal
CN113553938A (en) Safety belt detection method and device, computer equipment and storage medium
CN111160086A (en) Lane line recognition method, lane line recognition device, lane line recognition equipment and storage medium
CN111178224A (en) Object rule judging method and device, computer equipment and storage medium
CN111144404A (en) Legacy object detection method, device, system, computer device, and storage medium
CN108090425B (en) Lane line detection method, device and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination