CN116206281B - Sight line detection method and device, electronic equipment and storage medium - Google Patents
Sight line detection method and device, electronic equipment and storage medium
- Publication number
- CN116206281B (publication of application CN202310465756.2A)
- Authority
- CN
- China
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention relates to the technical field of computer vision, and discloses a sight line detection method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring sight line detection images in at least two preset sight line detection areas with an in-vehicle camera, the preset sight line detection areas being divided based on the visual field range of a driver; determining, based on a preset motion detection algorithm, the motion characteristics corresponding to the sight line detection images in the at least two preset sight line detection areas, and determining a first sight line blocking weight from the motion characteristics and a preset first weight mapping relation; determining, based on a preset obstacle model, the obstacle distribution characteristics of the sight line detection images in the at least two preset sight line detection areas, and determining a second sight line blocking weight from the obstacle distribution characteristics and a preset second weight mapping relation; and determining the sight line blocking condition from the first and second sight line blocking weights under a preset screening condition. The accuracy with which the sight line blocking condition is described can thereby be improved.
Description
Technical Field
The present invention relates to the field of computer vision, and in particular, to a line-of-sight detection method, apparatus, electronic device, and storage medium.
Background
With economic development and the continuous improvement of living standards, more and more vehicles are on the road, and protecting the safety of drivers is an enduring goal of automobile development. Under severe conditions such as heavy rain, dust or snow in particular, a blocked line of sight often prevents a driver from spotting obstacles ahead in time, causing frequent traffic accidents.
With the popularization of intelligent automobiles, capturing the driver's line of sight with an in-car camera has become a research hotspot in sight line detection. Existing sight line detection methods generally fall into two categories: methods based on a template library and methods based on feature-point motion detection. These methods have the following disadvantages:
1. Template-library-based methods require manually designed matching templates, so detection precision suffers when the designer's experience is insufficient;
2. Methods based on feature-point motion detection describe sight-blocking behavior with a single feature, so their universality and robustness are low.
Disclosure of Invention
The invention provides a sight line detection method and device, electronic equipment and a storage medium, which solve the problems of low detection precision, poor universality and poor robustness in existing sight line detection methods, improve the accuracy with which the sight-blocking condition is described, and enhance the safety of a user driving a vehicle.
According to an aspect of the present invention, there is provided a line-of-sight detection method including:
acquiring sight line detection images in at least two preset sight line detection areas according to an in-vehicle camera, wherein the preset sight line detection areas are divided based on the visual field range of a driver;
determining motion characteristics corresponding to sight line detection images in at least two preset sight line detection areas respectively based on a preset motion detection algorithm, and determining a first sight line blocking weight according to the motion characteristics and a preset first weight mapping relation;
determining obstacle distribution characteristics of the sight line detection images in at least two preset sight line detection areas based on a preset obstacle model, and determining a second sight line blocking weight according to the obstacle distribution characteristics and a preset second weight mapping relation;
and determining the sight blocking condition based on the first sight blocking weight and the second sight blocking weight under the preset screening condition.
According to another aspect of the present invention, there is provided a line-of-sight detection apparatus, the apparatus comprising:
the image acquisition module is used for acquiring sight line detection images in at least two preset sight line detection areas according to the in-vehicle camera, wherein the preset sight line detection areas are divided based on the visual field range of a driver;
the first weight determining module is used for determining motion characteristics corresponding to the sight line detection images in at least two preset sight line detection areas respectively based on a preset motion detection algorithm, and determining a first sight line blocking weight according to the motion characteristics and a preset first weight mapping relation;
the second weight determining module is used for determining obstacle distribution characteristics of the sight line detection images in at least two preset sight line detection areas based on a preset obstacle model, and determining a second sight line blocking weight according to the obstacle distribution characteristics and a preset second weight mapping relation;
and the sight blocking determining module is used for determining the sight blocking condition based on the first sight blocking weight and the second sight blocking weight under the preset screening condition.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the gaze detection method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute a line of sight detection method according to any one of the embodiments of the present invention.
According to the technical scheme, sight line detection images in at least two preset sight line detection areas are acquired with an in-vehicle camera, the preset sight line detection areas being divided based on the driver's field of view; the motion characteristics of the sight line detection images in the areas are determined based on a preset motion detection algorithm, and first sight line blocking weights are determined from the motion characteristics and a preset first weight mapping relation; the obstacle distribution characteristics of the sight line detection images are determined based on a preset obstacle model, and second sight line blocking weights are determined from the obstacle distribution characteristics and a preset second weight mapping relation; and the sight-blocking condition is determined from the first and second sight line blocking weights under a preset screening condition. By dividing at least two preset sight line detection areas according to the driver's field of view, determining for each area a first and a second sight line blocking weight with the preset motion detection algorithm and the preset obstacle model respectively, and finally determining the sight-blocking condition from the two weights, the embodiment of the invention describes the sight-blocking condition adaptively through the motion characteristics and obstacle distribution characteristics of multiple detection areas, improving the robustness, accuracy and universality of sight line detection and enhancing the safety of the user driving the vehicle.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a line-of-sight detection method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a line-of-sight detection method according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a training process of a preset obstacle model according to a second embodiment of the invention;
fig. 4 is a flowchart of a line-of-sight detection method according to a third embodiment of the present invention;
fig. 5 is a flowchart of another line-of-sight detection method according to the third embodiment of the present invention;
fig. 6 is a schematic structural diagram of a sight line detection apparatus according to a fourth embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device implementing a line-of-sight detection method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a line-of-sight detection method according to an embodiment of the present invention, where the method may be applied to a case of detecting a line of sight of a driver, and the method may be performed by a line-of-sight detection device, which may be implemented in hardware and/or software, and the line-of-sight detection device may be configured in an electronic device, for example, may include a vehicle-mounted device, etc. As shown in fig. 1, the line-of-sight detection method provided in the first embodiment specifically includes the following steps:
s110, acquiring sight line detection images in at least two preset sight line detection areas according to an in-vehicle camera, wherein the preset sight line detection areas are divided based on the visual field range of a driver.
In the embodiment of the present invention, a preset sight line detection area may be understood as a sight line area, divided based on the driver's field of view, used to detect whether the driver's sight line is blocked. The number of preset sight line detection areas is at least two, and the specific number may be set according to actual requirements. A preset sight line detection area may be located in front of the front windshield or the wipers of the vehicle, and may be painted in a color clearly distinguishable from the vehicle body, for example blue or green. The in-vehicle cameras are used to collect the sight line detection images corresponding to the preset sight line detection areas; the number of in-vehicle cameras may match the number of areas, with each camera collecting the sight line detection image of one area, or a single in-vehicle camera may collect the sight line detection images of all preset sight line detection areas at the same time.
Specifically, the in-vehicle camera may be used to collect sight line detection images of the preset sight line detection areas within the driver's field of view. After the sight line detection images corresponding to the preset sight line detection areas are obtained, they may further be segmented to generate image slices corresponding to each sight line detection image; these slices are then used to detect the driver's sight-blocking condition, which improves the robustness and accuracy of the sight-blocking judgment.
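As an illustration of the region division described above, the following minimal Python sketch splits a camera frame into a grid of detection-area slices. The 1x3 grid layout and the function name are assumptions chosen for illustration; the patent only requires that at least two areas be divided over the driver's field of view.

```python
import numpy as np

def split_into_regions(frame, rows=1, cols=3):
    """Split a camera frame into a grid of preset sight line detection
    region slices. The 1x3 default layout is an illustrative assumption;
    any partition of at least two regions would fit the described scheme."""
    h, w = frame.shape[:2]
    regions = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            regions.append(frame[y0:y1, x0:x1])
    return regions

# Example: a 120x300 frame yields three 120x100 slices.
frame = np.zeros((120, 300, 3), dtype=np.uint8)
slices = split_into_regions(frame)
```

Each slice can then be processed independently in the following steps, which is what allows the per-area weights described below.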
S120, determining motion characteristics corresponding to the sight line detection images in at least two preset sight line detection areas respectively based on a preset motion detection algorithm, and determining a first sight line blocking weight according to the motion characteristics and a preset first weight mapping relation.
In the embodiment of the present invention, the preset motion detection algorithm may be understood as a preconfigured algorithm for extracting the motion characteristics of each sight line detection image; it may include an optical flow algorithm, a continuous inter-frame difference algorithm, a background difference algorithm, and the like. A motion characteristic characterizes the motion in a sight line detection image and may include, for example, the motion direction and motion speed of pixels or key points. The first sight blocking weight is the sight blocking weight determined from the motion characteristics; its value may be expressed as 1 or 0, where 1 indicates that the driver's sight is blocked and 0 indicates that it is not. The preset first weight mapping relation is a preconfigured mapping between the motion characteristics and the first sight blocking weight; it may include, for example, the motion characteristics of several consecutive frames in a preset sight line detection area alternating between positive and negative, or changing abruptly several times, and the like.
Specifically, a preset motion detection algorithm may be used to extract the motion characteristics of the sight line detection images in each preset sight line detection area. The algorithm may include, but is not limited to, an optical flow algorithm, a continuous inter-frame difference algorithm or a background difference algorithm, and accordingly the extracted motion characteristics may include, but are not limited to, optical flow direction features, continuous frame differential features, background differential features, and the like. The motion characteristics determined for each preset sight line detection area are then compared against the preset first weight mapping relation, which may include, but is not limited to: the motion characteristics of several consecutive frames alternating between positive and negative, or changing abruptly several times. When the motion characteristics of a preset sight line detection area satisfy the preset first weight mapping relation, the first sight blocking weight of that area may be set to 1, indicating that the driver's sight is blocked; otherwise it may be set to 0, indicating that the sight is not blocked. It should be understood that using 1 for blocked and 0 for not blocked is merely an example; in practical applications any representation that distinguishes the blocked and unblocked cases may be used, for example Y for blocked and N for not blocked, and the present invention is not limited thereto.
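The motion-feature step above can be sketched as follows. This is a hedged simplification: the signed mean frame difference stands in for the optical-flow or inter-frame-difference feature named in the text, the sign-alternation count stands in for the preset first weight mapping relation, and the threshold of two sign flips is an assumed value, not one given in the patent.

```python
import numpy as np

def motion_feature(prev, curr):
    """Signed mean intensity change between two consecutive frames of one
    region -- a stand-in for the inter-frame-difference motion feature."""
    return float(np.mean(curr.astype(np.int32) - prev.astype(np.int32)))

def first_blocking_weight(features, min_flips=2):
    """Return 1 (sight blocked) if the per-frame motion feature alternates
    in sign at least `min_flips` times over the frame sequence -- an assumed
    concrete form of the 'positive/negative alternation' mapping -- else 0."""
    signs = [f > 0 for f in features if f != 0]
    flips = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return 1 if flips >= min_flips else 0
```

A wiper sweeping back and forth in front of the region would produce exactly this alternating-sign signature across consecutive frames, which is presumably why the mapping relation targets it.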
S130, determining obstacle distribution characteristics of the sight line detection images in at least two preset sight line detection areas based on a preset obstacle model, and determining a second sight line blocking weight according to the obstacle distribution characteristics and a preset second weight mapping relation.
In the embodiment of the present invention, the preset obstacle model may be understood as a pretrained model for detecting obstacles in each sight line detection image; it may be trained in advance based on algorithms such as a support vector machine (Support Vector Machine, SVM), K-nearest neighbors (K-Nearest Neighbor, KNN) or a convolutional neural network (Convolutional Neural Network, CNN). An obstacle distribution characteristic describes the obstacle distribution in a sight line detection image and may include, for example, the proportion of image features corresponding to the image that satisfy the preset obstacle model, or the proportion that do not. The second sight blocking weight is the sight blocking weight determined from the obstacle distribution characteristics; its value may be set to 1 or 0, where 1 indicates that the driver's sight is blocked and 0 indicates that it is not. The preset second weight mapping relation is a preconfigured mapping between the obstacle distribution characteristics and the second sight blocking weight; it may include, for example, the obstacle distribution characteristic reaching a preset threshold value, or falling within a preset threshold range, and the like.
Specifically, the pretrained preset obstacle model may be used to extract the obstacle distribution characteristics of the sight line detection images in each preset sight line detection area: each sight line detection image may be input directly into the preset obstacle model, or its image features may be extracted first and then input into the model. The obstacle distribution characteristics may include, but are not limited to, the proportion of image features that satisfy the preset obstacle model or the proportion that do not. The extracted obstacle distribution characteristics of each preset sight line detection area are then compared against the preset second weight mapping relation, which may include, but is not limited to: the characteristic reaching a preset threshold value, or falling within a preset threshold range. When the obstacle distribution characteristic satisfies the preset second weight mapping relation, the second sight blocking weight of the corresponding preset sight line detection area may be set to 1, indicating that the driver's sight is blocked; otherwise it may be set to 0, indicating that the sight is not blocked. As above, using 1 for blocked and 0 for not blocked is merely an example, and other representations may be used in practical applications.
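The second-weight step can be sketched in the same style. Here the obstacle distribution characteristic is taken to be the proportion of image slices that a pretrained classifier (the SVM or CNN mentioned above, represented here simply by its precomputed per-slice labels) marks as obstacle; the 0.3 proportion threshold is an illustrative assumption, not a value from the patent.

```python
def second_blocking_weight(slice_labels, threshold=0.3):
    """Map an obstacle-distribution characteristic to the second blocking
    weight. `slice_labels` holds per-slice classifier outputs (1 = obstacle
    detected); the weight is 1 when the proportion of obstacle slices
    reaches `threshold` -- an assumed form of the second mapping relation."""
    if not slice_labels:
        return 0
    proportion = sum(slice_labels) / len(slice_labels)
    return 1 if proportion >= threshold else 0
```

Separating the classifier from the threshold rule means the mapping relation can be retuned without retraining the obstacle model, which matches the patent's split between the preset obstacle model and the preset second weight mapping relation.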
And S140, determining the sight blocking condition based on the first sight blocking weight and the second sight blocking weight under the preset screening condition.
In the embodiment of the present invention, the preset screening condition may be understood as a preconfigured condition for judging the driver's sight-blocking condition from the first sight blocking weight and the second sight blocking weight; it may include, for example, at least one of the first and second sight blocking weights being equal to 1, or at least two of them being equal to 1, and the like.
Specifically, the first and second sight blocking weights obtained for each preset sight line detection area may be compared against the preset screening condition, which may include, but is not limited to: at least one of the first and second sight blocking weights being equal to 1, or at least two of them being equal to 1. If the preset screening condition is satisfied, the sight-blocking condition is determined as blocked; if not, the line of sight is determined to be unobstructed.
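The screening step reduces to a simple combination rule over the per-area weights. The "at least two weights equal to 1" variant from the examples above is used here; `min_hits` is a hypothetical parameter name introduced for this sketch.

```python
def sight_blocked(first_weights, second_weights, min_hits=2):
    """Combine per-area first and second blocking weights under a preset
    screening condition: the sight counts as blocked when at least
    `min_hits` of all the weights equal 1 (an assumed concrete condition)."""
    hits = sum(first_weights) + sum(second_weights)
    return hits >= min_hits
```

Requiring agreement across weights (or across areas) is what makes the final decision robust: a single spurious motion cue or classifier false positive in one area does not by itself flag the sight as blocked.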
According to the technical scheme, sight line detection images in at least two preset sight line detection areas are acquired according to an in-vehicle camera, wherein the preset sight line detection areas are divided based on the sight field range of a driver, movement characteristics corresponding to the sight line detection images in the at least two preset sight line detection areas are determined based on a preset movement detection algorithm, first sight line blocking weights are determined according to the movement characteristics and a preset first weight mapping relation, obstacle distribution characteristics of the sight line detection images in the at least two preset sight line detection areas are determined based on a preset obstacle model, second sight line blocking weights are determined according to the obstacle distribution characteristics and a preset second weight mapping relation, and sight line blocking conditions are determined based on the first sight line blocking weights and the second sight line blocking weights under preset screening conditions. According to the embodiment of the invention, at least two preset sight line detection areas are divided according to the visual field range of a driver, the first sight line blocking weight and the second sight line blocking weight corresponding to each preset sight line detection area are respectively determined according to the preset motion detection algorithm and the preset obstacle model, the sight line blocking condition is finally determined according to the first sight line blocking weight and the second sight line blocking weight, the sight line blocking condition can be adaptively described through the motion characteristics and the obstacle distribution characteristics of multiple detection areas, the robustness and the accuracy of sight line detection are improved, the universality is better, and the safety of the user driving the vehicle is further improved.
Example two
Fig. 2 is a flowchart of a line-of-sight detection method according to a second embodiment of the present invention, which is further optimized and expanded based on the foregoing embodiments, and may be combined with each of the optional technical solutions in the foregoing embodiments. As shown in fig. 2, the line-of-sight detection method provided in the second embodiment specifically includes the following steps:
S210, dividing at least two preset sight line detection areas based on the visual field range of a driver, and acquiring sight line detection images of at least five continuous frames in the corresponding preset sight line detection areas by using an in-vehicle camera.
S220, extracting detection key points corresponding to the sight line detection images in at least two preset sight line detection areas by adopting a Harris corner detection algorithm.
In the implementation of the invention, the Harris corner detection algorithm may be understood as a feature extraction algorithm for extracting local features of each sight line detection image. Its basic principle is to move a local window over the image and judge whether the gray level changes significantly; if the gray value (on the gradient map) within the window changes significantly, a corner exists in the area where the window is located.
Specifically, the existing Harris corner detection algorithm can be adopted to extract corner features corresponding to the sight line detection images in each preset sight line detection area respectively, and the extracted corner features are used as detection key points corresponding to the sight line detection images respectively. It should be understood that, in the embodiment of the present invention, extracting the corner feature of the line of sight detection image is merely taken as an example, and other types of feature points of the line of sight detection image may be used in practical applications, for example, may include, but not limited to: scale-invariant feature transform (SIFT) feature points, acceleration robust feature (Speeded Up Robust Features, SURF) feature points, ORB (Oriented FAST and Rotated BRIEF) feature points, direction gradient histogram (Histogram of Oriented Gradient, HOG) feature points, FAST feature points, BRISK feature points, and local binary pattern (Local Binary Pattern, LBP) feature points, etc., to which embodiments of the present invention are not limited.
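As a rough illustration of the Harris principle described above (not the patent's implementation), the corner response can be computed from a box-filtered structure tensor; `k=0.04` and the box window are conventional simplifications:

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris response R = det(S) - k * trace(S)^2 per pixel, where S is
    the structure tensor box-filtered over a win x win window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)                  # gradients along rows, cols
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                                # win x win box filter
        pad = win // 2
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A bright square on a dark background: the response peaks at its corners.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

Detection key points would then be the local maxima of `R` above a threshold; a production system would typically rely on `cv2.cornerHarris` or `cv2.goodFeaturesToTrack` instead.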
S230, calling a preset optical flow algorithm to generate optical flows corresponding to detection key points in the two adjacent frames of sight detection images.
In the implementation of the invention, the preset optical flow algorithm can be understood as a preconfigured algorithm for detecting the movement condition of each detection key point in the sight detection image, and the preset optical flow algorithm can comprise a Lucas-Kanade optical flow algorithm, a Horn-Schunck optical flow algorithm, a FlowNet optical flow algorithm based on a neural network and the like.
Specifically, a preset optical flow algorithm may be invoked to track the detection key points in two adjacent frames of sight line detection images, and the motion condition of each detection key point is taken as the corresponding optical flow, where the preset optical flow algorithm may include, but is not limited to, the following algorithms: the Lucas-Kanade optical flow algorithm, the Horn-Schunck optical flow algorithm, the FlowNet optical flow algorithm, and the like. In some embodiments, a pyramid Lucas-Kanade optical flow algorithm may be invoked to iteratively track all detection key points in two adjacent frames of sight line detection images, and the motion condition of the same detection key point in the two adjacent frames is taken as the corresponding optical flow.
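A single-window Lucas-Kanade solve can be sketched as follows. This is a hedged illustration (no pyramid, no iteration) of how a keypoint's displacement between two adjacent frames is obtained from the least-squares normal equations; the helper name is hypothetical:

```python
import numpy as np

def lk_flow_at(prev, curr, y, x, win=7):
    """Single-window Lucas-Kanade: solve A d = b in a least-squares sense
    for the displacement d = (dx, dy) of the keypoint at (y, x)."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)                 # spatial gradients
    It = curr - prev                           # temporal gradient
    r = win // 2
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dy, dx

# A smooth blob translated one pixel to the right between two frames.
yy, xx = np.mgrid[0:21, 0:21]
prev = np.exp(-((yy - 10.0) ** 2 + (xx - 10.0) ** 2) / 8.0)
curr = np.roll(prev, 1, axis=1)
dy, dx = lk_flow_at(prev, curr, 10, 10)        # dx near 1, dy near 0
```

In practice the pyramid variant mentioned above (e.g. `cv2.calcOpticalFlowPyrLK`) handles larger displacements by repeating this solve coarse-to-fine.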
S240, removing non-moving detection key points according to the optical flow, and taking the rest detection key points as moving points.
In the implementation of the present invention, a moving point may be understood as a detection key point that moves in two adjacent frames of line-of-sight detection images.
Specifically, whether each detection key point moves in the two adjacent frames of sight detection images or not can be detected according to the optical flow condition of all detection key points in the two adjacent frames of sight detection images, detection key points in which no movement occurs are removed, and the rest detection key points are used as moving points.
S250, connecting the same detection key points in two adjacent frames of sight detection images to serve as a moving point vector line.
Specifically, the same detection key points in the two adjacent frames of line-of-sight detection images may be connected by a vector line, and used as a moving point vector line.
S260, extracting the direction of the corresponding moving point vector line as a motion characteristic aiming at the preset sight line detection area.
Specifically, a corresponding moving point vector line may be extracted in each preset line of sight detection area, and a vector direction corresponding to the moving point vector line may be used as a motion feature corresponding to each preset line of sight detection area.
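Steps S240 to S260 can be sketched together: matched keypoints are joined by vector lines, static points are discarded, and each remaining vector's direction becomes the motion feature. The `min_disp` threshold is an illustrative assumption for "non-moving":

```python
import math

def motion_directions(prev_pts, curr_pts, min_disp=0.5):
    """Connect matched keypoints across two frames, drop the static ones,
    and return the direction (angle in radians) of each vector line."""
    dirs = []
    for (y0, x0), (y1, x1) in zip(prev_pts, curr_pts):
        dy, dx = y1 - y0, x1 - x0
        if math.hypot(dy, dx) < min_disp:   # non-moving point: removed
            continue
        dirs.append(math.atan2(dy, dx))
    return dirs

# One keypoint moved two pixels to the right, one stayed put.
print(motion_directions([(5, 5), (8, 8)], [(5, 7), (8, 8)]))  # → [0.0]
```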
S270, judging whether the motion characteristics meet a preset first weight mapping relation, if so, setting the first sight blocking weight to be 1, and if not, setting the first sight blocking weight to be 0, wherein the preset first weight mapping relation at least comprises: the motion characteristics of at least five adjacent frames of sight detection images in the preset sight detection area are alternately changed in positive and negative.
Specifically, the motion characteristics of each preset sight line detection area determined in S260 may be compared with the preset first weight mapping relation to judge whether they satisfy it, where the preset first weight mapping relation at least includes: the motion characteristics of at least five adjacent frames of sight line detection images in the preset sight line detection area alternating between positive and negative. That is, the motion characteristic may take either a positive or a negative value, and when the motion characteristics of a preset sight line detection area alternate in sign across the frames, the first sight blocking weight corresponding to that preset sight line detection area may be set to 1, indicating that the driver's sight is blocked; if the relation is not satisfied, the first sight blocking weight corresponding to the preset sight line detection area may be set to 0, indicating that the driver's sight is not blocked.
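The first-weight rule — sign alternation of the per-frame motion feature across at least five adjacent frames, the pattern produced by, e.g., a wiper sweeping back and forth in front of the camera — might be checked like this (a sketch; the sign convention of the motion feature is assumed):

```python
def first_blocking_weight(motion_signs, min_frames=5):
    """Weight 1 if the per-frame motion feature strictly alternates
    in sign over at least `min_frames` consecutive frames, else 0."""
    if len(motion_signs) < min_frames:
        return 0
    run = 1
    for a, b in zip(motion_signs, motion_signs[1:]):
        if a * b < 0:           # sign flipped between adjacent frames
            run += 1
            if run >= min_frames:
                return 1
        else:
            run = 1
    return 0
```

A steady drift (all positive) or random jitter with repeated signs yields weight 0; only sustained back-and-forth motion yields weight 1.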
S280, extracting scale invariant feature transformation features and direction gradient histogram features of at least two sight line detection images according to a preset image processing rule.
In the implementation of the present invention, the preset image processing rule may be understood as an algorithm rule for extracting a Scale Invariant Feature Transform (SIFT) feature and a direction gradient Histogram (HOG) feature of each line-of-sight detection image, and the preset image processing rule may include a SIFT feature extraction algorithm, an HOG feature extraction algorithm, and the like.
Specifically, SIFT features and HOG features corresponding to each line-of-sight detection image may be extracted respectively according to a preset image processing rule, where the preset image processing rule may at least include: SIFT feature extraction algorithm, HOG feature extraction algorithm, etc. It should be understood that, extracting SIFT features and HOG features of the line-of-sight detection image in the embodiment of the present invention is merely exemplary, and other types of feature points of the line-of-sight detection image may be extracted in practical applications, for example, may include, but not limited to: SURF feature points, ORB feature points, FAST feature points, BRISK feature points, LBP feature points, etc., to which embodiments of the present invention are not limited.
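A minimal unsigned-orientation HOG, without the block normalisation of the full descriptor, illustrates the kind of feature extracted here (the cell size and bin count follow common defaults, not values fixed by the patent):

```python
import numpy as np

def hog_feature(img, cell=8, bins=9):
    """Per-cell histogram of gradient orientations in [0, 180) degrees,
    weighted by gradient magnitude; no block normalisation."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    H, W = img.shape
    feats = []
    for cy in range(0, H - cell + 1, cell):
        for cx in range(0, W - cell + 1, cell):
            a = ang[cy:cy + cell, cx:cx + cell].ravel()
            m = mag[cy:cy + cell, cx:cx + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# A vertical edge: all gradient energy is horizontal, i.e. orientation 0.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
f = hog_feature(img)
```

For real use, `skimage.feature.hog` provides the full descriptor including block normalisation.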
S290, inputting the scale invariant feature transformation features and the direction gradient histogram features which are respectively corresponding to the at least two sight line detection images into a preset barrier model, and counting the proportion that the scale invariant feature transformation features and the direction gradient histogram features which are respectively corresponding to the at least two sight line detection images meet the preset barrier model.
Specifically, SIFT features and HOG features corresponding to each line-of-sight detection image may be input into a preset obstacle model that has been trained in advance, obstacle detection results corresponding to each line-of-sight detection image may be output from the preset obstacle model, and the proportion of the SIFT features and HOG features corresponding to each line-of-sight detection image that satisfy the preset obstacle model may be counted.
And S2100, taking the proportion as the obstacle distribution characteristic of the sight line detection images in at least two preset sight line detection areas.
Specifically, the proportion, counted in S290, of the SIFT features and HOG features corresponding to each sight line detection image that satisfy the preset obstacle model may be used as the obstacle distribution feature corresponding to that sight line detection image.
S2110, judging whether the obstacle distribution features meet a preset second weight mapping relation, if yes, setting the second sight blocking weight to 1, and if not, setting the second sight blocking weight to 0, wherein the preset second weight mapping relation at least comprises: the obstacle distribution feature being greater than or equal to a preset threshold.
Specifically, the obstacle distribution feature of each preset sight line detection area determined in S2100 may be compared with the preset second weight mapping relation to judge whether it is satisfied, where the preset second weight mapping relation at least includes: the obstacle distribution feature being greater than or equal to a preset threshold, which may be set to 2%, 5%, or 10% by way of example. When the obstacle distribution feature satisfies the preset second weight mapping relation, the second sight blocking weight corresponding to the preset sight detection area may be set to 1, indicating that the driver's sight is blocked; if it is not satisfied, the second sight blocking weight corresponding to the preset sight detection area may be set to 0, indicating that the driver's sight is not blocked.
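The proportion-versus-threshold test can be sketched as a per-region function over the slice-level model decisions (the 2% default mirrors the example threshold above; the function and parameter names are illustrative):

```python
def second_blocking_weight(slice_hits, threshold=0.02):
    """slice_hits: per-slice booleans, True if the slice's SIFT+HOG
    features satisfied the trained obstacle model. Weight 1 when the
    hit proportion reaches the preset threshold (e.g. 2%)."""
    if not slice_hits:
        return 0
    proportion = sum(slice_hits) / len(slice_hits)
    return 1 if proportion >= threshold else 0
```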
S2120, judging whether the first sight blocking weight and the second sight blocking weight meet preset screening conditions, if yes, determining the sight blocking condition as sight blocking, and if not, determining the sight blocking condition as that the sight is not blocked, wherein the preset screening conditions at least comprise: at least one of the first line of sight blocking weight and the second line of sight blocking weight is equal to 1.
Specifically, the obtained first sight blocking weight and second sight blocking weight may be compared with the preset screening condition to judge whether it is satisfied, where the preset screening condition at least includes: at least one of the first sight blocking weight and the second sight blocking weight being equal to 1. If the condition is met, the sight blocking condition is determined as the sight being blocked; if not, the sight blocking condition is determined as the sight not being blocked.
Further, on the basis of the above embodiment of the present invention, fig. 3 is a flowchart of a training process of a preset obstacle model according to a second embodiment of the present invention. As shown in fig. 3, a training process for a preset obstacle model according to a second embodiment of the present invention specifically includes the following steps:
S310, acquiring a preset sight line detection data set based on the visual field range of the driver, and determining a positive sample and a negative sample according to the presence or absence of obstacles in the preset sight line detection data set.
In the practice of the present invention, the preset sight line detection data set may be understood as a preconfigured driving sight line data set acquired based on the driver's field of view. Positive samples may be understood as sample data in the preset sight line detection data set in which obstacles are present in the preset sight line detection area; correspondingly, negative samples may be understood as sample data in which no obstacles are present in the preset sight line detection area, where the obstacles may include raindrops, dust, other vehicles, and the like.
Specifically, the in-vehicle camera can be used for acquiring a preset sight line detection data set based on the visual field range of the driver, and then the preset sight line detection data set is divided into a positive sample and a negative sample according to whether an obstacle exists in the preset sight line detection area, so that a corresponding positive sample data set and negative sample data set are generated.
S320, preprocessing the positive sample to be used as a training set, wherein the preprocessing at least comprises the following steps: image smoothing, nonlinear mean filtering, and image sharpening.
S330, extracting scale invariant feature transformation features and direction gradient histogram features corresponding to all samples in the training set.
S340, inputting the extracted scale invariant feature transformation features and the direction gradient histogram features into a support vector machine classifier for training so as to generate a preset obstacle model.
Specifically, the obstacle in the preset sight line detection area can be characterized as a hypersphere, the center of the preset obstacle model can be set as M, and the radius can be set as R; and inputting SIFT features and HOG features corresponding to the samples extracted in the step S330 into a support vector machine classifier for training, and obtaining a corresponding preset obstacle model after training is completed.
Further, on the basis of the above embodiment of the present invention, the line-of-sight detection method further includes:
normalizing and enhancing the vision detection image;
and dividing the at least two sight line detection images after the image enhancement by adopting a sliding window method so as to generate sight line detection image slices corresponding to the at least two sight line detection images respectively.
Specifically, before the corresponding first sight blocking weight and second sight blocking weight are obtained according to the sight detection image, normalization and image enhancement processing are performed on the sight detection image, and then a sliding window method is adopted to divide each sight detection image after image enhancement, so that sight detection image slices corresponding to the sight detection images are generated. In the embodiment of the invention, the vision detection image is divided into a plurality of vision detection image slices, and the corresponding first vision blocking weight and second vision blocking weight are acquired according to the vision detection image slices of each vision detection image, so that the calculation accuracy of the first vision blocking weight and the second vision blocking weight can be improved, and the robustness and the accuracy of the vision blocking condition judgment are further improved.
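The sliding-window slicing can be sketched as follows; the window and stride sizes are illustrative assumptions (the patent does not fix them here), and stride equal to the window size gives non-overlapping slices:

```python
def sliding_window_slices(img, win=50, stride=50):
    """Split an H x W image (nested lists) into win x win slices with
    the given stride; partial border windows are dropped."""
    H, W = len(img), len(img[0])
    return [
        [row[x:x + win] for row in img[y:y + win]]
        for y in range(0, H - win + 1, stride)
        for x in range(0, W - win + 1, stride)
    ]

# A 200 x 200 detection image yields a 4 x 4 grid of 50 x 50 slices.
slices = sliding_window_slices([[0] * 200 for _ in range(200)])
```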
According to the technical scheme of the embodiment, at least two preset sight line detection areas are divided based on the visual field range of a driver, an in-vehicle camera is utilized to collect sight line detection images of at least five continuous frames corresponding to the preset sight line detection areas, a Harris corner detection algorithm is adopted to extract detection key points respectively corresponding to the sight line detection images in the at least two preset sight line detection areas, a preset optical flow algorithm is called to generate optical flows corresponding to the detection key points in two adjacent frames of sight line detection images, non-moving detection key points are removed according to the optical flows, the remaining detection key points are used as moving points, the same detection key points in the two adjacent frames of sight line detection images are connected to serve as moving point vector lines, the direction of the corresponding moving point vector lines is extracted for each preset sight line detection area to serve as a motion feature, and whether the motion feature meets a preset first weight mapping relation is judged, if yes, the first sight blocking weight is set to 1, and if not, the first sight blocking weight is set to 0, wherein the preset first weight mapping relation at least comprises: the motion characteristics of at least five adjacent frames of sight line detection images in the preset sight line detection area alternating between positive and negative. Scale invariant feature transformation features and direction gradient histogram features of the at least two sight line detection images are extracted according to preset image processing rules, the scale invariant feature transformation features and the direction gradient histogram features respectively corresponding to the at least two sight line detection images are input into a preset obstacle model, the proportion in which these features meet the preset obstacle model is counted, the proportion is taken as the obstacle distribution features of the sight line detection images in the at least two preset sight line detection areas, and whether the obstacle distribution features meet a preset second weight mapping relation is judged, if yes, the second sight blocking weight is set to 1, otherwise the second sight blocking weight is set to 0, wherein the preset second weight mapping relation at least comprises: the obstacle distribution feature being greater than or equal to a preset threshold. Whether the first sight blocking weight and the second sight blocking weight meet preset screening conditions is then judged, if yes, the sight blocking condition is determined as the sight being blocked, and if not, the sight blocking condition is determined as the sight not being blocked, wherein the preset screening conditions at least comprise: at least one of the first sight blocking weight and the second sight blocking weight being equal to 1.
According to the embodiment of the invention, at least two preset sight line detection areas are divided according to the visual field range of a driver, the first sight line blocking weight and the second sight line blocking weight corresponding to each preset sight line detection area are respectively determined according to the preset optical flow algorithm and the preset obstacle model, and finally whether the first sight line blocking weight and the second sight line blocking weight meet preset screening conditions or not is judged, so that the sight line blocking condition of the driver is determined, the sight line blocking condition can be adaptively described through the motion detection and the feature point detection of multiple detection areas, the robustness and the accuracy of the sight line detection are improved, the universality is better, and the safety of the driving of the vehicle of the user is further improved; meanwhile, the construction of the preset obstacle model is realized by adopting a single classification algorithm, namely, only the positive sample data set is used, so that the adaptability to complex road conditions can be improved, the construction cost of the preset obstacle model is greatly reduced, the instantaneity is improved, and the user experience is enhanced.
Example III
Fig. 4 is a flowchart of a line-of-sight detection method according to a third embodiment of the present invention. The present embodiment provides, on the basis of the above embodiments, one embodiment of a line-of-sight detection method capable of realizing line-of-sight detection of a driver based on a plurality of divided preset line-of-sight detection areas. As shown in fig. 4, the line-of-sight detection method provided in the third embodiment of the present invention specifically includes the following steps:
S410, acquiring a preset sight line detection data set based on the visual field range of a driver by using an in-vehicle camera, and generating a training data set required for training a preset obstacle model.
Specifically, firstly, a preset sight line detection data set based on the visual field range of a driver can be acquired by using an in-vehicle camera, and then all samples are cut according to the size of 200 x 200; then determining a positive sample and a negative sample according to the condition of the presence or absence of an obstacle in a preset sight detection data set, and performing preprocessing operations such as image smoothing, nonlinear mean filtering, image sharpening and the like on the positive sample; and finally, taking the preprocessed positive sample as a training data set required for training a preset obstacle model.
S420, extracting SIFT features and HOG features corresponding to each sample in the training set, and inputting the image features into a support vector machine classifier for training to generate a preset obstacle model.
Specifically, raindrops or dust in the preset sight line detection area can be characterized as a super sphere, the center of the preset obstacle model can be set to be M, and the radius can be set to be R; and inputting the SIFT features and the HOG features corresponding to the extracted samples into a support vector machine classifier for training, and obtaining a corresponding preset obstacle model after training is completed. The preset obstacle model in the embodiment of the invention can be expressed as follows:
$$\left\| \varphi(x_i) - M \right\|^2 \le R^2 \qquad (1)$$

wherein, $\varphi(\cdot)$ represents the feature mapping induced by the Gaussian kernel function $K(x_i, x_j) = \exp\!\left(-\left\|x_i - x_j\right\|^2 / \sigma^2\right)$; $x_i$ represents the SIFT features and HOG features corresponding to the $i$th sample in the training set; $i$ represents the sample sequence number in the training data set; $M$ represents the center of the preset obstacle model; $R$ represents the radius of the preset obstacle model.
The training of the preset obstacle model by using the support vector machine classifier may include the steps of:
A. constructing an objective function as shown in formula (2):
$$\min_{R,\,M,\,\xi}\; R^2 + C \sum_{i=1}^{m} \xi_i \quad \text{s.t.} \quad \left\|\varphi(x_i) - M\right\|^2 \le R^2 + \xi_i,\;\; \xi_i \ge 0 \qquad (2)$$

wherein, $C$ represents a penalty coefficient; $\xi_i$ represents a relaxation variable; $m$ represents the number of samples in the training data set.
B. Solving the objective function constructed in step A by using the Lagrangian dual method, thereby obtaining the center position M and the radius R of the corresponding preset obstacle model.
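For intuition only, the centre/radius fit can be approximated without the kernelized dual solve: take the feature-space mean as the centre M and a distance quantile as the radius R. This linear toy stand-in is *not* the SVDD training the patent describes, merely an illustration of the hypersphere idea:

```python
import numpy as np

def fit_hypersphere(X, quantile=0.95):
    """Toy centre/radius fit: M = mean of the positive samples,
    R = the `quantile` of their distances to M."""
    M = X.mean(axis=0)
    d = np.linalg.norm(X - M, axis=1)
    R = np.quantile(d, quantile)
    return M, R

def inside(x, M, R):
    """A feature vector is flagged as obstacle-like if it falls
    inside the fitted hypersphere."""
    return np.linalg.norm(x - M) <= R

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))   # stand-in positive feature vectors
M, R = fit_hypersphere(X)
```

A closer off-the-shelf analogue of the one-class, positive-samples-only training would be an RBF-kernel `sklearn.svm.OneClassSVM`, which likewise learns a boundary around the positive class.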
S430, acquiring sight line detection images of at least five continuous frames in each preset sight line detection area in real time, and further obtaining sight line detection image slices corresponding to each preset sight line detection area.
Specifically, the in-vehicle camera can be used for collecting at least five continuous frames of sight detection images in each preset sight detection area in real time, performing image normalization and image enhancement processing on the obtained sight detection images, and then dividing the enhanced sight detection images by adopting a sliding window method, so as to obtain sight detection image slices corresponding to each preset sight detection area.
S440, determining motion characteristics corresponding to the sight line detection image slices in each preset sight line detection area by utilizing a gradient-based optical flow algorithm, and determining a first sight line blockage weight corresponding to each preset sight line detection area according to the motion characteristics.
Specifically, firstly, the Harris corner algorithm may be used to obtain the detection key points corresponding to the sight line detection images in each preset sight line detection area, and the Lucas-Kanade optical flow algorithm may then be invoked to iteratively track all detection key points in two adjacent frames of sight line detection images; detection key points that do not move are then eliminated, and the remaining detection key points are taken as moving points; the same detection key points in the two adjacent frames of sight line detection images are connected with vector lines, the magnitude and direction of the vector lines are solved, and the vector direction corresponding to the vector lines is taken as the motion feature corresponding to each preset sight line detection area; whether the optical flow movement direction of the moving points alternates between positive and negative across five continuous adjacent frames of sight line detection images is judged, and if so, the first sight blocking weight corresponding to the preset sight line detection area is set to 1, indicating that the driver's sight is blocked; if not, the first sight blocking weight corresponding to the preset sight line detection area is set to 0, indicating that the driver's sight is not blocked.
S450, extracting SIFT features and HOG features corresponding to the sight line detection image slices obtained in S430.
S460, inputting SIFT features and HOG features corresponding to the sight line detection image slices into the preset obstacle model in S420, and counting the proportion that SIFT features and HOG features of all the sight line detection image slices in each preset sight line detection area meet the preset obstacle model.
And S470, taking the proportion counted in the S460 as an obstacle distribution characteristic corresponding to the preset sight line detection area, and determining a second sight line blocking weight corresponding to each preset sight line detection area according to the obstacle distribution characteristic.
Specifically, the obstacle distribution characteristics of each preset sight line detection area may be compared with a preset threshold, for example, 2%, and if the obstacle distribution characteristics are greater than or equal to the preset threshold, a second sight line blockage weight corresponding to the preset sight line detection area is set to 1, which indicates that the sight line of the driver is blocked; otherwise, setting the second sight blocking weight corresponding to the preset sight detection area to 0, which indicates that the sight of the driver is not blocked.
S480, determining the sight blocking condition according to the first sight blocking weight and the second sight blocking weight corresponding to each preset sight detection area.
Specifically, if at least one of all the first sight blocking weights and second sight blocking weights is equal to 1, the sight blocking condition is determined as the sight being blocked; otherwise, the sight blocking condition is determined as the sight not being blocked.
Fig. 5 is a flowchart of another line-of-sight detection method according to the third embodiment of the present invention. As shown in fig. 5, the line-of-sight detection method may be suitable for the case of dividing the visual field of the driver into two preset line-of-sight detection regions, and the specific implementation principle is the same as that of the line-of-sight detection method shown in fig. 4, so that the description thereof will not be repeated here.
According to the technical scheme of the embodiment, a preset sight line detection data set based on the visual field range of a driver is acquired by utilizing an in-vehicle camera, and a training data set required for training a preset obstacle model is generated; SIFT features and HOG features corresponding to all samples in the training set are extracted, and the features are input into a support vector machine classifier for training to generate the preset obstacle model; sight line detection images of at least five continuous frames in each preset sight line detection area are acquired in real time, and sight line detection image slices corresponding to each preset sight line detection area are further obtained; motion features corresponding to the sight line detection image slices in each preset sight line detection area are determined by utilizing a gradient-based optical flow algorithm, and the first sight blocking weight corresponding to each preset sight line detection area is determined according to the motion features; SIFT features and HOG features corresponding to each sight line detection image slice obtained in S430 are extracted, the SIFT features and HOG features corresponding to each sight line detection image slice are input into the preset obstacle model of S420, and the proportion in which the SIFT features and HOG features of all sight line detection image slices in each preset sight line detection area meet the preset obstacle model is counted; the proportion counted in S460 is taken as the obstacle distribution characteristic corresponding to the preset sight line detection area, and the second sight blocking weight corresponding to each preset sight line detection area is determined according to the obstacle distribution characteristic; finally, the sight blocking condition is determined according to the first sight blocking weight and the second sight blocking weight corresponding to each preset sight line detection area.
According to the embodiment of the invention, at least two preset sight line detection areas are divided according to the visual field range of the driver; then, a first sight line blocking weight and a second sight line blocking weight corresponding to each preset sight line detection area are respectively determined according to a gradient-based optical flow algorithm and a preset obstacle model trained by a support vector machine classifier; finally, whether the first sight line blocking weight and the second sight line blocking weight satisfy the preset screening condition is judged, so that the sight line blocking condition of the driver is determined. The sight line blocking condition can thus be adaptively described through the motion detection and feature point detection of multiple detection areas, which improves the robustness and accuracy of sight line detection, offers better universality, and helps enhance the safety of the user's driving. Meanwhile, the preset obstacle model is constructed with a single-classification algorithm, that is, only a positive sample data set is used, which improves adaptability to complex road surface conditions and greatly reduces the construction cost of the preset obstacle model. In addition, by dividing each sight line detection image into a plurality of sight line detection image slices and obtaining the corresponding first sight line blocking weight and second sight line blocking weight from the slices of each image, the calculation accuracy of the first sight line blocking weight and the second sight line blocking weight can be improved, which further improves the robustness and accuracy of judging the sight line blocking condition and enhances the user experience.
Example IV
Fig. 6 is a schematic structural diagram of a sight line detection apparatus according to a fourth embodiment of the present invention. As shown in fig. 6, the apparatus includes:
the image acquisition module 41 is configured to acquire line-of-sight detection images in at least two preset line-of-sight detection areas according to an in-vehicle camera, where the preset line-of-sight detection areas are divided based on a visual field range of a driver.
The first weight determining module 42 is configured to determine motion features corresponding to the sight line detection images in the at least two preset sight line detection areas respectively based on a preset motion detection algorithm, and determine a first sight line blockage weight according to the motion features and a preset first weight mapping relationship.
A second weight determining module 43, configured to determine an obstacle distribution feature of the line-of-sight detection images in the at least two preset line-of-sight detection areas based on a preset obstacle model, and determine a second line-of-sight blocking weight according to the obstacle distribution feature and a preset second weight mapping relationship.
A sight blocking determination module 44, configured to determine a sight blocking situation based on the first sight blocking weight and the second sight blocking weight under a preset screening condition.
According to the technical scheme of this embodiment, the image acquisition module acquires sight line detection images in at least two preset sight line detection areas according to the in-vehicle camera, where the preset sight line detection areas are divided based on the visual field range of the driver. The first weight determining module determines motion features respectively corresponding to the sight line detection images in the at least two preset sight line detection areas based on a preset motion detection algorithm, and determines a first sight blocking weight according to the motion features and a preset first weight mapping relation. The second weight determining module determines obstacle distribution features of the sight line detection images in the at least two preset sight line detection areas based on a preset obstacle model, and determines a second sight blocking weight according to the obstacle distribution features and a preset second weight mapping relation. The sight blocking determining module determines the sight blocking condition based on the first sight blocking weight and the second sight blocking weight under a preset screening condition.
According to the embodiment of the invention, at least two preset sight line detection areas are divided according to the visual field range of the driver, the first sight blocking weight and the second sight blocking weight corresponding to each preset sight line detection area are respectively determined according to the preset motion detection algorithm and the preset obstacle model, and the sight blocking condition is finally determined according to the two weights. The sight blocking condition can thus be adaptively described through the motion features and obstacle distribution features of multiple detection areas, which improves the robustness and accuracy of sight line detection, offers better universality, and further improves the safety of the user driving the vehicle.
Further, on the basis of the above embodiment of the invention, the image acquisition module 41 includes:
the image acquisition unit is used for dividing at least two preset sight detection areas based on the visual field range of the driver and acquiring sight detection images of at least five continuous frames in the corresponding preset sight detection areas by utilizing the in-vehicle camera.
Further, on the basis of the above embodiment of the invention, the first weight determining module 42 includes:
the key point extraction unit is used for extracting detection key points corresponding to the sight line detection images in at least two preset sight line detection areas respectively by adopting a Harris corner detection algorithm.
And the optical flow generating unit is used for calling a preset optical flow algorithm to generate optical flows corresponding to the detection key points in the two adjacent frames of sight detection images.
And the moving point determining unit is used for eliminating the non-moving detection key points according to the optical flows and taking the remaining detection key points as moving points.
And the vector line determining unit is used for connecting the same detection key points in the two adjacent frames of sight detection images as moving point vector lines.
And the motion characteristic determining unit is used for extracting the direction of the corresponding moving point vector line as a motion characteristic aiming at the preset sight line detection area.
The first weight determining unit is configured to determine whether the motion feature meets a preset first weight mapping relationship, if yes, set the first sight blocking weight to 1, and if no, set the first sight blocking weight to 0, where the preset first weight mapping relationship at least includes: the motion characteristics of at least five adjacent frames of sight detection images in the preset sight detection area are alternately changed in positive and negative.
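The first-weight logic carried by the units above can be sketched as follows. This is an illustrative sketch only: the list encoding of motion features (one signed direction value per adjacent frame pair, taken from the moving-point vector lines) and the function name are assumptions, not part of the disclosure.

```python
def first_blocking_weight(motion_features):
    """Return 1 when the signed motion features of consecutive frames
    alternate between positive and negative, otherwise 0 (sketch)."""
    # At least five frames yields at least four adjacent-frame features.
    if len(motion_features) < 4:
        return 0
    signs = [1 if f > 0 else -1 if f < 0 else 0 for f in motion_features]
    if 0 in signs:  # a zero feature breaks strict alternation
        return 0
    # Alternation holds when every adjacent pair of signs differs.
    alternating = all(signs[i] != signs[i + 1] for i in range(len(signs) - 1))
    return 1 if alternating else 0
```

A wiper blade sweeping back and forth across the camera, for example, would produce exactly this alternating pattern and receive weight 1.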
Further, on the basis of the above embodiment of the invention, the second weight determining module 43 includes:
And the feature extraction unit is used for extracting the scale-invariant feature transformation features and the direction gradient histogram features of the at least two sight line detection images according to a preset image processing rule.
The satisfaction proportion statistics unit is used for inputting the scale-invariant feature transformation features and the direction gradient histogram features respectively corresponding to the at least two sight line detection images into a preset obstacle model, and counting the proportion of the at least two sight line detection images whose scale-invariant feature transformation features and direction gradient histogram features satisfy the preset obstacle model.
And the obstacle distribution characteristic determining unit is used for taking the proportion as the obstacle distribution characteristic of the sight line detection images in at least two preset sight line detection areas.
The second weight determining unit is configured to determine whether the obstacle distribution feature meets a preset second weight mapping relation, if yes, set the second sight blocking weight to 1, and if no, set the second sight blocking weight to 0, where the preset second weight mapping relation at least includes: the obstacle distribution feature is greater than or equal to a preset threshold.
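A minimal sketch of the second-weight mapping described above, assuming slice-level boolean flags stand in for the SIFT/HOG obstacle-model test and assuming a threshold value the disclosure does not fix:

```python
def second_blocking_weight(slice_flags, threshold=0.5):
    """slice_flags[i] is True when slice i's SIFT/HOG features satisfy
    the preset obstacle model; threshold is an assumed value (sketch)."""
    if not slice_flags:
        return 0
    # The obstacle distribution feature is the satisfying proportion.
    proportion = sum(slice_flags) / len(slice_flags)
    return 1 if proportion >= threshold else 0
```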
Further, on the basis of the above embodiment of the present invention, the training process of the preset obstacle model includes:
collecting a preset sight line detection data set based on the visual field range of a driver, and determining a positive sample and a negative sample according to the condition of the presence or absence of an obstacle in the preset sight line detection data set;
preprocessing the positive sample to be used as a training set, wherein the preprocessing at least comprises the following steps: image smoothing, nonlinear mean filtering and image sharpening;
extracting scale invariant feature transformation features and direction gradient histogram features corresponding to all samples in a training set;
and inputting the extracted scale invariant feature transformation features and the direction gradient histogram features into a support vector machine classifier for training so as to generate a preset obstacle model.
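The single-class training step above can be sketched with scikit-learn's `OneClassSVM`. The random placeholder vectors stand in for real SIFT/HOG descriptors, and all parameter values (`nu`, kernel, feature dimensionality) are assumptions rather than values taken from the disclosure:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder feature matrix: 200 positive samples with 64-dimensional
# stand-ins for concatenated SIFT + HOG descriptors.
rng = np.random.default_rng(0)
positive_features = rng.normal(0.0, 1.0, size=(200, 64))

# Single-classification training: only the positive sample set is used,
# matching the one-class scheme described in the embodiment.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(positive_features)

# At detection time, predict() returns +1 for feature vectors resembling
# the positive set and -1 for outliers.
outlier_label = model.predict(np.full((1, 64), 8.0))[0]  # far from training data
```

In the scheme above, the per-slice model decision would feed the satisfying proportion counted by the statistics unit; training on positive samples alone is what keeps the construction cost low.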
Further, on the basis of the above-described embodiment of the invention, the sight line blockage determination module 44 includes:
The sight blocking determining unit is configured to determine whether the first sight blocking weight and the second sight blocking weight meet a preset screening condition, if yes, determine that the sight blocking condition is blocked, and if not, determine that the sight blocking condition is not blocked, where the preset screening condition at least includes: at least one of the first line of sight blocking weight and the second line of sight blocking weight is equal to 1.
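The preset screening condition above amounts to a logical OR of the two binary weights; a trivial sketch (function name assumed):

```python
def sight_blocked(first_weight, second_weight):
    # Preset screening condition: sight is judged blocked when at least
    # one of the two blocking weights equals 1.
    return first_weight == 1 or second_weight == 1
```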
Further, on the basis of the above embodiment of the present invention, the sight line detection apparatus further includes:
And the image processing module is used for normalizing and enhancing the sight line detection images.
And the image slice generation module is used for respectively dividing at least two sight line detection images after the image enhancement by adopting a sliding window method so as to generate sight line detection image slices corresponding to the at least two sight line detection images respectively.
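The sliding-window slicing performed by the image slice generation module can be sketched as follows; the window size and stride are assumptions, since the disclosure does not fix these parameters:

```python
import numpy as np

def sliding_window_slices(image, win=64, stride=32):
    """Split an enhanced detection image into overlapping square slices
    (sketch; win/stride values are illustrative assumptions)."""
    h, w = image.shape[:2]
    slices = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            slices.append(image[y:y + win, x:x + win])
    return slices
```

Each slice then receives its own feature extraction and model test, which is what allows the per-area blocking weights to be computed from slice-level statistics.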
The sight line detection device provided by the embodiment of the invention can execute the sight line detection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example five
Fig. 7 shows a schematic diagram of an electronic device 50 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 7, the electronic device 50 includes at least one processor 51, and a memory such as a Read Only Memory (ROM) 52, a Random Access Memory (RAM) 53, etc. communicatively connected to the at least one processor 51, wherein the memory stores a computer program executable by the at least one processor, and the processor 51 may perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 52 or the computer program loaded from the storage unit 58 into the Random Access Memory (RAM) 53. In the RAM 53, various programs and data required for the operation of the electronic device 50 can also be stored. The processor 51, the ROM 52 and the RAM 53 are connected to each other via a bus 54. An input/output (I/O) interface 55 is also connected to bus 54.
Various components in the electronic device 50 are connected to the I/O interface 55, including: an input unit 56 such as a keyboard, a mouse, etc.; an output unit 57 such as various types of displays, speakers, and the like; a storage unit 58 such as a magnetic disk, an optical disk, or the like; and a communication unit 59 such as a network card, modem, wireless communication transceiver, etc. The communication unit 59 allows the electronic device 50 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The processor 51 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 51 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 51 performs the respective methods and processes described above, such as the line-of-sight detection method.
In some embodiments, the sight line detection method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 58. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 50 via the ROM 52 and/or the communication unit 59. When the computer program is loaded into the RAM 53 and executed by the processor 51, one or more steps of the sight line detection method described above may be performed. Alternatively, in other embodiments, the processor 51 may be configured to perform the sight line detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service expansibility found in traditional physical hosts and virtual private server (VPS) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Claims (10)
1. A line-of-sight detection method, the method comprising:
acquiring sight line detection images in at least two preset sight line detection areas according to an in-vehicle camera, wherein the preset sight line detection areas are divided based on the visual field range of a driver;
determining motion characteristics corresponding to the sight line detection images in the at least two preset sight line detection areas respectively based on a preset motion detection algorithm, and determining a first sight line blocking weight according to the motion characteristics and a preset first weight mapping relation, wherein the preset first weight mapping relation is a mapping relation which is preset and used for representing the motion characteristics and the first sight line blocking weight;
Determining obstacle distribution features of the sight line detection images in the at least two preset sight line detection areas based on a preset obstacle model, and determining a second sight blocking weight according to the obstacle distribution features and a preset second weight mapping relation, wherein the preset second weight mapping relation is a mapping relation configured in advance for representing the correspondence between the obstacle distribution features and the second sight blocking weight;
determining a sight blocking condition based on the first sight blocking weight and the second sight blocking weight under a preset screening condition;
the determining the first sight blocking weight according to the motion features and a preset first weight mapping relation comprises: setting the first sight blocking weight to 1 if the motion features of a plurality of consecutive frames of sight detection images in the preset sight detection area alternate between positive and negative, and otherwise setting the first sight blocking weight to 0; or setting the first sight blocking weight to 1 if the motion features of a plurality of consecutive frames of sight detection images in the preset sight detection area change abruptly a preset number of times, and otherwise setting the first sight blocking weight to 0;
the determining a second sight blocking weight according to the obstacle distribution feature and a preset second weight mapping relation comprises: setting the second sight blocking weight to 1 if the obstacle distribution feature is greater than or equal to a preset threshold, and otherwise setting it to 0; or setting the second sight blocking weight to 1 if the obstacle distribution feature falls within a preset threshold range, and otherwise setting it to 0.
2. The method of claim 1, wherein capturing line-of-sight detection images from the in-vehicle camera within at least two predetermined line-of-sight detection regions comprises:
dividing at least two preset sight detection areas based on the visual field range of the driver, and acquiring the sight detection images corresponding to at least five continuous frames in the preset sight detection areas by utilizing the in-vehicle camera.
3. The method according to claim 1, wherein the determining, based on a preset motion detection algorithm, motion features respectively corresponding to the line-of-sight detection images in the at least two preset line-of-sight detection areas, and determining a first line-of-sight blocking weight according to the motion features and a preset first weight mapping relationship, includes:
Extracting detection key points corresponding to the sight line detection images in the at least two preset sight line detection areas respectively by adopting a Harris corner detection algorithm;
invoking a preset optical flow algorithm to generate optical flows corresponding to detection key points in two adjacent frames of sight detection images;
removing non-moving detection key points according to the optical flow, and taking the rest detection key points as moving points;
connecting the same detection key points in two adjacent frames of sight detection images to serve as a moving point vector line;
extracting the direction of the corresponding moving point vector line as the motion characteristic aiming at the preset sight detection area;
judging whether the motion characteristics of at least five adjacent frames of sight detection images in the preset sight detection area are alternately changed in positive and negative, if so, setting the first sight blocking weight to be 1, and if not, setting the first sight blocking weight to be 0.
4. The method of claim 1, wherein the determining the obstacle distribution feature of the line-of-sight detection images within the at least two preset line-of-sight detection regions based on a preset obstacle model and determining the second line-of-sight blocking weight according to the obstacle distribution feature and a preset second weight mapping relationship comprises:
Extracting scale invariant feature transformation features and direction gradient histogram features of the at least two sight line detection images according to a preset image processing rule;
inputting the scale-invariant feature transformation features and the direction gradient histogram features which are respectively corresponding to the at least two sight line detection images into the preset obstacle model, and counting the proportion that the scale-invariant feature transformation features and the direction gradient histogram features which are respectively corresponding to the at least two sight line detection images meet the preset obstacle model;
taking the ratio as an obstacle distribution characteristic of the sight line detection images in the at least two preset sight line detection areas;
judging whether the obstacle distribution characteristics are larger than or equal to the preset threshold, if yes, setting the second sight blocking weight to be 1, and if not, setting the second sight blocking weight to be 0.
5. The method according to claim 1 or 4, wherein the training process of the preset obstacle model comprises:
collecting a preset sight line detection data set based on the visual field range of the driver, and determining a positive sample and a negative sample according to the condition of the presence or absence of an obstacle in the preset sight line detection data set;
And preprocessing the positive sample to be used as a training set, wherein the preprocessing at least comprises the following steps: image smoothing, nonlinear mean filtering and image sharpening;
extracting scale invariant feature transformation features and direction gradient histogram features corresponding to all samples in the training set;
inputting the extracted scale invariant feature transformation features and the direction gradient histogram features into a support vector machine classifier for training so as to generate the preset obstacle model.
6. The method of claim 1, wherein the determining a line of sight blocking condition based on the first line of sight blocking weight and the second line of sight blocking weight under a preset screening condition comprises:
judging whether the first sight blocking weight and the second sight blocking weight meet preset screening conditions, if yes, determining the sight blocking condition as sight blocking, and if not, determining the sight blocking condition as that sight is not blocked, wherein the preset screening conditions at least comprise: at least one of the first line of sight blocking weight and the second line of sight blocking weight is equal to 1.
7. The method as recited in claim 1, further comprising:
Normalizing and enhancing the sight line detection image;
and dividing the at least two sight line detection images after the image enhancement by adopting a sliding window method so as to generate sight line detection image slices corresponding to the at least two sight line detection images respectively.
8. A line-of-sight detection apparatus, the apparatus comprising:
the image acquisition module is used for acquiring sight line detection images in at least two preset sight line detection areas according to the in-vehicle camera, wherein the preset sight line detection areas are divided based on the visual field range of a driver;
the first weight determining module is used for determining motion characteristics of sight line detection images in the at least two preset sight line detection areas respectively based on a preset motion detection algorithm, and determining a first sight line blocking weight according to the motion characteristics and a preset first weight mapping relation, wherein the preset first weight mapping relation is a mapping relation which is preset and used for representing the motion characteristics and the first sight line blocking weight;
a second weight determining module, configured to determine obstacle distribution features of the sight line detection images in the at least two preset sight line detection areas based on a preset obstacle model, and determine a second sight blocking weight according to the obstacle distribution features and a preset second weight mapping relation, wherein the preset second weight mapping relation is a mapping relation configured in advance for representing the correspondence between the obstacle distribution features and the second sight blocking weight;
The sight blocking determining module is used for determining sight blocking conditions based on the first sight blocking weight and the second sight blocking weight under preset screening conditions;
the first weight determining module, when executing the step of determining a first sight blocking weight according to the motion features and a preset first weight mapping relation, is specifically configured to: set the first sight blocking weight to 1 if the motion features of a plurality of consecutive frames of sight detection images in the preset sight detection area alternate between positive and negative, and otherwise set the first sight blocking weight to 0; or set the first sight blocking weight to 1 if the motion features of a plurality of consecutive frames of sight detection images in the preset sight detection area change abruptly a preset number of times, and otherwise set the first sight blocking weight to 0;
the second weight determining module, when executing the step of determining a second sight blocking weight according to the obstacle distribution feature and a preset second weight mapping relation, is specifically configured to: set the second sight blocking weight to 1 if the obstacle distribution feature is greater than or equal to a preset threshold, and otherwise set it to 0; or set the second sight blocking weight to 1 if the obstacle distribution feature falls within a preset threshold range, and otherwise set it to 0.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the sight line detection method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the line of sight detection method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310465756.2A CN116206281B (en) | 2023-04-27 | 2023-04-27 | Sight line detection method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116206281A CN116206281A (en) | 2023-06-02 |
CN116206281B true CN116206281B (en) | 2023-07-18 |
Family
ID=86508019
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310465756.2A Active CN116206281B (en) | 2023-04-27 | 2023-04-27 | Sight line detection method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116206281B (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4722777B2 (en) * | 2006-06-21 | 2011-07-13 | 本田技研工業株式会社 | Obstacle recognition judgment device |
US8848978B2 (en) * | 2011-09-16 | 2014-09-30 | Harman International (China) Holdings Co., Ltd. | Fast obstacle detection |
CN108680157B (en) * | 2018-03-12 | 2020-12-04 | 海信集团有限公司 | Method, device and terminal for planning obstacle detection area |
CN111967396A (en) * | 2020-08-18 | 2020-11-20 | 上海眼控科技股份有限公司 | Processing method, device and equipment for obstacle detection and storage medium |
CN112115889B (en) * | 2020-09-23 | 2022-08-30 | 成都信息工程大学 | Intelligent vehicle moving target detection method based on vision |
CN113807184B (en) * | 2021-08-17 | 2024-06-21 | 北京百度网讯科技有限公司 | Obstacle detection method and device, electronic equipment and automatic driving vehicle |
- 2023-04-27: CN application CN202310465756.2A granted as patent CN116206281B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110942000B (en) | Unmanned vehicle target detection method based on deep learning | |
CN106845478B (en) | A kind of secondary licence plate recognition method and device of character confidence level | |
WO2018103608A1 (en) | Text detection method, device and storage medium | |
CN114118124B (en) | Image detection method and device | |
CN110544211B (en) | Method, system, terminal and storage medium for detecting lens attached object | |
CN112633276B (en) | Training method, recognition method, device, equipment and medium | |
CN108830196A (en) | Pedestrian detection method based on feature pyramid network | |
CN110334703B (en) | Ship detection and identification method in day and night image | |
CN114202743A (en) | Improved fast-RCNN-based small target detection method in automatic driving scene | |
CN115147809B (en) | Obstacle detection method, device, equipment and storage medium | |
CN111915583A (en) | Vehicle and pedestrian detection method based on vehicle-mounted thermal infrared imager in complex scene | |
CN113888461A (en) | Method, system and equipment for detecting defects of hardware parts based on deep learning | |
CN110532875A (en) | Night mode camera lens pays the detection system, terminal and storage medium of object | |
CN110544232A (en) | detection system, terminal and storage medium for lens attached object | |
Ghahremannezhad et al. | Automatic road detection in traffic videos | |
CN115861352A (en) | Monocular vision, IMU and laser radar data fusion and edge extraction method | |
CN111932530B (en) | Three-dimensional object detection method, device, equipment and readable storage medium | |
CN113378837A (en) | License plate shielding identification method and device, electronic equipment and storage medium | |
ZHANG et al. | Multi-target vehicle detection and tracking based on video | |
Razzok et al. | A new pedestrian recognition system based on edge detection and different census transform features under weather conditions | |
CN116206281B (en) | Sight line detection method and device, electronic equipment and storage medium | |
Ballinas-Hernández et al. | Marked and unmarked speed bump detection for autonomous vehicles using stereo vision | |
CN114898306B (en) | Method and device for detecting target orientation and electronic equipment | |
CN113139488B (en) | Method and device for training segmented neural network | |
CN116071713A (en) | Zebra crossing determination method, device, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||