CN113052026B - Method and device for positioning smoking behavior in cabin - Google Patents

Method and device for positioning smoking behavior in cabin

Info

Publication number
CN113052026B
CN113052026B CN202110270771.2A
Authority
CN
China
Prior art keywords
detection
smoking
target
area
detection frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110270771.2A
Other languages
Chinese (zh)
Other versions
CN113052026A (en)
Inventor
成一诺
袁璋诣
王洪亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingwei Hirain Tech Co Ltd
Original Assignee
Beijing Jingwei Hirain Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingwei Hirain Tech Co Ltd filed Critical Beijing Jingwei Hirain Tech Co Ltd
Priority to CN202110270771.2A
Publication of CN113052026A
Application granted
Publication of CN113052026B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Abstract

The invention provides a method and a device for positioning smoking behavior in a cabin. The method comprises the following steps: acquiring a pair of picture detection results; performing de-duplication processing on the picture detection results in the pair, the picture detection result obtained after de-duplication being the target picture detection result and the smoking detection frame in the target picture detection result being the target detection frame; calculating the area of the target detection frame; determining the target area to which the target detection frame belongs according to the detection-frame area change range of the target camera in each area, each subarea of the target area being a target subarea; and determining the target subarea to which the target detection frame belongs according to the horizontal position change range of the target camera in each target subarea. A mapping from the two-dimensional image to the three-dimensional spatial position is thereby realized, and accurate positioning of the smoking behavior is completed. Moreover, compared with a detection scheme using multiple smoke sensors, fewer sensors are required, which reduces hardware cost.

Description

Method and device for positioning smoking behavior in cabin
Technical Field
The invention relates to the field of automobile electronics, in particular to a method and a device for positioning smoking behavior in a cabin.
Background
During vehicle operation, when a driver or passenger smokes, an automobile cabin control system can control the window, air conditioner and other devices at the spatial position where the smoke is generated, thereby improving the cabin air environment and the riding experience of the driver or passengers.
One current smoke detection method adopts smoke sensors: whether smoke exists is determined from the smoke concentration, and the smoke concentration distribution in the cabin is detected by arranging a plurality of smoke sensors, providing detection input for the control system and locating the smoke. However, because smoke scatters randomly, neither the smoking behavior nor its position can be accurately located, and the multi-sensor scheme also increases cost.
Another detection method uses a vehicle-mounted Driver Monitoring System (DMS) and an Occupant Monitoring System (OMS), which can already detect the smoking behavior of the driver and passengers by recognizing smoking in the two-dimensional images they capture. However, the DMS or OMS can only locate the smoking behavior within the two-dimensional image; it cannot map directly from the two-dimensional image to a three-dimensional spatial position, and therefore cannot accurately locate the spatial position of the smoking behavior within the cabin.
Disclosure of Invention
In view of this, the embodiments of the present invention provide a method and an apparatus for positioning a smoking behavior in a cabin, so as to accurately position a spatial position of the smoking behavior in the cabin.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
the first aspect of the application provides a method for positioning smoking behavior in a cabin, based on a DMS camera of a vehicle-mounted driver monitoring system and an OMS camera of an occupant monitoring system; the cabin interior is divided into a plurality of areas, wherein each area comprises at least one subarea;
the method comprises the following steps:
acquiring a pair of picture detection results, the pair comprising the picture detection results respectively output by the DMS and the OMS; each picture detection result at least comprises a smoking behavior detection flag signal, which takes a first value or a second value, the second value representing that smoking behavior exists; a picture detection result whose smoking behavior detection flag signal is the second value further comprises an image containing a smoking detection frame and position data of the smoking detection frame;
performing de-duplication processing on the picture detection results in the pair, the picture detection result obtained after de-duplication being the target picture detection result; the smoking detection frame in the target picture detection result is the target detection frame;
calculating the area of the target detection frame;
determining the target area to which the target detection frame belongs according to the detection-frame area change range of the target camera in each area, the area of the target detection frame lying within the area change range of the target area; each subarea in the target area is a target subarea;
determining the target subarea to which the target detection frame belongs according to the horizontal position change range of the target camera in each target subarea, the horizontal position coordinates of the target detection frame lying within the horizontal position change range of that subarea; the area change range and the horizontal position change range are obtained through pre-calibration.
Optionally, the performing the deduplication process includes:
extracting face characteristic data in each smoking detection frame in the picture detection result pair;
carrying out face feature matching on each smoking detection frame;
if the matching item exists, deleting the smoking detection frame with relatively low confidence in the matching item.
Optionally, the performing face feature matching includes:
and calculating Euclidean distance between face features of any two smoking detection frames in the picture detection result pair, and if the calculated Euclidean distance is smaller than a set threshold value, successfully matching the any two smoking detection frames, wherein the any two smoking detection frames form a pair of matching items.
Optionally, before locating the smoking behaviour, the method further comprises:
dividing the cabin interior into a plurality of zones, each zone comprising at least one sub-zone;
calibrating the change range of the smoking detection frame output by each camera based on the smoking behavior detection data acquired by each camera; the variation range includes: the area variation range in each region, and the horizontal position variation range in each sub-region.
Optionally, the smoking behavior detection data collected by each camera includes: a smoking behavior detection sample set collected in each sub-area; the smoking behavior detection sample in the smoking behavior detection sample set includes: the value is a second value smoking behavior detection mark signal, an image containing a smoking detection frame and position data of the smoking detection frame;
before calibration, the method further comprises:
and carrying out outlier data elimination processing on each smoking behavior detection sample set to obtain a final sample set.
Optionally, the calibrating includes:
counting the area change range of the smoking detection frame in each area;
and counting the horizontal position change range of the smoking detection frame of each final sample set in the corresponding sub-area.
Optionally, the outlier data rejection processing includes:
calculating the average value of the K-neighbor distances of each smoking behavior detection sample in the smoking behavior detection sample set in each subarea;
the smoking behavior detection samples in the smoking behavior detection sample set in each subarea are arranged in a descending order according to the average value of the K-neighbor distance;
removing, as outlier data, the first n·a% of the smoking behavior detection samples in the descending ordering;
wherein n is the total number of samples in the smoking behavior detection sample set in each sub-area, and a is a real number.
A second aspect of the present application provides an in-cabin smoking behavior positioning device, the cabin interior being divided into a plurality of areas, wherein each area comprises at least one subarea;
the device comprises:
an acquisition unit configured to: acquire a pair of picture detection results, the pair comprising the picture detection results respectively output by the DMS and the OMS; each picture detection result at least comprises a smoking behavior detection flag signal, which takes a first value or a second value, the second value representing that smoking behavior exists; a picture detection result whose smoking behavior detection flag signal is the second value further comprises an image containing a smoking detection frame and position data of the smoking detection frame;
a preprocessing unit for: performing de-duplication treatment on the picture detection results in the picture detection result pair, wherein the picture detection result obtained after the de-duplication treatment is a target picture detection result; the smoking detection frame in the target picture detection result is a target detection frame;
a positioning unit for:
calculating the area of the target detection frame;
determining a target area to which the target detection frame belongs according to the area change range of the target camera in each area; the area of the target detection frame is positioned in the area change range of the target area; each subarea in the target area is a target subarea;
determining a target sub-area to which the target detection frame belongs according to the horizontal position change range of the target camera in each target sub-area; the horizontal position coordinates of the target detection frame are positioned in the horizontal position change range of the target subarea to which the horizontal position coordinates belong; the area change range and the horizontal position change range are obtained through pre-calibration.
Optionally, in the aspect of performing deduplication processing, the preprocessing unit is specifically configured to:
extracting face characteristic data in each smoking detection frame in the picture detection result pair;
carrying out face feature matching on each smoking detection frame;
if the matching item exists, deleting the smoking detection frame with relatively low confidence in the matching item.
Optionally, in the aspect of face feature matching, the preprocessing unit is specifically configured to:
and calculating Euclidean distance between face features in any two smoking detection frames in the picture detection result pair, and if the calculated Euclidean distance is smaller than a set threshold value, successfully matching the any two smoking detection frames, wherein the any two smoking detection frames form a pair of matching items.
Optionally, the device may further comprise a calibration unit for:
dividing the cabin interior into a plurality of zones, each zone comprising at least one sub-zone;
calibrating the change range of the smoking detection frame output by each camera based on the smoking behavior detection data acquired by each camera; wherein, the change scope includes: the area variation range in each region, and the horizontal position variation range in each sub-region.
Optionally, before the calibration, the calibration unit may further perform outlier data rejection processing on the smoking behavior detection sample set collected in each sub-area, to obtain a final sample set.
Optionally, the calibration unit may perform outlier data rejection processing by:
calculating the average value of the K-neighbor distances of each smoking behavior detection sample in the smoking behavior detection sample set in each subarea;
the smoking behavior detection samples in the smoking behavior detection sample set in each subarea are arranged in a descending order according to the average value of the K-neighbor distance;
removing, as outlier data, the first n·a% of the smoking behavior detection samples in the descending ordering;
wherein n is the total number of samples in the smoking behavior detection sample set in each sub-area, and a is a real number.
Optionally, the calibrating includes:
counting the area change range of the smoking detection frame in each area;
and counting the horizontal position change range of the smoking detection frame of each final sample set in the corresponding sub-area.
It can be seen that, in the embodiment of the invention, the smoking detection frame detected by the DMS or the OMS is located based on the calibrated data, and finally the area where the smoking behavior occurs, and the subarea within that area, can be determined. A mapping from the two-dimensional image to the three-dimensional spatial position is thereby realized, and accurate positioning of the smoking behavior is completed. Moreover, compared with a detection scheme using multiple smoke sensors, fewer sensors are required, which reduces hardware cost.
Drawings
Fig. 1 is a schematic view of a scene in a cabin according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an off-line calibration variation range according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of outlier data rejection processing according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of a method for positioning smoking behavior in a cabin according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a device for positioning smoking behavior in a cabin according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a method and a device for positioning smoking behaviors in a cabin, which are used for accurately positioning the spatial positions of the smoking behaviors in the cabin.
The in-cabin smoking behavior positioning device may specifically be a dedicated in-cabin smoking behavior positioning controller, or its functions may be integrated into another controller on the vehicle. The device locates the smoking detection frame detected by the DMS camera or the OMS camera based on calibrated data.
Referring to fig. 1, the DMS camera may be disposed at the A-pillar 2-1 in the vehicle and the OMS camera at the rearview mirror 2-2; of course, the DMS camera or the OMS camera may be disposed at other positions, as long as the fields of view at the two positions overlap and together cover all areas within the vehicle.
In calibrating the data, the cabin interior may be divided into a plurality of zones, each zone comprising at least one sub-zone. And then, calibrating the change range of the smoking detection frame output by each camera based on the smoking behavior detection data acquired by each camera.
Specifically, the variation range may include: the area change range of the smoke detection frame output by each camera in each region, and the horizontal position change range of the smoke detection frame output by each camera in each sub region.
FIG. 2 illustrates a more specific exemplary flow of off-line calibration ranges, including:
s1: the cabin interior is divided into a plurality of zones, each zone comprising at least one sub-zone.
Taking a common passenger car as an example, which has two front seats (driver's seat and front passenger seat) and three rear passenger seats, the cabin interior can be divided into a front row area D1 and a rear row area D2. The front row area D1 can be further divided into two subareas {D11, D12}, each containing one seat; similarly, the rear row area D2 is divided into three subareas {D21, D22, D23}, each containing one seat.
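As a worked illustration, the seat layout just described can be held in a small lookup structure. The following is a minimal Python sketch with hypothetical names (`CABIN_REGIONS`, `sub_regions_of`); the patent itself prescribes no particular data structure:

```python
# Front/rear area layout from the example above: area D1 holds the two
# front seats, area D2 the three rear seats (one seat per subarea).
CABIN_REGIONS = {
    "D1": ["D11", "D12"],          # front row: driver seat, front passenger seat
    "D2": ["D21", "D22", "D23"],   # rear row: three passenger seats
}

def sub_regions_of(region):
    """Return the subareas (one seat each) contained in an area."""
    return CABIN_REGIONS[region]
```

For a larger cabin, the mapping simply gains more entries, matching the remark below that the number of areas can be increased or decreased.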
Of course, the cabin interior is not limited to division into front and rear row areas; for cabins with larger or smaller space, the number of areas can be increased or decreased correspondingly, and this will not be detailed here.
S2: and collecting smoking behavior detection data output by each camera.
The smoking behaviour detection data comprises a plurality of smoking behaviour detection results.
It should be noted that, the DMS and the OMS perform image detection on the captured image to obtain an image detection result. The picture detection result comprises a smoking behavior detection flag signal flag. The flag may take either a first value or a second value (e.g., 0 or 1). The flag is a second value to represent that the smoking behavior exists, and the corresponding picture detection result is the smoking behavior detection result.
Each smoking behaviour detection result comprises at least: the smoking behavior detection flag signal with the value of the second value, the image containing the smoking detection frame, and the position data of the smoking detection frame.
The position data of the smoke detection frame may further include: the pixel coordinates of the central point and the width and height of the detection frame. In other embodiments of the present invention, the smoking behaviour detection result may further comprise a confidence level, denoted sigma.
In addition, time alignment of the detection results of the two cameras is also completed. Time alignment means keeping the frames captured by the DMS and OMS cameras consistent in time; this is prior art and is not described here.
S3: dividing the smoking behavior detection data acquired by each camera according to the subareas to obtain a smoking behavior detection sample set corresponding to each subarea.
The smoking behavior detection result in the smoking behavior detection sample set may be referred to as a smoking behavior detection sample.
S4: and carrying out outlier data elimination processing on each smoking behavior detection sample set acquired by each camera to obtain a final sample set.
Assume the smoking behavior detection sample set is D_n = {D_i(x_i, y_i, w_i, h_i) | i = 1, 2, ..., n}, where n is the total number of smoking detection frames in a given subarea, i indexes the i-th of the n smoking detection frames, and x_i, y_i, w_i, h_i are the pixel coordinates of the center point of the i-th smoking detection frame and its width and height, respectively.
Before the outlier data rejection, the sample data in the smoking behavior detection sample set may be standardized (for example, normalized), and the outlier data rejection then performed on the standardized sample data.
Referring to fig. 3, in one example, the outlier rejection process may further include:
step a: calculating the average value of the K-neighbor distances of each smoking behavior detection sample in the smoking behavior detection sample set in each subarea;
the smoking behaviour detection sample of the ith smoking detection frame may be referred to as Di.
K-neighbors refer to the K sample data closest to Di: the Euclidean distances between Di and all other sample data in the sample set are calculated directly, the results are sorted in ascending order of distance, and the first K sample data are the K neighbors.
The sum of the Euclidean distances between Di and its K neighbors, divided by K, gives the average K-neighbor distance.
K is a positive integer, and the specific value of K can be flexibly designed and is not described herein.
Step b: the smoking behavior detection samples in the smoking behavior detection sample set in each subarea are arranged in a descending order according to the average value of the K-neighbor distances;
for example, suppose there are 4 samples D_1 to D_4 whose corresponding K-neighbor distance averages are 1.5, 1, 1.2 and 1.6 respectively. The descending order is then: D_4, D_1, D_3, D_2.
Step c: removing the first n.a% of smoking behavior detection samples in the descending order of arrangement results as outlier data;
wherein a is a real number; for example, let a take 0.3.
Assuming n = 1000, the first 1000 × 0.3% = 3 samples in the descending ordering are removed as outliers.
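The outlier rejection of steps a to c can be sketched as follows. This is a hedged Python illustration (the function name and parameter defaults are assumptions, not from the patent): it computes each sample's average K-neighbor Euclidean distance, sorts descending, and removes the first n·a% as outliers.

```python
import math

def reject_outliers(samples, k=3, a=0.3):
    """Drop the n*a% samples with the largest average K-neighbor distance.

    `samples` is a list of (x, y, w, h) detection-frame tuples, assumed
    already standardized; `k` is the neighbor count and `a` the real
    number from the text (a = 0.3 with n = 1000 removes 3 samples).
    """
    n = len(samples)

    def mean_knn_dist(i):
        # Euclidean distances from sample i to all other samples,
        # ascending; the first k are the K neighbors (step a).
        dists = sorted(
            math.dist(samples[i], samples[j]) for j in range(n) if j != i
        )
        return sum(dists[:k]) / k

    scores = [(mean_knn_dist(i), i) for i in range(n)]
    scores.sort(reverse=True)            # step b: descending by average distance
    n_drop = round(n * a / 100)          # step c: first n*a% are outliers
    keep = {i for _, i in scores[n_drop:]}
    return [samples[i] for i in sorted(keep)]
```

The O(n²) pairwise distance loop is fine for per-subarea calibration sets; a KD-tree would be the usual substitute for larger sets.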
S5: and counting the area change range of the smoking detection frame in each area.
The farther a smoker is from the camera, the smaller the area of the detection frame in the image, and conversely, the larger. The area is obtained by multiplying the width in the detection frame position data by the height.
Taking the front and rear row areas as an example, the area of the smoke detection frame corresponding to the smoke action occurring in the front row area is larger than that of the rear row area.
Because each sample set corresponds to one subarea, the smoking detection frame areas of all subareas belonging to the same area must be counted together.
After statistics, the area change range of the smoking detection frame for the front and rear row areas under the DMS camera can be expressed as the intervals [S_min(DMS, D1), S_max(DMS, D1)] and [S_min(DMS, D2), S_max(DMS, D2)];
the corresponding area change ranges under the OMS camera can be expressed as [S_min(OMS, D1), S_max(OMS, D1)] and [S_min(OMS, D2), S_max(OMS, D2)].
s6: and counting the horizontal position change range of the smoking detection frame in the corresponding subarea in each final sample set.
The different subareas correspond to different pixel ranges on the image, and the positions of the detection frames within their respective pixel ranges are not fixed but fluctuate; therefore the horizontal position change range of the smoking detection frames in each corresponding subarea is counted.
Taking the front and rear row areas as an example, after statistics the horizontal position change range of the smoking detection frame of each subarea can be expressed as an interval [x_min(cam, sub), x_max(cam, sub)]: under the DMS camera, [x_min(DMS, D11), x_max(DMS, D11)] and [x_min(DMS, D12), x_max(DMS, D12)] for the front row subareas, and [x_min(DMS, D21), x_max(DMS, D21)], [x_min(DMS, D22), x_max(DMS, D22)], [x_min(DMS, D23), x_max(DMS, D23)] for the rear row; the ranges under the OMS camera are expressed analogously.
thus, the off-line calibration process is completed.
The following describes a positioning scheme based on the off-line calibration results. Fig. 4 illustrates an exemplary flow in which the in-cabin smoking behavior positioning device monitors smoking behavior on line for positioning, comprising:
s41: acquiring a picture detection result pair;
wherein, the picture detection result pair comprises picture detection results respectively output by the DMS and the OMS, and each picture detection result at least comprises: the smoking behaviour detects a flag signal.
As mentioned above, the smoking behavior detection flag signal may take a first value (e.g., 0) or a second value (e.g., 1), the second value representing the presence of smoking behavior.
In one example, the smoking behavior detection flag signals flag1 and flag2 may be extracted from the pair of picture detection results output by the DMS and OMS respectively. If flag1 || flag2 == 1 (|| denoting logical OR), the DMS or the OMS has detected smoking behavior, and the flow proceeds downward. If flag1 || flag2 == 0, the device continues acquiring the picture detection results of the DMS and OMS.
When the smoking behavior detection flag signal is the second value, the picture detection result further includes: an image including a smoke detection frame, and position data of the smoke detection frame.
The position data of the smoke detection frame may further include: the pixel coordinates of the central point and the width and height of the detection frame. In other embodiments of the present invention, the picture detection result may further include a confidence level, denoted as σ.
In addition, the time alignment of the detection results of the two cameras of the DMS and the OMS can be completed.
It should be noted that, in the online monitoring stage, the DMS camera and the OMS camera are turned on to monitor the behavior states of the driver and the passenger, and the image detection results of the DMS and the OMS are respectively output.
S42: and performing de-duplication processing on the picture detection result in the picture detection result pair.
After the de-duplication process, the pair of picture detection results may include one picture detection result or may still include two picture detection results.
In one example, the following deduplication process may be performed:
step A: and extracting face characteristic data in each smoking detection frame in the picture detection result pair.
Specifically, all the smoking detection frame data of the DMS and the OMS can be extracted, denoted {b_1^DMS, ..., b_m^DMS} and {b_1^OMS, ..., b_n^OMS}, where m is the number of smoking detection frames detected by the DMS and n is the number detected by the OMS; the face features inside the detection frames are then extracted as {f_1^DMS, ..., f_m^DMS} and {f_1^OMS, ..., f_n^OMS}. Illustratively, the feature vector dimensions of the face features are all 28×2.
And (B) step (B): carrying out face feature matching on each smoking detection frame;
in one example, the Euclidean distance d between the face features of any DMS smoking detection frame and any OMS smoking detection frame may be calculated; if d is smaller than a set threshold D, the two detection frames are successfully matched and form a pair of matching items; otherwise, the matching fails.
Step C: judging whether a matching item exists, if so, deleting a smoking detection frame with relatively low confidence in the matching item.
Specifically, for the two smoking detection frames in the matching item (the confidence is output with the DMS/OMS detection result), a non-maximum suppression algorithm is used to keep the smoking detection frame with the higher confidence and delete the one with the lower confidence.
If no match exists, the process proceeds to step S43.
After step C, the picture detection result after de-duplication (which may be referred to as the target picture detection result) is obtained; at this point, the smoking detection frames in the picture detection result are all non-duplicate detection frames. The smoking detection frame in the target picture detection result may be referred to as the target detection frame.
Steps A to C in effect fuse the detection data of the two cameras.
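The de-duplication of steps A to C can be sketched as follows. This is an assumed simplification (dictionary-based frames and a plain pairwise loop in place of a full non-maximum suppression implementation), not the patent's exact procedure:

```python
import math

def deduplicate(frames_dms, frames_oms, threshold):
    """Merge DMS and OMS smoking detection frames, dropping duplicates.

    Each frame is a dict with a flat face-feature vector 'feat' and a
    confidence 'conf' (hypothetical keys). A DMS/OMS pair whose
    face-feature Euclidean distance is below `threshold` is taken to be
    the same smoker seen by both cameras; only the higher-confidence
    frame of the pair survives.
    """
    drop_dms, drop_oms = set(), set()
    for i, fd in enumerate(frames_dms):
        for j, fo in enumerate(frames_oms):
            d = math.dist(fd["feat"], fo["feat"])
            if d < threshold:                 # matching item: same smoker
                if fd["conf"] >= fo["conf"]:
                    drop_oms.add(j)           # keep DMS frame
                else:
                    drop_dms.add(i)           # keep OMS frame
    kept = [f for i, f in enumerate(frames_dms) if i not in drop_dms]
    kept += [f for j, f in enumerate(frames_oms) if j not in drop_oms]
    return kept
```

Unmatched frames from either camera pass through unchanged, so the de-duplicated result may contain one frame or several, as noted in the text.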
S43: calculating the area of a target smoking detection frame;
can be usedAnd the scale of each non-repeated detection frame under different cameras is represented.
S44: and determining the target area of the target detection frame according to the area change range of the target camera in each area.
Taking the i-th smoking detection frame output by the DMS as an example, assume the area of the detection frame is S_i^DMS. If S_i^DMS falls within the calibrated area range [S_min(DMS, D1), S_max(DMS, D1)] of the front row, then the D1 area (front row) can be determined as the target area.
That is, the area of the target detection frame is within the area variation range of the target area.
For convenience of reference, each sub-region within the target region may be referred to as a target sub-region.
Therefore, combining the off-line calibrated area change ranges [S_min(cam, D1), S_max(cam, D1)] and [S_min(cam, D2), S_max(cam, D2)] of the front and rear row areas under the different cameras, the range into which the area S_i of each smoking detection frame falls can be judged, thereby determining the front/rear row position of each smoking behavior in the vehicle.
S45: and determining the target subarea to which the target detection frame belongs according to the horizontal position change range of the target camera in each target subarea.
The horizontal position coordinates of the target detection frame are located in the horizontal position change range of the target subarea to which the horizontal position coordinates belong.
Continuing the previous example, it has been determined that the ith smoking detection frame belongs to the front-row area; if its horizontal position coordinate (x_i, y_i) falls within the calibrated horizontal position range of sub-region D12, then D12 is the sub-region to which the ith smoking detection frame belongs.
Thus, the position of the ith smoking detection frame can be accurately determined.
That is, by combining the off-line calibrated lateral (horizontal) position variation ranges of the detection frames of the different cameras in the different position areas, the lateral in-vehicle position corresponding to the horizontal position coordinate (x_i, y_i) of each detection frame can be judged, thereby further completing the positioning of the smoking behavior in the vehicle.
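Steps S43 to S45 can be sketched as follows. The region names mirror D1/D12 etc. from the example above, but the calibrated ranges and pixel values are invented for illustration and are not the patent's calibration data:

```python
# Hypothetical calibration data for one camera (e.g. the DMS camera):
# per-region area ranges and per-sub-region horizontal (x) ranges.
AREA_RANGES = {"D1_front": (3000, 8000), "D2_rear": (800, 2999)}
X_RANGES = {
    "D1_front": {"D11": (0, 320), "D12": (321, 640)},
    "D2_rear":  {"D21": (0, 210), "D22": (211, 430), "D23": (431, 640)},
}

def locate(frame_w, frame_h, x, y):
    """Map a smoking detection frame to (region, sub-region).
    S43: compute the frame area; S44: pick the region whose calibrated
    area range contains it; S45: pick the sub-region whose horizontal
    range contains the frame's x coordinate."""
    area = frame_w * frame_h
    for region, (lo, hi) in AREA_RANGES.items():
        if lo <= area <= hi:
            for sub, (xlo, xhi) in X_RANGES[region].items():
                if xlo <= x <= xhi:
                    return region, sub
    return None  # area outside every calibrated range

# A large frame (front row) centred in the right half of the image:
print(locate(80, 60, 500, 240))  # ('D1_front', 'D12')
```

Note how the area check comes first, matching the principle that frames in the same row have similar sizes regardless of lateral position.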
Therefore, in the embodiment of the invention, in the smoking detection frame of the same camera, the front and rear row positions of the smoking behavior in the vehicle can be distinguished by combining the area size of the detection frame with the area calibration data of the detection frame under the current camera.
Meanwhile, based on the principle that detection frames located at different horizontal positions in the same row are similar in size, the front/rear-row position of a detection frame can be judged first, and then the lateral position of the detection frame in the current row can be judged according to its horizontal position coordinates.
That is, the embodiment of the invention locates the smoking detection frame detected by the DMS or OMS based on the calibrated data, and finally can determine which region the smoking behavior is located in and which sub-region of the region, thereby realizing the mapping from the two-dimensional image to the three-dimensional space position and completing the accurate location of the smoking behavior position. And compared with a detection mode using a plurality of smoke sensors, the number of the required sensors is smaller, and the hardware cost is reduced.
In summary, the method for positioning the smoking behavior in the cabin provided by the invention has the following advantages:
1. Accurately judging the position of the smoking behavior in the cabin space: based on accurate off-line calibration data of the smoking detection frame and a robust detection-frame data fusion algorithm, each detection frame can be mapped more accurately to the corresponding position area in the vehicle, thereby realizing accurate positioning of the smoking behavior.
2. Realizing full coverage of the position areas in the vehicle: by fusing the DMS and OMS smoking detection results, extracting robust face features to match the smoking frames, and eliminating repeated detection frames with a non-maximum suppression algorithm, smoking behavior at any position in the vehicle can be detected accurately, and repeated detection does not occur.
3. The off-line calibration of the scale position areas of each camera is completed with a large amount of off-line data, and a K-nearest-neighbor criterion is used to remove outlier data during calibration, which improves the robustness of the calibration sample data and makes the calibration result more accurate.
4. Compared with traditional smoke-detector-based smoking behavior detection, the behavior detection result is more accurate, fewer sensors are required, and the hardware cost is reduced.
An embodiment of the present invention further provides a device for positioning smoking behavior in a cabin. Referring to fig. 5, the device exemplarily includes:
an acquisition unit 1 for: acquiring a picture detection result pair;
the picture detection result pair comprises the picture detection results respectively output by the DMS and the OMS.
Each picture detection result at least comprises a smoking behavior detection flag signal;
the smoking behavior detection flag signal may be a first value or a second value, wherein the second value indicates that smoking behavior exists and the first value indicates that it does not; a picture detection result whose smoking behavior detection flag signal is the second value further comprises: an image including a smoking detection frame, and position data of the smoking detection frame;
a preprocessing unit 2 for: performing de-duplication treatment on the picture detection result in the picture detection result pair to obtain a target picture detection result;
the smoking detection frame in the target picture detection result may be referred to as a target detection frame.
A positioning unit 3 for:
calculating the area of the target detection frame;
determining a target area to which a target detection frame belongs according to the area change range of the target camera in each area; the area of the target detection frame is positioned in the area change range of the target area; each subarea in the target area is a target subarea;
determining a target sub-area to which a target detection frame belongs according to the horizontal position change range of the target camera in each target sub-area; the horizontal position coordinates of the target detection frame are positioned in the horizontal position change range of the target subarea to which the horizontal position coordinates belong; the area change range and the horizontal position change range are obtained through pre-calibration.
The specific description is referred to the foregoing description, and will not be repeated here.
In other embodiments of the present invention, the preprocessing unit 2 may be specifically configured to:
extracting face characteristic data in each smoking detection frame in the picture detection result pair;
carrying out face feature matching on each smoking detection frame;
if the matching item exists, deleting the smoking detection frame with relatively low confidence in the matching item.
The specific description is referred to the foregoing description, and will not be repeated here.
In other embodiments of the present invention, in terms of performing face feature matching, the preprocessing unit 2 may be specifically configured to:
and calculating Euclidean distance between face features in any two smoking detection frames in the picture detection result pair, and if the calculated Euclidean distance is smaller than a set threshold value, successfully matching the any two smoking detection frames, wherein the any two smoking detection frames form a pair of matching items.
The specific description is referred to the foregoing description, and will not be repeated here.
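The Euclidean-distance face matching described above can be sketched as follows. This is a minimal illustration: the 3-dimensional toy feature vectors and the 0.3 threshold are assumptions for the example, not values specified by the patent:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two face-feature vectors."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def match_frames(features_dms, features_oms, threshold=0.3):
    """Pair any DMS/OMS smoking frames whose face features are closer
    than the threshold; each returned tuple is one matching item."""
    pairs = []
    for i, fa in enumerate(features_dms):
        for j, fb in enumerate(features_oms):
            if euclidean(fa, fb) < threshold:
                pairs.append((i, j))
    return pairs

# Two toy feature vectors per camera; only the first pair is close:
dms_feats = [[0.1, 0.2, 0.3], [0.9, 0.8, 0.7]]
oms_feats = [[0.12, 0.21, 0.29], [0.5, 0.5, 0.5]]
print(match_frames(dms_feats, oms_feats))  # [(0, 0)]
```

The matched pairs are exactly the "matching items" from which the lower-confidence frame is then deleted.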
In other embodiments of the present invention, the apparatus may further include a calibration unit for:
dividing the cabin interior into a plurality of zones, each zone comprising at least one sub-zone;
calibrating the change range of the smoking detection frame output by each camera based on the smoking behavior detection data acquired by each camera; wherein, the change scope includes: the area variation range in each region, and the horizontal position variation range in each sub-region.
The specific description is referred to the foregoing description, and will not be repeated here.
In addition, before the calibration, the calibration unit may also perform outlier data elimination processing on the smoking behavior detection sample sets collected in each sub-area to obtain final sample sets.
Specifically, the calibration unit may perform outlier data rejection processing by:
calculating the average value of the K-neighbor distances of each smoking behavior detection sample in the smoking behavior detection sample set in each subarea;
the smoking behavior detection samples in the smoking behavior detection sample set in each subarea are arranged in a descending order according to the average value of the K-neighbor distance;
removing the first n.a% of smoking behavior detection samples in the descending order of arrangement results as outlier data;
wherein n is the total number of samples in the smoking behavior detection sample set in each sub-area, and a is a real number.
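The K-neighbor outlier rejection above can be sketched as follows. This is an illustrative reading in which "the first n·a%" is taken to mean a% of the n samples; the choices of K, a, and the 1-D sample values standing in for detection-frame measurements are all assumptions:

```python
def remove_outliers(samples, k=2, a=10):
    """Drop the samples with the largest mean K-nearest-neighbour
    distance (n * a% of them, with n = len(samples)); return the
    surviving final sample set in original order."""
    n = len(samples)

    def mean_knn_dist(i):
        # Mean distance from sample i to its k nearest neighbours.
        dists = sorted(abs(samples[i] - samples[j]) for j in range(n) if j != i)
        return sum(dists[:k]) / k

    # Descending order by mean K-neighbor distance, as in the patent.
    order = sorted(range(n), key=mean_knn_dist, reverse=True)
    n_remove = round(n * a / 100)  # the "first n*a%" of the ranking
    keep = sorted(order[n_remove:])
    return [samples[i] for i in keep]

# Nine clustered areas plus one far outlier (10 samples, a=10 -> drop 1):
data = [100, 102, 98, 101, 99, 103, 97, 100, 101, 500]
final = remove_outliers(data, k=2, a=10)
print(500 in final)  # False: the outlier was rejected
```

The samples that survive form the final sample set used for calibration.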
Correspondingly, the calibrating comprises the following steps:
counting the area change range of the smoking detection frame in each area;
and counting the horizontal position change range of the smoking detection frame of each final sample set in the corresponding sub-area.
The specific description is referred to the foregoing description, and will not be repeated here.
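The calibration statistics (area variation range per region, horizontal position variation range per sub-region) can be sketched as follows, with a hypothetical sample layout in which each sample records a frame area and an x coordinate:

```python
# Hypothetical final sample sets after outlier rejection, grouped by
# (region, sub-region); each sample is (frame_area, x_coordinate).
samples = {
    ("D1", "D11"): [(4100, 50), (4350, 90), (3900, 120)],
    ("D1", "D12"): [(4200, 400), (4000, 520)],
    ("D2", "D21"): [(1500, 60), (1650, 110)],
}

def calibrate(samples):
    """Count per-region area ranges and per-sub-region x ranges."""
    area_ranges, x_ranges = {}, {}
    for (region, sub), pts in samples.items():
        frame_areas = [a for a, _ in pts]
        frame_xs = [x for _, x in pts]
        # Area range is accumulated over all sub-regions of the region.
        lo, hi = area_ranges.get(region, (min(frame_areas), max(frame_areas)))
        area_ranges[region] = (min(lo, min(frame_areas)),
                               max(hi, max(frame_areas)))
        # Horizontal range is counted per sub-region.
        x_ranges[(region, sub)] = (min(frame_xs), max(frame_xs))
    return area_ranges, x_ranges

areas, xs = calibrate(samples)
print(areas["D1"])        # (3900, 4350)
print(xs[("D1", "D12")])  # (400, 520)
```

These min/max ranges are the calibrated variation ranges that steps S44 and S45 later test each target detection frame against.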
Those of skill would further appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both; the elements and steps of the examples have been described generally in terms of functionality in the foregoing description to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for positioning smoking behavior in a cabin, characterized in that the method is based on a DMS camera of a vehicle-mounted driver monitoring system and an OMS camera of a passenger monitoring system; the cabin interior is divided into a plurality of zones, wherein each zone comprises at least one sub-zone;
the method comprises the following steps:
acquiring a picture detection result pair; the image detection result pair comprises: the picture detection results are respectively output by the DMS and the OMS; each picture detection result at least comprises: detecting a sign signal of smoking behavior; the smoking behavior detection mark signal is a first value or a second value; the second value represents that the smoking behavior exists; the picture detection result that the smoking behavior detection mark signal is the second value further comprises: an image including a smoke detection frame and position data of the smoke detection frame;
performing de-duplication treatment on the picture detection results in the picture detection result pair, wherein the picture detection result obtained after the de-duplication treatment is a target picture detection result; the smoking detection frame in the target picture detection result is a target detection frame;
calculating the area of the target detection frame;
determining a target area to which the target detection frame belongs according to the area change range of the target camera in each area; the area of the target detection frame is positioned in the area change range of the target area; each subarea in the target area is a target subarea;
determining a target sub-area to which the target detection frame belongs according to the horizontal position change range of the target camera in each target sub-area; the horizontal position coordinates of the target detection frame are positioned in the horizontal position change range of the target subarea to which the horizontal position coordinates belong; the area change range and the horizontal position change range are obtained through pre-calibration.
2. The method of claim 1, wherein performing the deduplication process comprises:
extracting face characteristic data in each smoking detection frame in the picture detection result pair;
carrying out face feature matching on each smoking detection frame;
if the matching item exists, deleting the smoking detection frame with relatively low confidence in the matching item.
3. The method of claim 2, wherein said performing face feature matching comprises:
and calculating Euclidean distance between face features of any two smoking detection frames in the picture detection result pair, and if the calculated Euclidean distance is smaller than a set threshold value, successfully matching the any two smoking detection frames, wherein the any two smoking detection frames form a pair of matching items.
4. A method according to any one of claims 1-3, further comprising, prior to locating the smoking behaviour:
dividing the cabin interior into a plurality of zones, each zone comprising at least one sub-zone;
calibrating the change range of the smoking detection frame output by each camera based on the smoking behavior detection data acquired by each camera; the variation range includes: the area variation range in each region, and the horizontal position variation range in each sub-region.
5. The method of claim 4, wherein,
the smoking behavior detection data collected by each camera includes: a smoking behavior detection sample set collected in each sub-area; a smoking behavior detection sample in the smoking behavior detection sample set includes: a smoking behavior detection flag signal whose value is the second value, an image containing a smoking detection frame, and position data of the smoking detection frame;
before calibration, the method further comprises:
and carrying out outlier data elimination processing on each smoking behavior detection sample set to obtain a final sample set.
6. The method of claim 5, wherein said calibrating comprises:
counting the area change range of the smoking detection frame in each area;
and counting the horizontal position change range of the smoking detection frame of each final sample set in the corresponding sub-area.
7. The method of claim 5, wherein the outlier rejection process comprises:
calculating the average value of the K-neighbor distances of each smoking behavior detection sample in the smoking behavior detection sample set in each subarea;
the smoking behavior detection samples in the smoking behavior detection sample set in each subarea are arranged in a descending order according to the average value of the K-neighbor distance;
removing the first n.a% of smoking behavior detection samples in the descending order of arrangement results as outlier data;
wherein n is the total number of samples in the smoking behavior detection sample set in each sub-area, and a is a real number.
8. A smoke behavior locating device in a cabin, wherein the cabin interior is divided into a plurality of zones, wherein each zone comprises at least one sub-zone;
the device comprises:
an acquisition unit configured to: acquiring a picture detection result pair; the image detection result pair comprises: the picture detection results are respectively output by the DMS and the OMS; each picture detection result at least comprises: detecting a sign signal of smoking behavior; the smoking behavior detection mark signal is a first value or a second value; the second value represents that the smoking behavior exists; the picture detection result that the smoking behavior detection mark signal is the second value further comprises: an image including a smoke detection frame and position data of the smoke detection frame;
a preprocessing unit for: performing de-duplication treatment on the picture detection results in the picture detection result pair, wherein the picture detection result obtained after the de-duplication treatment is a target picture detection result; the smoking detection frame in the target picture detection result is a target detection frame;
a positioning unit for:
calculating the area of the target detection frame;
determining a target area to which the target detection frame belongs according to the area change range of the target camera in each area; the area of the target detection frame is positioned in the area change range of the target area; each subarea in the target area is a target subarea;
determining a target sub-area to which the target detection frame belongs according to the horizontal position change range of the target camera in each target sub-area; the horizontal position coordinates of the target detection frame are positioned in the horizontal position change range of the target subarea to which the horizontal position coordinates belong; the area change range and the horizontal position change range are obtained through pre-calibration.
9. The apparatus of claim 8, wherein in the performing of the deduplication process, the preprocessing unit is specifically configured to:
extracting face characteristic data in each smoking detection frame in the picture detection result pair;
carrying out face feature matching on each smoking detection frame;
if the matching item exists, deleting the smoking detection frame with relatively low confidence in the matching item.
10. The apparatus according to claim 9, wherein in the aspect of performing face feature matching, the preprocessing unit is specifically configured to:
and calculating Euclidean distance between face features in any two smoking detection frames in the picture detection result pair, and if the calculated Euclidean distance is smaller than a set threshold value, successfully matching the any two smoking detection frames, wherein the any two smoking detection frames form a pair of matching items.
CN202110270771.2A 2021-03-12 2021-03-12 Method and device for positioning smoking behavior in cabin Active CN113052026B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110270771.2A CN113052026B (en) 2021-03-12 2021-03-12 Method and device for positioning smoking behavior in cabin

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110270771.2A CN113052026B (en) 2021-03-12 2021-03-12 Method and device for positioning smoking behavior in cabin

Publications (2)

Publication Number Publication Date
CN113052026A CN113052026A (en) 2021-06-29
CN113052026B true CN113052026B (en) 2023-07-18

Family

ID=76511995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110270771.2A Active CN113052026B (en) 2021-03-12 2021-03-12 Method and device for positioning smoking behavior in cabin

Country Status (1)

Country Link
CN (1) CN113052026B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010237971A (en) * 2009-03-31 2010-10-21 Saxa Inc Monitoring device for smoking in walking
WO2019048604A1 (en) * 2017-09-09 2019-03-14 Fcm Dienstleistungs Ag Automatic early detection of smoke, soot and fire with increased detection reliability using machine learning
CN110032916A (en) * 2018-01-12 2019-07-19 北京京东尚科信息技术有限公司 A kind of method and apparatus detecting target object
CN110738186A (en) * 2019-10-23 2020-01-31 德瑞姆创新科技(深圳)有限公司 driver smoking detection method and system based on computer vision technology
CN111553214A (en) * 2020-04-20 2020-08-18 哈尔滨工程大学 Method and system for detecting smoking behavior of driver
CN111723602A (en) * 2019-03-19 2020-09-29 杭州海康威视数字技术股份有限公司 Driver behavior recognition method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Automatic detection system for taxi violation behaviors in intelligent transportation; Huang Xunping; Jia Kebin; Nie Wenzhen; Liu Pengyu; Wang Fengzhen; Measurement & Control Technology (01); full text *

Also Published As

Publication number Publication date
CN113052026A (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN109690623B (en) System and method for recognizing pose of camera in scene
EP1640912B1 (en) Moving-object height determining apparatus
US9047518B2 (en) Method for the detection and tracking of lane markings
CN109541583B (en) Front vehicle distance detection method and system
CN107230218B (en) Method and apparatus for generating confidence measures for estimates derived from images captured by vehicle-mounted cameras
US9662977B2 (en) Driver state monitoring system
US6690011B2 (en) Infrared image-processing apparatus
CN112349144B (en) Monocular vision-based vehicle collision early warning method and system
JP4943034B2 (en) Stereo image processing device
CN106447730B (en) Parameter estimation method and device and electronic equipment
US20160217335A1 (en) Stixel estimation and road scene segmentation using deep learning
CN110023951A (en) Information processing equipment, imaging device, apparatus control system, information processing method and computer program product
JP2015041164A (en) Image processor, image processing method and program
CN108806019B (en) Driving record data processing method and device based on acceleration sensor
US11080562B1 (en) Key point recognition with uncertainty measurement
CN115346197A (en) Driver distraction behavior identification method based on bidirectional video stream
US9955136B2 (en) Distance measuring device and vehicle provided therewith
EP3486871B1 (en) A vision system and method for autonomous driving and/or driver assistance in a motor vehicle
CN112735164B (en) Test data construction method and test method
CN113052026B (en) Method and device for positioning smoking behavior in cabin
JP7048157B2 (en) Analytical equipment, analysis method and program
JP2007280387A (en) Method and device for detecting object movement
CN116012822B (en) Fatigue driving identification method and device and electronic equipment
JP4176558B2 (en) Vehicle periphery display device
WO2022264533A1 (en) Detection-frame position-accuracy improving system and detection-frame position correction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant