WO2013157466A1 - Smoking detection device, method and program - Google Patents
Smoking detection device, method and program
- Publication number
- WO2013157466A1 (PCT/JP2013/060859; JP2013060859W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- smoking
- cigarette
- area
- image
- luminance
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60H—ARRANGEMENTS OF HEATING, COOLING, VENTILATING OR OTHER AIR-TREATING DEVICES SPECIALLY ADAPTED FOR PASSENGER OR GOODS SPACES OF VEHICLES
- B60H1/00—Heating, cooling or ventilating [HVAC] devices
- B60H1/00642—Control systems or circuits; Control members or indication devices for heating, cooling or ventilating devices
- B60H1/00735—Control systems or circuits characterised by their input, i.e. by the detection, measurement or calculation of particular conditions, e.g. signal treatment, dynamic models
- B60H1/00742—Control systems or circuits characterised by their input, i.e. by the detection, measurement or calculation of particular conditions, e.g. signal treatment, dynamic models by detection of the vehicle occupants' presence; by detection of conditions relating to the body of occupants, e.g. using radiant heat detectors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60H—ARRANGEMENTS OF HEATING, COOLING, VENTILATING OR OTHER AIR-TREATING DEVICES SPECIALLY ADAPTED FOR PASSENGER OR GOODS SPACES OF VEHICLES
- B60H1/00—Heating, cooling or ventilating [HVAC] devices
- B60H1/00642—Control systems or circuits; Control members or indication devices for heating, cooling or ventilating devices
- B60H1/00735—Control systems or circuits characterised by their input, i.e. by the detection, measurement or calculation of particular conditions, e.g. signal treatment, dynamic models
- B60H1/008—Control systems or circuits characterised by their input, i.e. by the detection, measurement or calculation of particular conditions, e.g. signal treatment, dynamic models the input being air quality
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/30—Control or safety arrangements for purposes related to the operation of the system, e.g. for safety or monitoring
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/30—Control or safety arrangements for purposes related to the operation of the system, e.g. for safety or monitoring
- F24F11/32—Responding to malfunctions or emergencies
- F24F11/33—Responding to malfunctions or emergencies to fire, excessive heat or smoke
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/62—Control or safety arrangements characterised by the type of control or by internal processing, e.g. using fuzzy logic, adaptive control or estimation of values
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/62—Control or safety arrangements characterised by the type of control or by internal processing, e.g. using fuzzy logic, adaptive control or estimation of values
- F24F11/63—Electronic processing
- F24F11/65—Electronic processing for selecting an operating mode
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F2110/00—Control inputs relating to air properties
- F24F2110/50—Air quality properties
- F24F2110/62—Tobacco smoke
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B30/00—Energy efficient heating, ventilation or air conditioning [HVAC]
- Y02B30/70—Efficient control or regulation technologies, e.g. for control of refrigerant flow, motor or heating
Definitions
- the present invention relates to a smoking detection apparatus, method, and program that can, for example, capture an image (face image) of the face of the driver of a car and grasp the smoking state of the driver from the face image.
- near-infrared light is irradiated from a projector toward the driver's face
- a driver monitor system that captures a driver's face with a camera, obtains a face image, and analyzes the face image to detect a driver's state or perform personal authentication.
- One embodiment provides a smoking detection apparatus, method, and program capable of accurately detecting a driver's smoking state with a simple configuration.
- a smoking detection device that detects a smoking state of the person based on image information from an imaging unit that photographs the person.
- a smoking detection device includes smoking area setting means for setting, based on a face image obtained by photographing the person's face, a smoking area where a cigarette being smoked is expected to exist, and cigarette detection means for detecting the cigarette from an image in the smoking area
- smoking estimation means for estimating that the person is in the smoking state when the cigarette is present in the smoking area
- a driver monitor system 1 for analyzing images is installed.
- this driver monitor system 1 is used as a smoking detection device.
- the driver monitor system 1 includes, for example, an imaging unit 5 disposed in the vicinity of a meter (not shown) on the dashboard 3, and a control unit 7 that controls the operation of the imaging unit 5 and the like.
- the imaging unit 5 includes a camera 9 that images the driver's face and a pair of imaging projectors 11a and 11b (collectively referred to as 11) for irradiating the driver's face and the like with light.
- the camera 9 is, for example, a CCD camera that can capture images with near-infrared light (that is, it has a certain sensitivity to near-infrared light), and is arranged so that it can image the driver's face obliquely from the front and below, for example.
- the projector 11 is made of, for example, a near-infrared LED, and is arranged so as to be substantially coaxial with the camera 9 so as to irradiate near-infrared light toward the driver's face.
- the irradiation area is substantially conical, centered on the driver's face.
- the driver monitor system 1 includes an imaging unit 5 having the camera 9 and the projector 11 described above, and a control unit 7 that controls the imaging unit 5 and the like.
- An air conditioner 13, a speaker 15, a power window 17, an air purifier 19, and the like are connected to the control unit 7 as devices controlled by a control signal from the control unit 7.
- control unit 7 is an electronic control device including a known microcomputer.
- the control unit 7 performs image processing and the like based on the image signal from the camera 9 and performs various controls according to the detection of the driver's smoking state and the detection result as described later.
- a driver's face is photographed by a camera, and a face image indicating the driver's face is acquired from the photographed image.
- as a method for acquiring the face image, a well-known technique such as that described in Japanese Patent Application Laid-Open No. 2008-276328 can be employed. In this technique, as described in paragraphs [0030] to [0032] of that publication, the driver's face image is extracted based on the difference between the luminance of the driver's face and the luminance of the background behind the driver in the image.
- a mask having a mask pattern made up of line segments (a wire frame) that delimit facial parts such as the eyes, mouth, and nose is formed on the face image, for example as shown in FIG. 4A.
- this mask is represented by a figure (face patch drawing) in which triangles (triangular patches) connecting feature points of the face image are combined.
- the feature points in the face image are points on the face, such as the corners of the eyes or of the mouth, that have features clearly different from their surroundings in the image (and that are especially useful for face discrimination). Secondary points obtained from the feature points (for example, points at specific positions between feature points) may also be included among the feature points. In any face image, the feature points can be obtained by, for example, a well-known edge detection process.
- AAM (Active Appearance Models)
- Edwards, G., Taylor, C. J., and Cootes, T. F.: Interpreting Face Images using Active Appearance Models, in IEEE Conf. on Automatic Face and Gesture Recognition 1998, pp. 300-305, Japan (1998).
- AAM is a technique mainly used for facial expression analysis, tracking, face recognition, and the like; it is a known two-dimensional face image model that takes the arrangement of a set of feature points as a basic shape and expresses changes in appearance relative to that basic shape. The AAM method is briefly described below.
- each feature point is changed based on the principal component vector calculated by the principal component analysis.
- the luminance information of the registered face is varied based on (3), and the processing of (3) is repeated so that the luminance values of the registered luminance information (registered luminance values) match the input image of the current frame.
- the position where the luminance values are closest is taken as the model fitting position (that is, the position where the mask is estimated to fit the face image in the current frame).
- the setting of the mask valid flag is determined by threshold processing of the difference between the registered luminance value at the time of fitting and the luminance value of the input image, for example. That is, the mask valid flag is a flag indicating whether or not the mask has been accurately fitted to the face image in the current frame.
- the mouth and nose regions (gray portions in the figure) are delimited in order to determine the region where the cigarette the driver is smoking may exist.
- the mouth and nose region is a region connecting feature points indicating the outer periphery of the mouth and nose.
- a smoking area where smoking tobacco is expected to exist is set so as to include the mouth area and the nose area.
- a rectangular smoking area as shown in FIG. 5(a) is set. That is, the upper end of the nose region is the upper end of the smoking area, a position a predetermined value (a predetermined number of pixels) below the lower end of the mouth region is the lower end of the smoking area, the left end of the mouth region is the left end of the smoking area, and the right end of the mouth region is the right end of the smoking area.
- a rectangular smoking area is thereby set.
- the smoking area is enlarged or reduced as appropriate so that its size remains constant as the size of the face image changes.
- this smoking area is an area that, based on data obtained through experiments and the like, has been judged to have a high probability of containing a cigarette being smoked, and it can be enlarged or reduced as appropriate at each of its upper, lower, left, and right edges.
- a smoking area can also be set as appropriate in other shapes, such as a polygonal or circular region centered on the center of the mouth (for example, its area centroid).
- the smoking area is divided into a large number of small areas (array elements) represented by a matrix of M rows by N columns, and the luminance value in each area is stored.
- the average value of the luminance values of each pixel in the small area is stored.
- the smoking area can thereby be expressed as a luminance distribution as shown in FIG. 6. Note that the luminance distribution may also be created by obtaining the luminance value for each pixel.
- data of array elements in a plurality of frames is stored here.
- the luminance distribution of FIG. 6 obtained for each frame is added for each array element, and the necessary number of frames of data is accumulated.
- for example, when data for 30 frames has been accumulated, a sufficient amount of data is considered to have been obtained.
- the difference between the registered luminance distribution and the luminance distribution of the current frame is obtained. Accordingly, when a cigarette is held in the mouth, the image differs in the cigarette portion, so an image (difference image) as shown in FIG. 5C is obtained, for example. Therefore, it can be determined from this difference image whether or not a cigarette is being held. That is, the smoking state can be estimated.
- the difference in luminance between the leading end portion of the tobacco (one end portion in the axial direction of the tobacco) and the other portion (base end portion) is obtained.
- a difference in average luminance between the front end portion and the base end portion is obtained, and when the difference in luminance is equal to or greater than a predetermined value, it can be determined that the front end of the cigarette is lit.
- the brightness of the tip of the cigarette is greatly increased compared to the brightness of the other parts, so the presence or absence of smoking can be determined from the difference in brightness.
- the presence or absence of smoking may be determined simply from the brightness of the tip.
- <Luminance distribution registration routine> This routine registers the luminance distribution of the smoking area, as shown in FIG. 6, based on face images captured in advance while the driver is not smoking.
- step (S) 100 a face image is acquired as described above based on the image photographed by the camera 9.
- step 110 feature points are obtained from the face image by the AAM technique, and a mask as shown in FIG. 4A is produced.
- step 120 it is determined whether or not the mask valid flag is set (is set). If an affirmative determination is made here, the process proceeds to step 130. If a negative determination is made, the process returns to step 100, and the same processing is repeated.
- step 130 since the mask is suitably formed, a smoking area including the mouth and nose in the mask is set, and a luminance distribution of the smoking area as shown in FIG. 6 is created.
- the produced luminance distribution is accumulated. That is, the luminance distribution of the frame is added for each array element.
- step 150 it is determined whether or not the necessary data (for example, 30 frames) has been accumulated. If an affirmative determination is made here, the process proceeds to step 160, whereas if a negative determination is made, the process returns to step 100 and the same processing is repeated.
- step 160 since the necessary amount of data has been collected, it is registered as a luminance distribution in the smoking area including the mouth and nose at normal times (when not smoking).
- step 170 the process proceeds to a smoking detection routine described later, and the present process is temporarily terminated.
- <Smoking detection routine> This routine shows the process of detecting the smoking state based on the photographed face image of the driver.
- when actually detecting the smoking state of the driver, as shown in step 200 of FIG. 9, first, the driver's face is photographed and the face image is acquired.
- step 210 feature points are obtained from the face image by the AAM technique, and a mask as shown in FIG. 4A is produced.
- step 220 it is determined whether or not the mask valid flag is set. If an affirmative determination is made here, the process proceeds to step 230. On the other hand, if a negative determination is made, the process returns to step 200 and the same processing is repeated.
- step 230 a smoking area including the mouth and nose in the mask is set, and a luminance distribution of the smoking area is created.
- step 240 the difference between the luminance distribution in the current frame thus created and the registered luminance distribution created in the luminance distribution production routine is calculated.
- the subsequent step 250 it is determined whether or not a cigarette shape as shown in FIG. 5C could be detected from the difference region. For example, it is determined whether or not a long image (a long rectangular image) having a predetermined length and width corresponding to an actual cigarette has been extracted.
- if an affirmative determination is made, the process proceeds to step 260 as there is a possibility of a smoking state; if a negative determination is made, the process returns to step 200 as there is no smoking state, and the same processing is repeated.
- step 260 since there is a high possibility of being in a smoking state, the tip of the cigarette (which is likely to be on fire) is detected for confirmation. That is, the brightness of an area within a predetermined range from the tip of the cigarette (for example, the average value of the luminance of pixels within the range) is detected.
- step 270 it is determined whether or not the tip of the cigarette is glowing strongly. For example, when the luminance of the tip of the cigarette is equal to or greater than a predetermined judgment value, or when the difference in luminance (average value) between the tip of the cigarette and the portion other than the tip is equal to or greater than a predetermined value, the tip of the cigarette is determined to be glowing strongly.
- if an affirmative determination is made, the process proceeds to step 280 as a smoking state; if a negative determination is made, the process returns to step 200 for rechecking, and the same processing is repeated.
- step 280 it is determined that the person is in a smoking state, and for example, a smoking confirmation flag is set.
- step 290 since smoking is in progress, the air purifier 19 is activated to purify the air in the vehicle.
- step 300 the speaker 15 is driven to output an alarm such as “Let's be careful about health”.
- step 310 in order to release smoke outside the vehicle, the power window 17 is driven to control the window to be opened a little.
- the air conditioner 13 may be controlled to improve air circulation and promote air purification.
- a human face image can be taken using a device such as the camera 9 used in the normal driver monitor system 1 and cigarettes can be detected from the face image.
- therefore, a special device such as an infrared camera is not necessary, and the apparatus can be simplified and the cost reduced.
- a smoking area is set, for example near the mouth or nose, where a cigarette being smoked is expected to exist, and if a cigarette can be detected in that area, a smoking state can be inferred, so the determination accuracy is high.
- when an image of a person smoking is compared with an image of the person not smoking, the smoking image should contain an image of the cigarette. Therefore, in the present embodiment, an image taken while not smoking is obtained in advance, and the image of the cigarette can be extracted (if the person is smoking) by comparing it with the image on which the smoking determination is made. The cigarette can thereby be detected with high accuracy.
- when the cigarette is lit, the tip of the cigarette glows brightly. Therefore, in this embodiment, the luminance of the tip of the cigarette is detected, and the smoking state can be determined more accurately based on that luminance.
- the smoking area setting unit (means) corresponds to step 230
- the cigarette detection unit (means) corresponds to step 240
- the smoking estimation unit (means) corresponds to steps 250 to 270, respectively.
- the present invention is not limited to the above-described embodiment, and can be implemented in various modes without departing from the gist of the present invention.
- a computer program that performs each process as shown in FIGS. 8 and 9 of the above-described smoking detection apparatus is also within the scope of the present invention. That is, the function of the smoking detection device described above can be realized by processing executed by a computer program. This program can be executed by being recorded on a computer-readable recording medium 8 and loaded into the control unit 7 (computer) and started as necessary.
- a smoking detection device that detects a smoking state of a person based on image information from an imaging unit that photographs the person.
- a smoking detection device including a smoking area setting unit that sets, based on a face image obtained by photographing the person's face, a smoking area where a cigarette being smoked is expected to exist, a cigarette detection unit that detects the cigarette from an image in the smoking area, and a smoking estimation unit that estimates that the person is in the smoking state when the cigarette is present in the smoking area.
- in this smoking detection device, a smoking area where a cigarette being smoked is expected to exist is set based on a face image obtained by photographing the person's face, the cigarette is detected from the image in that smoking area, and when a cigarette is present in the smoking area, the person is estimated to be in a smoking state.
- a human face image can be taken using a device such as a camera used in a normal driver monitor, and cigarettes can be detected from the face image. Therefore, a special device such as an infrared camera is not necessary, and the device can be simplified and the cost can be reduced.
- a smoking area is set, for example near the mouth or nose, where a cigarette being smoked is expected to exist, and if a cigarette can be detected in that area, a smoking state can be inferred; there is therefore an advantage that the determination accuracy is high.
- the smoking area includes at least one of a mouth area and a nose area in an area smaller than the entire area of the face image.
- This embodiment illustrates a preferred smoking area.
- during smoking, the cigarette is present around the mouth and nose. Therefore, by setting the smoking area around the mouth and nose, where a cigarette being smoked is highly likely to be present, a highly accurate smoking determination can be made.
- the cigarette detection unit detects the cigarette by comparing the image in the smoking area with a preset basic image that does not include a cigarette and extracting the image of the cigarette.
- when an image of a person smoking is compared with an image of the person not smoking, the smoking image should contain an image of the cigarette. Therefore, by obtaining a basic image taken while not smoking in advance and comparing the basic image with the image on which the smoking determination is performed, the image of the cigarette can easily be extracted (if the person is smoking). The cigarette can thereby be detected.
- the smoking estimation unit obtains the luminance at the tip of the cigarette based on the extracted image of the cigarette, and determines the smoking state based on the luminance of the tip.
- when the cigarette is lit, the tip of the cigarette glows brightly (for example, compared with portions other than the tip). Therefore, the smoking state can be determined accurately based on this luminance.
- the smoking estimation unit determines the smoking state when the luminance at the tip of the cigarette is equal to or higher than a predetermined value.
- since the smoking state is determined when the luminance at the tip of the cigarette is equal to or higher than a predetermined value, a precise smoking determination can be made.
- the smoking estimation unit determines the smoking state when the luminance at the tip of the cigarette is higher than the luminance of portions other than the tip by a predetermined value or more.
- since the smoking state is determined when the luminance at the tip of the cigarette is higher than the luminance of the other portions, a highly accurate smoking determination can be made.
- the imaging by the imaging unit is performed by irradiating near infrared light.
- when photographing with near-infrared light, white portions appear brighter (compared with photographing with visible light). Since cigarettes are usually white, cigarettes can be detected with high accuracy by photographing with near-infrared light.
- Another aspect of the embodiment is a program for causing a computer to function as the smoking area setting unit, the cigarette detection unit, and the smoking estimation unit.
- the smoking area setting unit, the cigarette detection unit, and the smoking estimation unit can be realized by processing executed by a computer program.
- Such a program can be used by, for example, recording it on a computer-readable recording medium such as FD, MO, DVD-ROM, CD-ROM, hard disk, etc., and loading and starting the computer as necessary.
- the program may also be recorded in a ROM or backup RAM as a computer-readable recording medium, and the ROM or backup RAM may be incorporated into a computer and used.
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Chemical & Material Sciences (AREA)
- Combustion & Propulsion (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Signal Processing (AREA)
- Thermal Sciences (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
This smoking detection device detects a smoking state of a person on the basis of image information from an imaging unit which photographs the person. This smoking detection device is provided with: a smoking region setting means which, on the basis of a face image taken of the person's face, sets a smoking region where a cigarette being smoked is predicted to be; a cigarette detection means which detects cigarettes from images in said smoking region; and a smoking inference means which infers a smoking state if there is a cigarette in the smoking region.
Description
The present invention relates to a smoking detection apparatus, method, and program that can, for example, capture an image (face image) of the face of the driver of a car and grasp the smoking state of the driver from that face image.
Conventionally, driver monitor systems are known that, in order to detect driver states undesirable for driving, such as looking aside or dozing, or to perform personal authentication of the driver, irradiate near-infrared light from a projector toward the driver's face, photograph the driver's face with a camera to obtain a face image, and analyze the face image to detect the driver's state or perform personal authentication.
In recent driver monitor systems, techniques have also been proposed in which the driver's smoking state is detected by sensing the hot burning portion at the tip of the cigarette with an infrared camera (temperature sensor), and air-conditioning control such as activating the air conditioner is performed, or littering of cigarettes is detected (see Patent Documents 1 to 3 below).
However, in the above prior art, an infrared camera is required in addition to the camera of an ordinary driver monitor system, because the smoking state is detected with an infrared camera. This makes the apparatus more complicated and increases cost.
On the other hand, if no infrared camera is used in order to simplify the apparatus, there is the problem that the accuracy of detecting the smoking state is low.
One embodiment provides a smoking detection apparatus, method, and program capable of detecting a driver's smoking state accurately with a simple configuration.
As one aspect of the embodiment, there is provided a smoking detection device that detects a smoking state of a person based on image information from imaging means that photographs the person. The smoking detection device includes smoking area setting means for setting, based on a face image obtained by photographing the person's face, a smoking area where a cigarette being smoked is expected to exist; cigarette detection means for detecting the cigarette from an image of the smoking area; and smoking estimation means for estimating that the person is in the smoking state when the cigarette is present in the smoking area.
Hereinafter, embodiments of the smoking detection apparatus of the present invention will be described with reference to the drawings.
a) First, the system configuration of a vehicle equipped with the smoking detection device of this embodiment will be described with reference to FIGS. 1 to 3.
As shown in FIG. 1, the vehicle (automobile) is equipped with a driver monitor system 1 that captures an image of the driver's face and analyzes that face image in order to detect states such as the driver dozing or looking aside, or to perform personal authentication. In this embodiment, this driver monitor system 1 is used as a smoking detection device.
The driver monitor system 1 includes, for example, an imaging unit 5 disposed in the vicinity of a meter (not shown) on the dashboard 3, and a control unit 7 that controls the operation of the imaging unit 5 and the like.
As shown in FIG. 2, the imaging unit 5 includes a camera 9 that images the driver's face, and a pair of imaging projectors 11a and 11b (collectively referred to as 11) for irradiating the driver's face and the like with light.
The camera 9 is, for example, a CCD camera capable of capturing images with near-infrared light (that is, having a certain sensitivity to near-infrared light), and is arranged so that it can image the driver's face obliquely from the front and below, for example.
The projector 11 consists of, for example, near-infrared LEDs, and is arranged substantially coaxially with the camera 9 so as to irradiate near-infrared light toward the driver's face. Its irradiation area is substantially conical, centered on the driver's face.
b) Next, the electrical configuration of the driver monitor system 1 will be described with reference to FIG. 3.
As shown in FIG. 3, the driver monitor system 1 includes the imaging unit 5 having the camera 9 and the projector 11 described above, and the control unit 7 that controls the imaging unit 5 and the like. An air conditioner 13, a speaker 15, a power window 17, an air purifier 19, and the like are connected to the control unit 7 as devices controlled by control signals from the control unit 7.
Of these, the control unit 7 is an electronic control device including a known microcomputer. The control unit 7 performs image processing and the like based on the image signal from the camera 9, and, as described later, detects the driver's smoking state and performs various controls according to the detection result.
c) Next, the principle of the processing performed by the driver monitor system 1 as a smoking detection device will be described with reference to FIGS. 4 to 7.
In the smoking detection apparatus of this embodiment, first, the driver's face is photographed by the camera, and a face image showing the driver's face is acquired from the photographed image.
As a method for acquiring the face image, a well-known technique such as that described in Japanese Patent Application Laid-Open No. 2008-276328 can be employed. In this technique, as described in paragraphs [0030] to [0032] of that publication, the image of the driver's face (face image) is extracted based on the difference between the luminance of the driver's face and the luminance of the background behind the driver in the image.
Next, as shown in FIG. 4(a), a mask having a mask pattern made up of line segments (a wire frame) that delimit facial parts such as the eyes, mouth, and nose is formed on the face image obtained as described above.
This mask is represented by a figure (face patch drawing) composed of triangles (triangular patches) connecting feature points of the face image.
The feature points in the face image are points on the face, such as the corners of the eyes or of the mouth, that have features clearly different from their surroundings in the image (and that are especially useful for face discrimination). Secondary points obtained from the feature points (for example, points at specific positions between feature points) may also be included among the feature points. In any face image, the feature points can be obtained by, for example, a well-known edge detection process.
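As a rough illustration of the edge-detection route to feature-point candidates mentioned above, the following Python sketch thresholds the gradient magnitude of a grayscale face image. The use of np.gradient and the threshold are assumptions for illustration, not part of the described system.

```python
import numpy as np

def feature_point_candidates(gray: np.ndarray, thresh: float) -> np.ndarray:
    """Return (row, col) coordinates of strong edges in a grayscale face image.

    A crude stand-in for the 'well-known edge detection process' mentioned in
    the text: image gradients via np.gradient, thresholded on their magnitude.
    """
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return np.argwhere(magnitude > thresh)
```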
As a technique for forming such a mask pattern, the well-known AAM (Active Appearance Models) technique can be used, for example (see, for example, Edwards, G., Taylor, C. J., and Cootes, T. F.: Interpreting Face Images using Active Appearance Models, in IEEE Conf. on Automatic Face and Gesture Recognition 1998, pp. 300-305, Japan (1998)).
AAM is a technique mainly used for facial expression analysis, tracking, face recognition, and the like; it is a known two-dimensional face image model that takes the arrangement of a set of feature points as a basic shape and expresses changes in appearance relative to that basic shape. The AAM procedure is briefly described below.
(1) For a learning face image, its feature points (corners of the eyes, tip of the nose, corners of the mouth, and so on) are set and a model is created. At this point it is known which feature point indicates which facial part.
(2) Next, the luminance information for learning is registered.
(3) Next, each feature point is varied based on the principal component vectors calculated by principal component analysis.
(4) Next, the luminance information of the registered face is varied in accordance with (3), and the processing of (3) is repeated so that the luminance values of the registered luminance information (registered luminance values) match the input image of the current frame.
(5) Next, the position where the luminance values are closest is taken as the model fitting position (that is, the position where the mask is estimated to fit the face image of the current frame).
Since it is known in advance from (1) which feature point corresponds to which part, the mask shown in FIG. 4(a) can be formed after the processing of (5) by drawing lines for each part.
(6) Next, when the mask has been formed suitably as described above, a mask valid flag indicating this is set.
Specifically, whether to set the mask valid flag is determined, for example, by threshold processing of the difference between the registered luminance values at the time of fitting and the luminance values of the input image. In other words, the mask valid flag is a flag indicating whether or not the mask has been fitted accurately to the face image of the current frame.
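The fitting loop of steps (3) to (6) could be sketched, in heavily simplified form, as below. The helper warp_luminance and the coefficient grid are hypothetical stand-ins for the AAM machinery; this is not the patent's implementation, only an outline of the search-and-threshold idea.

```python
import numpy as np

def fit_mask(base_shape, principal_components, registered_luminance,
             warp_luminance, coeff_grid, valid_thresh):
    """Very rough sketch of the fitting loop in steps (3)-(6).

    warp_luminance(shape) is a hypothetical helper that samples the current
    frame's luminance at the model positions given by `shape`.
    """
    best_shape, best_err = None, np.inf
    for coeffs in coeff_grid:                     # step (3): vary the feature points
        shape = base_shape + principal_components @ coeffs
        sampled = warp_luminance(shape)           # luminance under this candidate shape
        err = np.mean(np.abs(sampled - registered_luminance))  # step (4)
        if err < best_err:                        # step (5): keep the closest match
            best_shape, best_err = shape, err
    mask_valid = best_err < valid_thresh          # step (6): threshold the residual
    return best_shape, mask_valid
```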
Next, as shown in FIG. 4(b), the mouth and nose regions (gray portions in the figure) are delimited in the face image in order to determine the region where the cigarette the driver is smoking may exist. The mouth and nose regions are regions obtained by connecting the feature points indicating the outer peripheries of the mouth and the nose.
Next, a smoking area where a cigarette being smoked is expected to exist is set so as to include the mouth region and the nose region.
Specifically, for example, a rectangular smoking area as shown in FIG. 5(a) is set. That is, the upper end of the nose region is taken as the upper end of the smoking area, a position a predetermined value (a predetermined number of pixels) below the lower end of the mouth region is taken as the lower end of the smoking area, the left end of the mouth region is taken as the left end of the smoking area, and the right end of the mouth region is taken as the right end of the smoking area. A rectangular smoking area is thereby set.
Note that, since the size of the face image changes depending on the positional relationship between the camera 9 and the face, the smoking area is enlarged or reduced as appropriate so that its size remains constant as the size of the face image changes.
This smoking area is an area that, based on data obtained through experiments and the like, has been judged to have a high probability of containing a cigarette being smoked, and it can also be enlarged or reduced as appropriate at each of its upper, lower, left, and right edges. Alternatively, the smoking area can be set as appropriate in other ways, for example as a polygonal or circular region centered on the center of the mouth (for example, its area centroid).
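A minimal sketch of the rectangular smoking-area construction just described, assuming the mouth and nose feature points are available as arrays of (x, y) pixel coordinates; the margin below the mouth is only an illustrative value for the "predetermined number of pixels".

```python
import numpy as np

def smoking_area(mouth_pts: np.ndarray, nose_pts: np.ndarray,
                 margin_below_mouth: int = 20) -> tuple:
    """Rectangular smoking area (left, top, right, bottom) in pixel coordinates.

    Top    = upper end of the nose region
    Bottom = lower end of the mouth region + a predetermined margin
    Left   = left end of the mouth region
    Right  = right end of the mouth region
    (margin_below_mouth = 20 is an illustrative value, not from the patent.)
    """
    top = int(nose_pts[:, 1].min())
    bottom = int(mouth_pts[:, 1].max()) + margin_below_mouth
    left = int(mouth_pts[:, 0].min())
    right = int(mouth_pts[:, 0].max())
    return left, top, right, bottom
```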
Next, as shown in FIG. 6 for example, the smoking area is divided into a large number of small areas (array elements) represented by a matrix of M rows by N columns, and the luminance value in each small area is stored; for example, the average of the luminance values of the pixels in that small area is stored. The smoking area can thereby be expressed as a luminance distribution as shown in FIG. 6. Note that the luminance distribution may also be created by obtaining the luminance value for each individual pixel.
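The M x N luminance distribution could be computed as below, assuming the smoking area has already been cropped to a grayscale array whose height and width are multiples of M and N.

```python
import numpy as np

def luminance_distribution(region: np.ndarray, m: int, n: int) -> np.ndarray:
    """Average luminance over an M x N grid of small areas (array elements).

    `region` is the cropped smoking-area image (H x W, grayscale); H and W are
    assumed here to be multiples of m and n for simplicity.
    """
    h, w = region.shape
    blocks = region.astype(float).reshape(m, h // m, n, w // n)
    return blocks.mean(axis=(1, 3))   # shape (m, n): one mean value per element
```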
Since the reliability of the data would be low with an image from a single frame alone, data of the array elements over a plurality of frames is accumulated here.
For example, the luminance distribution of FIG. 6 obtained for each frame is added element by element, and the necessary number of frames is accumulated; for example, when data for 30 frames has been accumulated, a sufficient amount of data is considered to have been obtained.
Then, the image of the smoking area when no cigarette is being smoked, as shown in FIG. 5(a), is stored as a registered luminance distribution (basic image), and this basic image is compared with the image of the smoking area in the current frame in which smoking is actually to be detected (see FIG. 5(b)).
Specifically, the difference between the registered luminance distribution and the luminance distribution of the current frame is obtained. When a cigarette is held in the mouth, the image differs in the cigarette portion, so an image (difference image) such as that shown in FIG. 5(c) is obtained.
Therefore, it can be determined from this difference image whether or not a cigarette is being held in the mouth. That is, the smoking state can be estimated.
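A sketch of the difference step together with a crude stand-in for the cigarette-shape check of FIG. 5(c); the thresholds and the bounding-box test are assumptions, since the text only requires a long, narrow region of changed luminance.

```python
import numpy as np

def cigarette_candidate(current: np.ndarray, registered: np.ndarray,
                        diff_thresh: float, min_len: int, max_width: int):
    """Difference image between current and registered luminance distributions,
    plus a crude 'long rectangle' test on the changed elements."""
    diff = np.abs(current - registered)
    changed = diff > diff_thresh                 # elements that differ strongly
    if not changed.any():
        return changed, False
    rows, cols = np.nonzero(changed)
    length = max(rows.ptp(), cols.ptp()) + 1     # longer side of the bounding box
    width = min(rows.ptp(), cols.ptp()) + 1      # shorter side of the bounding box
    looks_like_cigarette = length >= min_len and width <= max_width
    return changed, looks_like_cigarette
```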
Further, as shown in FIG. 7, the difference in luminance between the tip portion of the cigarette (one end portion in the axial direction of the cigarette) and the remaining portion (base end portion) is obtained. For example, the difference between the average luminances of the tip portion and the base end portion is obtained, and when this difference is equal to or greater than a predetermined value, it can be confirmed that the tip of the cigarette is lit.
That is, when the cigarette is lit, the luminance of its tip portion rises greatly compared with the luminance of the other portions, so the presence or absence of smoking can be determined from this difference in luminance.
Alternatively, since the luminance is very high when the cigarette is lit, the presence or absence of smoking may be determined simply from the magnitude of the luminance at the tip portion.
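A sketch of the tip-brightness confirmation, assuming the extracted cigarette pixels are ordered along the cigarette axis with the assumed tip first; the tip fraction and thresholds are illustrative values, not taken from the patent.

```python
import numpy as np

def is_lit(cigarette_pixels: np.ndarray, tip_fraction: float = 0.2,
           diff_thresh: float = 40.0, abs_thresh: float = 200.0) -> bool:
    """cigarette_pixels: 1-D luminance values ordered along the cigarette axis,
    with index 0 at the end farther from the mouth (the assumed tip)."""
    n_tip = max(1, int(len(cigarette_pixels) * tip_fraction))
    tip = cigarette_pixels[:n_tip]
    base = cigarette_pixels[n_tip:]
    tip_mean = float(tip.mean())
    base_mean = float(base.mean()) if base.size else 0.0
    # Lit if the tip is much brighter than the base, or simply very bright.
    return (tip_mean - base_mean) >= diff_thresh or tip_mean >= abs_thresh
```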
For the detection of the tip portion of the cigarette, it suffices to delimit one of the ends in the axial direction of the cigarette at a predetermined distance from the tip. This distance may be determined in advance, by experiments or the like, as the extent of the region whose luminance rises due to the flame.
Here, either axial end may be taken as the tip portion and smoking can still be determined, but it is desirable to take, for example, the end farther from the center of the mouth as the tip portion.
d) Next, the processing methods carried out by the smoking detection device will be described with reference to FIGS. 8 and 9.
<Luminance distribution registration routine>
This routine registers the luminance distribution of the smoking area, as shown in FIG. 6, based on face images captured in advance while the driver is not smoking.
As shown in FIG. 8, first, in step (S) 100, a face image is acquired as described above based on the image captured by the camera 9.
In the following step 110, feature points are obtained from the face image by the AAM technique, and a mask as shown in FIG. 4(a) is produced.
In the following step 120, it is determined whether or not the mask valid flag is set. If an affirmative determination is made here, the process proceeds to step 130; if a negative determination is made, the process returns to step 100 and the same processing is repeated.
In step 130, since the mask has been formed suitably, a smoking area including the mouth and nose in the mask is set, and a luminance distribution of the smoking area as shown in FIG. 6 is created.
In the following step 140, the created luminance distribution is accumulated; that is, the luminance distribution of the frame is added for each array element.
In the following step 150, it is determined whether or not the necessary amount of data (for example, 30 frames) has been accumulated. If an affirmative determination is made here, the process proceeds to step 160; if a negative determination is made, the process returns to step 100 and the same processing is repeated.
In step 160, since the necessary amount of data has been collected, it is registered as the luminance distribution of the smoking area including the mouth and nose at normal times (when not smoking).
In the following step 170, the process moves to the smoking detection routine described later, and the present processing is temporarily ended.
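The registration routine (steps 100 to 170) could be outlined as follows; grab_face_image, fit_mask, and make_distribution are hypothetical stand-ins for the camera capture, AAM fitting, and grid-averaging steps described above.

```python
import numpy as np

def register_luminance_distribution(grab_face_image, fit_mask, make_distribution,
                                    frames_needed: int = 30) -> np.ndarray:
    """Steps 100-170: accumulate smoking-area distributions over non-smoking frames.

    grab_face_image(), fit_mask(), make_distribution() stand in for the camera,
    the AAM mask fitting, and the M x N grid averaging described above.
    """
    total, count = None, 0
    while count < frames_needed:                 # S150: enough data yet?
        face = grab_face_image()                 # S100
        mask, mask_valid = fit_mask(face)        # S110 / S120
        if not mask_valid:
            continue
        dist = make_distribution(face, mask)     # S130
        total = dist if total is None else total + dist   # S140: accumulate
        count += 1
    return total / count                         # S160: registered distribution
```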
<Smoking detection routine>
This routine detects the smoking state based on the photographed face image of the driver.
When the smoking state of the driver is actually detected, as shown in step 200 of FIG. 9, first, the driver's face is photographed and the face image is acquired.
In the following step 210, feature points are obtained from the face image by the AAM technique, and a mask as shown in FIG. 4(a) is produced.
In the following step 220, it is determined whether or not the mask valid flag is set. If an affirmative determination is made here, the process proceeds to step 230; if a negative determination is made, the process returns to step 200 and the same processing is repeated.
In step 230, a smoking area including the mouth and nose in the mask is set, and a luminance distribution of that smoking area is created.
In the following step 240, the difference between the luminance distribution created for the current frame and the registered luminance distribution created in the luminance distribution registration routine is calculated.
Here, in order to detect the cigarette reliably, a region whose luminance difference is at or above a certain level is extracted as the cigarette image.
In the following step 250, it is determined whether or not a cigarette shape as shown in FIG. 5(c) could be detected from the difference region; for example, whether or not a long image (a long rectangular image) having a predetermined length and width corresponding to an actual cigarette has been extracted.
If an affirmative determination is made here, the process proceeds to step 260 as there is a possibility of a smoking state; if a negative determination is made, the process returns to step 200 as there is no smoking state, and the same processing is repeated.
In step 260, since there is a high possibility of a smoking state, the tip portion of the cigarette (which is likely to be lit) is detected for confirmation. That is, the brightness of a region within a predetermined range from the tip of the cigarette (for example, the average luminance of the pixels within that range) is detected.
In the following step 270, it is determined whether or not the tip of the cigarette is glowing strongly. For example, when the luminance of the tip portion of the cigarette is equal to or greater than a predetermined judgment value, or when the difference in luminance (average value) between the tip portion and the portion other than the tip is equal to or greater than a predetermined value, the tip of the cigarette is determined to be glowing strongly.
If an affirmative determination is made here, the process proceeds to step 280 as a smoking state; if a negative determination is made, the process returns to step 200 for rechecking, and the same processing is repeated.
In step 280, it is determined that the driver is in a smoking state, and, for example, a smoking confirmation flag is set.
In the following steps 290, 300, and 310, the respective processes based on the determination of a smoking state are performed, and thereafter the processing described above is repeated.
Note that it suffices to execute at least one of the processes of steps 290, 300, and 310, and their order can be selected as appropriate.
Specifically, in step 290, since smoking is in progress, the air purifier 19 is activated to purify the air in the vehicle.
In step 300, the speaker 15 is driven to output a warning such as "Please be mindful of your health".
In step 310, in order to release the smoke outside the vehicle, the power window 17 is driven to open the window slightly.
In addition to these, for example, the air conditioner 13 may be controlled to improve air circulation and promote purification of the air.
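One pass of the detection routine (steps 200 to 310) could be outlined as below, reusing the helpers sketched earlier; on_smoking stands in for the responses of steps 290 to 310 (air purifier, warning, power window), which the text says may be executed in any combination.

```python
def detect_smoking_once(grab_face_image, fit_mask, make_distribution, registered,
                        find_cigarette, tip_is_lit, on_smoking) -> bool:
    """One pass of steps 200-310; returns True when a smoking state is confirmed.

    on_smoking() is a placeholder for the responses in steps 290-310
    (air purifier on, spoken warning, opening the power window slightly).
    """
    face = grab_face_image()                              # S200
    mask, mask_valid = fit_mask(face)                     # S210 / S220
    if not mask_valid:
        return False
    current = make_distribution(face, mask)               # S230
    cigarette, found = find_cigarette(current, registered)  # S240 / S250
    if not found:
        return False                                      # not a smoking state
    if not tip_is_lit(face, cigarette):                   # S260 / S270
        return False                                      # recheck on the next frame
    on_smoking()                                          # S280-S310
    return True
```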
e) As described above, in this embodiment a person's face image can be captured using equipment such as the camera 9 of an ordinary driver monitor system 1, and a cigarette can be detected from that face image. A special device such as an infrared camera is therefore unnecessary, so the apparatus can be simplified and cost can be reduced.
Also, a smoking area is set, for example near the mouth and nose, where a cigarette being smoked is normally expected to exist, and if a cigarette can be detected within that area, a smoking state can be inferred; the determination accuracy is therefore high.
更に、喫煙状態の場合には、タバコは口や鼻の周辺に存在すると推定できるが、本実施形態では、喫煙中のタバコが存在する可能性が高い口や鼻の周辺の喫煙領域を設定することにより、精度の高い喫煙判定を行うことができる。
Furthermore, in the case of a smoking state, it can be estimated that cigarettes exist around the mouth and nose, but in this embodiment, a smoking area around the mouth and nose where there is a high possibility of smoking cigarettes. Thus, it is possible to make a highly accurate smoking determination.
When an image of a person smoking is compared with an image of the same person not smoking, the smoking image should contain an image of the cigarette. In the present embodiment, therefore, an image taken while the person is not smoking is obtained in advance, and by comparing it with the image to be judged, the cigarette image can be extracted (when the person is smoking). This allows the cigarette to be detected with high accuracy.
In particular, in the present embodiment, when imaging is performed under near-infrared illumination, an image in which the white portion of the cigarette appears bright is obtained, so the cigarette can be extracted easily.
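A minimal sketch of this comparison, assuming OpenCV, 8-bit grayscale images of the smoking area already aligned to the same face position, and illustrative threshold and minimum-area values:

```python
import cv2
import numpy as np

def extract_cigarette(smoking_roi, baseline_roi, diff_thresh=40, min_area=30):
    """Sketch of the comparison described above: subtract a previously stored
    no-cigarette image of the smoking area from the current frame; under
    near-infrared illumination the white cigarette shows up as a bright
    residual. Threshold and minimum-area values are assumptions."""
    diff = cv2.subtract(smoking_roi, baseline_roi)          # bright where something new appears
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # discard small specks that are unlikely to be a cigarette
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    keep = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 255
    return keep  # binary mask of candidate cigarette pixels
```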
Furthermore, when a cigarette is lit, its tip glows brightly. In the present embodiment, the luminance of the cigarette tip is therefore detected, and the smoking state can be determined even more accurately on the basis of that luminance.
The smoking area setting unit (means) corresponds to step 230 and the like, the cigarette detection unit (means) to step 240 and the like, and the smoking estimation unit (means) to steps 250 to 270 and the like.
The present invention is not limited to the embodiment described above and can be implemented in various modes without departing from its gist.
For example, a computer program that performs the processes shown in FIGS. 8 and 9 for the smoking detection device described above is also within the scope of the present invention. In other words, the functions of the smoking detection device described above can be realized by processing executed by a computer program. This program can be executed by recording it on the computer-readable recording medium 8, loading it into the control unit 7 (computer) as necessary, and starting it.
The gist of the embodiment described above is summarized below.
(1) As one aspect of the embodiment, a smoking detection device is provided that detects a person's smoking state on the basis of image information from an imaging unit that photographs the person. The smoking detection device includes a smoking area setting unit that sets, based on a face image of the person's face, a smoking area where a cigarette being smoked is expected to be present; a cigarette detection unit that detects the cigarette from the image in the smoking area; and a smoking estimation unit that estimates that the person is in a smoking state when the cigarette is present in the smoking area.
In this smoking detection device, a smoking area where a cigarette being smoked is expected to be present is set on the basis of a face image of the person, the cigarette is detected from the image in that smoking area, and when a cigarette is present in the smoking area, the person is estimated to be in a smoking state.
That is, in this smoking detection device, a person's face image is captured using a device such as the camera used in an ordinary driver monitor, and the cigarette can be detected from that face image. A special device such as an infrared camera is therefore unnecessary, so the device can be simplified and its cost reduced.
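Purely as an illustration of how the three units summarized in (1) could be composed, the sketch below wires them together as interchangeable callables; the class name, attribute names, and signatures are assumptions, not part of the embodiment.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SmokingDetector:
    """Illustrative composition of the three units described in (1)."""
    set_smoking_area: Callable   # face image -> smoking-area image, or None if no face
    detect_cigarette: Callable   # smoking-area image -> cigarette mask, or None
    estimate_smoking: Callable   # cigarette mask -> bool (smoking state)

    def __call__(self, face_image):
        area = self.set_smoking_area(face_image)
        if area is None:                     # no usable face/area in this frame
            return False
        cigarette = self.detect_cigarette(area)
        return cigarette is not None and self.estimate_smoking(cigarette)
```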
In addition, a smoking area is set in the vicinity of, for example, the mouth or nose, where a cigarette being smoked would normally be expected to be present; if a cigarette can be detected within that area, a smoking state can be inferred, which has the advantage of high determination accuracy.
Here, when a cigarette is present in the smoking area, the person is estimated to be in a smoking state on the grounds that the cigarette is very likely to be lit. As described later, a still more accurate determination can be made by additionally judging whether the cigarette is actually lit.
(2) In another aspect of the embodiment, the smoking area includes at least one of a mouth area and a nose area within an area smaller than the entire area of the face image.
This embodiment illustrates a preferred smoking area. In a smoking state, the cigarette can usually be assumed to be near the mouth or nose. Therefore, by setting the smoking area around the mouth and nose, where a cigarette being smoked is most likely to be present, a highly accurate smoking determination can be made.
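One possible way to realize such a smoking area is sketched below: a rectangle centered between the mouth and nose landmarks and deliberately smaller than the face image. The margin factor, the use of both landmarks, and the function name are assumptions for illustration.

```python
def smoking_area_from_landmarks(mouth_xy, nose_xy, face_w, face_h, margin=0.6):
    """Set a smoking area around the mouth and nose, smaller than the face image.

    mouth_xy, nose_xy : (x, y) landmark coordinates in the face image
    face_w, face_h    : width and height of the face image
    Returns (x0, y0, x1, y1), clipped to the image bounds.
    """
    cx = (mouth_xy[0] + nose_xy[0]) / 2
    cy = (mouth_xy[1] + nose_xy[1]) / 2
    half_w = margin * face_w / 2
    half_h = margin * face_h / 2
    x0 = max(0, int(cx - half_w))
    y0 = max(0, int(cy - half_h))
    x1 = min(face_w, int(cx + half_w))
    y1 = min(face_h, int(cy + half_h))
    return x0, y0, x1, y1
```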
(3) In another aspect of the embodiment, the cigarette detection unit detects the cigarette by comparing the image in the smoking area with a previously prepared basic image in which no cigarette is held, and extracting an image of the cigarette.
When an image of a person smoking is compared with an image of the person not smoking, the smoking image should contain an image of the cigarette. Therefore, by obtaining a basic image without a cigarette in advance and comparing it with the image to be judged, the cigarette image can easily be extracted (when the person is smoking), allowing the cigarette to be detected.
Since a cigarette is usually white and its color (luminance) differs from that of the face, extracting the cigarette is easy.
(4) In another aspect of the embodiment, the smoking estimation unit obtains the luminance at the tip of the cigarette on the basis of the extracted cigarette image and determines the smoking state on the basis of that luminance.
When a cigarette is lit, its tip glows brightly (for example, more brightly than portions other than the tip). The smoking state can therefore be determined accurately on the basis of the luminance of the cigarette tip.
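The description does not state how the tip is located within the extracted cigarette image; one plausible heuristic, sketched below, is to take the cigarette pixel farthest from the mouth as the tip, since the lit end points away from the face. This choice, and the function name, are assumptions.

```python
import numpy as np

def find_tip(cig_mask, mouth_xy):
    """Assumed heuristic: the cigarette pixel farthest from the mouth is the tip.

    cig_mask : boolean mask of extracted cigarette pixels
    mouth_xy : (x, y) of the mouth landmark
    Returns the (x, y) of the assumed tip, or None if no cigarette was extracted.
    """
    ys, xs = np.nonzero(cig_mask)
    if xs.size == 0:
        return None
    d2 = (xs - mouth_xy[0]) ** 2 + (ys - mouth_xy[1]) ** 2
    i = int(np.argmax(d2))
    return int(xs[i]), int(ys[i])
```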
(5) In another aspect of the embodiment, the smoking estimation unit determines that the person is in a smoking state when the luminance at the tip of the cigarette is equal to or greater than a predetermined value.
Here, the smoking state is determined when the luminance at the cigarette tip exceeds a value indicating that the cigarette is lit, so an accurate smoking determination can be made.
(6) In another aspect of the embodiment, the smoking estimation unit determines that the person is in a smoking state when the luminance at the tip of the cigarette is greater than the luminance of portions other than the tip by a predetermined value or more.
Here, the smoking state is determined when the luminance at the cigarette tip is higher than that of the other portions, so an accurate smoking determination can be made.
(7) In another aspect of the embodiment, imaging by the imaging unit is performed under near-infrared illumination.
When an object illuminated with near-infrared light is photographed with a camera of the kind used in an ordinary driver monitor (for example, a CCD camera), white portions appear brighter than when photographed under visible light. Since cigarettes are usually white, imaging with near-infrared light allows the cigarette to be detected with high accuracy.
(8) Another aspect of the embodiment is a program for causing a computer to function as the smoking area setting unit, the cigarette detection unit, and the smoking estimation unit.
That is, the smoking area setting unit, the cigarette detection unit, and the smoking estimation unit can be realized by processing executed by a computer program.
Such a program can be used by, for example, recording it on a computer-readable recording medium such as an FD, MO, DVD-ROM, CD-ROM, or hard disk, and loading it into a computer and starting it as necessary. Alternatively, the program may be recorded in a ROM or backup RAM serving as a computer-readable recording medium, and that ROM or backup RAM may be incorporated into the computer for use.
DESCRIPTION OF SYMBOLS: 1 ... Driver monitor system, 5 ... Imaging unit, 7 ... Control unit, 9, 9a, 9b ... Camera, 11 ... Floodlight
Claims (9)
- A smoking detection device that detects a smoking state of a person based on image information from an imaging unit that photographs the person, the device comprising:
smoking area setting means for setting, based on a face image obtained by photographing the person's face, a smoking area where a cigarette being smoked is expected to be present;
cigarette detection means for detecting the cigarette from an image in the smoking area; and
smoking estimation means for estimating that the person is in the smoking state when the cigarette is present in the smoking area.
- The smoking detection device according to claim 1, wherein the smoking area includes at least one of a mouth area and a nose area within an area smaller than the entire area of the face image.
- The smoking detection device according to claim 1 or 2, wherein the cigarette detection means detects the cigarette by comparing the image in the smoking area with a previously prepared basic image in which no cigarette is held and extracting an image of the cigarette.
- The smoking detection device according to claim 3, wherein the smoking estimation means obtains a luminance at a tip of the cigarette based on the extracted image of the cigarette and determines the smoking state based on the luminance of the tip.
- The smoking detection device according to claim 4, wherein the smoking estimation means determines the smoking state when the luminance at the tip of the cigarette is equal to or greater than a predetermined value.
- The smoking detection device according to claim 4, wherein the smoking estimation means determines the smoking state when the luminance at the tip of the cigarette is greater than the luminance of portions other than the tip by a predetermined value or more.
- The smoking detection device according to any one of claims 1 to 6, wherein imaging by the imaging unit is performed by irradiating near-infrared light.
- A program for causing a computer to function as the smoking area setting means, the cigarette detection means, and the smoking estimation means according to claim 1.
- A smoking detection method for detecting a smoking state of a person based on image information from an imaging unit that photographs the person, the method comprising:
setting, based on a face image obtained by photographing the person's face, a smoking area where a cigarette being smoked is expected to be present;
detecting the cigarette from an image in the smoking area; and
estimating that the person is in the smoking state when the cigarette is present in the smoking area.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-096744 | 2012-04-20 | ||
JP2012096744A JP2013225205A (en) | 2012-04-20 | 2012-04-20 | Smoking detection device and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013157466A1 (en) | 2013-10-24 |
Family
ID=49383425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/060859 WO2013157466A1 (en) | 2012-04-20 | 2013-04-10 | Smoking detection device, method and program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2013225205A (en) |
WO (1) | WO2013157466A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101924061B1 (en) * | 2016-01-07 | 2018-11-30 | 엘지전자 주식회사 | Auxiliary apparatus for vehicle and Vehicle |
CN111753602A (en) * | 2019-03-29 | 2020-10-09 | 北京市商汤科技开发有限公司 | Motion recognition method and device, electronic equipment and storage medium |
CN111457493B (en) * | 2020-03-31 | 2022-02-25 | 广东美的制冷设备有限公司 | Air purification control method and device and related equipment |
- 2012-04-20: JP JP2012096744A patent/JP2013225205A/en, active, Pending
- 2013-04-10: WO PCT/JP2013/060859 patent/WO2013157466A1/en, active, Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003080920A (en) * | 2001-09-12 | 2003-03-19 | Denso Corp | Vehicular air conditioner |
JP2008036352A (en) * | 2006-08-10 | 2008-02-21 | Omron Corp | Cigarette smoke separation apparatus, method and program |
JP2012054897A (en) * | 2010-09-03 | 2012-03-15 | Sharp Corp | Conference system, information processing apparatus, and information processing method |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598934A (en) * | 2014-12-17 | 2015-05-06 | 安徽清新互联信息科技有限公司 | Monitoring method for smoking behavior of driver |
CN104598934B (en) * | 2014-12-17 | 2018-09-18 | 安徽清新互联信息科技有限公司 | A kind of driver's cigarette smoking monitoring method |
CN104978829A (en) * | 2015-06-24 | 2015-10-14 | 国家电网公司 | Indoor smoking monitoring control method and system |
CN104978829B (en) * | 2015-06-24 | 2018-01-02 | 国家电网公司 | A kind of indoor smoke method for monitoring and controlling and system |
US11170241B2 (en) * | 2017-03-03 | 2021-11-09 | Valeo Comfort And Driving Assistance | Device for determining the attentiveness of a driver of a vehicle, on-board system comprising such a device, and associated method |
CN109872510A (en) * | 2017-12-04 | 2019-06-11 | 通用汽车环球科技运作有限责任公司 | Interior Smoke Detection and reporting system and method for Car sharing and the shared vehicle of seating |
CN108710837A (en) * | 2018-05-07 | 2018-10-26 | 广州通达汽车电气股份有限公司 | Cigarette smoking recognition methods, device, computer equipment and storage medium |
CN109334386A (en) * | 2018-10-09 | 2019-02-15 | 上海博泰悦臻网络技术服务有限公司 | In-vehicle air purification method and its system |
CN110264670A (en) * | 2019-06-24 | 2019-09-20 | 广州鹰瞰信息科技有限公司 | Based on passenger stock tired driver driving condition analytical equipment |
CN113761980A (en) * | 2020-06-04 | 2021-12-07 | 杭州海康威视系统技术有限公司 | Smoking detection method and device, electronic equipment and machine-readable storage medium |
CN113761980B (en) * | 2020-06-04 | 2024-03-01 | 杭州海康威视系统技术有限公司 | Smoking detection method, smoking detection device, electronic equipment and machine-readable storage medium |
CN113323539A (en) * | 2021-06-23 | 2021-08-31 | 曼德电子电器有限公司 | Method and device for vehicle smoke discharge, storage medium, vehicle and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
JP2013225205A (en) | 2013-10-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13778224; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 13778224; Country of ref document: EP; Kind code of ref document: A1 |