CN106295474A - The fatigue detection method of deck officer, system and server - Google Patents
- Publication number
- CN106295474A (application CN201510279711.1A)
- Authority
- CN
- China
- Prior art keywords
- human eye
- fatigue
- image information
- eye area
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
Abstract
The present invention provides a fatigue detection method, system, and server for ship drivers. The fatigue detection method includes: receiving a video stream collected by a wearable device; converting a plurality of video frames in the video stream into a plurality of images; acquiring the human eye region in each of the images; and performing fatigue analysis on the human eye regions to determine whether the ship driver is in a fatigued state, and sending the analysis result to the wearable device. In the fatigue detection method of the embodiments of the present invention, acquiring video images of the ship driver through a wearable device avoids the influence of various objective factors and guarantees the quality of the collected video images, thereby improving the reliability of the server's fatigue detection. Moreover, when the ship driver is in a fatigued state, the wearable device alerts the driver, which greatly improves safety while piloting the ship, helps prevent accidents, and protects the driver's life and property.
Description
Technical Field
The present invention relates to the technical field of shipping, and in particular to a fatigue detection method, system, and server for ship drivers.
Background Art
In recent years, with the rapid development of China's shipping industry, its overall strength has improved significantly, and the conditions for safe, scientific development are in place. However, many problems remain in this period of rapid growth, and safety issues are especially prominent: where major safety hazards cannot be effectively controlled, accidents occur from time to time, and the lives and property of ship drivers are seriously threatened.
Compared with the relatively mature fatigue recognition technology for vehicle drivers, fatigue recognition technology for ship drivers is still in its infancy. Current fatigue recognition technologies for vehicle drivers include, for example: the driver alert system launched by Volvo Cars, which assists drivers in improving driving safety by issuing a timely warning before the driver falls asleep; the PERCLOS system developed at Carnegie Mellon University, which judges driver fatigue by analyzing the position and degree of opening of the driver's eyes; the FaceLAB system, which monitors driver fatigue in real time using characteristic parameters such as head pose, eye open/closed state, gaze direction, and pupil diameter; and the European Union's AWAKE system, which comprehensively monitors driver behavior, using image, pressure, and other sensors to track the driver's eye state, gaze direction, steering-wheel grip, and other information in real time.
However, compared with vehicle driver fatigue recognition, the development of fatigue recognition technology for ship drivers is mainly affected by the following factors:
(1) A ship's bridge is relatively large, and the driver typically turns and leans to look out over the water while piloting. The driver's range of movement is therefore large, and existing vehicle-oriented fatigue recognition technology has difficulty collecting the ship driver's state information comprehensively and accurately.
(2) The operations involved in piloting a ship are simple and repetitive, and ships travel slowly, making the task fairly tolerant of errors; as a result, ship drivers tend to have a weak awareness of standardized operating procedures.
(3) The ship-piloting environment is shaped by both the natural environment and the shipboard environment. Because conditions on the water are affected by many factors such as heavy fog and surface waves, the water environment is usually more complex and changeable than the road environment. Furthermore, equipment noise and vibration aboard a ship are considerable; these environmental factors increase the driver's workload and psychological stress and can easily cause fatigue.
Therefore, fatigue detection for ship drivers is more complicated than for vehicle drivers and must take more factors into account. On the other hand, because ships travel much more slowly than road vehicles, fatigued piloting is more tolerable, so the real-time requirements for ship fatigue detection are not demanding.
Summary of the Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, a first object of the present invention is to propose a fatigue detection method for ship drivers. The method acquires video images of the ship driver through a wearable device, which avoids the influence of various objective factors and guarantees the quality of the collected video images, thereby improving the reliability of the server's fatigue detection. Moreover, when the ship driver is in a fatigued state, the wearable device alerts the driver, greatly improving safety while piloting the ship, helping prevent accidents, and protecting the driver's life and property.
A second object of the present invention is to propose a fatigue detection system for ship drivers.
A third object of the present invention is to propose a server.
To achieve the above objects, an embodiment of the first aspect of the present invention proposes a fatigue detection method for ship drivers, including the following steps: receiving a video stream collected by a wearable device; converting a plurality of video frames in the video stream into a plurality of images; acquiring the human eye region in each of the images; and performing fatigue analysis on the human eye regions to determine whether the ship driver is in a fatigued state, and sending the analysis result to the wearable device.
In the fatigue detection method of the embodiments of the present invention, acquiring video images of the ship driver through a wearable device avoids the influence of various objective factors, including the lighting environment, water-surface motion, the operating environment, and the driver's forward view over the water. The wearable front-end acquisition system can capture clear video images and guarantees image quality even under adverse conditions such as shipboard vibration and insufficient light, thereby improving the reliability of the server's fatigue detection. Moreover, machine vision is integrated into the fatigue detection method: the wearable device sends the video images to the server, which performs image processing, human eye localization, and fatigue detection; when the ship driver is in a fatigued state, the wearable device alerts the driver, greatly improving safety while piloting the ship, helping prevent accidents, and protecting the driver's life and property.
To achieve the above objects, an embodiment of the second aspect of the present invention proposes a fatigue detection system for ship drivers, including a server and a wearable device. The wearable device is configured to collect a video stream, send the video stream to the server, and receive the analysis result sent by the server. The server is configured to receive the video stream collected by the wearable device, convert a plurality of video frames in the video stream into a plurality of images, acquire the human eye region in each of the images, perform fatigue analysis on the human eye regions to determine whether the ship driver is in a fatigued state, and send the analysis result to the wearable device.
The fatigue detection system of the embodiments of the present invention acquires video images of the ship driver through a wearable device, which avoids the influence of various objective factors, including the lighting environment, water-surface motion, the operating environment, and the driver's forward view over the water. The wearable front-end acquisition system can capture clear video images and guarantees image quality even under adverse conditions such as shipboard vibration and insufficient light, thereby improving the reliability of the server's fatigue detection. Moreover, machine vision is integrated into the fatigue detection method: the wearable device sends the video images to the server, which performs image processing, human eye localization, and fatigue detection; when the ship driver is in a fatigued state, the wearable device alerts the driver, greatly improving safety while piloting the ship, helping prevent accidents, and protecting the driver's life and property.
To achieve the above objects, an embodiment of the third aspect of the present invention proposes a server, including: a receiving module configured to receive a video stream collected by a wearable device; a conversion module configured to convert a plurality of video frames in the video stream into a plurality of images; an acquisition module configured to acquire the human eye region in each of the images; and an analysis module configured to perform fatigue analysis on the human eye regions to determine whether the ship driver is in a fatigued state and send the analysis result to the wearable device.
The server of the embodiments of the present invention performs image processing, human eye localization, and fatigue detection on the video images; when the ship driver is in a fatigued state, the wearable device alerts the driver, greatly improving safety while piloting the ship, helping prevent accidents, and protecting the driver's life and property.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from that description, or may be learned through practice of the invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of a fatigue detection method for ship drivers according to an embodiment of the present invention;
Fig. 2 is a flowchart of a fatigue detection method for ship drivers according to a specific embodiment of the present invention;
Fig. 3 is a schematic diagram of Haar features in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the integral image in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a fatigue detection system for ship drivers according to an embodiment of the present invention; and
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and should not be construed as limiting it.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present invention, "a plurality of" means two or more, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present invention includes alternative implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
Fig. 1 is a flowchart of a fatigue detection method for ship drivers according to an embodiment of the present invention, and Fig. 2 is a flowchart of a fatigue detection method for ship drivers according to a specific embodiment of the present invention.
As shown in Fig. 1 and Fig. 2, the fatigue detection method for ship drivers includes the following steps.
S101: Receive the video stream collected by the wearable device.
In an embodiment of the present invention, the wearable device may be a pair of glasses. Specifically, the wearable-device-based fatigue detection method of the present invention uses machine vision and is developed on the Raspberry Pi B+ hardware platform; the front-end device that collects the video stream may take the form of glasses fitted with an RPi Camera infrared camera. Using glasses effectively avoids the influence of factors such as the crew member's range of movement, driving habits, and the shipboard environment. That is, the wearable glasses can directly capture images of the ship driver's eyes, which not only avoids the influence of various objective factors but also improves the quality of the captured eye images, providing high-quality, low-noise image data for subsequent eye localization and fatigue detection. In addition, the infrared camera makes it possible to capture clear eye images at night under insufficient light.
Further, after collecting the video images, the wearable device may first preprocess them, for example by compressing them or setting the video frame rate, so as to increase the video transmission rate while still meeting image-quality requirements. The wearable device then transmits the video images as a stream to the back-end image processing server. The server and the wearable device may communicate over a wireless network, including but not limited to Wi-Fi, infrared, Bluetooth, or a 3G network. After receiving the video stream collected by the wearable device, the server backs the stream up.
S102: Convert the plurality of video frames in the video stream into a plurality of images.
Specifically, the server obtains a plurality of video frames from the received video stream and converts them into images according to a threshold preset on the server. For example, suppose the wearable device captures 10 minutes of continuous video. A normal blink takes about 0.2–0.4 s, whereas blinking is generally slower in a fatigued state: the eyes close gradually, and going from open to closed usually takes at least about 1 second. A frame rate of 10 (i.e., FPS = 10) is therefore sufficient for real-time capture of the eye state, and under this setting the 10 minutes of video yield 6000 sample images.
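The frame-rate arithmetic above can be sketched as follows. This is a minimal pure-Python illustration of the sampling logic only (actual frame decoding would use a video library such as OpenCV); the function names are illustrative, not from the patent:

```python
def expected_sample_count(duration_s: float, fps: float) -> int:
    """Number of sample images produced: 10 minutes at FPS = 10 gives 6000."""
    return int(duration_s * fps)

def sample_frame_indices(src_fps: float, dst_fps: float, n_frames: int):
    """Indices of the source frames to keep so the output approximates dst_fps.

    E.g. a 30 fps camera stream downsampled to the 10 fps used in the
    text keeps every 3rd frame.
    """
    if dst_fps <= 0 or dst_fps > src_fps:
        raise ValueError("target rate must be in (0, src_fps]")
    step = src_fps / dst_fps
    return [round(i * step) for i in range(int(n_frames / step))]
```

For example, `expected_sample_count(600, 10)` reproduces the 6000-image figure quoted above.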
It should be understood that the server needs to learn the features of the ship driver's eyes in advance, so as to improve the accuracy of eye detection and fatigue detection.
S103: Acquire the human eye region in the plurality of images.
In an embodiment of the present invention, before the server acquires the human eye region in the images, it may also preprocess the images so as to obtain images of better quality. The preprocessing includes one or more of image denoising, histogram equalization, and contrast adjustment.
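Two of these preprocessing steps can be sketched in plain Python on 8-bit grayscale images represented as nested lists. This is an illustrative sketch, not the patent's implementation:

```python
def mean_filter3(gray):
    """3x3 mean filter as a simple denoising step (edge pixels clamped)."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [gray[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = round(sum(vals) / 9)
    return out

def equalize_hist(gray):
    """Histogram equalization: spread the cumulative distribution over 0..255."""
    flat = [v for row in gray for v in row]
    hist = [0] * 256
    for v in flat:
        hist[v] += 1
    cdf, running = [0] * 256, 0
    for i in range(256):
        running += hist[i]
        cdf[i] = running
    cdf_min = min(c for c in cdf if c > 0)
    denom = max(len(flat) - cdf_min, 1)
    lut = [round((cdf[i] - cdf_min) / denom * 255) for i in range(256)]
    return [[lut[v] for v in row] for row in gray]
```

A production system would more likely call into an image library, but the arithmetic is the same.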
Specifically, the server locates the human eye region in each image, i.e., determines the position of the eyes, so as to extract from the image an accurate sub-image containing the eye region and discard useless information.
In an embodiment of the present invention, the server acquires the human eye region in the plurality of images as follows.
S1031: The server performs eye localization on the images using the Haar-feature-based Adaboost algorithm and obtains a first eye region.
S1032: The server binarizes the images and performs eye localization on the binarized images using the Haar-feature-based Adaboost algorithm to obtain a second eye region.
S1033: The server judges whether the first eye region and the second eye region match and, when they do, takes the first eye region and/or the second eye region as the eye region of the images.
Specifically, the server, combining the image with the learned features of the ship driver's eyes, uses the Haar-feature-based Adaboost algorithm to locate the eyes initially and obtain the first eye region. The server then analyzes and processes the image with image processing techniques to obtain its binary image, and runs the Haar-feature-based Adaboost algorithm on the binary image to locate the eyes a second time and obtain the second eye region. Next, the server matches the first eye region against the second; if the image set of the first eye region contains the image set of the second, eye detection is judged successful; otherwise the image is discarded.
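The matching check of step S1033 can be sketched as below. The patent gives no code here; representing detections as (x, y, w, h) rectangles and reading "the first region's set contains the second's" as geometric containment are assumptions made for illustration:

```python
def contains(outer, inner):
    """True if rectangle `inner` (x, y, w, h) lies entirely inside `outer`."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def match_eye_regions(first_pass, second_pass):
    """Keep a first-pass detection only if it contains some second-pass
    detection; an empty result means the image should be discarded."""
    return [r1 for r1 in first_pass
            if any(contains(r1, r2) for r2 in second_pass)]
```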
Specifically, the server first performs Haar-based eye feature extraction on the image. In an image, eye features can be expressed as information such as coordinates, distance, color, brightness, and shape. Haar features are rectangle features and can therefore be abstracted as simple figures composed of basic elements such as points, lines, and faces. As shown in Fig. 3, Haar features fall into three categories: edge features, line features, and surround features. The basic idea of the Haar feature is a feature-analysis method that first partitions a rectangular window into blocks and then analyzes the grayscale pixels of the blocks together with the edge features. A rectangular image region at a specific position in the target image can be abstracted as a Haar feature, which allows the image features of the target region to be quantified. The feature value of the covered region is the sum of the grayscale pixel values in the white area minus the sum of the grayscale pixel values in the black area.
Computing the features by means of an integral image speeds up feature evaluation. The integral image is a matrix representation that can describe global information, defined as:
f(x, y) = Σ_{x′ ≤ x, y′ ≤ y} g(x′, y′),
where f(x, y) is the integral image of the original image at (x, y) and g(x′, y′) is the original image value at (x′, y′). Therefore, as shown in Fig. 4, the integral image at a point (x, y) equals the sum of all pixel values in the gray region above and to the left of that point.
Next, the server recognizes the eye position in the image using the Adaboost algorithm. For a captured 24×24-pixel image, the number of candidate Haar features runs into the tens of thousands, of which only a few are useful. The present invention achieves fast eye detection with the Adaboost algorithm, whose basic idea is to train weak classifiers on a large training set and combine them by the algorithm into a strong classifier.
If the eye region image has k features, they can be written f_j(x_i), where 1 ≤ j ≤ k and x_i denotes the i-th sample image. The feature set of each image can then be expressed as {f_1(x_i), f_2(x_i), f_3(x_i), …, f_j(x_i), …, f_k(x_i)}, where each feature corresponds to one weak classifier.
A weak classifier h_j(x) consists of three parts: a feature f_j(x), a threshold θ_j, and a parity p_j, where one feature corresponds to one weak classifier, the classification threshold is the feature value used to split all the samples, and the parity is a sign giving the direction of the inequality. The server expresses the weak classifier of the j-th feature as:
h_j(x) = 1 if p_j · f_j(x) < p_j · θ_j, and h_j(x) = 0 otherwise,
where h_j(x) is the value of the weak classifier, θ_j is the threshold, p_j controls the direction of the inequality and takes the value +1 or −1, and f_j(x) is the feature value.
Based on the Adaboost algorithm, the following steps are performed on n known training samples (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where y_i ∈ {0, 1} marks the sample as positive or negative.
(1) Take n training samples, of which m are eye samples and l are non-eye samples, written (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where y_i = 0 and y_i = 1 correspond to eye samples and non-eye samples respectively.
(2) Initialize the error weights: w_{1,i} = 1/(2m) for samples with y_i = 0, and w_{1,i} = 1/(2l) for samples with y_i = 1.
(3) Initialize t = 1, where t ≤ T and T is the number of weak classifiers to be trained.
(4) Normalize the weights: w_{t,i} ← w_{t,i} / Σ_{j=1..n} w_{t,j}.
(5) For each feature f, train a weak classifier h(x, f, p, θ) and compute its weighted error rate ε_f = Σ_i w_{t,i} · |h(x_i, f, p, θ) − y_i|. Select the classifier h_t with the smallest error ε_t and update the weights as w_{t+1,i} = w_{t,i} · β_t^{1−e_i}, where e_i = 0 if sample x_i is classified correctly, e_i = 1 if it is classified incorrectly, and β_t = ε_t / (1 − ε_t).
(6) Let t = t + 1 and repeat from step (4) until t > T.
(7) The strong classifier finally obtained is:
C(x) = 1 if Σ_{t=1..T} α_t · h_t(x) ≥ (1/2) Σ_{t=1..T} α_t, and C(x) = 0 otherwise, where α_t = log(1/β_t).
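Steps (1)–(7) can be sketched end to end as follows. This is an illustrative pure-Python sketch of the boosting loop described above, operating on precomputed per-sample feature vectors; it brute-forces the best stump each round and is not optimized (real Viola–Jones-style trainers sort feature values instead):

```python
import math

def train_adaboost(features, labels, T):
    """Train T decision stumps per steps (1)-(7); labels use the patent's
    convention: y = 0 for eye samples, y = 1 for non-eye samples."""
    n = len(labels)
    m, l = labels.count(0), labels.count(1)
    # step (2): initial error weights
    w = [1 / (2 * m) if y == 0 else 1 / (2 * l) for y in labels]
    stumps, alphas = [], []
    for _ in range(T):                              # steps (3) and (6)
        s = sum(w)
        w = [wi / s for wi in w]                    # step (4): normalize
        best = None
        for j in range(len(features[0])):           # step (5): best stump
            for p in (+1, -1):
                for theta in sorted({f[j] for f in features}):
                    h = [1 if p * f[j] < p * theta else 0 for f in features]
                    err = sum(wi * abs(hi - yi)
                              for wi, hi, yi in zip(w, h, labels))
                    if best is None or err < best[0]:
                        best = (err, j, p, theta, h)
        err, j, p, theta, h = best
        beta = max(err / (1 - err), 1e-12) if err < 1 else 1.0
        # step (5): beta^(1 - e_i) down-weights correctly classified samples
        w = [wi * beta ** (1 - abs(hi - yi))
             for wi, hi, yi in zip(w, h, labels)]
        stumps.append((j, p, theta))
        alphas.append(math.log(1 / beta))           # alpha_t = log(1 / beta_t)
    return stumps, alphas

def strong_classify(x, stumps, alphas):
    """Step (7): 1 when the weighted vote reaches half the total alpha mass."""
    vote = sum(a * (1 if p * x[j] < p * theta else 0)
               for (j, p, theta), a in zip(stumps, alphas))
    return 1 if vote >= 0.5 * sum(alphas) else 0
```

On a trivially separable toy set the trained strong classifier reproduces the labels exactly.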
S104: Perform fatigue analysis on the human eye regions of the images to determine whether the ship driver is in a fatigued state, and send the analysis result to the wearable device.
In an embodiment of the present invention, the server's fatigue analysis of the eye regions to determine whether the ship driver is fatigued specifically includes: the server computes the PERCLOS value of the eye regions in the images according to the PERCLOS algorithm, compares the PERCLOS value with a fatigue-judgment threshold, and judges the ship driver to be in a fatigued state when the PERCLOS value is greater than or equal to that threshold. The server computes the PERCLOS value according to the following formula:
P(i) = (n_c / N) × 100%,
where N is the total number of eye-region samples within the continuous time window and n_c is the number of those samples in which the eyes are judged closed.
Specifically, since the state of the eyes correlates strongly with the ship driver's degree of fatigue, the PERCLOS algorithm (Percentage of Eyelid Closure over the Pupil over Time) detects fatigue by analyzing the opening and closing of the eyes. Among its criteria, the P80 standard correlates most strongly with the degree of fatigue and is the generally accepted "gold standard".
After the server locates the human-eye region in the image information, it uses image processing techniques to judge the degree to which the eye in that region is open or closed. That is, once the server has computed P(i), it compares P(i) with the fatigue-judgment threshold T, where T is an empirically determined parameter obtained from a comprehensive evaluation of the ship-piloting environment. If P(i) ≥ T, the eyes are judged to be closed, i.e., the ship driver is judged to be fatigued; if P(i) < T, the eyes are judged to be open, i.e., the ship driver is judged not to be fatigued. The server then sends this analysis result to the wearable device.
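The comparison logic described above can be sketched as follows (a minimal illustration; the per-frame closed/open flags are assumed to come from the eye-localization stage, and the default threshold value is hypothetical, not taken from the patent):

```python
def perclos(eye_closed_flags):
    """P(i): percentage of sampled frames within the window in which the
    eye is judged closed (e.g. by the P80 criterion)."""
    if not eye_closed_flags:
        raise ValueError("empty sampling window")
    return 100.0 * sum(eye_closed_flags) / len(eye_closed_flags)

def is_fatigued(eye_closed_flags, threshold=40.0):
    """P(i) >= T  ->  the driver is judged to be in a fatigued state.
    The threshold value 40.0 is purely illustrative."""
    return perclos(eye_closed_flags) >= threshold
```

For example, eyes closed in 5 of 10 sampled frames gives P(i) = 50.0.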
In one embodiment of the present invention, after the server sends the analysis result to the wearable device, if the server has judged the ship driver to be in a fatigued state, the wearable device issues an alarm. The alarm includes one or more of a light prompt, a voice prompt, and a vibration prompt.
In the ship-driver fatigue detection method of the embodiments of the present invention, acquiring video images of the ship driver through a wearable device avoids the influence of various objective factors, including the lighting environment, water-surface fluctuations, the operating environment, and the driver's forward view of the water. The wearable front-end acquisition system can capture clear video images and maintain their quality even under harsh conditions such as onboard vibration and insufficient light, thereby improving the reliability of the server's fatigue detection.
Moreover, machine vision is integrated into the fatigue detection method: the wearable device sends the video images to the server, which processes them, localizes the human eyes, and detects fatigue, and the wearable device reminds the ship driver whenever he is in a fatigued state. This greatly improves the ship driver's safety while piloting the vessel, helps prevent accidents, and protects the driver's life and property.
To implement the above embodiments, the present invention further proposes a ship-driver fatigue detection system.
FIG. 5 is a schematic structural diagram of a ship-driver fatigue detection system according to an embodiment of the present invention. As shown in FIG. 5, the system includes a server 10 and a wearable device 20.
Specifically, the wearable device 20 collects a video stream, sends it to the server 10, and receives the analysis result returned by the server. The wearable device 20 may be a pair of glasses. After capturing the video images, the wearable device 20 may first preprocess them, for example by compressing them or setting their frame rate, so that the transmission rate can be increased while the quality requirements are still met. The wearable device 20 then transmits the video images as a stream to the back-end image processing server 10; the server 10 and the wearable device 20 may communicate over a wireless network, including but not limited to one of Wifi, infrared, Bluetooth, and 3G.
The server 10 receives the video stream collected by the wearable device 20, converts multiple video frames of the stream into multiple pieces of image information, obtains the human-eye region in the multiple pieces of image information, performs fatigue analysis on those regions to determine whether the ship driver is in a fatigued state, and sends the analysis result to the wearable device 20. Specifically, after receiving the video stream from the wearable device 20, the server 10 backs it up, extracts multiple video frames from it, and converts them into image information according to a threshold preset in the server 10. For example, suppose the wearable device 20 collects 10 minutes of continuous video. A normal blink lasts roughly 0.2 to 0.4 seconds, whereas blinking in a fatigued state is generally slower: a gradual closing of the eyes that typically takes at least about 1 second from open to closed. The server 10 can therefore set the video frame rate to 10 (i.e., FPS = 10) and still capture the eye state in real time; under this setting, 6000 sample images are produced.
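The frame-count arithmetic above is easy to check with a small sketch (the helper name and the 25 fps source rate are illustrative assumptions, not from the patent; a real system would pull the frames from the decoded stream):

```python
def sample_frame_indices(duration_s, source_fps, target_fps=10):
    """Indices of source frames to keep when downsampling a stream to
    target_fps (FPS = 10 in the example above)."""
    step = source_fps / target_fps
    return [int(i * step) for i in range(int(duration_s * target_fps))]
```

Ten minutes sampled at FPS = 10 yields 10 × 60 × 10 = 6000 sample images, regardless of the source frame rate.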
The server 10 is further configured to preprocess the multiple pieces of image information before obtaining the human-eye regions, so that it can work with better-quality images. The preprocessing includes one or more of image denoising, histogram equalization, and contrast adjustment.
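Of the preprocessing steps listed, histogram equalization is simple enough to sketch in pure Python (an illustrative sketch for 8-bit grayscale pixels; a real deployment would more likely call a library routine such as OpenCV's equalizeHist):

```python
def equalize_histogram(pixels, levels=256):
    """Histogram-equalize a flat list of 8-bit grayscale pixel values,
    spreading the intensity distribution to improve contrast."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:               # cumulative distribution function
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    # map each gray level through the normalized CDF
    lut = [round((cdf[v] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for v in range(levels)]
    return [lut[p] for p in pixels]
```

Equalization stretches whatever gray levels are present toward the full 0-255 range, which helps the later Haar-feature stage under poor cabin lighting.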
The server 10 then locates the human-eye region in the multiple pieces of image information, i.e., locates the eye position, so as to obtain an accurate sub-image containing the eye region and to discard the useless parts of the image information.
In one embodiment of the present invention, the server 10 is specifically configured to perform human-eye localization on the multiple pieces of image information with the Haar-feature-based Adaboost algorithm to obtain a first human-eye region; to binarize the multiple pieces of image information and run the same Haar-feature-based Adaboost localization on the binarized images to obtain a second human-eye region; to judge whether the first and second human-eye regions match; and, when they match, to take the first or second human-eye region as the human-eye region of the image information. Specifically, the server 10 combines the image information with the learned eye features of the ship driver and performs an initial localization with the Haar-based Adaboost algorithm, yielding the first human-eye region. The server 10 then analyzes the image with image processing techniques to obtain its binary image and localizes the eye a second time on that binary image, yielding the second human-eye region. Finally, the server 10 matches the two regions: if the image set of the first human-eye region contains that of the second, eye detection is judged successful; otherwise the image information is discarded.
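The containment test at the end of this step can be sketched as a rectangle-containment check (a minimal sketch; the (x, y, w, h) rectangle representation is an assumption, mirroring the common detector output format):

```python
def rect_contains(outer, inner):
    """True if rectangle `outer` fully contains rectangle `inner`.
    Rectangles are (x, y, w, h) tuples with the origin at the top-left."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return (ox <= ix and oy <= iy and
            ix + iw <= ox + ow and iy + ih <= oy + oh)

def match_eye_regions(first_region, second_region):
    """Eye detection succeeds when the first-pass (grayscale) region contains
    the second-pass (binarized-image) region; otherwise the frame is dropped."""
    return rect_contains(first_region, second_region)
```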
More concretely, the server 10 first performs Haar-based eye-feature extraction on the image information. In an image, eye features can be described by coordinates, distance, color, brightness, shape, and similar information. Haar features are rectangle features, so they can be abstracted as simple figures composed of basic elements such as points, lines, and faces. As shown in FIG. 3, Haar features fall into three categories: edge features, line features, and surround features. The basic idea of Haar features is a feature-analysis method that first partitions a rectangular window into blocks and then analyzes the blocks' gray-scale pixels together with edge features. A rectangular image region at a specific position in the target image can be abstracted as a Haar feature, which quantizes the image features of that region: the feature value of the covered region is the sum of the pixel gray values in its white area minus the sum of the pixel gray values in its black area.
The server 10 speeds up feature computation by using the integral image. The integral image is a matrix representation that can describe global information; it is defined as:
f(x, y) = Σ_{x′ ≤ x, y′ ≤ y} g(x′, y′),
where f(x, y) is the integral image of the original image at (x, y) and g(x′, y′) is the original image value at (x′, y′). Thus, as shown in FIG. 4, the value of the integral image at point (x, y) equals the sum of all pixel values in the gray region above and to the left of that point.
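The integral image definition above, and the constant-time rectangle sum it enables, can be sketched as follows (a pure-Python illustration over a 2-D list of pixels):

```python
def integral_image(img):
    """f(x, y) = sum of g(x', y') over all x' <= x, y' <= y.
    img: 2-D list of pixel values (rows); returns a same-sized 2-D list."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum over the rectangle [x0..x1] x [y0..y1] using at most four lookups,
    the constant-time query that makes Haar feature values cheap to evaluate."""
    a = ii[y0 - 1][x0 - 1] if (x0 > 0 and y0 > 0) else 0
    b = ii[y0 - 1][x1] if y0 > 0 else 0
    c = ii[y1][x0 - 1] if x0 > 0 else 0
    return ii[y1][x1] - b - c + a
```

A Haar feature value is then just `rect_sum` over the white rectangle minus `rect_sum` over the black rectangle.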
The server 10 then recognizes the eye position in the image information with the Adaboost algorithm. For a captured 24×24-pixel image, the number of Haar features available for matching runs into the tens of thousands, of which only a few are actually useful. The present invention achieves fast eye detection with the Adaboost algorithm, whose basic idea is to train weak classifiers on a large training set and to combine them, by weighted superposition, into a final strong classifier.
If the eye-region image has k features, they can be written f_j(x_i), where 1 ≤ j ≤ k and x_i denotes the i-th sample image. The feature set of each image can then be written {f_1(x_i), f_2(x_i), f_3(x_i), …, f_j(x_i), …, f_k(x_i)}, where each feature corresponds to one weak classifier.
The server 10 composes a weak classifier h_j(x) of three parts: a feature f_j(x), a threshold θ_j, and a sign p_j. One feature corresponds to one weak classifier; the classification threshold is the feature value against which all samples are classified, and the classification sign indicates the direction (positive or negative) of the inequality. The server 10 expresses the weak classifier for the j-th feature as:
h_j(x) = 1 if p_j·f_j(x) < p_j·θ_j, and h_j(x) = 0 otherwise,
where h_j(x) is the output of the weak classifier, θ_j is the threshold, p_j (taking the value +1 or −1) controls the direction of the inequality, and f_j(x) is the feature value.
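This decision rule is small enough to state directly in code (a sketch of the standard threshold-and-polarity form of a single-feature weak classifier; the sample values in the test are arbitrary):

```python
def weak_classify(feature_value, theta, p):
    """h_j(x) = 1 if p * f_j(x) < p * theta, else 0.
    p is +1 or -1 and only flips the direction of the inequality."""
    return 1 if p * feature_value < p * theta else 0
```

With p = +1 the classifier fires when the feature falls below the threshold; with p = −1 it fires when the feature exceeds it.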
Based on the Adaboost algorithm, the following steps are carried out on the n known training samples (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where y_i ∈ {0, 1} labels the sample as positive or negative.
(1) Take n training samples, of which m are human-eye samples and l are non-human-eye samples, denoted (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where y_i = 0 corresponds to a human-eye sample and y_i = 1 to a non-human-eye sample.
(2) Initialize the error weights: w_{1,i} = 1/(2m) for samples with y_i = 0, and w_{1,i} = 1/(2l) for samples with y_i = 1.
(3) Initialize t = 1, where t ≤ T and T is the number of weak classifiers to be trained.
(4) Normalize the weights: w_{t,i} ← w_{t,i} / Σ_{j=1}^{n} w_{t,j}.
(5) For each feature f, train a weak classifier h(x, f, p, θ) and compute its error rate under the normalized weights q_i: ε_f = Σ_i q_i·|h(x_i, f, p, θ) − y_i|. Select the classifier h_t with the smallest error ε_t, and update the weights w_{t+1,i} = w_{t,i}·β_t^{1−e_i}, where e_i = 0 if sample x_i was classified correctly, e_i = 1 if it was classified incorrectly, and β_t = ε_t/(1 − ε_t).
(6) Let t = t + 1 and repeat from step (4) until t > T.
(7) The resulting strong classifier is: C(x) = 1 if Σ_{t=1}^{T} α_t·h_t(x) ≥ (1/2)·Σ_{t=1}^{T} α_t, and C(x) = 0 otherwise, where α_t = log(1/β_t).
In one embodiment of the present invention, the server 10 is specifically configured to compute, according to the PERCLOS algorithm, the PERCLOS value of the human-eye regions in the multiple pieces of image information, to compare the PERCLOS value with the fatigue-judgment threshold, and to judge that the ship driver is in a fatigued state when the PERCLOS value is greater than or equal to that threshold. The server 10 computes the PERCLOS value according to the formula P(i) = (N_close / N) × 100%, where N is the total number of human-eye-region samples within the continuous time window and N_close is the number of those samples in which the eye is judged closed. Since the state of the eyes correlates strongly with the ship driver's degree of fatigue, the PERCLOS algorithm detects fatigue by analyzing the opening and closing of the eyes; among its criteria, the P80 standard correlates most strongly with the degree of fatigue and is the generally accepted "gold standard".
After the server 10 locates the human-eye region in the image information, it uses image processing techniques to judge the degree to which the eye in that region is open or closed. That is, once the server 10 has computed P(i), it compares P(i) with the fatigue-judgment threshold T, where T is an empirically determined parameter obtained from a comprehensive evaluation of the ship-piloting environment. If P(i) ≥ T, the eyes are judged to be closed, i.e., the ship driver is judged to be fatigued; if P(i) < T, the eyes are judged to be open, i.e., the ship driver is judged not to be fatigued. The server 10 then sends this analysis result to the wearable device 20.
In one embodiment of the present invention, the wearable device 20 is further configured to issue an alarm when the server 10 judges that the ship driver is in a fatigued state. The alarm includes one or more of a light prompt, a voice prompt, and a vibration prompt.
In the ship-driver fatigue detection system of the embodiments of the present invention, acquiring video images of the ship driver through a wearable device avoids the influence of various objective factors, including the lighting environment, water-surface fluctuations, the operating environment, and the driver's forward view of the water. The wearable front-end acquisition system can capture clear video images and maintain their quality even under harsh conditions such as onboard vibration and insufficient light, thereby improving the reliability of the server's fatigue detection.
Moreover, machine vision is integrated into the fatigue detection method: the wearable device sends the video images to the server, which processes them, localizes the human eyes, and detects fatigue, and the wearable device reminds the ship driver whenever he is in a fatigued state. This greatly improves the ship driver's safety while piloting the vessel, helps prevent accidents, and protects the driver's life and property.
To implement the above embodiments, the present invention further proposes a server.
FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present invention. As shown in FIG. 6, the server includes a receiving module 110, a conversion module 120, an acquisition module 130, an analysis module 140, and a preprocessing module 150, wherein the acquisition module 130 includes a first acquisition unit 131, a second acquisition unit 132, and a judgment unit 133, and the analysis module 140 includes a calculation unit 141, a comparison unit 142, and a judgment unit 143.
Specifically, the receiving module 110 receives the video stream collected by the wearable device.
The conversion module 120 converts multiple video frames of the video stream into multiple pieces of image information. Specifically, the conversion module 120 extracts multiple video frames from the stream received by the receiving module 110 and converts them into image information according to a preset threshold. For example, suppose the wearable device collects 10 minutes of continuous video. A normal blink lasts roughly 0.2 to 0.4 seconds, whereas blinking in a fatigued state is generally slower: a gradual closing of the eyes that typically takes at least about 1 second from open to closed. The conversion module 120 can therefore set the video frame rate to 10 (i.e., FPS = 10) and still capture the eye state in real time; under this setting, 6000 sample images are produced.
The acquisition module 130 obtains the human-eye region in the multiple pieces of image information.
In one embodiment of the present invention, the server further includes a preprocessing module 150 configured to preprocess the multiple pieces of image information, wherein the preprocessing includes one or more of image denoising, histogram equalization, and contrast adjustment.
In one embodiment of the present invention, the acquisition module 130 includes a first acquisition unit 131, a second acquisition unit 132, and a judgment unit 133. The first acquisition unit 131 performs human-eye localization on the multiple pieces of image information with the Haar-feature-based Adaboost algorithm and obtains a first human-eye region. The second acquisition unit 132 binarizes the multiple pieces of image information and runs the same Haar-feature-based Adaboost localization on the binarized images to obtain a second human-eye region. The judgment unit 133 judges whether the first and second human-eye regions match and, when they match, takes the first and/or second human-eye region as the human-eye region of the image information. Specifically, the first acquisition unit 131 combines the image information with the learned eye features of the ship driver and performs an initial localization with the Haar-based Adaboost algorithm, yielding the first human-eye region. The second acquisition unit 132 then analyzes the image with image processing techniques to obtain its binary image and localizes the eye a second time on that binary image, yielding the second human-eye region. Finally, the judgment unit 133 matches the two regions: if the image set of the first human-eye region contains that of the second, eye detection is judged successful; otherwise the image information is discarded.
Specifically, the first acquisition unit 131 and the second acquisition unit 132 first perform Haar-based eye-feature extraction on the image information. In an image, eye features can be described by coordinates, distance, color, brightness, shape, and similar information. Haar features are rectangle features, so they can be abstracted as simple figures composed of basic elements such as points, lines, and faces. As shown in FIG. 3, Haar features fall into three categories: edge features, line features, and surround features. The basic idea of Haar features is a feature-analysis method that first partitions a rectangular window into blocks and then analyzes the blocks' gray-scale pixels together with edge features. A rectangular image region at a specific position in the target image can be abstracted as a Haar feature, which quantizes the image features of that region: the feature value of the covered region is the sum of the pixel gray values in its white area minus the sum of the pixel gray values in its black area.
Computing with the integral image improves the speed of feature computation. The integral image is a matrix representation that can describe global information; it is defined as:
f(x, y) = Σ_{x′ ≤ x, y′ ≤ y} g(x′, y′),
where f(x, y) is the integral image of the original image at (x, y) and g(x′, y′) is the original image value at (x′, y′). Thus, as shown in FIG. 4, the value of the integral image at point (x, y) equals the sum of all pixel values in the gray region above and to the left of that point.
The first acquisition unit 131 and the second acquisition unit 132 then recognize the eye position in the image information with the Adaboost algorithm. For a captured 24×24-pixel image, the number of Haar features available for matching runs into the tens of thousands, of which only a few are actually useful. The present invention achieves fast eye detection with the Adaboost algorithm, whose basic idea is to train weak classifiers on a large training set and to combine them, by weighted superposition, into a final strong classifier.
If the eye-region image has k features, they can be written f_j(x_i), where 1 ≤ j ≤ k and x_i denotes the i-th sample image. The feature set of each image can then be written {f_1(x_i), f_2(x_i), f_3(x_i), …, f_j(x_i), …, f_k(x_i)}, where each feature corresponds to one weak classifier.
The first acquisition unit 131 and the second acquisition unit 132 compose a weak classifier h_j(x) of three parts: a feature f_j(x), a threshold θ_j, and a sign p_j. One feature corresponds to one weak classifier; the classification threshold is the feature value against which all samples are classified, and the classification sign indicates the direction (positive or negative) of the inequality. The server expresses the weak classifier for the j-th feature as:
h_j(x) = 1 if p_j·f_j(x) < p_j·θ_j, and h_j(x) = 0 otherwise,
where h_j(x) is the output of the weak classifier, θ_j is the threshold, p_j (taking the value +1 or −1) controls the direction of the inequality, and f_j(x) is the feature value.
Based on the Adaboost algorithm, the following steps are carried out on the n known training samples (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where y_i ∈ {0, 1} labels the sample as positive or negative.
(1) Take n training samples, of which m are human-eye samples and l are non-human-eye samples, denoted (x_1, y_1), (x_2, y_2), …, (x_n, y_n), where y_i = 0 corresponds to a human-eye sample and y_i = 1 to a non-human-eye sample.
(2) Initialize the error weights: w_{1,i} = 1/(2m) for samples with y_i = 0, and w_{1,i} = 1/(2l) for samples with y_i = 1.
(3) Initialize t = 1, where t ≤ T and T is the number of weak classifiers to be trained.
(4) Normalize the weights: w_{t,i} ← w_{t,i} / Σ_{j=1}^{n} w_{t,j}.
(5) For each feature f, train a weak classifier h(x, f, p, θ) and compute its error rate under the normalized weights q_i: ε_f = Σ_i q_i·|h(x_i, f, p, θ) − y_i|. Select the classifier h_t with the smallest error ε_t, and update the weights w_{t+1,i} = w_{t,i}·β_t^{1−e_i}, where e_i = 0 if sample x_i was classified correctly, e_i = 1 if it was classified incorrectly, and β_t = ε_t/(1 − ε_t).
(6) Let t = t + 1 and repeat from step (4) until t > T.
(7) The resulting strong classifier is: C(x) = 1 if Σ_{t=1}^{T} α_t·h_t(x) ≥ (1/2)·Σ_{t=1}^{T} α_t, and C(x) = 0 otherwise, where α_t = log(1/β_t).
The analysis module 140 performs fatigue analysis on the human-eye regions in the multiple pieces of image information to judge whether the ship driver is in a fatigued state, and sends the analysis result to the wearable device.
In one embodiment of the present invention, the analysis module 140 includes a calculation unit 141, a comparison unit 142, and a judgment unit 143. The calculation unit 141 computes the PERCLOS value of the human-eye regions in the multiple pieces of image information according to the PERCLOS algorithm, using the formula P(i) = (N_close / N) × 100%, where N is the total number of human-eye-region samples within the continuous time window and N_close is the number of those samples in which the eye is judged closed. The comparison unit 142 compares the PERCLOS value with the fatigue-judgment threshold, and the judgment unit 143 judges that the ship driver is in a fatigued state when the PERCLOS value is greater than or equal to that threshold.
The server of the embodiments of the present invention processes the video images, localizes the human eyes, and detects fatigue, and reminds the ship driver through the wearable device when he is in a fatigued state, thereby greatly improving the driver's safety while piloting the vessel, helping to prevent accidents, and protecting the driver's life and property.
It should be understood that the various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction-execution system. For example, if implemented in hardware, as in another embodiment, any of the following techniques known in the art, or a combination thereof, may be used: discrete logic circuits having logic-gate circuits for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and so on.
In the present invention, unless otherwise expressly specified and limited, terms such as "mounted", "connected" and "coupled" should be understood in a broad sense: for example, a connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements, unless otherwise expressly limited. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific situation.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof, provided they do not contradict each other.
Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.
Claims (21)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510279711.1A CN106295474B (en) | 2015-05-28 | 2015-05-28 | Fatigue detection method, system and the server of deck officer |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510279711.1A CN106295474B (en) | 2015-05-28 | 2015-05-28 | Fatigue detection method, system and the server of deck officer |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN106295474A true CN106295474A (en) | 2017-01-04 |
| CN106295474B CN106295474B (en) | 2019-03-22 |
Family
ID=57634266
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201510279711.1A Expired - Fee Related CN106295474B (en) | 2015-05-28 | 2015-05-28 | Fatigue detection method, system and the server of deck officer |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106295474B (en) |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108304764A (en) * | 2017-04-24 | 2018-07-20 | 中国民用航空局民用航空医学中心 | Fatigue state detection device and detection method in simulated flight driving procedure |
| CN109407609A (en) * | 2018-12-05 | 2019-03-01 | 江苏永钢集团有限公司 | A kind of facility information point detection system |
| CN110063736A (en) * | 2019-05-06 | 2019-07-30 | 苏州国科视清医疗科技有限公司 | The awake system of fatigue detecting and rush of eye movement parameter monitoring based on MOD-Net network |
| CN111353636A (en) * | 2020-02-24 | 2020-06-30 | 交通运输部水运科学研究所 | A method and system for predicting ship driving behavior based on multimodal data |
| CN113947869A (en) * | 2021-10-18 | 2022-01-18 | 广州海事科技有限公司 | Alarm method, system, computer equipment and medium based on ship driving state |
| CN114537612A (en) * | 2021-12-31 | 2022-05-27 | 武汉理工大学 | Fatigue detection device and method for crew on duty at ship bridge |
| CN114663964A (en) * | 2022-05-24 | 2022-06-24 | 武汉理工大学 | Ship remote driving behavior state monitoring and early warning method and system and storage medium |
| CN114782934A (en) * | 2022-05-10 | 2022-07-22 | 北京明略昭辉科技有限公司 | Fatigue driving detection method and device, readable medium and electronic equipment |
| CN116824555A (en) * | 2023-06-14 | 2023-09-29 | 交通运输部水运科学研究所 | A method and system for monitoring crew fatigue during navigation |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102324166A (en) * | 2011-09-19 | 2012-01-18 | 深圳市汉华安道科技有限责任公司 | Fatigue driving detection method and device |
| CN103093215A (en) * | 2013-02-01 | 2013-05-08 | 北京天诚盛业科技有限公司 | Eye location method and device |
| CN104269028A (en) * | 2014-10-23 | 2015-01-07 | 深圳大学 | Fatigue driving detection method and system |
- 2015
  - 2015-05-28 CN CN201510279711.1A patent/CN106295474B/en not_active Expired - Fee Related
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102324166A (en) * | 2011-09-19 | 2012-01-18 | 深圳市汉华安道科技有限责任公司 | Fatigue driving detection method and device |
| CN103093215A (en) * | 2013-02-01 | 2013-05-08 | 北京天诚盛业科技有限公司 | Eye location method and device |
| CN104269028A (en) * | 2014-10-23 | 2015-01-07 | 深圳大学 | Fatigue driving detection method and system |
Non-Patent Citations (1)
| Title |
|---|
| 杨东: ""基于面部变化特征的驾驶疲劳监测方法研究"", 《中国优秀硕士学位论文全文数据库 信息科技辑》 * |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108304764A (en) * | 2017-04-24 | 2018-07-20 | 中国民用航空局民用航空医学中心 | Fatigue state detection device and detection method in simulated flight driving procedure |
| CN108304764B (en) * | 2017-04-24 | 2021-12-24 | 中国民用航空局民用航空医学中心 | Fatigue state detection device and detection method in simulated flight driving process |
| CN109407609A (en) * | 2018-12-05 | 2019-03-01 | 江苏永钢集团有限公司 | A kind of facility information point detection system |
| CN110063736A (en) * | 2019-05-06 | 2019-07-30 | 苏州国科视清医疗科技有限公司 | The awake system of fatigue detecting and rush of eye movement parameter monitoring based on MOD-Net network |
| CN110063736B (en) * | 2019-05-06 | 2022-03-08 | 苏州国科视清医疗科技有限公司 | Eye movement parameter monitoring fatigue detection and wake-up promotion system based on MOD-Net network |
| CN111353636A (en) * | 2020-02-24 | 2020-06-30 | 交通运输部水运科学研究所 | A method and system for predicting ship driving behavior based on multimodal data |
| CN113947869A (en) * | 2021-10-18 | 2022-01-18 | 广州海事科技有限公司 | Alarm method, system, computer equipment and medium based on ship driving state |
| CN113947869B (en) * | 2021-10-18 | 2023-09-01 | 广州海事科技有限公司 | Alarm method, system, computer equipment and medium based on ship driving state |
| CN114537612A (en) * | 2021-12-31 | 2022-05-27 | 武汉理工大学 | Fatigue detection device and method for crew on duty at ship bridge |
| CN114782934A (en) * | 2022-05-10 | 2022-07-22 | 北京明略昭辉科技有限公司 | Fatigue driving detection method and device, readable medium and electronic equipment |
| CN114663964A (en) * | 2022-05-24 | 2022-06-24 | 武汉理工大学 | Ship remote driving behavior state monitoring and early warning method and system and storage medium |
| CN116824555A (en) * | 2023-06-14 | 2023-09-29 | 交通运输部水运科学研究所 | A method and system for monitoring crew fatigue during navigation |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106295474B (en) | 2019-03-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN106295474A (en) | The fatigue detection method of deck officer, system and server | |
| CN108791299B (en) | Driving fatigue detection and early warning system and method based on vision | |
| CN112686090B (en) | Intelligent monitoring system for abnormal behavior in bus | |
| CN104637246B (en) | Driver multi-behavior early warning system and danger evaluation method | |
| CN105769120B (en) | Fatigue driving detection method and device | |
| CN112016429B (en) | Fatigue driving detection method based on train cab scene | |
| CN104670155B (en) | Vehicle anti-theft alarm system based on cloud-based Internet of Vehicles | |
| CN107679468A (en) | A kind of embedded computer vision detects fatigue driving method and device | |
| CN105354988B (en) | A kind of driver tired driving detecting system and detection method based on machine vision | |
| WO2020042984A1 (en) | Vehicle behavior detection method and apparatus | |
| CN112633057A (en) | Intelligent monitoring method for abnormal behaviors in bus | |
| CN103065121B (en) | The engine driver's method for monitoring state analyzed based on video human face and device | |
| CN103366506A (en) | Device and method for automatically monitoring telephone call behavior of driver when driving | |
| CN104068868A (en) | Method and device for monitoring driver fatigue on basis of machine vision | |
| CN106530623A (en) | Fatigue driving detection device and method | |
| CN111753674A (en) | A detection and recognition method of fatigue driving based on deep learning | |
| CN112698660B (en) | Driving behavior visual perception device and method based on 9-axis sensor | |
| CN108108651B (en) | Method and system for detecting driver non-attentive driving based on video face analysis | |
| CN102085099A (en) | Method and device for detecting fatigue driving | |
| CN106529496A (en) | Locomotive driver real-time video fatigue detection method | |
| CN117523537A (en) | Dynamic judging method for dangerous degree of vehicle driving | |
| Chen | Research on driver fatigue detection strategy based on human eye state | |
| CN113901866A (en) | Fatigue driving early warning method based on machine vision | |
| CN204706141U (en) | Wearable device | |
| CN114492656B (en) | A fatigue monitoring system based on computer vision and sensors |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190322 |
| CF01 | Termination of patent right due to non-payment of annual fee | |