CN109823344B - Driving prompting method and system - Google Patents

Driving prompting method and system

Info

Publication number
CN109823344B
Authority
CN
China
Prior art keywords
driving
image
target
processor
warning
Prior art date
Legal status
Active
Application number
CN201711170806.5A
Other languages
Chinese (zh)
Other versions
CN109823344A (en)
Inventor
何亮融
杨宗翰
王思捷
林建锜
萧博格
Current Assignee
Acer Inc
Original Assignee
Acer Inc
Priority date: 2017-11-22
Filing date: 2017-11-22
Publication date: 2021-10-22
Application filed by Acer Inc
Priority to CN201711170806.5A
Publication of CN109823344A
Application granted
Publication of CN109823344B

Landscapes

  • Eye Examination Apparatus (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a driving prompting method and system. The driving prompting system is mounted on a mobile vehicle, and the method includes the following steps. An environment image is captured via an image capturing device. Object detection is performed on the environment image to obtain a target position of an image detection object. The gaze position of the driver is detected via an eye tracking device. Whether the gaze position matches the target position is determined. If the gaze position does not match the target position, a warning device is controlled to activate a warning operation.

Description

Driving prompting method and system
Technical Field
The present invention relates to driving assistance technology, and in particular to a driving prompting method and system.
Background
As the number and kinds of vehicles increase, the probability of traffic accidents also rises year by year. Clearly, in addition to the continuous progress of vehicle powertrain technology, improving safety during the driving process is another subject that must be addressed. For example, there are many objects along a vehicle's route that require the driver's attention, such as traffic signs, pedestrians, and obstacles, and the driver must watch the road environment attentively as the basis for the next driving action.
Traffic accidents often occur when a driver does not pay careful attention to road conditions. A driver may fail to notice a traffic sign or an obstacle on the road due to fatigue or distraction, and consequently fail to perform the correct driving action for the traffic sign or obstacle in front of them. For example, if the driver is distracted by a mobile phone and does not notice a traffic light changing at the right moment, the driver may run a red or yellow light without reacting. If the driver is overly fatigued and does not notice a speed limit sign on the road, there is a great chance the vehicle will be driven over the limit. Such situations are very likely to cause traffic accidents. In other words, if the driver can reliably notice the objects along the route that require attention, the probability of traffic accidents can be greatly reduced.
Disclosure of Invention
In view of the above, the present invention provides a driving prompting method and system that determine, through an eye tracking device, whether the driver's line of sight is focused on a target object, and activate a warning in response to detecting that it is not, thereby improving driving safety.
The invention provides a driving prompting method applicable to a driving prompting system that includes an eye tracking device and an image capturing device. The driving prompting system is mounted on a mobile vehicle, and the method includes the following steps. An environment image is captured via the image capturing device. Object detection is performed on the environment image to obtain a target position of an image detection object. The gaze position of the driver is detected via the eye tracking device. Whether the gaze position matches the target position is determined. If the gaze position does not match the target position, a warning device is controlled to activate a warning operation.
From another perspective, the present invention provides a driving prompting system that includes an image capturing device, an eye tracking device, a memory, and a processor. The memory stores a plurality of instructions, and the processor is coupled to the image capturing device, the eye tracking device, and the memory. The processor is configured to execute the instructions to: capture an environment image via the image capturing device; perform object detection on the environment image to obtain a target position of an image detection object; detect the gaze position of the driver via the eye tracking device; determine whether the gaze position matches the target position; and, if the gaze position does not match the target position, control a warning device to activate a warning operation.
Based on the above, the driving prompting method and system provided by the embodiments of the invention track the gaze position of the driver's eyes and obtain the target position of an image detection object on the environment image through image capture and object detection. Whether the driver's line of sight falls on the target object can therefore be detected by comparing the driver's gaze position with the target position of the image detection object. Accordingly, if the gaze position detected by the eye tracking device does not match the target position of the image detection object, the warning operation can be activated to prompt the driver.
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a block diagram of a driving prompting system according to an embodiment of the invention;
FIG. 2 is a schematic view of a driving prompting system and a mobile vehicle according to an embodiment of the invention;
FIG. 3 is a flowchart of a driving prompting method according to an embodiment of the invention;
FIG. 4 is a block diagram of a driving prompting system according to an embodiment of the invention;
FIG. 5 is a flowchart of a driving prompting method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of obtaining a gaze position according to an embodiment of the invention;
FIGS. 7A and 7B are schematic diagrams illustrating scenarios of obtaining a gaze position according to an embodiment of the invention;
FIG. 8 is a schematic diagram of determining whether a gaze position matches a target position according to an embodiment of the invention;
FIG. 9 is a schematic diagram of determining whether a gaze position matches a target position according to an embodiment of the invention.
Description of the reference numerals
10, 40: driving prompting system;
110, 410: eye tracking device;
120, 420: warning device;
130, 430: processor;
140, 440: memory;
150, 450, 490: image capturing device;
460: distance measuring device;
470: vehicle driving element;
21, 63: driver;
22, 61: vehicle;
M1: rear-view mirror;
f1: field of view range;
62: target object;
PL1, PL2: projection plane;
S1, S2: image detection object;
Img1, Img2, Img8: environment image;
Img3: projection reference image;
d1: distance;
Z1: specific block;
Z2: image block;
S301 to S306, S501 to S512: steps.
Detailed Description
Some embodiments of the invention will now be described in detail with reference to the drawings, wherein like reference numerals refer to like or similar elements throughout the several views. These embodiments are only a part of the present invention and do not disclose all of its possible implementations. Rather, they are merely examples of the claimed method and system.
Fig. 1 is a block diagram of a driving prompting system according to an embodiment of the invention. Referring to fig. 1, the driving prompting system 10 includes an eye tracking device 110, a warning device 120, a processor 130, a memory 140, and an image capturing device 150.
The driving prompting system 10 is suitable for being disposed on a mobile vehicle and continuously detecting surrounding target objects while the mobile vehicle is being driven or ridden. The mobile vehicle is a vehicle moved under human control, such as an automobile, a bus, a bicycle, a motorcycle, a boat, an airplane, or another mobile machine, and the invention is not limited thereto. The driving prompting system 10 can also detect features of the driver's eyes to identify the driver's gaze position. By comparing the position of the target object with the driver's gaze position, the driving prompting system 10 can determine whether the driver's gaze actually falls on the target object that should be noticed. Thus, the driving prompting system 10 can provide a warning to notify the driver when it detects that the driver is not looking at a target object that should be noticed during driving.
The eye tracking device 110 is a device capable of tracking and measuring the position and movement of the eyes, and is adapted to detect features of the driver's eyes. In one embodiment, the eye tracking device 110 may include a face image capturing module for acquiring a face image and an eye image of the driver and determining the driver's gaze position according to the orientation of the face and the position of the pupils. In another embodiment, the eye tracking device 110 may include a light emitting module and an eye image capturing module. The light emitting module emits a light beam toward the driver's eyes, and the eye image capturing module captures an eye image. The eye tracking device 110 detects the driver's pupil position and bright-spot position in the eye image, and determines the current gaze position according to the positional relationship between the pupil position and the bright-spot position. The bright spot is the reflection point formed on the driver's eye by the light beam emitted from the light emitting module.
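As an illustration of the pupil/bright-spot relationship described above, the following minimal Python sketch maps the pupil-to-glint offset to a gaze point through a second-order polynomial; the function name, feature vector, and calibration coefficients are illustrative assumptions rather than the behavior of any particular eye tracking device 110.

```python
import numpy as np

def gaze_from_pupil_and_glint(pupil_xy, glint_xy, coeffs_x, coeffs_y):
    """Map the pupil-center-to-bright-spot (glint) offset to a gaze point
    using a per-driver polynomial calibration (hypothetical coefficients)."""
    dx, dy = np.subtract(pupil_xy, glint_xy)           # pupil-glint offset
    features = np.array([1.0, dx, dy, dx * dy, dx ** 2, dy ** 2])
    return float(features @ coeffs_x), float(features @ coeffs_y)

# Example coefficients such as a calibration routine might produce.
cx = np.array([0.0, 1.2, 0.1, 0.0, 0.01, 0.0])
cy = np.array([0.0, 0.05, 1.1, 0.0, 0.0, 0.02])
print(gaze_from_pupil_and_glint((312, 240), (300, 236), cx, cy))
```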
The warning device 120 is used to execute a warning operation that issues warning information to the driver. The warning device 120 may be an in-vehicle dashboard, a navigation device, the driver's mobile phone, and so on, and may perform various warning operations according to its type to issue corresponding warning information. For example, when the warning device 120 is an in-vehicle dashboard, the warning information may be a text prompt, a flashing-light prompt, a color-change prompt, and the like. When the warning device 120 is an in-vehicle speaker or the driver's mobile phone, the warning information may be a voice prompt or the like. When the warning device 120 is a navigation device, the warning information may be a text prompt, a voice prompt, or the like. Alternatively, the warning device 120 may be a component of the mobile vehicle that is in contact with the driver, such as a seat or a steering wheel. In other words, the warning information may be an indication such as a seat vibration or a steering-wheel vibration, and any method that can notify the driver is applicable to the present invention.
The processor 130 is coupled to the eye tracking device 110, the warning device 120, the memory 140, and the image capturing device 150 to control the overall operation of the driving prompting system 10. In this embodiment, the processor 130 is, for example, a central processing unit (CPU), or another programmable microprocessor, digital signal processor (DSP), programmable controller, application specific integrated circuit (ASIC), programmable logic device (PLD), or other hardware device with computing capability, but the disclosure is not limited thereto.
The memory 140 is, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, or other similar device or combination thereof, and is used to store data, program code, images, and the like used in the operation of the driving prompting system 10. That is, the memory 140 also records a plurality of instructions executable by the processor 130.
The image capturing device 150 is disposed on the mobile vehicle and is used to capture an image of the environment in front of the mobile vehicle. The image capturing device 150 is, for example, an image sensor with a charge coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) element, and is used to capture the environment image. In one embodiment, the image capturing device 150 may be disposed at the front of the vehicle, such as above the windshield, to capture an image of the road conditions ahead. Alternatively, the image capturing device 150 may be a driving recorder or another digital camera installed in the vehicle, but the present invention is not limited to these embodiments.
Fig. 2 is a schematic view of a driving prompting system and a mobile vehicle according to an embodiment of the invention. Referring to fig. 2, when the driving prompting system 10 is applied to an automobile driving environment and the driver 21 is driving the vehicle 22 (i.e., the mobile vehicle), the image capturing device 150 may capture images toward the front of the vehicle 22 based on the field of view f1. Further, the eye tracking device 110 may be provided on one side of the rear-view mirror M1 to acquire eye images in the direction of the driver 21, thereby detecting the gaze position of the driver 21. However, fig. 2 is only an example; the number and actual positions of the eye tracking devices 110 and image capturing devices 150 are not limited by the present invention and may be designed according to the actual application.
Fig. 3 is a flowchart of a driving prompting method according to an embodiment of the invention. Referring to fig. 3, the method of this embodiment is applicable to the driving prompting system 10 of the above embodiment, and the detailed steps of providing a warning according to the driver's gaze position are described below with reference to the elements of the driving prompting system 10.
First, in step S301, the processor 130 acquires an environment image via the image capturing device 150. The image capturing device 150 is configured to capture images around the mobile vehicle; for example, it may face the direction of travel, the side of the direction of travel, or the rear to capture the environment image. In addition, the number of image capturing devices 150 is not limited by the present invention. In one embodiment, the processor 130 may use more than one image capturing device to capture multiple environment images simultaneously. It should be noted that the field of view (FOV) of the image capturing device 150 depends on its capability and mounting position, and the content of the environment image in turn depends on that FOV.
In step S302, the processor 130 performs object detection on the environment image to obtain the target position of an image detection object. Specifically, specific image features of various target objects have been established in a database. The processor 130 may analyze the environment image according to the specific image features in the database to determine whether an image detection object matching a specific image feature exists in the environment image. In other words, when an image detection object matching a specific image feature is found in the environment image in front of the vehicle, the processor 130 may determine that the corresponding target object is located in the shooting direction of the image capturing device 150, such as in front of the vehicle. For example, the processor 130 may perform image analysis on contour information or color information of the environment image to detect the presence or absence of a target object. The target object may be a traffic sign (e.g., a traffic signal, a driving regulation sign, a road guide sign, etc.) or an obstacle (e.g., a pedestrian, a road block, an animal, etc.).
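The database-driven detection described above could, for instance, rely on simple color and contour analysis. The sketch below, written against OpenCV, is one plausible reading under that assumption; the color range, minimum area, and function name are hypothetical and stand in for whatever image features the database actually stores.

```python
import cv2
import numpy as np

def detect_target(frame_bgr, hsv_low, hsv_high, min_area=80):
    """Toy detector: threshold one colour range (e.g. the red lamp of a
    traffic light) and return the centroid of the largest matching contour
    as the target position (Xc, Yc), or None when nothing matches."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_low, hsv_high)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# Hypothetical HSV range for a red signal lamp; the frame would come from
# the image capturing device, e.g. frame = cv2.imread("environment.jpg").
red_low, red_high = np.array([0, 120, 120]), np.array([10, 255, 255])
```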
In an embodiment, the target position of the image detection object may be a representative image coordinate of the image detection object on the environment image, or a three-dimensional coordinate generated by spatial depth analysis; the present invention is not limited in this respect. It should be noted that if the target position of the image detection object is a three-dimensional coordinate, more than one image capturing device is installed to acquire multiple environment images, and the three-dimensional coordinate of the image detection object can be obtained by computing depth information.
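For the three-dimensional case, one common way to recover depth from two image capturing devices is stereo triangulation. The sketch below assumes a rectified, calibrated stereo pair; all camera parameters (focal length in pixels, baseline, principal point) are illustrative.

```python
def target_3d_from_stereo(xl, yl, xr, focal_px, baseline_m, cx, cy):
    """Rough 3-D target position from the same image detection object found
    at (xl, yl) in the left image and at column xr in the right image."""
    disparity = xl - xr                      # pixel disparity between the views
    if disparity <= 0:
        raise ValueError("object must have positive disparity")
    z = focal_px * baseline_m / disparity    # depth along the optical axis
    x = (xl - cx) * z / focal_px             # lateral offset from the camera
    y = (yl - cy) * z / focal_px             # vertical offset from the camera
    return x, y, z

# e.g. target_3d_from_stereo(640, 360, 610, focal_px=1000.0,
#                            baseline_m=0.12, cx=640, cy=360)
```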
In step S303, the processor 130 detects the driver's gaze position via the eye tracking device 110. In one embodiment, the eye tracking device 110 can detect and determine the gaze point by image analysis: the gaze tracking technique may compute the positional shift of the same pupil feature point between two consecutively captured eye images as the movement information of the eyeball. Alternatively, the eye tracking device 110 may track the gaze point based on the relationship between the pupil and the eye socket, their shapes, and the gaze direction. The eye tracking device 110 continuously provides the detected gaze position to the processor 130. In one embodiment, the eye tracking device 110 can compute the two-dimensional projection coordinates of the driver's line of sight on a projection plane in front of the driver, which can serve as the driver's gaze position. In one embodiment, the gaze position may also be a three-dimensional coordinate based on the depth setting of the projection plane.
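The frame-to-frame variant mentioned above can be reduced to a very small update rule; the gain that converts a pupil displacement in eye-image pixels into a shift on the projection plane is an assumed calibration value.

```python
def update_gaze(prev_gaze, prev_pupil, curr_pupil, gain=(1.0, 1.0)):
    """Shift the previous gaze estimate by the displacement of the same
    pupil feature point between two consecutive eye images."""
    dx = curr_pupil[0] - prev_pupil[0]
    dy = curr_pupil[1] - prev_pupil[1]
    return prev_gaze[0] + gain[0] * dx, prev_gaze[1] + gain[1] * dy

# e.g. update_gaze((0.5, 0.5), (310, 242), (315, 240), gain=(0.01, 0.01))
```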
In step S304, the processor 130 determines whether the gaze position matches the target position. In detail, after obtaining the gaze position provided by the eye tracking device 110 and computing the target position, the processor 130 may determine whether the gaze position is close enough to the target position, in order to determine whether the driver's line of sight falls on the target object.
In step S305, if the gaze position does not match the target position (no in step S304), the processor 130 controls the warning device 120 to activate a warning operation. On the other hand, in step S306, if the gaze position matches the target position (yes in step S304), the processor 130 controls the warning device 120 not to activate the warning operation. In other words, the processor 130 controls the warning device 120 to activate the warning operation in response to determining that the gaze position does not match the target position, and controls the warning device 120 not to activate the warning operation in response to determining that the gaze position matches the target position. Specifically, when the processor 130 determines that the driver's gaze position does not match the target position of the image detection object, which means the driver is not looking at the target object, the processor 130 may control the warning device 120 to issue an audible, visual, or tactile warning to notify the driver that an important target object around the mobile vehicle has been overlooked.
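One pass of the flow in fig. 3 (steps S301 to S306) can be summarized as below; each argument is a hypothetical callable standing in for the corresponding device, so this is a structural sketch rather than the actual control program of the driving prompting system 10.

```python
def driving_prompt_cycle(capture_image, detect_target, detect_gaze,
                         positions_match, warn_on, warn_off):
    """One iteration of the driving prompting method (fig. 3)."""
    frame = capture_image()                    # S301: acquire environment image
    target_pos = detect_target(frame)          # S302: object detection
    gaze_pos = detect_gaze()                   # S303: driver gaze position
    if target_pos is None:                     # no target object in view
        warn_off()
        return
    if positions_match(gaze_pos, target_pos):  # S304: compare positions
        warn_off()                             # S306: no warning needed
    else:
        warn_on()                              # S305: activate warning
```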
However, the implementation of the present invention is not limited to the above description, and the contents of the above embodiments may be modified as needed. For example, in an embodiment of the present invention, after the warning operation is activated, it may be determined again whether the gaze position matches the target position, so as to decide whether the warning device should stop the warning operation or whether a component of the mobile vehicle should be further controlled. A detailed embodiment is described below.
Fig. 4 is a block diagram of a driving prompting system according to an embodiment of the invention. Referring to fig. 4, the driving prompting system 40 includes an eye tracking device 410, a warning device 420, a processor 430, a memory 440, an image capturing device 450, and a distance measuring device 460. The functions and coupling relationships of the eye tracking device 410, the warning device 420, the processor 430, the memory 440, and the image capturing device 450 are similar to those of the eye tracking device 110, the warning device 120, the processor 130, the memory 140, and the image capturing device 150 shown in fig. 1, and are not repeated herein. Unlike the embodiment of fig. 1, the driving prompting system 40 further includes the distance measuring device 460, and the processor 430 is coupled to a vehicle driving element 470 of the mobile vehicle.
The distance measuring device 460 may be an ultrasonic distance measuring device, an infrared distance measuring device, or a laser distance measuring device (radar) for measuring the distance between the mobile vehicle and a target object. The distance measuring device 460 may also be an image-depth analysis device that determines the distance between the target object and the mobile vehicle from image depth values. The vehicle driving element 470 may be a braking device, a speed control device, a direction control device, or another mechanical component of the mobile vehicle.
Fig. 5 is a flowchart of a driving prompting method according to an embodiment of the invention. Referring to fig. 5, the method of this embodiment is applicable to the driving prompting system 40 of the above embodiment, and the detailed steps of providing a warning according to the driver's gaze position are described below with reference to the elements of the driving prompting system 40.
In step S501, the processor 430 acquires an environment image via the image capturing device 450. In step S502, the processor 430 performs object detection on the environment image to obtain the target position of an image detection object. In step S503, the processor 430 detects the driver's gaze position via the eye tracking device 410.
For example, fig. 6 is a schematic diagram of obtaining a gaze position according to an embodiment of the present invention. Referring to fig. 6, when the mobile vehicle 61 travels on a road, the image capturing device 450 disposed on the mobile vehicle 61 captures images in the direction of travel, and the eye tracking device 410 disposed on the mobile vehicle 61 is used to detect the driver's gaze position. Specifically, the processor 430 may obtain the reference settings of the projection plane PL1 used by the eye tracking device 410, and may also obtain the projection position (Xe, Ye) of the driver's gaze on the projection plane PL1 as provided by the eye tracking device 410. The reference settings include the width W and the height H of the projection plane PL1 and its distance D from the mobile vehicle 61.
On the other hand, the image capturing device 450 can capture the environment image Img1 in front of the mobile vehicle 61 according to its field of view and mounting position. The processor 430 may detect that an image detection object S1 of the target object 62 (i.e., a traffic light) is present in the environment image Img1, and obtain the target position (Xc, Yc) of the image detection object S1 on the environment image Img1. In addition, according to the field of view of the image capturing device 450 and the reference settings of the projection plane PL1, the processor 430 may convert the projection position (Xe, Ye) on the projection plane PL1 into a relative position (Xet, Yet) on the environment image Img1 to obtain the driver's gaze position. In other words, the processor 430 may map any coordinate on the projection plane PL1 to image coordinates of the environment image Img1 according to the field of view of the image capturing device 450. Here, the processor 430 regards the relative position (Xet, Yet) as the driver's gaze position. Accordingly, in the subsequent steps the processor 430 may determine whether the landing point of the driver's line of sight falls on the target object according to the target position (Xc, Yc) and the relative position (Xet, Yet).
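A possible form of this field-of-view mapping is sketched below; it assumes the projection plane and the camera are both centred on the direction of travel and uses a linear angle-to-pixel approximation, which is a simplification of whatever calibration the actual system would use.

```python
import math

def project_gaze_to_image(xe, ye, plane_w, plane_h, plane_d,
                          img_w, img_h, hfov_deg, vfov_deg):
    """Convert a gaze point (Xe, Ye) on projection plane PL1 (width plane_w,
    height plane_h, distance plane_d from the vehicle) into a relative
    position (Xet, Yet) on the environment image."""
    # Viewing angles of the gaze point relative to straight ahead.
    ang_x = math.atan2(xe - plane_w / 2.0, plane_d)
    ang_y = math.atan2(ye - plane_h / 2.0, plane_d)
    # Linear angle-to-pixel mapping based on the camera's field of view.
    xet = img_w / 2.0 + (ang_x / math.radians(hfov_deg / 2.0)) * (img_w / 2.0)
    yet = img_h / 2.0 + (ang_y / math.radians(vfov_deg / 2.0)) * (img_h / 2.0)
    return xet, yet

# e.g. project_gaze_to_image(0.9, 0.6, plane_w=1.5, plane_h=1.0, plane_d=0.8,
#                            img_w=1280, img_h=720, hfov_deg=90, vfov_deg=60)
```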
Fig. 7A and 7B are schematic diagrams illustrating scenarios of obtaining the gaze position according to an embodiment of the invention. Referring to fig. 7A, the image capturing device 450 disposed outside the mobile vehicle 61 captures the environment image Img2 shown below, and the image capturing device 490 disposed inside the mobile vehicle 61 captures the projection reference image Img3 shown below. It should be noted that, in the example of fig. 7A, the position of the image capturing device 490 may be configured based on the projection plane of the eye tracking device 410, such that the field of view used by the image capturing device 490 to generate the projection reference image Img3 coincides with the virtual space of the eye tracking device 410. The eye tracking device 410 is used to detect the position of the driver's line of sight in this virtual space, and the projection plane of the eye tracking device 410 is a plane perpendicular to the ground in the virtual space. Accordingly, by performing image rectification and matching between the environment image Img2 and the projection reference image Img3, the projection position (Xe, Ye) at which the driver 63 gazes on the projection plane set by the eye tracking device 410 can be converted into the relative position (Xet, Yet) on the environment image Img2.
Referring to fig. 7B, in contrast to fig. 7A, there is no image capturing device 490 inside the mobile vehicle 61. In the example of fig. 7B, the eye tracking device 410 detects the projection position (Xe, Ye) on the projection plane PL2 based on the reference settings of the projection plane PL2. The image capturing device 450 disposed outside the mobile vehicle 61 captures the environment image Img2 shown below, and by performing image rectification on the environment image Img2 and mapping the field of view of the driver 63 with respect to the projection plane PL2, the projection position (Xe, Ye) on the projection plane PL2 set by the eye tracking device 410 can be converted into the relative position (Xet, Yet) on the environment image Img2. As can be seen from the examples of fig. 7A and 7B, the projection position (Xe, Ye) can be mapped into the coordinate space of the target position (Xc, Yc), and the driver's gaze position (Xet, Yet) can thereby be obtained.
Referring again to fig. 5, in step S504, when a target object corresponding to the image detection object is detected, the processor 430 detects the distance between the mobile vehicle and the target object by using the distance measuring device 460. Next, in step S505, the processor 430 determines whether the distance between the mobile vehicle and the target object is smaller than a threshold value. If the determination in step S505 is no, the target object is too far from the mobile vehicle to require the driver's attention, and in step S508 the processor 430 controls the warning device 420 not to activate the warning operation. For example, the distance measuring device 460 may measure a distance A1 (in meters) between the mobile vehicle and a traffic light; when the processor 430 determines that A1 is greater than the threshold B1 (in meters), the processor 430 does not activate the warning device 420 and does not further determine whether the gaze position matches the target position of the traffic light.
It should be noted that the threshold value can be set according to the practical application. In an embodiment, the threshold value may vary according to the type of the target object. Taking an obstacle as an example, an obstacle far away from the mobile vehicle does not necessarily require the driver's attention, so the threshold value corresponding to obstacles may be set to a first value. Taking a specific traffic signal as an example, the threshold value corresponding to that traffic signal may be set to a second value different from the first value.
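A per-type threshold table such as the description suggests could look like the following; the specific types and distances are assumptions for illustration only.

```python
# Hypothetical thresholds (in meters) per target-object type: a distant
# obstacle can be ignored, while a traffic signal matters from farther away.
DISTANCE_THRESHOLDS = {"obstacle": 30.0, "traffic_signal": 80.0}

def needs_gaze_check(target_type, distance_m, default_m=50.0):
    """Steps S504-S505: only compare gaze and target positions when the
    target object is closer than the threshold for its type."""
    return distance_m < DISTANCE_THRESHOLDS.get(target_type, default_m)
```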
On the other hand, if the determination in step S505 is yes, in step S506 the processor 430 determines whether the gaze position matches the target position. As described with reference to fig. 6, 7A, and 7B, the processor 430 can obtain the target position from the environment image and compute the gaze position from the projection position provided by the eye tracking device 410. Thus, the processor 430 may further determine whether the gaze position is sufficiently close to the target position. In one embodiment, the processor 430 may determine whether the gaze position matches the target position directly from the distance between the gaze position and the target position.
For example, fig. 8 is a schematic diagram of determining whether the gaze position matches the target position according to an embodiment of the invention. Referring to fig. 8, the processor 430 may determine whether the distance d1 between the target position (Xc, Yc) of the image detection object S2 on the environment image Img8 and the driver's gaze position (Xet, Yet) is greater than a threshold value. When the distance d1 is greater than the threshold value, the processor 430 may determine that the gaze position does not match the target position; when the distance d1 is not greater than the threshold value, the processor 430 may determine that the gaze position matches the target position.
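In code, the distance test of fig. 8 reduces to a single comparison; the pixel threshold here is an assumed value.

```python
import math

def gaze_matches_target(gaze_xy, target_xy, pixel_threshold=60.0):
    """Match when the gaze position (Xet, Yet) and the target position
    (Xc, Yc) are close enough on the environment image (fig. 8)."""
    return math.dist(gaze_xy, target_xy) <= pixel_threshold
```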
In addition, in an embodiment, the processor 430 may also determine whether the gaze position matches the target position according to the similarity between the image detection object and an image block containing the gaze position. For example, fig. 9 is a schematic diagram of determining whether the gaze position matches the target position according to an embodiment of the invention. Referring to fig. 9, the processor 430 obtains a specific block Z1 on the environment image Img8 extended from the driver's gaze position (Xet, Yet). The processor 430 then determines the image similarity between the specific block Z1 and the image block Z2 containing the image detection object S2. When the image similarity between the specific block Z1 and the image detection object S2 is greater than a threshold value, the processor 430 may determine that the gaze position matches the target position. When the image similarity is not greater than the threshold value, the processor 430 may determine that the gaze position does not match the target position.
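One way to realize the block-similarity test of fig. 9 is normalized cross-correlation between the block around the gaze position and the block containing the detected object; the block size, resizing step, and similarity threshold below are assumptions, and OpenCV's matchTemplate is only one of several usable similarity measures.

```python
import cv2

def gaze_block_matches_object(env_img, gaze_xy, obj_bbox, half=40, thresh=0.6):
    """Compare block Z1 (centred on the gaze position) with block Z2
    (containing the image detection object) by normalized cross-correlation."""
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    z1 = env_img[max(0, y - half):y + half, max(0, x - half):x + half]
    ox, oy, ow, oh = obj_bbox
    z2 = env_img[oy:oy + oh, ox:ox + ow]
    if z1.size == 0 or z2.size == 0:
        return False
    z1 = cv2.resize(z1, (z2.shape[1], z2.shape[0]))   # equalize block sizes
    score = cv2.matchTemplate(z1, z2, cv2.TM_CCOEFF_NORMED)[0][0]
    return score > thresh
```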
Referring again to fig. 5, if the determination in step S506 is yes, the driver's line of sight is on the target object, and in step S508 the processor 430 controls the warning device 420 not to activate the warning operation. If the determination in step S506 is no, the driver's line of sight does not fall on the target object, and in step S507 the processor 430 controls the warning device 420 to activate the warning operation. After the warning operation is activated, in step S509 the processor 430 repeats steps S501 to S503. Next, in step S510, the processor 430 determines again whether the gaze position matches the target position.
If the determination in step S510 is yes, the driver's line of sight, prompted by the warning operation, is now on the target object, and in step S512 the processor 430 controls the warning device 420 to stop executing the warning operation. If the determination in step S510 is no, the driver's line of sight still does not fall on the target object despite the warning operation, and in step S511 the processor 430 may control the vehicle driving element 470 of the mobile vehicle according to the target object corresponding to the image detection object. In one embodiment, the processor 430 may control the speed control device of the mobile vehicle according to a traffic signal or an obstacle to reduce the traveling speed of the mobile vehicle. That is, when the driver still fails to notice the target object, the processor 430 may control the vehicle driving element 470 of the mobile vehicle according to the type of the target object, thereby intervening in the driving behavior. For example, if the target object is a speed limit sign, the processor 430 controls the speed control device of the mobile vehicle to reduce the traveling speed. If the target object is an obstacle, the processor 430 controls the braking device of the mobile vehicle to perform automatic braking.
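The intervention step could be expressed as a simple dispatch on the target type; VehicleControl below is a hypothetical stand-in for the vehicle driving element 470, not an interface defined by the patent.

```python
class VehicleControl:
    """Hypothetical stand-in for the vehicle driving element 470."""
    def reduce_speed(self):
        print("reducing traveling speed")
    def apply_brakes(self):
        print("automatic braking")

def intervene(target_type, vehicle):
    """Step S511: act on the vehicle driving element according to the type
    of the target object the driver keeps ignoring."""
    if target_type == "obstacle":
        vehicle.apply_brakes()                     # brake for an obstacle
    elif target_type in ("speed_limit_sign", "traffic_signal"):
        vehicle.reduce_speed()                     # slow down for a sign/signal

intervene("speed_limit_sign", VehicleControl())
```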
In summary, in the embodiments of the present invention, the eye tracking device is used to obtain the landing point of the driver's line of sight, so that it can be determined whether the driver's line of sight falls on a target object around the mobile vehicle. When the driver's gaze position does not match the target position, a warning can be provided to prompt the driver to focus on the target object that requires attention. The invention provides timely warnings according to the driver's line of sight so that the driver actually notices the objects on the road that require attention. In addition, the invention can further intervene in the driving behavior according to the driver's line of sight, so as to avoid emergencies and critical situations. Therefore, the invention can provide highly safe driving assistance to reduce the probability of traffic accidents.
Although the present invention has been described with reference to the above embodiments, it should be understood that the invention is not limited to the embodiments, and various changes and modifications can be made by those skilled in the art without departing from the spirit and scope of the invention.

Claims (12)

1. A driving prompting method, applicable to a driving prompting system comprising an eye tracking device and an image acquisition device, wherein the driving prompting system is mounted on a mobile vehicle, and the method comprises the following steps:
(a) acquiring an environment image via the image acquisition device;
(b) performing target object detection on the environment image to acquire a target position of an image detection object;
(c) detecting a gaze position of a driver via the eye tracking device;
(d) determining whether the gaze position matches the target position; and
(e) if the gaze position does not match the target position, controlling a warning device to activate a warning operation,
wherein step (d) further comprises:
acquiring a specific block on the environment image extended from the gaze position of the driver; and
determining an image similarity between the specific block and the image detection object,
wherein, when the image similarity between the specific block and the image detection object is greater than a second threshold value, the gaze position matches the target position; when the image similarity between the specific block and the image detection object is not greater than the second threshold value, the gaze position does not match the target position.
2. The driving prompting method according to claim 1, wherein after step (d), the method further comprises:
if the gaze position matches the target position, controlling the warning device not to activate the warning operation.
3. The driving prompting method according to claim 1, wherein after step (e), the method further comprises:
repeating steps (a) to (d);
after steps (a) to (d) are repeated, if the gaze position matches the target position, controlling the warning device to stop executing the warning operation; and
after steps (a) to (d) are repeated, if the gaze position does not match the target position, controlling a vehicle driving element of the mobile vehicle according to a target object corresponding to the image detection object.
4. The driving prompting method according to claim 3, wherein the target object comprises a traffic sign or an obstacle, and the step of controlling the vehicle driving element of the mobile vehicle according to the target object corresponding to the image detection object comprises:
controlling a speed control device of the mobile vehicle according to the traffic sign or the obstacle so as to reduce the traveling speed of the mobile vehicle.
5. The driving prompting method according to claim 1, wherein the method further comprises:
when a target object corresponding to the image detection object is detected, detecting a distance between the mobile vehicle and the target object by using a distance measuring device; and
if the distance between the mobile vehicle and the target object is smaller than a threshold value, executing step (d).
6. The driving prompting method according to claim 1, wherein step (c) comprises:
acquiring a reference setting value of a projection plane of the eye tracking device and a projection position of the driver's gaze on the projection plane; and
converting the projection position on the projection plane into a relative position on the environment image according to a field of view of the image acquisition device and the reference setting value of the projection plane, so as to acquire the gaze position of the driver.
7. A driving prompting system, comprising:
an image acquisition device;
an eye tracking device;
a memory storing a plurality of instructions; and
a processor coupled to the image acquisition device, the eye tracking device, and the memory, wherein the processor is configured to execute the plurality of instructions to:
acquiring an environment image via the image acquisition device;
performing target object detection on the environment image to acquire a target position of an image detection object;
detecting a gaze position of a driver via the eye tracking device;
determining whether the gaze position matches the target position; and
if the gaze position does not match the target position, controlling a warning device to activate a warning operation,
wherein the processor is further configured to execute the plurality of instructions to:
acquiring a specific block on the environment image extended from the gaze position of the driver; and
determining an image similarity between the specific block and the image detection object,
wherein, when the image similarity between the specific block and the image detection object is greater than a second threshold value, the gaze position matches the target position; when the image similarity between the specific block and the image detection object is not greater than the second threshold value, the gaze position does not match the target position.
8. The driving prompting system according to claim 7, wherein the processor is further configured to execute the plurality of instructions to:
if the gaze position matches the target position, controlling the warning device not to activate the warning operation.
9. The driving prompting system according to claim 7, wherein the processor is coupled to a vehicle driving element of a mobile vehicle, and the processor is further configured to execute the plurality of instructions to:
if the gaze position matches the target position, controlling the warning device to stop executing the warning operation; and
if the gaze position does not match the target position, controlling the vehicle driving element of the mobile vehicle according to a target object corresponding to the image detection object.
10. The driving prompting system according to claim 9, wherein the target object comprises a traffic sign or an obstacle, and the processor is further configured to execute the plurality of instructions to:
controlling a speed control device of the mobile vehicle according to the traffic sign or the obstacle so as to reduce the traveling speed of the mobile vehicle.
11. The driving prompting system according to claim 7, wherein the driving prompting system further comprises a distance measuring device coupled to the processor, and the processor is further configured to execute the plurality of instructions to:
when a target object corresponding to the image detection object is detected, detecting a distance between the mobile vehicle and the target object by using the distance measuring device; and
if the distance between the mobile vehicle and the target object is smaller than a threshold value, determining whether the gaze position matches the target position.
12. The driving prompting system according to claim 7, wherein the processor is further configured to execute the plurality of instructions to:
acquiring a reference setting value of a projection plane of the eye tracking device and a projection position of the driver's gaze on the projection plane; and
converting the projection position on the projection plane into a relative position on the environment image according to a field of view of the image acquisition device and the reference setting value of the projection plane, so as to acquire the gaze position of the driver.

Priority Applications (1)

Application Number: CN201711170806.5A
Priority Date: 2017-11-22
Filing Date: 2017-11-22
Title: Driving prompting method and system

Publications (2)

Publication Number: CN109823344A (en), Publication Date: 2019-05-31
Publication Number: CN109823344B (en), Publication Date: 2021-10-22

Family

ID: 66858070

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant