CN111242065A - Portable vehicle-mounted intelligent driving system - Google Patents

Portable vehicle-mounted intelligent driving system Download PDF

Info

Publication number
CN111242065A
Authority
CN
China
Prior art keywords
image
driver
pixel point
unit
gray
Prior art date
Legal status
Granted
Application number
CN202010053700.2A
Other languages
Chinese (zh)
Other versions
CN111242065B (en)
Inventor
乔大雷
杨勇
杨松
Current Assignee
Jiangsu Runyang Auto Parts Manufacturing Co ltd
Original Assignee
Jiangsu Runyang Auto Parts Manufacturing Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Runyang Auto Parts Manufacturing Co ltd
Priority to CN202010053700.2A
Publication of CN111242065A
Application granted
Publication of CN111242065B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a portable vehicle-mounted intelligent driving system comprising an image acquisition module, an image processing module and an alarm module. The image acquisition module is used for acquiring a face image of the driver. The image processing module is used for identifying, from the face image, whether the driver is in a fatigue driving state; if the driver is identified as being in a fatigue driving state, the driver is warned through the alarm module. By recognizing the driver's face image through image recognition, the invention judges whether the driver is in a fatigue driving state and reminds the driver when this is the case, thereby enhancing driving safety and helping to avoid traffic accidents.

Description

Portable vehicle-mounted intelligent driving system
Technical Field
The invention relates to the field of automobiles, in particular to a portable vehicle-mounted intelligent driving system.
Background
During driving, the driver stays in a fixed position for long periods and must remain highly concentrated to judge external information. This easily leads to fatigue driving and is detrimental to driving safety. A portable vehicle-mounted intelligent driving system is therefore needed to solve the above problems.
Disclosure of Invention
To address the above problems, the invention provides a portable vehicle-mounted intelligent driving system comprising an image acquisition module, an image processing module and an alarm module;
the image acquisition module is used for acquiring a face image of a driver;
the image processing module is used for identifying, from the face image, whether the driver is in a fatigue driving state; if the driver is identified as being in a fatigue driving state, the driver is warned through the alarm module.
The portable vehicle-mounted intelligent driving system further comprises a navigation module, and the navigation module is used for providing route navigation for the driver.
The alarm module comprises an audible and visual alarm unit, and the audible and visual alarm unit is used for alerting the driver both visually and audibly.
The image processing module comprises a graying processing unit, a noise reduction unit, an illumination correction unit, a segmentation unit, a feature extraction unit and an identification unit;
the graying processing unit is used for performing graying processing on the face image to obtain a grayed image;
the noise reduction unit is used for carrying out noise reduction processing on the grayed image to obtain a noise reduction image;
the illumination correction unit is used for performing illumination correction on the noise-reduced image to obtain an illumination correction image;
the segmentation unit is used for carrying out image segmentation processing on the illumination correction image to obtain a segmented image;
the feature extraction unit is used for performing feature extraction on the segmented image to obtain facial feature data of the driver;
the recognition unit is used for comparing the facial feature data with pre-stored standard feature data for the fatigue driving state, so as to recognize whether the driver is in a fatigue driving state.
The invention has the beneficial effects that: by recognizing the driver's face image through image recognition, the invention judges whether the driver is in a fatigue driving state and reminds the driver when this is the case, thereby enhancing driving safety and helping to avoid traffic accidents.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a diagram of an exemplary embodiment of a portable vehicle-mounted intelligent driving system according to the present invention.
Fig. 2 is a diagram of an exemplary embodiment of an image processing module according to the present invention.
Fig. 3 is a diagram of an exemplary embodiment of the coordinate axes of a two-dimensional histogram according to the present invention.
Reference numerals: the system comprises an image acquisition module 1, an image processing module 2, an alarm module 3, a graying processing unit 21, a noise reduction unit 22, an illumination correction unit 23, a segmentation unit 24, a feature extraction unit 25 and an identification unit 26.
Detailed Description
The invention is further described with reference to the following examples.
Referring to fig. 1, the portable vehicle-mounted intelligent driving system of the present invention includes an image acquisition module 1, an image processing module 2 and an alarm module 3;
the image acquisition module 1 is used for acquiring a face image of a driver;
the image processing module 2 is used for identifying, from the face image, whether the driver is in a fatigue driving state; if the driver is identified as being in a fatigue driving state, the driver is warned through the alarm module 3.
In one embodiment, the portable vehicle-mounted intelligent driving system further comprises a navigation module, and the navigation module is used for navigating the path for the driver.
In one embodiment, the alarm module 3 includes an audible and visual alarm unit for visually and audibly reminding the driver.
In one embodiment, as shown in fig. 2, the image processing module 2 includes a graying processing unit 21, a noise reduction unit 22, an illumination correction unit 23, a segmentation unit 24, a feature extraction unit 25, and a recognition unit 26;
the graying processing unit 21 is configured to perform graying processing on the face image to obtain a grayed image;
the noise reduction unit 22 is configured to perform noise reduction processing on the grayed image to obtain a noise-reduced image;
the illumination correction unit 23 is configured to perform illumination correction on the noise-reduced image to obtain an illumination corrected image;
the segmentation unit 24 is configured to perform image segmentation processing on the illumination correction image to obtain a segmented image;
the feature extraction unit 25 is configured to perform feature extraction on the segmented image to obtain facial feature data of the driver;
the recognition unit 26 is configured to compare the facial feature data with pre-stored standard feature data in a fatigue driving state, so as to recognize whether the driver is in the fatigue driving state.
In one embodiment, performing the graying processing on the face image to obtain a grayed image includes:
carrying out graying processing on the face image using a weighted average method to obtain the grayed image.
In another embodiment, performing the graying processing on the face image to obtain a grayed image includes:
converting the face image from an RGB color space to an XYZ color space to obtain a first converted image;
converting the first converted image from an XYZ color space to a Lab color space and taking a luminance component L as a second converted image;
establishing an objective function:
[Objective function T_i: formula image BDA0002372084870000031, not reproduced in text]
In the equation, L1 represents the total number of pixel points in the face image; φ1 and φ2 denote weight parameters, with φ1 + φ2 = 1; f_i denotes the gray value of the i-th pixel point in the grayed image to be solved; L2 denotes the total number of neighboring pixel points of the i-th pixel point in the face image: when the i-th pixel point is a corner pixel point the total number of neighboring pixel points is 3, when the i-th pixel point is an edge pixel point the total number is 6, and otherwise the total number is 8; R_i, G_i, B_i denote the three RGB color-space component values of the i-th pixel point in the face image; R_j, G_j, B_j denote the three RGB color-space component values of the j-th neighboring pixel point of the i-th pixel point in the face image; ω_ij denotes a preset gray-scale adjustment parameter; K_i denotes the gray value of the i-th pixel point in the second converted image; the ordering rule of pixel points in the face image is the same as that of pixel points in the second converted image;
finding the derivative ∂T_i/∂f_i of the objective function T_i with respect to f_i (formula images BDA0002372084870000032 and BDA0002372084870000033 in the original publication) and setting it equal to zero, thereby obtaining the value of f_i.
In this embodiment, solving the objective function keeps the contrast of the grayed image consistent with that of the original face image, which effectively avoids the loss of detail information caused by inaccurate contrast in conventional graying processing and improves the accuracy of the subsequent noise reduction processing.
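For the color-space conversion step of this embodiment (RGB to XYZ to Lab, keeping the luminance component L as the second converted image), a minimal sketch using scikit-image could look as follows; rescaling L from [0, 100] to [0, 255] is an assumption made here so that it can serve as an 8-bit gray reference:

```python
import numpy as np
from skimage import color

def luminance_component(face_image_rgb: np.ndarray) -> np.ndarray:
    """Return the Lab luminance channel L as the 'second converted image'.

    skimage converts RGB -> XYZ -> CIELAB internally; L is returned in the
    range [0, 100] and is rescaled here to [0, 255]. The rescaling to the
    8-bit range is an assumption, not stated in the patent.
    """
    lab = color.rgb2lab(face_image_rgb)   # RGB -> XYZ -> CIELAB
    L = lab[:, :, 0]                      # luminance component, 0..100
    return np.clip(L * 255.0 / 100.0, 0, 255).astype(np.uint8)
```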
In an embodiment, performing the noise reduction processing on the grayed image to obtain a noise-reduced image includes: carrying out noise reduction processing on the grayed image using median filtering to obtain the noise-reduced image.
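A minimal sketch of this median-filtering variant, assuming OpenCV and a 3 x 3 window (the patent does not specify the window size):

```python
import cv2
import numpy as np

def denoise_median(grayed_image: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Median-filter the grayed image; ksize=3 is an assumed default."""
    if grayed_image.dtype != np.uint8:
        grayed_image = np.clip(grayed_image, 0, 255).astype(np.uint8)
    return cv2.medianBlur(grayed_image, ksize)
```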
In another embodiment, the performing noise reduction processing on the grayed image to obtain a noise-reduced image includes:
performing wavelet decomposition on the grayed image to obtain high-frequency coefficient images HP_v and a low-frequency coefficient image LP of the grayed image;
HP_v is processed using the following adaptive function:
[Adaptive function: formula image BDA0002372084870000041, not reproduced in text]
In the formula, aHP_v denotes the high-frequency coefficient image after processing by the adaptive function; v ∈ {H1, H2, H3}, where H1, H2, H3 denote the three high-frequency sub-band images of the wavelet decomposition; ya and yb denote preset threshold parameters; cz denotes a sign function; ZS denotes the variance estimate of the noise in the high-frequency sub-band image; aHP_v(x, y) denotes the gray value of the pixel point at position (x, y);
for LP, the following operation is performed:
[Low-frequency processing formula: image BDA0002372084870000042, not reproduced in text]
In the formula, α1 and α2 are wavelet weight parameters, with α1 + α2 = 1; C_{x,y} denotes the set formed by the pixel points in the L × L neighborhood of the pixel point at position (x, y) in LP; LP(i, j) denotes the gray value of the pixel point at position (i, j) in the neighborhood; Fc_{x,y} denotes the variance of the gray values of the pixel points in the neighborhood; HA denotes an adjustment coefficient; Ave_{x,y} denotes the mean of the gray values of the pixel points in the neighborhood; aLP denotes the computed low-frequency coefficient image; Gsfc denotes the variance of the Gaussian filter;
[Formula image BDA0002372084870000043, not reproduced in text]
In the formula, cd denotes a scale parameter; ZS2 denotes the noise standard deviation estimate in LP; fc(x, y) denotes a compensation function:
[Compensation function fc(x, y): formula image BDA0002372084870000044, not reproduced in text]
In the formula, max(x, y) and min(x, y) respectively denote the maximum and minimum gray values in the neighborhood;
aLP is then subjected to wavelet decomposition to obtain high-frequency coefficient images tHP_v; aHP_v and tHP_v are reconstructed to obtain the noise-reduced image.
In this embodiment, the adaptive function allows different processing to be applied to the high-frequency coefficient image depending on the relationship between the gray value and the threshold, which addresses the detail loss and incomplete noise removal of existing processing methods and effectively suppresses the noise in the high-frequency coefficient image. The low-frequency coefficient image is processed using the scale parameter, the Gaussian filter variance and the noise standard deviation estimate; the influence of neighboring pixel points in the low-frequency coefficient image is taken into account together with the gray mean and variance of the image, and a compensation function is introduced, which alleviates the boundary blurring and halo problems of existing filtering techniques and facilitates the subsequent illumination correction.
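The exact adaptive function and low-frequency compensation formulas are only available as images in the source, so the sketch below substitutes a generic wavelet soft-thresholding scheme (PyWavelets, one decomposition level, universal threshold) purely to illustrate the decompose / process / reconstruct flow; it is not the patent's method:

```python
import numpy as np
import pywt

def wavelet_denoise(grayed_image: np.ndarray, wavelet: str = "db4") -> np.ndarray:
    """Simplified stand-in for the patent's wavelet noise reduction.

    Soft thresholding with a universal threshold is used as an assumption;
    the patent's adaptive high-frequency function and low-frequency
    neighborhood/compensation processing are not reproduced here.
    """
    img = grayed_image.astype(np.float64)
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)       # one-level decomposition
    sigma = np.median(np.abs(cD)) / 0.6745            # noise estimate from the diagonal band
    thr = sigma * np.sqrt(2.0 * np.log(img.size))     # universal threshold
    cH = pywt.threshold(cH, thr, mode="soft")
    cV = pywt.threshold(cV, thr, mode="soft")
    cD = pywt.threshold(cD, thr, mode="soft")
    out = pywt.idwt2((cA, (cH, cV, cD)), wavelet)     # reconstruct
    out = out[: img.shape[0], : img.shape[1]]         # trim any padding
    return np.clip(out, 0, 255).astype(np.uint8)
```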
In one embodiment, performing the illumination correction on the noise-reduced image to obtain an illumination-corrected image includes:
performing illumination correction on the noise-reduced image using gamma correction to obtain the illumination-corrected image.
In this embodiment, the detail information in the darker parts of the noise-reduced image is enhanced, which effectively improves the success rate of judging whether the driver is in a fatigue driving state under conditions of insufficient illumination.
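A minimal gamma-correction sketch; the gamma value of 0.6 is an illustrative assumption, since the patent does not state one (values below 1 brighten darker regions):

```python
import numpy as np

def gamma_correct(denoised_image: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Apply gamma correction via a lookup table on an 8-bit image."""
    table = 255.0 * (np.arange(256) / 255.0) ** gamma
    lut = np.clip(np.round(table), 0, 255).astype(np.uint8)
    return lut[denoised_image]
```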
In one embodiment, performing the image segmentation processing on the illumination-corrected image to obtain a segmented image includes:
let g(x, y) denote the gray value of the pixel point at position (x, y) in the illumination-corrected image, and let avg(x, y) denote the mean gray value within the m × m neighborhood of that pixel point; denoting g(x, y) by R and avg(x, y) by S, the maximum values Rmax of R and Smax of S are used as the maximum scale values of the two coordinate axes of a two-dimensional histogram, and the two-dimensional histogram is built; the value p_{r,s} at (r, s) in the two-dimensional histogram represents the frequency with which the combination of gray value r and neighborhood gray mean s occurs;
as shown in fig. 3, a diagonal line L1 is drawn in the two-dimensional histogram, passing through the origin of coordinates and dividing the region spanned by the R axis and the S axis into two parts of equal area; from each point (r, 0) on the R axis, r ∈ [0, Rmax], a line parallel to the S axis is drawn to intersect the diagonal line L1, giving a total of H intersection points;
two line segments l1 and l2, parallel to the diagonal line L1, are drawn on the two sides of the diagonal line L1, with the Euclidean distance from each of l1 and l2 to L1 equal to len; together with the boundary of the two-dimensional histogram, l1 and l2 delimit a closed region Q.
The integer coordinate points in the region Q are projected onto the H intersection points on the diagonal line L1 according to the nearest-distance principle, and the number of coordinate points projected onto each intersection point is counted to form a one-dimensional histogram;
the one-dimensional histogram is solved using the Otsu method to obtain a threshold thre for segmenting the illumination-corrected image;
threshold segmentation is performed on the illumination-corrected image using thre to obtain the segmented image.
The nearest-distance principle means that, for each integer coordinate point in the region Q, the Euclidean distances to the H intersection points are computed, and the intersection point corresponding to the minimum Euclidean distance is selected as the projection target of that coordinate point.
In this embodiment, a two-dimensional histogram of the pixel gray value and the neighborhood gray mean of the illumination-corrected image is formed, the two-dimensional histogram is reduced to a one-dimensional histogram by diagonal projection, and the Otsu method is then applied to the one-dimensional histogram to solve for the threshold. Solving for a threshold on a conventional one-dimensional histogram ignores the relationship between a pixel point and its neighboring pixel points and is therefore strongly affected by noise. By taking the relationship between pixel points and their neighbors in the illumination-corrected image into account, this embodiment reduces the influence of noise on the threshold, avoids the large number of traversal operations that a conventional two-dimensional histogram requires for threshold solving, and effectively speeds up the threshold computation.
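The sketch below illustrates one possible reading of this segmentation step: a 2-D histogram of gray value versus m × m neighborhood mean, a band around the diagonal projected down to a 1-D histogram, and an Otsu threshold on the result. The box-filter neighborhood mean, the band width, the projection index (r + s) // 2 and the use of histogram frequencies rather than raw point counts are all simplifying assumptions, not the patent's exact geometric construction:

```python
import numpy as np
import cv2

def diagonal_otsu_segment(corrected: np.ndarray, m: int = 3, band: int = 10) -> np.ndarray:
    """Simplified sketch of the 2-D-histogram / diagonal-projection threshold."""
    r = corrected.astype(np.int32)
    s = cv2.blur(corrected, (m, m)).astype(np.int32)    # neighborhood mean avg(x, y)

    hist2d = np.zeros((256, 256), dtype=np.float64)
    np.add.at(hist2d, (r.ravel(), s.ravel()), 1.0)       # p_{r,s}

    # project a band around the diagonal onto a 1-D histogram
    hist1d = np.zeros(256, dtype=np.float64)
    rr, ss = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
    mask = np.abs(rr - ss) <= band                        # region Q around the diagonal
    proj = (rr + ss) // 2                                 # assumed projection index
    np.add.at(hist1d, proj[mask], hist2d[mask])

    # Otsu threshold on the projected 1-D histogram
    p = hist1d / max(hist1d.sum(), 1e-12)
    omega = np.cumsum(p)
    mu = np.cumsum(p * np.arange(256))
    sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
    thre = int(np.argmax(sigma_b))

    return (corrected > thre).astype(np.uint8) * 255
```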
In one embodiment, the specific operation of the recognition unit 26 is as follows: a difference value between the facial feature data and the standard feature data is calculated and compared with a preset recognition threshold; if the difference value is smaller than the preset recognition threshold, the driver is judged to be in a fatigue driving state.
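A minimal sketch of this recognition step; the Euclidean distance is an assumption, since the patent only speaks of a "difference value" between the feature data and the stored template:

```python
import numpy as np

def is_fatigued(face_features: np.ndarray,
                standard_features: np.ndarray,
                recognition_threshold: float) -> bool:
    """Judge fatigue by comparing extracted features with the stored
    fatigue-state template (Euclidean distance is an assumed metric)."""
    difference = float(np.linalg.norm(face_features - standard_features))
    return difference < recognition_threshold
```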
By recognizing the driver's face image through image recognition, the invention judges whether the driver is in a fatigue driving state and reminds the driver when this is the case, thereby enhancing driving safety and helping to avoid traffic accidents.
From the above description of embodiments, it is clear for a person skilled in the art that the embodiments described herein can be implemented in hardware, software, firmware, middleware, code or any appropriate combination thereof. For a hardware implementation, a processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the procedures of an embodiment may be performed by a computer program instructing associated hardware. In practice, the program may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer readable media include computer storage media and communication media, where communication media
includes any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. Computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit its scope of protection. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of those technical solutions.

Claims (7)

1. A portable vehicle-mounted intelligent driving system is characterized by comprising an image acquisition module, an image processing module and an alarm module;
the image acquisition module is used for acquiring a face image of a driver;
the image processing module is used for identifying, from the face image, whether the driver is in a fatigue driving state; if the driver is identified as being in a fatigue driving state, the driver is warned through the alarm module.
2. The portable vehicle-mounted intelligent driving system according to claim 1, further comprising a navigation module, wherein the navigation module is used for navigating a path for a driver.
3. The portable vehicle-mounted intelligent driving system according to claim 1, wherein the alarm module comprises an audible and visual alarm unit, and the audible and visual alarm unit is used for visually and audibly reminding the driver.
4. The portable vehicle-mounted intelligent driving system according to claim 1, wherein the image processing module comprises a graying processing unit, a noise reduction unit, an illumination correction unit, a segmentation unit, a feature extraction unit and an identification unit;
the graying processing unit is used for performing graying processing on the face image to obtain a grayed image;
the noise reduction unit is used for carrying out noise reduction processing on the grayed image to obtain a noise reduction image;
the illumination correction unit is used for performing illumination correction on the noise-reduced image to obtain an illumination correction image;
the segmentation unit is used for carrying out image segmentation processing on the illumination correction image to obtain a segmented image;
the feature extraction unit is used for performing feature extraction on the segmented image to obtain facial feature data of the driver;
the recognition unit is used for comparing the facial feature data with pre-stored standard feature data for the fatigue driving state, so as to recognize whether the driver is in a fatigue driving state.
5. The portable vehicle-mounted intelligent driving system according to claim 3, wherein performing the graying processing on the face image to obtain a grayed image comprises:
converting the face image from an RGB color space to an XYZ color space to obtain a first converted image;
converting the first converted image from an XYZ color space to a Lab color space and taking a luminance component L as a second converted image;
establishing an objective function:
[Objective function T_i: formula image FDA0002372084860000011, not reproduced in text]
In the equation, L1 represents the total number of pixel points in the face image; φ1 and φ2 denote weight parameters, with φ1 + φ2 = 1; f_i denotes the gray value of the i-th pixel point in the grayed image to be solved; L2 denotes the total number of neighboring pixel points of the i-th pixel point in the face image: when the i-th pixel point is a corner pixel point the total number of neighboring pixel points is 3, when the i-th pixel point is an edge pixel point the total number is 6, and otherwise the total number is 8; R_i, G_i, B_i denote the three RGB color-space component values of the i-th pixel point in the face image; R_j, G_j, B_j denote the three RGB color-space component values of the j-th neighboring pixel point of the i-th pixel point in the face image; ω_ij denotes a preset gray-scale adjustment parameter; K_i denotes the gray value of the i-th pixel point in the second converted image; the ordering rule of pixel points in the face image is the same as that of pixel points in the second converted image;
finding the derivative ∂T_i/∂f_i of the objective function T_i with respect to f_i (formula images FDA0002372084860000021 and FDA0002372084860000022 in the original publication) and setting it equal to zero, thereby obtaining the value of f_i.
6. The portable vehicle-mounted intelligent driving system according to claim 3, wherein performing the noise reduction processing on the grayed image to obtain a noise-reduced image comprises:
performing wavelet decomposition on the grayed image to obtain high-frequency coefficient images HP_v and a low-frequency coefficient image LP of the grayed image;
HP_v is processed using the following adaptive function:
[Adaptive function: formula image FDA0002372084860000023, not reproduced in text]
In the formula, aHP_v denotes the high-frequency coefficient image after processing by the adaptive function; v ∈ {H1, H2, H3}, where H1, H2, H3 denote the three high-frequency sub-band images of the wavelet decomposition; ya and yb denote preset threshold parameters; cz denotes a sign function; ZS denotes the variance estimate of the noise in the high-frequency sub-band image; aHP_v(x, y) denotes the gray value of the pixel point at position (x, y);
for LP, the following operation is performed:
[Low-frequency processing formula: image FDA0002372084860000024, not reproduced in text]
In the formula, α1 and α2 are wavelet weight parameters, with α1 + α2 = 1; C_{x,y} denotes the set formed by the pixel points in the L × L neighborhood of the pixel point at position (x, y) in LP; LP(i, j) denotes the gray value of the pixel point at position (i, j) in the neighborhood; Fc_{x,y} denotes the variance of the gray values of the pixel points in the neighborhood; HA denotes an adjustment coefficient; Ave_{x,y} denotes the mean of the gray values of the pixel points in the neighborhood; aLP denotes the computed low-frequency coefficient image; Gsfc denotes the variance of the Gaussian filter;
[Formula image FDA0002372084860000031, not reproduced in text]
In the formula, cd denotes a scale parameter; ZS2 denotes the noise standard deviation estimate in LP; fc(x, y) denotes a compensation function:
[Compensation function fc(x, y): formula image FDA0002372084860000032, not reproduced in text]
In the formula, max(x, y) and min(x, y) respectively denote the maximum and minimum gray values in the neighborhood;
aLP is then subjected to wavelet decomposition to obtain high-frequency coefficient images tHP_v; aHP_v and tHP_v are reconstructed to obtain the noise-reduced image.
7. The portable vehicle-mounted intelligent driving system according to claim 3, wherein performing the illumination correction on the noise-reduced image to obtain an illumination-corrected image comprises:
performing illumination correction on the noise-reduced image using gamma correction to obtain the illumination-corrected image.
CN202010053700.2A 2020-01-17 2020-01-17 Portable vehicle-mounted intelligent driving system Active CN111242065B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010053700.2A CN111242065B (en) 2020-01-17 2020-01-17 Portable vehicle-mounted intelligent driving system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010053700.2A CN111242065B (en) 2020-01-17 2020-01-17 Portable vehicle-mounted intelligent driving system

Publications (2)

Publication Number Publication Date
CN111242065A (en) 2020-06-05
CN111242065B CN111242065B (en) 2020-10-13

Family

ID=70866161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010053700.2A Active CN111242065B (en) 2020-01-17 2020-01-17 Portable vehicle-mounted intelligent driving system

Country Status (1)

Country Link
CN (1) CN111242065B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011048531A (en) * 2009-08-26 2011-03-10 Aisin Seiki Co Ltd Drowsiness detection device, drowsiness detection method, and program
CN105139583A (en) * 2015-06-23 2015-12-09 南京理工大学 Vehicle danger prompting method based on portable intelligent equipment
CN106485191A (en) * 2015-09-02 2017-03-08 腾讯科技(深圳)有限公司 A kind of method for detecting fatigue state of driver and system
CN105726046A (en) * 2016-01-29 2016-07-06 西南交通大学 Method for detecting vigilance state of driver
CN106557745A (en) * 2016-11-11 2017-04-05 吴怀宇 Human eyeball's detection method and system based on maximum between-cluster variance and gamma transformation
CN109770925A (en) * 2019-02-03 2019-05-21 闽江学院 A kind of fatigue detection method based on depth time-space network
CN109977930A (en) * 2019-04-29 2019-07-05 中国电子信息产业集团有限公司第六研究所 Method for detecting fatigue driving and device
CN110532976A (en) * 2019-09-03 2019-12-03 湘潭大学 Method for detecting fatigue driving and system based on machine learning and multiple features fusion

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464726A (en) * 2020-10-30 2021-03-09 长威信息科技发展股份有限公司 Disaster prevention and reduction early warning system based on satellite remote sensing big data
CN113393198A (en) * 2021-06-29 2021-09-14 绵阳九洲北斗新时空能源有限公司 Logistics e-commerce service platform system based on Beidou navigation
CN113331792A (en) * 2021-07-02 2021-09-03 北京美医医学技术研究院有限公司 Skin care prompt system based on skin condition
CN113704721A (en) * 2021-09-03 2021-11-26 广州因陀罗软件有限公司 Game background centralized authority control method and system
CN113704721B (en) * 2021-09-03 2024-05-28 广州因陀罗软件有限公司 Game background centralized authority control method and system
CN114204680A (en) * 2021-12-13 2022-03-18 广州思泰信息技术有限公司 Multi-type automatic detection equipment fusion remote diagnosis system and method
CN114204680B (en) * 2021-12-13 2023-01-31 广州思泰信息技术有限公司 Multi-type automatic detection equipment fusion remote diagnosis system and method

Also Published As

Publication number Publication date
CN111242065B (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN111242065B (en) Portable vehicle-mounted intelligent driving system
CN110930459B (en) Vanishing point extraction method, camera calibration method and storage medium
CN104899554A (en) Vehicle ranging method based on monocular vision
CN107392139B (en) Lane line detection method based on Hough transform and terminal equipment
CN109635656A (en) Vehicle attribute recognition methods, device, equipment and medium neural network based
CN102693426B (en) Method for detecting image salient regions
CN109919160B (en) Verification code identification method, device, terminal and storage medium
CN112069879B (en) Target person following method, computer-readable storage medium and robot
CN112528868B (en) Illegal line pressing judgment method based on improved Canny edge detection algorithm
CN107844761B (en) Traffic sign detection method and device
CN113988112B (en) Method, device and equipment for detecting lane line and storage medium
CN105447489B (en) A kind of character of picture OCR identifying system and background adhesion noise cancellation method
CN110674812B (en) Civil license plate positioning and character segmentation method facing complex background
Kortli et al. Efficient implementation of a real-time lane departure warning system
CN109063669B (en) Bridge area ship navigation situation analysis method and device based on image recognition
Chang et al. An efficient method for lane-mark extraction in complex conditions
CN117853484B (en) Intelligent bridge damage monitoring method and system based on vision
CN109278759B (en) Vehicle safe driving auxiliary system
CN114882332A (en) Target detection system based on image fusion
CN113449647B (en) Method, system, equipment and computer readable storage medium for fitting curved lane lines
KR20120009591A (en) Vehicle Collision Alarm System and Method
CN112699825A (en) Lane line identification method and device
CN113053164A (en) Parking space identification method using look-around image
CN113313968A (en) Parking space detection method and storage medium
CN111717114A (en) Vehicle-mounted pedestrian early warning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant