CN108363996B - Intelligent vehicle all-round viewing method and device and computer readable storage medium - Google Patents


Info

Publication number
CN108363996B
CN108363996B (application CN201810226899.7A)
Authority
CN
China
Prior art keywords: image, vehicle, area, determining, suspicious
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810226899.7A
Other languages
Chinese (zh)
Other versions
CN108363996A (en
Inventor
刘新
宋朝忠
郭烽
王晓东
Current Assignee
Shenzhen Echiev Autonomous Driving Technology Co ltd
Original Assignee
Shenzhen Echiev Autonomous Driving Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Echiev Autonomous Driving Technology Co., Ltd.
Priority to CN201810226899.7A
Publication of CN108363996A
Application granted
Publication of CN108363996B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an intelligent vehicle surround-view method comprising the following steps: when a vehicle is started, acquiring a first area image corresponding to the vehicle; determining a training sample corresponding to the vehicle based on the first area image; and training the training sample and determining whether a foreign object exists in the first area image, an alarm signal being issued when a foreign object exists. The invention also discloses an intelligent vehicle surround-view device and a computer-readable storage medium. The invention achieves intelligent detection of foreign objects around the vehicle: the moment a foreign object is detected it is located and an alarm signal is issued, prompting the user to avoid it in time.

Description

Intelligent vehicle surround-view method and device, and computer-readable storage medium
Technical Field
The present invention relates to the technical field of automotive electronics, and in particular to an intelligent vehicle surround-view method and device and a computer-readable storage medium.
Background
At present, when a vehicle is starting or driving, the many inherent blind areas around it create numerous potential safety hazards; a moment of driver carelessness can lead to a traffic accident.
Existing technologies for detecting a vehicle's surroundings can cover these blind areas, but when a foreign object is present in a blind area they cannot clearly detect it at once and prompt the user to avoid it, so traffic accidents are still easily caused.
The above is provided only to assist understanding of the technical solution of the present invention and is not an admission that it constitutes prior art.
Disclosure of Invention
The main object of the present invention is to provide an intelligent vehicle surround-view method and device and a computer-readable storage medium, aiming to solve the technical problem that foreign objects around a vehicle cannot be intelligently detected and avoided while the vehicle is starting or driving.
To achieve the above object, the present invention provides an intelligent vehicle surround-view method comprising the following steps:
when a vehicle is started, acquiring a first area image corresponding to the vehicle;
determining a training sample corresponding to the vehicle based on the first area image;
and training the training sample and determining whether a foreign object exists in the first area image, wherein an alarm signal is issued when a foreign object exists in the first area image.
In one embodiment, the step of determining the training sample corresponding to the vehicle based on the first area image includes:
dividing the first area image into different second area images;
and determining a training sample corresponding to the vehicle based on the second area image.
In one embodiment, the step of dividing the first area image into different second area images includes:
acquiring a preset division ratio corresponding to the first area image;
and dividing the first area image into different second area images based on the preset dividing proportion.
In one embodiment, the step of determining the training sample corresponding to the vehicle based on the second area image includes:
detecting the image edge energy of the second area image;
and determining a training sample corresponding to the vehicle according to the image edge energy.
In one embodiment, the step of determining the training sample corresponding to the vehicle according to the image edge energy includes:
acquiring a preset energy threshold corresponding to the second area image;
when the image edge energy is larger than the preset energy threshold, determining that the second area image is a suspicious sample corresponding to the vehicle;
and determining a training sample corresponding to the vehicle according to the suspicious sample.
In one embodiment, the step of determining the training sample corresponding to the vehicle according to the suspicious sample includes:
when the area corresponding to the suspicious sample is determined to be a cross-camera area, acquiring an image difference map corresponding to the suspicious sample;
and determining the training sample corresponding to the vehicle based on the image difference map.
In one embodiment, after the step of acquiring the first area image corresponding to the vehicle when the vehicle is started, the intelligent vehicle surround-view method further includes:
when the vehicle state is a driving state, converting the first area image into continuous grayscale images;
determining a lane line area in the first area image based on the continuous grayscale images;
and issuing an alarm signal when the vehicle deviates from the lane line area by more than a preset angle value.
In one embodiment, the step of determining the lane line area in the first area image based on the continuous grayscale images includes:
when the continuous grayscale images are obtained, screening suspicious lane line areas in the continuous grayscale images;
and determining the lane line area in the first area image based on the suspicious lane line areas.
In addition, to achieve the above object, the present invention provides an intelligent vehicle surround-view device, including a memory, a processor, and an intelligent vehicle surround-view program stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of any of the intelligent vehicle surround-view methods above.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium on which an intelligent vehicle surround-view program is stored, wherein the program, when executed by a processor, implements the steps of any of the intelligent vehicle surround-view methods above.
The invention provides an intelligent vehicle surround-view method in which a first area image corresponding to a vehicle is acquired when the vehicle is started, a training sample corresponding to the vehicle is determined based on the first area image, and the training sample is then trained to determine whether a foreign object exists in the first area image, an alarm signal being issued when one is determined to exist. The first area image currently corresponding to the vehicle is acquired and the training sample is determined from it, so whether foreign objects exist around the vehicle can be determined accurately from the training sample. Intelligent detection of foreign objects around the vehicle is thereby achieved: the moment a foreign object is detected it is located and an alarm signal is issued, prompting the user to avoid it in time.
Drawings
FIG. 1 is a schematic structural diagram of the intelligent vehicle surround-view device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a first embodiment of the intelligent vehicle surround-view method of the present invention;
FIG. 3 is a detailed flowchart of the step of determining the training sample corresponding to the vehicle based on the first area image in the first embodiment of the method;
FIG. 4 is a detailed flowchart of the step of dividing the first area image into different second area images in the second embodiment of the method;
FIG. 5 is a detailed flowchart of the step of determining the training sample corresponding to the vehicle based on the second area image in the second embodiment of the method;
FIG. 6 is a detailed flowchart of the step of determining the training sample corresponding to the vehicle according to the image edge energy in the fourth embodiment of the method;
FIG. 7 is a detailed flowchart of the step of determining the training sample corresponding to the vehicle according to the suspicious sample in the fifth embodiment of the method;
FIG. 8 is a schematic flowchart of a seventh embodiment of the intelligent vehicle surround-view method of the present invention;
FIG. 9 is a detailed flowchart of the step of determining the lane line area in the first area image based on the continuous grayscale images in the seventh embodiment of the method.
The implementation, functional features and advantages of the objects of the present invention are further described below with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and do not limit it.
As shown in fig. 1, fig. 1 is a schematic structural diagram of the intelligent vehicle surround-view device in a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the present invention may be a PC, or a mobile terminal device with a display function such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player or a portable computer.
As shown in fig. 1, the terminal may include a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WiFi interface). The memory 1005 may be high-speed RAM or non-volatile memory (e.g., disk storage); it may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a radio frequency (RF) circuit, sensors, an audio circuit, a WiFi module, and the like. The sensors may include a light sensor, a motion sensor and others. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display screen according to ambient light, and a proximity sensor, which turns off the display screen and/or backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and, when the terminal is stationary, the magnitude and direction of gravity; it can be used for applications that recognize the attitude of the mobile terminal (such as portrait/landscape switching, related games and magnetometer attitude calibration) and for vibration-recognition functions (such as pedometers and tapping). Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer and infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
In the terminal shown in fig. 1, the network interface 1004 is mainly used to connect to a background server and exchange data with it; the user interface 1003 is mainly used to connect to a client (user side) and exchange data with it; and the memory 1005, as a computer storage medium, may contain an operating system, a network communication module, a user interface module and an intelligent vehicle surround-view program. The processor 1001 may be used to call the intelligent vehicle surround-view program stored in the memory 1005.
When the processor 1001 calls the intelligent vehicle surround-view program stored in the memory 1005, the following operations are performed:
when a vehicle is started, acquiring a first area image corresponding to the vehicle;
determining a training sample corresponding to the vehicle based on the first area image;
and training the training sample and determining whether a foreign object exists in the first area image, wherein an alarm signal is issued when a foreign object exists in the first area image.
Further, when executed by the processor, the intelligent vehicle surround-view program also performs the following operations:
dividing the first area image into different second area images;
and determining a training sample corresponding to the vehicle based on the second area image.
Further, when executed by the processor, the intelligent vehicle surround-view program also performs the following operations:
acquiring a preset division ratio corresponding to the first area image;
and dividing the first area image into different second area images based on the preset dividing proportion.
Further, when executed by the processor, the intelligent vehicle surround-view program also performs the following operations:
detecting an image edge energy of the second region image;
and determining a training sample corresponding to the vehicle according to the image edge energy.
Further, when executed by the processor, the intelligent vehicle surround-view program also performs the following operations:
acquiring a preset energy threshold corresponding to the second area image;
when the image edge energy is larger than the preset energy threshold, determining that the second area image is a suspicious sample corresponding to the vehicle;
and determining a training sample corresponding to the vehicle according to the suspicious sample.
Further, when executed by the processor, the intelligent vehicle surround-view program also performs the following operations:
when the area corresponding to the suspicious sample is determined to be a cross-camera area, acquiring an image difference map corresponding to the suspicious sample;
and determining the training sample corresponding to the vehicle based on the image difference map.
Further, when executed by the processor, the intelligent vehicle surround-view program also performs the following operations:
when the vehicle state is a driving state, converting the first area image into continuous grayscale images;
determining a lane line area in the first area image based on the continuous grayscale images;
and issuing an alarm signal when the vehicle deviates from the lane line area by more than a preset angle value.
Further, when executed by the processor, the intelligent vehicle surround-view program also performs the following operations:
when the continuous grayscale images are obtained, screening suspicious lane line areas in the continuous grayscale images;
and determining the lane line area in the first area image based on the suspicious lane line areas.
The present invention further provides an intelligent vehicle surround-view method. Referring to fig. 2, fig. 2 is a schematic flowchart of a first embodiment of the method.
In this embodiment, the intelligent vehicle surround-view method includes:
step S10, when a vehicle is started, acquiring a first area image corresponding to the vehicle;
When the vehicle is started, images of all four sides of the vehicle are captured by camera lenses; the captured images are then corrected and stitched, and the stitched result is the first area image. Specifically, four monitoring cameras are arranged around the vehicle body: blind areas covered by two overlapping cameras are detected jointly by both lenses (cross-camera detection), while blind areas seen by only one camera are detected by that single lens. The image captured by each camera is an original image; once every original image has been acquired, the original images are corrected and stitched to obtain the first area image corresponding to the vehicle.
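The correction-and-stitching step can be sketched as follows. This is an illustrative Python/NumPy toy, not the patent's implementation: a real surround-view system undistorts each fisheye frame and warps it onto the ground plane before blending the overlapping zones, whereas here four already-corrected frames of equal size are simply tiled around the vehicle footprint.

```python
import numpy as np

def stitch_surround_view(front, rear, left, right):
    """Tile four corrected camera frames of equal shape (h, w) into
    one top-down 'first area image'. The centre block is left empty:
    it stands for the vehicle body itself. A real system would blend
    the overlapping (cross-camera) corner zones instead of tiling."""
    h, w = front.shape[:2]
    canvas = np.zeros((3 * h, 3 * w), dtype=front.dtype)
    canvas[0:h,       w:2 * w] = front   # area ahead of the vehicle
    canvas[2 * h:,    w:2 * w] = rear    # area behind the vehicle
    canvas[h:2 * h,   0:w]     = left    # area on the left side
    canvas[h:2 * h,   2 * w:]  = right   # area on the right side
    return canvas
```

The hypothetical `stitch_surround_view` name and the equal-size assumption are mine; the patent only states that the four original images are corrected and stitched.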
Step S20, determining a training sample corresponding to the vehicle based on the first area image;
A training sample corresponding to the vehicle can be determined from the acquired first area image. Specifically, when the first area image is acquired, a preset division ratio is obtained and the first area image is divided by that ratio into second area images; the divided second area images together make up the first area image. For each second area image, its image edge energy is detected and the preset energy threshold corresponding to it is obtained. The edge energy is compared with the threshold, and when the edge energy is greater than the threshold the second area image is determined to be a suspicious sample for the vehicle. When a suspicious sample is determined, the original images corresponding to it are obtained: if the suspicious sample corresponds to two different original images, its area is determined to be a cross-camera area; if it corresponds to only a single original image, its area is determined to be a non-cross-camera area. For a cross-camera area, the image difference map corresponding to the suspicious sample is acquired and saliency detection is performed on it; the two original images corresponding to a highly salient region of the difference map are used as training samples. For a non-cross-camera area, the original image corresponding to the suspicious sample is acquired and sequence-image change detection is performed on it; a region of the original image that changes significantly is used as a training sample.
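The sample-selection flow described above can be condensed into a short sketch. The function name, the grid size, the threshold value and the squared-gradient edge-energy measure are illustrative assumptions; the patent does not specify any of them.

```python
import numpy as np

def select_training_samples(area_img, grid=4, energy_thresh=50.0):
    """Split the stitched first area image into grid x grid cells,
    score each cell by a simple squared-gradient edge energy, and
    keep the high-energy cells as suspicious candidate samples
    (cross-camera vs. single-camera routing is omitted here)."""
    h, w = area_img.shape
    ch, cw = h // grid, w // grid
    samples = []
    for i in range(grid):
        for j in range(grid):
            cell = area_img[i * ch:(i + 1) * ch,
                            j * cw:(j + 1) * cw].astype(float)
            gy, gx = np.gradient(cell)          # per-axis differences
            energy = float(np.sum(gx ** 2 + gy ** 2))
            if energy > energy_thresh:
                samples.append(((i, j), cell))  # cell index + pixels
    return samples
```

A flat image yields no suspicious cells, while a cell containing a strong intensity edge is kept, mirroring the threshold comparison in the text.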
Step S30, training the training sample and determining whether a foreign object exists in the first area image, wherein an alarm signal is issued when a foreign object is determined to exist in the first area image.
In this embodiment, once the training sample is determined it can be trained through a radial basis function neural network; the training result obtained for the sample then indicates whether a foreign object exists in the first area image corresponding to the vehicle. For example, a training sample is trained and, if its training result is 1, the sample is a foreign-object sample, so a foreign object is determined to exist in the first area image corresponding to the vehicle. When a foreign object is determined to exist in the first area image, an alarm signal is issued automatically, the foreign object is located within the first area image, and its current position is obtained so that the user can be told a foreign object is present at a certain position around the vehicle.
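A minimal radial-basis-function classifier of the kind named above can be sketched as follows. The layout used here (every training sample as a hidden-unit centre, output weights solved by least squares) is an assumption; the patent only says "a radial basis function neural network" without specifying its structure.

```python
import numpy as np

def rbf_train(X, y, gamma=1.0):
    """Fit output weights of a tiny RBF network: each row of X is a
    training sample feature vector and also a hidden-unit centre;
    y holds the target labels (1 = foreign-object sample)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)                  # hidden activations
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_classify(x, centers, w, gamma=1.0):
    """Network output for a new sample; a value near 1 flags it as
    a foreign-object sample, matching the text's example."""
    phi = np.exp(-gamma * ((centers - x) ** 2).sum(-1))
    return float(phi @ w)
```

With well-separated centres the Gram matrix is close to the identity, so the fitted weights essentially reproduce the labels.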
In addition, when the vehicle is detected to be in the driving state, motion detection is first performed on the area ahead of the vehicle by each body-mounted camera; depending on the scene, the motion detection can use correlation or optical-flow algorithms. When motion detection on the area ahead is complete, a difference operation is performed on the images it produces: images with large differences are trained through the radial basis function neural network, and the training result determines whether a foreign object exists in the area ahead of the vehicle. When one is determined to exist, the foreign object is located and an alarm signal is issued to prompt the user.
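The difference operation on consecutive frames can be sketched as a simple per-pixel threshold. This is illustrative only; the correlation and optical-flow variants the text mentions are not shown, and the threshold value is an assumption.

```python
import numpy as np

def motion_mask(prev_frame, cur_frame, diff_thresh=25):
    """Mark pixels whose grey level changes by more than diff_thresh
    between two consecutive frames of the area ahead of the vehicle;
    the resulting mask marks potentially moving regions."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    return diff > diff_thresh
```

Regions where the mask is dense would be the "images with large differences" that are passed on to the classifier.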
With the intelligent vehicle surround-view method provided by this embodiment, a first area image corresponding to the vehicle is acquired when the vehicle is started, a training sample corresponding to the vehicle is determined based on the first area image, and the training sample is then trained to determine whether a foreign object exists in the first area image, an alarm signal being issued when one is determined to exist. The first area image currently corresponding to the vehicle is thus acquired and the training sample is determined from it, so whether foreign objects exist around the vehicle can be determined accurately from the training sample. Intelligent detection of foreign objects around the vehicle is thereby achieved: the moment a foreign object is detected it is located and an alarm signal is issued, prompting the user to avoid it in time.
Based on the first embodiment, a second embodiment of the intelligent vehicle surround-view method of the present invention is proposed. Referring to fig. 3, in this embodiment step S20 includes:
step S40, dividing the first area image into different second area images;
A second area image is a sub-grid area obtained by dividing the first area image: the first area image is divided into a number of sub-grid areas, each sub-grid area is a second area image, and the second area images together make up the first area image. Specifically, when the first area image is acquired, its preset division ratio is obtained, i.e. the preset ratio by which the first area image is to be divided. For example, if the first area image is divided into a 48 x 48 grid of cells, 48 x 48 is the preset division ratio, and the first area image is divided into 48 x 48 second area images accordingly.
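Grid division by a preset ratio reduces to array slicing, as the following sketch shows (the even-divisibility assumption is for simplicity; the 48 x 48 figure comes from the text's example and any rows x cols ratio can be passed in):

```python
import numpy as np

def divide_into_cells(area_img, rows, cols):
    """Divide the first area image into rows x cols second-area
    sub-grid cells; returned in row-major order, the cells together
    make up the original image. Assumes the image dimensions are
    divisible by the grid."""
    h, w = area_img.shape[:2]
    ch, cw = h // rows, w // cols
    return [area_img[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            for i in range(rows) for j in range(cols)]
```

Calling `divide_into_cells(img, 48, 48)` would reproduce the 48 x 48 example from the text.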
Step S50, determining a training sample corresponding to the vehicle based on the second region image.
When a second area image is acquired, the training sample corresponding to the vehicle can be determined from it; the training sample is the statistical sample used for training. Specifically, when a second area image is acquired, its image edge energy is detected and the preset energy threshold corresponding to it is obtained. The edge energy is compared with the threshold, and when the edge energy is greater than the threshold the second area image is determined to be a suspicious sample. When the second area image is determined to be a suspicious sample, the original images corresponding to it are obtained; if the suspicious sample corresponds to two different original images, its area is determined to be a cross-camera area. For a cross-camera area, the image difference map is obtained from the two different original images corresponding to the suspicious sample. Saliency detection is then performed on the difference map: if no highly salient region is detected, the two original images corresponding to the difference map cannot be used as training samples for the vehicle; if a highly salient region is detected, the two original images corresponding to that region of the difference map are used as training samples.
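For a cross-camera cell, the difference-map-plus-saliency test can be sketched as follows. The maximum absolute grey difference stands in for the unspecified saliency measure, and the threshold value is an assumption.

```python
import numpy as np

def cross_region_salient(img_a, img_b, saliency_thresh=40.0):
    """Form the difference map between the two camera views of a
    suspicious cell. If it contains a salient region (large absolute
    grey difference), return a mask of that region, signalling that
    the pair can serve as training samples; otherwise return None."""
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    if diff.max() > saliency_thresh:
        return diff > saliency_thresh   # mask of the salient region
    return None                         # pair is not usable
```

Two consistent views of an empty road produce no salient region, while an object visible differently in the two views does, which is the selection criterion the text describes.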
With the method provided by this embodiment, the first area image is divided into different second area images and the training sample corresponding to the vehicle is then determined from those second area images. Because the training sample is determined from the second area images, whether foreign objects exist around the vehicle can be detected quickly and accurately when the training sample is used for detection.
Based on the second embodiment, a third embodiment of the intelligent vehicle surround-view method of the present invention is proposed. Referring to fig. 4, in this embodiment step S40 includes:
Step S41, acquiring a preset division ratio corresponding to the first area image;
In this embodiment, the preset division ratio is the preset proportion by which the first area image is divided; once the first area image is acquired, it can be divided according to this ratio. Specifically, when the first area image is acquired, the stored preset division ratio is obtained; different ratios may be preset according to the accuracy of the first area image and the accuracy required of the target second area images.
Step S42, based on the preset dividing ratio, dividing the first area image into different second area images.
In this embodiment, a second area image is a sub-grid area obtained by dividing the first area image. Once the preset division ratio is obtained, the first area image can be divided by it into different second area images, which together make up the first area image.
With the method provided by this embodiment, the preset division ratio corresponding to the first area image is obtained and the first area image is then divided by that ratio into different second area images. Dividing the first area image into a number of sub-area images means that, when training samples are obtained later, they can be obtained more accurately, so whether foreign objects exist around the vehicle can be determined more intelligently and accurately.
Based on the second embodiment, a fourth embodiment of the intelligent vehicle all-around viewing method of the present invention is proposed. Referring to fig. 5, in this embodiment, step S50 includes:
step S51, detecting image edge energy of the second region image;
the image edge energy is the energy value at the edges of each second area image; whether a second area image is a suspicious sample can be determined by detecting its image edge energy, where a suspicious sample indicates that a foreign object may exist in that second area image. Specifically, when the first area image has been divided into second area images, image mean filtering is performed on each second area image; when the mean filtering is finished, edge texture detection is performed on each second area image to detect its image edge energy.
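As a rough illustration of the mean filtering followed by an edge-energy measurement described above, the sketch below smooths a sub-image with a small mean filter and takes the mean gradient magnitude of the result as its edge energy. The kernel size, the gradient-based energy measure, and all names are assumptions, since the patent does not specify the operators.

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k mean filter via a padded sliding-window sum (edge-replicate padding)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def edge_energy(img):
    """Edge energy taken as the mean gradient magnitude of the smoothed image."""
    gy, gx = np.gradient(mean_filter(img))
    return float(np.mean(np.hypot(gx, gy)))

flat = np.zeros((16, 16))        # empty, texture-free patch
step = np.zeros((16, 16))
step[:, 8:] = 255.0              # patch containing a sharp edge
```

A flat patch yields zero energy, while a patch with edge texture (a possible foreign object) yields a positive value that can be compared against a threshold.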
And step S52, determining a training sample corresponding to the vehicle according to the image edge energy.
The training sample is a statistical sample to be trained; by training it, whether it contains foreign matter can be determined, and the training sample corresponding to the vehicle can be determined according to the image edge energy. Specifically, when the image edge energy of each second area image is detected, a preset energy threshold is obtained. If the image edge energy of a second area image is greater than the preset energy threshold, that second area image is determined to be a suspicious sample; otherwise it is determined to be a non-suspicious sample. When a second area image is determined to be a suspicious sample, two cases follow: if the area corresponding to the suspicious sample is a cross camera area, an image difference map of the suspicious sample is acquired and saliency detection is performed on it, and the two original images corresponding to a high-saliency region in the difference map are used as training samples; if the area corresponding to the suspicious sample is a non-cross camera area, single-image sequence change detection is performed directly on the original image corresponding to the suspicious sample, and the region with significant change in that original image is determined as the training sample corresponding to the vehicle.
According to the intelligent vehicle all-around viewing method provided by this embodiment, the image edge energy of each second area image is detected and the training sample corresponding to the vehicle is determined from it, so that the training sample can be determined accurately and the presence of foreign matter around the vehicle can be determined more intelligently and accurately.
Based on the fourth embodiment, a fifth embodiment of the intelligent vehicle all-around viewing method of the present invention is proposed. Referring to fig. 6, in this embodiment, step S52 includes:
step S60, acquiring a preset energy threshold corresponding to the second area image;
in this embodiment, the preset energy threshold is a pre-stored image edge energy threshold for the second area image; whether a second area image is a suspicious sample corresponding to the vehicle is determined by comparing its detected image edge energy against the preset energy threshold. Specifically, when the image edge energy of a second area image is detected, the stored preset energy threshold corresponding to it is acquired. The preset energy threshold can be set by the user; when it has not been customized, the system default threshold is used.
Step S70, when the image edge energy is greater than the preset energy threshold, determining that the second area image is a suspicious sample corresponding to the vehicle;
in this embodiment, when the preset energy threshold corresponding to the second area image is obtained, the detected image edge energy of the second area image is compared with it: if the image edge energy is greater than the preset energy threshold, the second area image is determined to be a suspicious sample corresponding to the vehicle; otherwise, it is determined to be a non-suspicious sample.
And step S80, determining a training sample corresponding to the vehicle according to the suspicious sample.
In this embodiment, if the detected image edge energy of the second area image is greater than the preset energy threshold, the second area image is determined to be a suspicious sample corresponding to the vehicle, and the training sample corresponding to the vehicle can then be determined from that suspicious sample. Specifically, when the second area image is determined to be a suspicious sample, the original images corresponding to it are obtained; since the first area image is formed by stitching the original images captured by the camera lenses, each second area image corresponds to different original images. If the suspicious sample corresponds to two different original images, the area corresponding to it is determined to be a cross camera area; in that case, an image difference map corresponding to the suspicious sample is acquired, and the training sample corresponding to the vehicle is determined based on that difference map. If the suspicious sample corresponds to only one original image, the area corresponding to it is determined to be a non-cross camera area; single-image sequence change detection is then performed on the corresponding original image, and the region with significant change in it is determined as the training sample corresponding to the vehicle.
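The routing logic of this step can be summarized in a small sketch. The `edge_energy_fn` callable, the `sources` mapping from cells to original-image ids, and the list-based bookkeeping are assumed interfaces for illustration, not the patent's implementation.

```python
# Hedged sketch of step S80's routing: threshold each sub-image's edge
# energy, then route suspicious samples by how many original (source)
# images cover them.
def route_suspicious(cells, sources, edge_energy_fn, threshold):
    """sources[i] lists the original-image ids covering cells[i];
    two ids means the cell lies in a cross camera area."""
    cross, single = [], []
    for i, cell in enumerate(cells):
        if edge_energy_fn(cell) <= threshold:
            continue                 # non-suspicious sample: discard
        if len(sources[i]) == 2:
            cross.append(i)          # next step: image differencing + saliency
        else:
            single.append(i)         # next step: single-image sequence change detection
    return cross, single

# Toy example: three cells, one covered by two cameras.
energies = {"a": 0.1, "b": 5.0, "c": 7.0}
cross_ids, single_ids = route_suspicious(
    ["a", "b", "c"],
    [["cam0"], ["cam0", "cam1"], ["cam2"]],
    energies.get, 1.0)
```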
According to the intelligent vehicle all-around viewing method provided by this embodiment, the preset energy threshold corresponding to the second area image is obtained; when the image edge energy is greater than that threshold, the second area image is determined to be a suspicious sample corresponding to the vehicle, and the training sample corresponding to the vehicle is then determined from the suspicious sample. By determining suspicious samples first, the training sample is obtained more accurately, so that whether foreign matter exists around the vehicle can be detected quickly and accurately when detection is performed according to the training sample.
Based on the fifth embodiment, a sixth embodiment of the intelligent vehicle all-around viewing method of the present invention is proposed. Referring to fig. 7, in this embodiment, step S80 includes:
step S81, when the area corresponding to the suspicious sample is determined to be the cross shooting area, acquiring an image differential map corresponding to the suspicious sample;
the first area image is formed by stitching the original images captured by the camera lenses, so each second area image corresponds to different original images. Different camera detection modes are adopted for the different sight areas around the vehicle; specifically, two modes are used: for cross-detection blind zones around the vehicle, a combined dual-camera detection mode is adopted, and for single-camera blind zones, a single-camera detection mode is adopted. When the second area image is determined to be a suspicious sample, the original images corresponding to it are obtained; if the suspicious sample corresponds to two different original images, the area corresponding to it is determined to be a cross camera area. In that case, the two acquired original images are differenced (for example, original image 1 against original image 2) to obtain the image difference map corresponding to the suspicious sample.
And step S82, determining a training sample corresponding to the vehicle based on the image difference map.
In this embodiment, the training sample corresponding to the vehicle can be determined based on the image difference map. Specifically, when the image difference map is acquired, saliency detection is performed on it, i.e., it is checked whether a high-saliency region exists in the map. If a high-saliency region is detected, the two original images corresponding to that region are used as training samples; if no high-saliency region is detected, the suspicious sample does not yield a training sample for the vehicle.
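A minimal sketch of the cross-camera differencing and the "high saliency" test follows, assuming saliency is approximated by simple thresholding of the absolute difference; the patent does not name a specific saliency method, so the threshold stand-in and all names are assumptions.

```python
import numpy as np

def difference_saliency(img1, img2, saliency_threshold=30):
    """Absolute difference of two overlapping camera views; any pixel whose
    difference exceeds the threshold is treated as a high-saliency region."""
    diff = np.abs(img1.astype(int) - img2.astype(int)).astype(np.uint8)
    has_salient_region = bool((diff > saliency_threshold).any())
    return diff, has_salient_region

view_a = np.full((4, 4), 100, dtype=np.uint8)
view_b = view_a.copy()
view_b[0, 0] = 200          # a foreign object visible differently across views
```

When the two views agree, no region is salient and the suspicious sample is discarded; when they disagree strongly, both originals become training samples.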
According to the intelligent vehicle all-around viewing method provided by this embodiment, when the area corresponding to the suspicious sample is determined to be a cross camera area, the image difference map corresponding to the suspicious sample is obtained, and the training sample corresponding to the vehicle is then determined based on that difference map, so that whether foreign matter exists around the vehicle can be detected quickly and accurately.
Based on the first embodiment, a seventh embodiment of the intelligent vehicle all-around viewing method of the present invention is proposed. Referring to fig. 8, in this embodiment, after step S10, the method further includes:
step S90, when the vehicle state is the driving state, converting the first area image into a continuous gray image;
when the vehicle is started and the first area image corresponding to it has been collected, the vehicle state is detected. When the vehicle state is a driving state, a preset detection range of the vehicle is acquired, and the first area image within that range is obtained; for example, if the preset detection range is 20 cm from the vehicle body, the first area image within 20 cm of the body is acquired. The first area image within the preset detection range is then converted into continuous gray images, comprising an intensity gray image and a color-difference gray image. Specifically, a preset gray value is acquired, and the RGB values of the first area image are adjusted to it to obtain the intensity gray image; the color-difference gray image is an HSI image, obtained by computing the H, S, I values corresponding to the R, G, B values of the first area image and adjusting the R, G, B values to the computed H, S, I values.
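The two grayscale conversions can be illustrated as below, using the per-pixel RGB mean for the intensity image and one common RGB-to-HSI formulation for the color-difference image. The exact formulas are assumptions, as the patent only says the H, S, I values are "calculated through functions".

```python
import numpy as np

def rgb_to_intensity(rgb):
    """Intensity grayscale as the per-pixel mean of R, G, B."""
    return rgb.astype(float).mean(axis=-1)

def rgb_to_hsi(rgb):
    """One classic RGB -> HSI conversion (assumed formulation)."""
    r, g, b = [rgb[..., i].astype(float) / 255.0 for i in range(3)]
    i = (r + g + b) / 3.0                                            # intensity
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-9)  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)                 # hue, radians
    return h, s, i

red = np.array([[[255, 0, 0]]], dtype=np.uint8)   # a pure-red test pixel
h, s, i = rgb_to_hsi(red)
```

The H and S channels are what the later yellow-lane-line saliency analysis operates on.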
Step S100, determining a lane line area in the first area image based on the continuous gray level image;
based on the continuous gray images corresponding to the vehicle, the position of the lane line the vehicle is driving along can be determined in them. Specifically, when the continuous gray images (the intensity gray image and the color-difference gray image) are acquired, probability distribution statistics are performed on the intensity gray image, the gray value of one probability point is selected as a segmentation threshold, and the intensity gray image is segmented based on it. When the intensity gray image has been segmented, a connected-domain contour detection algorithm screens out the suspicious white lane line regions in it. Saliency analysis is performed on the color-difference gray image to determine whether a suspicious yellow lane line region exists; when one does, the connected-domain contour detection algorithm likewise screens it out. The suspicious lane line regions thus comprise the suspicious white and suspicious yellow lane line regions. When the suspicious lane line regions have been screened out, the corresponding regions of the intensity and color-difference gray images are each screened and judged by a radial basis function neural network to determine the lane line region in the first area image. In addition, the initial angle of the lane line in a suspicious lane line region is calculated via the M. K. Hu invariant moments; since an initial angle calculated this way may carry a large error, once it is obtained the outermost boundary points of the lane line are located by a boundary tracking algorithm, and the boundary points on both sides along the longitudinal axis are secondarily and precisely located by a straight-line-fitting machine learning algorithm, so that the lane line region in the first area image can be located more accurately.
And step S200, when the vehicle deviates from the preset angle value of the lane line area, sending an alarm prompt signal.
In this embodiment, when the lane line region in the first area image is determined, the driving route of the vehicle is tracked based on it, and when the vehicle deviates from the lane line region by more than the preset angle value, an alarm prompt signal is sent. Specifically, when the lane line region is determined, the preset angle value of the vehicle is obtained, i.e., a pre-stored warning angle for deviation from the lane line; the current driving route of the vehicle is then tracked, and when that route deviates from the lane line region in the first area image by more than the preset angle value, an alarm prompt signal is sent automatically to notify the user that the vehicle is currently drifting from the lane line.
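The departure test itself reduces to an angle comparison. The sketch below assumes the vehicle's heading and the lane-line direction are available as angles; that interface and the 5-degree default are illustrative assumptions, not the patent's tracking procedure.

```python
# Minimal sketch of the lane-departure alarm condition: the alarm fires when
# the vehicle's heading deviates from the lane-line direction by more than
# the preset angle value.
def lane_departure_alarm(heading_deg, lane_line_deg, preset_angle_deg=5.0):
    return abs(heading_deg - lane_line_deg) > preset_angle_deg
```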
According to the intelligent vehicle all-around viewing method provided by this embodiment, when the vehicle state is a driving state, the first area image is converted into continuous gray images, the lane line region in the first area image is determined based on them, and an alarm prompt signal is sent when the vehicle deviates from the lane line region by more than the preset angle value, so that the lane line region is accurately located while the vehicle is driving and the user is prompted when the vehicle drifts from it.
Based on the seventh embodiment, an eighth embodiment of the intelligent vehicle all-around viewing method of the present invention is proposed. Referring to fig. 9, in this embodiment, step S100 includes:
step S110, when the continuous gray level image is obtained, screening suspicious lane line areas in the continuous gray level image;
in this embodiment, the continuous gray images comprise an intensity gray image and a color-difference gray image; screening the suspicious lane line regions in them means analyzing the two images separately to screen out the suspicious white and suspicious yellow lane line regions they respectively contain, where the suspicious lane line regions comprise both kinds. Specifically, to screen the suspicious white lane line region from the intensity gray image, probability distribution statistics are performed on it to obtain a probability distribution curve; a preset window size and preset filter coefficients of a Gaussian filter window are obtained (for example, a window size of 1 × 5 with coefficients {0.065, 0.243, 0.383, 0.243, 0.065}), and Gaussian filtering is performed on the probability distribution curve accordingly. From the filtered curve data, the gray value at the highest peak is selected as the background gray value; a preset segmentation initial value is added to it to give the segmentation threshold, and the intensity gray image is segmented on that threshold, i.e., it is determined whether a suspicious white lane line region exists in the intensity gray image. Meanwhile, color saliency analysis is performed on the color-difference gray image, an HSI (hue, saturation, intensity) image converted from the original image: via the hue gray image H and the saturation gray image S, it can be determined whether a suspicious yellow lane line region exists. When a suspicious white lane line region exists in the intensity gray image and a suspicious yellow lane line region exists in the color-difference gray image, both are screened out by a connected-domain contour detection algorithm. Specifically, lane line pixels may be represented by a first preset pixel and non-lane-line pixels by a second preset pixel; since the gray image ranges over 0 to 255, lane line pixels may be represented by 255 and non-lane-line pixels by 0. Based on the first and second preset pixels, the connected-domain contour detection algorithm screens the suspicious white lane line region from the intensity gray image and the suspicious yellow lane line region from the color-difference gray image.
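The histogram-based threshold selection above, with the example 1 × 5 Gaussian kernel {0.065, 0.243, 0.383, 0.243, 0.065}, can be sketched as follows. The value of the preset segmentation initial value (`split_offset`) is an assumed placeholder; the patent does not give a number.

```python
import numpy as np

# Smooth the gray-level histogram with the 1x5 Gaussian kernel quoted in the
# text, take the gray value at the highest peak as the background level, and
# add a preset offset to obtain the segmentation threshold.
KERNEL = np.array([0.065, 0.243, 0.383, 0.243, 0.065])

def segmentation_threshold(gray, split_offset=40):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    smoothed = np.convolve(hist.astype(float), KERNEL, mode="same")
    background = int(np.argmax(smoothed))   # gray value at the highest peak
    return min(background + split_offset, 255)

road = np.full((20, 20), 50, dtype=np.uint8)   # mostly uniform road surface
```

Pixels brighter than the returned threshold would then be candidate white-lane-line pixels (e.g. set to 255, others to 0) before contour screening.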
Step S120, determining a lane line region in the first region image based on the suspicious lane line region.
In this embodiment, when the suspicious lane line regions in the continuous gray images are screened out, the lane line region in the first area image can be determined based on them. Specifically, the corresponding regions of the continuous gray images are screened and judged by a radial basis function neural network, and the lane line region in the first area image is determined according to the judgment result: when the judgment result is the preset judgment result, the suspicious lane line region is determined to be the lane line region in the first area image; otherwise, it is determined not to be.
According to the intelligent vehicle all-around viewing method provided by this embodiment, when the continuous gray images are obtained, the suspicious lane line regions in them are screened out, and the lane line region in the first area image is then determined from those suspicious regions, so that when the vehicle deviates from the lane line region by a certain angle, an alarm prompt signal can be sent in time to prompt the user.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium, on which a vehicle intelligent looking-around program is stored, which when executed by a processor implements the following operations:
when a vehicle is started, acquiring a first area image corresponding to the vehicle;
determining a training sample corresponding to the vehicle based on the first area image;
and training the training sample, and determining whether foreign matters exist in the first area image, wherein when the foreign matters exist in the first area image, an alarm prompt signal is sent.
Further, the vehicle intelligent look-around program when executed by the processor further performs the following operations:
dividing the first area image into different second area images;
and determining a training sample corresponding to the vehicle based on the second area image.
Further, the vehicle intelligent look-around program when executed by the processor further performs the following operations:
acquiring a preset division ratio corresponding to the first area image;
and dividing the first area image into different second area images based on the preset dividing proportion.
Further, the vehicle intelligent look-around program when executed by the processor further performs the following operations:
detecting an image edge energy of the second region image;
and determining a training sample corresponding to the vehicle according to the image edge energy.
Further, the vehicle intelligent look-around program when executed by the processor further performs the following operations:
acquiring a preset energy threshold corresponding to the second area image;
when the image edge energy is larger than the preset energy threshold, determining that the second area image is a suspicious sample corresponding to the vehicle;
and determining a training sample corresponding to the vehicle according to the suspicious sample.
Further, the vehicle intelligent look-around program when executed by the processor further performs the following operations:
when the area corresponding to the suspicious sample is determined to be a cross shooting area, acquiring an image differential image corresponding to the suspicious sample;
and determining a training sample corresponding to the vehicle based on the image difference map.
Further, the vehicle intelligent look-around program when executed by the processor further performs the following operations:
when the vehicle state is a driving state, converting the first area image into a continuous gray image;
determining a lane line region in the first region image based on the continuous gray image;
and sending an alarm prompt signal when the vehicle deviates from the preset angle value of the lane line area.
Further, the vehicle intelligent look-around program when executed by the processor further performs the following operations:
when the continuous gray level image is obtained, screening suspicious lane line areas in the continuous gray level image;
and determining the lane line region in the first region image based on the suspicious lane line region.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or the portions contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. An intelligent vehicle all-around method is characterized by comprising the following steps:
when a vehicle is started, acquiring a first area image corresponding to the vehicle;
dividing the first area image into different second area images, and detecting the image edge energy of the second area images;
determining a training sample corresponding to the vehicle according to the image edge energy;
and training the training sample, and determining whether foreign matters exist in the first area image, wherein when the foreign matters exist in the first area image, an alarm prompt signal is sent.
2. The vehicle intelligent look-around method of claim 1, wherein the step of dividing the first region image into different second region images comprises:
acquiring a preset division ratio corresponding to the first area image;
and dividing the first area image into different second area images based on the preset division ratio.
3. The method of claim 1, wherein the step of determining the training sample corresponding to the vehicle based on the image edge energy comprises:
acquiring a preset energy threshold corresponding to the second area image;
when the image edge energy is larger than the preset energy threshold, determining that the second area image is a suspicious sample corresponding to the vehicle;
and determining a training sample corresponding to the vehicle according to the suspicious sample.
4. The method of claim 3, wherein the step of determining the training sample corresponding to the vehicle based on the suspicious sample comprises:
when the area corresponding to the suspicious sample is determined to be a cross shooting area, acquiring an image differential image corresponding to the suspicious sample;
and determining a training sample corresponding to the vehicle based on the image difference map.
5. The intelligent vehicle surround view method according to claim 1, wherein after the step of acquiring the first area image corresponding to the vehicle when the vehicle is started, the intelligent vehicle surround view method further comprises:
when the vehicle state is a driving state, converting the first area image into a continuous gray image;
determining a lane line region in the first region image based on the continuous gray image;
and sending an alarm prompt signal when the vehicle deviates from the preset angle value of the lane line area.
6. The vehicle intelligent look-around method of claim 5, wherein the step of determining a lane line region in the first region image based on the continuous grayscale image comprises:
when the continuous gray level image is obtained, screening suspicious lane line areas in the continuous gray level image;
and determining the lane line region in the first region image based on the suspicious lane line region.
7. A vehicle intelligent looking around device, characterized in that, the vehicle intelligent looking around device includes: a memory, a processor and a vehicle intelligent look-around program stored on the memory and executable on the processor, the vehicle intelligent look-around program when executed by the processor implementing the steps of the vehicle intelligent look-around method of any one of claims 1 to 6.
8. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a vehicle intelligent look-around program, which when executed by a processor implements the steps of the vehicle intelligent look-around method of any one of claims 1 to 6.
CN201810226899.7A 2018-03-19 2018-03-19 Intelligent vehicle all-round viewing method and device and computer readable storage medium Active CN108363996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810226899.7A CN108363996B (en) 2018-03-19 2018-03-19 Intelligent vehicle all-round viewing method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108363996A CN108363996A (en) 2018-08-03
CN108363996B true CN108363996B (en) 2022-05-10

Family

ID=63000978

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102632839A * 2011-02-15 2012-08-15 汽车零部件研究及发展中心有限公司 Vehicle-mounted blind area early warning system and method based on rear-view image recognition
CN106183979A * 2016-07-07 2016-12-07 广州鹰瞰信息科技有限公司 Method and apparatus for alerting a vehicle according to inter-vehicle distance
CN106608221A * 2015-10-26 2017-05-03 比亚迪股份有限公司 Vehicle blind area detecting system and method
CN107031623A * 2017-03-16 2017-08-11 浙江零跑科技有限公司 Road early-warning method based on a vehicle-mounted blind-area camera
CN107133559A * 2017-03-14 2017-09-05 湖北工业大学 Moving object detection method based on 360-degree panorama


Also Published As

Publication number Publication date
CN108363996A (en) 2018-08-03

Similar Documents

Publication Publication Date Title
CN108052624B (en) Point cloud data processing method and device and computer readable storage medium
CN108009543B (en) License plate recognition method and device
Wu et al. Lane-mark extraction for automobiles under complex conditions
CN107944351B (en) Image recognition method, image recognition device and computer-readable storage medium
KR102404149B1 (en) Driver assistance system and method for object detection and notification
US9262693B2 (en) Object detection apparatus
JP4482599B2 (en) Vehicle periphery monitoring device
KR20180068578A (en) Electronic device and method for recognizing object by using a plurality of senses
CN108846336B (en) Target detection method, device and computer readable storage medium
CN111507324B (en) Card frame recognition method, device, equipment and computer storage medium
US20160232415A1 (en) Detection detection of cell phone or mobile device use in motor vehicle
CN109635700B (en) Obstacle recognition method, device, system and storage medium
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN108881846B (en) Information fusion method and device and computer readable storage medium
KR101620425B1 (en) System for lane recognition using environmental information and method thereof
CN108363996B (en) Intelligent vehicle all-round viewing method and device and computer readable storage medium
Itu et al. An efficient obstacle awareness application for android mobile devices
JP2017174380A (en) Recognition device, method for recognizing object, program, and storage medium
CN116363100A (en) Image quality evaluation method, device, equipment and storage medium
KR101199959B1 (en) System for reconnizaing road sign board of image
CN112784817B (en) Method, device and equipment for detecting lane where vehicle is located and storage medium
JP6825299B2 (en) Information processing equipment, information processing methods and programs
EP2579229A1 (en) Vehicle surroundings monitoring device
CN113989774A (en) Traffic light detection method and device, vehicle and readable storage medium
CN114373081A (en) Image processing method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant