CN112767723B - Road condition detection method, computer storage device, vehicle-mounted terminal and vehicle - Google Patents
- Publication number
- CN112767723B CN112767723B CN201911072569.8A CN201911072569A CN112767723B CN 112767723 B CN112767723 B CN 112767723B CN 201911072569 A CN201911072569 A CN 201911072569A CN 112767723 B CN112767723 B CN 112767723B
- Authority
- CN
- China
- Prior art keywords
- road condition
- vehicle
- block
- road
- detection area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096708—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
Abstract
The application discloses a road condition detection method, a computer storage device, a vehicle-mounted terminal and a vehicle, wherein the detection method comprises the steps of obtaining a road condition image of a current driving road of the vehicle, and determining a road condition detection area from the road condition image; dividing the road condition detection area into at least one block; determining a target block with a preset target object in at least one block; and determining the adjacent target blocks as the areas to be detected, and detecting the road conditions of the areas to be detected. The detection method can improve the road condition detection efficiency.
Description
Technical Field
The application relates to the technical field of intelligent driving, in particular to a road condition detection method, a computer storage device, a vehicle-mounted terminal and a vehicle.
Background
With the continuous development of science and technology, automobiles, as indispensable vehicles in people's lives, are becoming more and more intelligent.
At present, advanced driver assistance systems (ADAS) can be installed in automobiles to help users better observe road conditions, so that users can grasp the driving environment in time and take corresponding actions in advance.
However, the advanced driver assistance systems in the prior art usually detect the whole image of the current road condition, so the detection speed is slow and cannot meet users' requirements.
Disclosure of Invention
The application provides a road condition detection method, a computer storage device, a vehicle-mounted terminal and a vehicle, which can quickly identify a target object on a current road surface, so that a driver or the vehicle can timely make corresponding measures, and the driving safety is improved.
In order to solve the above technical problem, the present application provides a road condition detection method, including:
acquiring a road condition image of a current driving road of a vehicle, and determining a road condition detection area from the road condition image; dividing the road condition detection area into at least one block; determining a target block with a preset target object in at least one block; and determining the adjacent target blocks as the areas to be detected, and detecting the road conditions of the areas to be detected.
Optionally, the step of obtaining the road condition image of the current driving road of the vehicle includes controlling a camera to shoot the current driving road to obtain the road condition image; the step of determining the road condition detection area from the road condition image includes: extracting lane lines and vanishing points from the road condition image, wherein the lane lines are boundary lines of a current driving road, and the vanishing points are intersection points of at least two boundary lines; and marking the area formed by the lane line and the vanishing point as a road condition detection area. The road condition detection area can be obtained through the lane lines and the vanishing points of the road condition image.
Optionally, the step of extracting the lane line and the vanishing point from the road condition image includes: extracting at least two lane lines from the image; fitting at least two lane lines to obtain at least two fitting curves, wherein the intersection point of the at least two fitting curves is a vanishing point; and calculating a vanishing point according to the fitted curve. By fitting the lane lines, vanishing points can be obtained.
Optionally, the step of dividing the road condition detection area into at least one block includes: n first parallel straight lines are arranged in the road condition detection area along the driving direction of the vehicle, and the distance between every two adjacent first parallel straight lines is a preset value; and dividing each first parallel straight line into m equal parts to obtain m-1 equal division points, and connecting the vanishing points and the equal division points to divide the road condition detection area into n × m blocks. The road condition detection area can be divided into at least one block by connecting the vanishing point with the equal dividing point.
Optionally, when the lane line is a curve, the step of dividing the road condition detection area into at least one block includes: and arranging k second parallel straight lines in the road condition detection area along the horizontal direction, dividing each second parallel straight line into i equal parts to obtain i-1 equal division points, and connecting the vanishing point and the equal division points to divide the road condition detection area into k x i blocks. The road condition detection area can be divided into at least one block by connecting the vanishing point with the equal dividing point.
Optionally, the step of determining that a target block of a preset target object exists in the at least one block includes: and making the at least one block into a gradient histogram, and determining a target block with a preset target object in the at least one block when the amplitude of the gradient histogram exceeds a preset threshold value. Through the magnitude calculation of the gradient histogram, the target block can be determined.
Optionally, the step of detecting the road condition of the area to be detected includes detecting the road condition of the area to be detected by using a support vector machine method or a deep learning method, so as to detect the target object in the area to be detected. The target object can be quickly detected by a support vector machine or deep learning.
In order to solve the above technical problem, the present application provides a computer storage medium, on which a computer program is stored, and the computer program is executed by a processor to implement the above detection method.
In order to solve the above technical problem, the present application provides a vehicle-mounted terminal, where the vehicle-mounted terminal includes a memory and a processor, the memory is connected to the processor, the memory stores a computer program, and the computer program is executed by the processor to implement the detection method.
In order to solve the technical problem, the application provides a vehicle, and the vehicle includes a vehicle body and the vehicle-mounted terminal, and the vehicle-mounted terminal is installed on the vehicle body.
The application provides a road condition detection method, which comprises the steps of obtaining a road condition image of a current driving road of a vehicle, and determining a road condition detection area from the road condition image; dividing the road condition detection area into at least one block; determining a target block with a preset target object in at least one block; and determining the adjacent target blocks as the areas to be detected, and detecting the road conditions of the areas to be detected. By the method, only the area to be detected is required to be detected, the whole image of the image is not required to be detected, the image detection time is saved, and therefore the road condition detection efficiency of the current driving road can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of an embodiment of a traffic detection method according to the present application;
fig. 2 is a schematic flow chart of another embodiment of the traffic detection method of the present application;
fig. 3 is a schematic flow chart of another embodiment of the traffic detection method of the present application;
FIG. 4 is a schematic view of a section of the traffic detection method of FIG. 3;
fig. 5 is a schematic flow chart of a traffic status detection method according to another embodiment of the present application;
FIG. 6 is a schematic view of a section of the traffic detection method of FIG. 5;
FIG. 7 is a schematic structural diagram of an embodiment of the vehicle-mounted detection device of the present application;
FIG. 8 is a schematic structural diagram of an embodiment of a computer storage medium according to the present application;
fig. 9 is a schematic structural diagram of an embodiment of the vehicle-mounted terminal according to the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present application, the road condition detection method, the computer storage medium, the vehicle-mounted terminal and the vehicle provided by the present application are further described in detail below with reference to the accompanying drawings and the detailed description.
Currently, in an automatic driving or ADAS system, the current picture needs to be detected so as to acquire traffic information. However, in the related art, the detection method detects the whole image; the algorithm involves a large amount of calculation, so recognition is slow and a long time is required to obtain the detection result.
Based on the above, the present application provides a road condition detection method, which can screen the current picture, and can obtain the road condition information only by identifying the local area of the picture, thereby increasing the detection speed. The detection method can be applied to an automatic driving or ADAS system, wherein the automatic driving or ADAS system can comprise a camera and a controller, the camera is used for obtaining images, and the controller is used for processing the images and obtaining traffic information.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of the traffic status detection method of the present application. The detection method of the embodiment comprises the following steps:
s11: and acquiring a road condition image of a current driving road of the vehicle, and determining a road condition detection area from the road condition image.
The controller can acquire the road condition image of the current driving road of the vehicle and determine the road condition detection area from the road condition image. The road condition image acquired by the controller may be an image shot by a camera under the controller's control; the camera may be arranged inside or outside the vehicle to shoot the current road condition. In addition, the road condition image acquired by the controller may also be an image shot by another electronic device, for example a road condition image shot by smart glasses worn by the user; the smart glasses send the road condition image to the controller, and the controller can thus acquire it.
The controller may further identify a road condition detection zone from the road condition image. The road condition detection region may be an area in which the vehicle is about to travel or may travel.
For example, the road condition image of the current driving road acquired by the controller may contain a lane area for vehicles to travel on, a green belt planted with ornamental plants, and a sidewalk for pedestrians. The road condition detection area that the controller needs to obtain is the lane area where the vehicle runs, so the controller can identify the lane area in the current road condition image and detect it. Since vehicles generally cannot appear in the green belt or on the sidewalk, the green belt and the sidewalk are not detected in this embodiment, which improves the efficiency with which the controller detects the current road condition image.
S12: the road condition detection area is divided into at least one block.
The controller can divide the road condition detection area into a plurality of blocks and then detect each block. Compared with detecting the whole road condition detection area at once, detecting the individual blocks makes the detection result more accurate.
S13: and determining a target block with a preset target object in the at least one block.
The controller may detect at least one block, determine whether a preset target object exists in the block, and determine, if so, the block in which the preset target object exists as the target block. The preset target object may include a pedestrian or a vehicle, and since the road condition detection region is divided into a plurality of blocks in the above steps, the pedestrian or the vehicle may also be divided into a plurality of parts, and thus, the preset target object may include a part and an entirety of the pedestrian or the vehicle.
Optionally, the controller may further make the block into a gradient histogram, and determine the magnitude of the gradient histogram to determine whether the target object is included in the block. Specifically, the controller may determine whether the amplitude of the gradient histogram of each block exceeds a preset threshold, and if so, the controller determines that a target object exists in the block. Specifically, the calculation of the gradient histogram can be divided into the following steps:
a) preprocessing: the aspect ratio of the adjustment block is 1: 2.
b) Calculating a gradient image: necessary information is removed and critical information is retained.
c) Gradient histograms were computed in an 8 by 8 grid: the block is subdivided into 8 by 8 grids, each of which computes a gradient histogram. One grid contains 8 x 3-192 pixel values, two values (magnitude and direction) per pixel gradient, and 8 x 2-128 values per grid. Extracting the amplitude and the direction from the 8-by-8 grids, and processing to obtain a 9-bit histogram corresponding to the grids.
d)16 × 16 block normalization: the 36 x 1 histogram is seen as 4 9 x 1 histograms, then the window is shifted by 8 pixels, a normalized 36 x 1 size vector is calculated and the process is repeated through the image. The magnitude of the histogram after the normalization process does not change with the change of the brightness.
The feature vector can be obtained by the method, and whether the block comprises a preset target object or not can be determined.
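The gradient-histogram steps above can be sketched as follows. This is a minimal NumPy illustration of the general idea only; the "peak magnitude over a preset threshold" decision rule, the function names, and the use of `np.gradient` are assumptions for illustration, not the exact implementation of this application.

```python
import numpy as np

def cell_gradient_histogram(cell, bins=9):
    """9-bin orientation histogram for one 8x8 grayscale cell,
    weighted by gradient magnitude (unsigned angles, 0-180 deg)."""
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(angle, bins=bins, range=(0, 180), weights=magnitude)
    return hist

def block_has_target(block, threshold):
    """Assumed decision rule from the text: a block counts as a target
    block when the peak magnitude among its cell histograms exceeds a
    preset threshold."""
    h, w = block.shape
    hists = [cell_gradient_histogram(block[r:r + 8, c:c + 8])
             for r in range(0, h - 7, 8)
             for c in range(0, w - 7, 8)]
    return bool(max(hg.max() for hg in hists) > threshold)
```

A uniform (textureless) block yields a near-zero histogram and is skipped, while a block containing an object edge produces a strong peak and is flagged.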
S14: and determining the adjacent target blocks as the areas to be detected, and detecting the road conditions of the areas to be detected.
The controller can connect adjacent target blocks, determine them as an area to be detected, and detect the road condition of the area to be detected. The road condition image may include a plurality of areas to be detected; if a target block is not adjacent to any other target block, it alone is determined as a single area to be detected. The controller may detect the plurality of areas to be detected to determine whether each of them includes a target object.
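Merging adjacent target blocks into areas to be detected can be sketched as a connected-components pass over the block grid; the 4-neighbour adjacency and the (row, column) indexing below are assumptions for illustration, not the application's exact scheme.

```python
from collections import deque

def group_adjacent_blocks(target_blocks):
    """Merge 4-adjacent target blocks, given as (row, col) grid indices,
    into connected regions; each region is one area to be detected."""
    remaining = set(target_blocks)
    regions = []
    while remaining:
        seed = remaining.pop()
        region, queue = {seed}, deque([seed])
        while queue:
            r, c = queue.popleft()
            # Visit the four neighbours of this block in the grid.
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    region.add(nb)
                    queue.append(nb)
        regions.append(region)
    return regions
```

An isolated target block simply yields a region of size one, matching the single-area case above.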
Optionally, the controller may detect the road condition of the area to be detected by using a support vector machine method or a deep learning method, so as to detect the target object in the area to be detected.
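As a sketch of the support-vector-machine option, the toy below trains scikit-learn's `LinearSVC` on synthetic stand-in features; in practice the features would be the gradient-histogram vectors of the areas to be detected. All names, dimensions, and data here are illustrative assumptions, not this application's implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Synthetic stand-in features: 9-dimensional vectors playing the role of
# per-area gradient-histogram features (illustrative only).
pos = rng.normal(3.0, 0.5, size=(40, 9))   # "target object present" samples
neg = rng.normal(0.0, 0.5, size=(40, 9))   # "empty road" samples
X = np.vstack([pos, neg])
y = np.array([1] * 40 + [0] * 40)

# Train a linear SVM classifier on the labelled samples.
clf = LinearSVC(dual=False).fit(X, y)

# Classify two new areas to be detected: one target-like, one empty-road-like.
areas = np.vstack([rng.normal(3.0, 0.5, size=(1, 9)),
                   rng.normal(0.0, 0.5, size=(1, 9))])
labels = clf.predict(areas)
```

A deep-learning detector could replace the classifier here without changing the surrounding block/area pipeline.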
The application provides a road condition detection method, which comprises the steps of obtaining a road condition image of a current driving road of a vehicle, and determining a road condition detection area from the road condition image; dividing the road condition detection area into at least one block; determining a target block with a preset target object in at least one block; and determining the adjacent target blocks as the areas to be detected, and detecting the road conditions of the areas to be detected. By the method, only the area to be detected is required to be identified, the whole picture of the road condition image is not required to be identified, and the efficiency of road condition detection can be improved.
Referring to fig. 2, fig. 2 is a schematic flow chart of another embodiment of the traffic status detection method of the present application. The detection method of the embodiment comprises the following steps:
s21: and controlling a camera to shoot the current driving road to obtain a road condition image.
The controller can control the camera to shoot images and send the images to the controller, and the controller can acquire the images.
S22: and extracting a lane line and a vanishing point from the road condition image, wherein the lane line is a boundary line of the current driving road, and the vanishing point is an intersection point of at least two boundary lines.
The controller may extract lane lines and vanishing points from the road condition image. Wherein the lane line may be a boundary line of a current driving road, and the vanishing point may be an intersection of at least two boundary lines.
Specifically, the controller can extract at least two lane lines from the road condition image and fit them to obtain at least two fitted curves; the intersection point of the at least two fitted curves is the vanishing point, which is calculated from the fitted curves. For example, when the two lane lines are straight lines, their line equations may be Y = k1X + b1 and Y = k2X + b2, and the intersection point of the two lines, i.e. the coordinates of the vanishing point, can be obtained by calculation.
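For the straight-line case, the intersection calculation can be written out directly; a small sketch (function name assumed):

```python
def vanishing_point(k1, b1, k2, b2):
    """Intersection of the lane lines Y = k1*X + b1 and Y = k2*X + b2."""
    if k1 == k2:
        raise ValueError("parallel lane lines have no finite vanishing point")
    # Set k1*x + b1 = k2*x + b2 and solve for x, then substitute back.
    x = (b2 - b1) / (k1 - k2)
    return x, k1 * x + b1
```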
S23: and marking an area formed by the lane line and the vanishing point as a road condition detection area.
The controller may mark an area surrounded by at least two lane lines and the vanishing point as a road condition detection area.
S24: the road condition detection area is divided into at least one block.
S25: and determining a target block with a preset target object in the at least one block.
S26: and determining the adjacent target blocks as the areas to be detected, and detecting the road conditions of the areas to be detected.
Steps S24 to S26 are the same as steps S12 to S14 in the above embodiments, and detailed description thereof is omitted, and reference may be made to the above embodiments.
Referring to fig. 3 and 4, fig. 3 is a schematic flow chart of a traffic detection method according to another embodiment of the present application, and fig. 4 is a schematic partition diagram of the traffic detection method of fig. 3. The detection method in this embodiment includes the following steps, and the same steps as those in the above embodiment are not described again:
s31: and acquiring a road condition image of a current driving road of the vehicle, and determining a road condition detection area from the road condition image.
S32: n first parallel straight lines are arranged in the road condition detection area along the driving direction of the vehicle, and the distance between every two adjacent first parallel straight lines is a preset value.
The controller can set n first parallel straight lines in the road condition detection area along the driving direction of the vehicle, wherein the distance between two adjacent first parallel straight lines can be a preset value. When the road condition detection area is a straight road without a curve, the n first parallel straight lines may be parallel to the horizontal direction.
S33: and dividing each first parallel straight line into m equal parts to obtain m-1 equal division points, and connecting the vanishing points and the equal division points to divide the road condition detection area into n × m blocks.
The controller can divide each first parallel straight line into m equal parts to obtain m-1 equal-division points. The a-th equal-division point (1 ≤ a ≤ m-1) on each first parallel straight line is connected to the vanishing point, forming m-1 intersecting straight lines through the vanishing point; the n first parallel straight lines and the m-1 intersecting straight lines then divide the road condition detection area into n × m blocks. As shown in fig. 4, the road condition detection area of the straight road is divided into a plurality of blocks.
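The n × m division can be sketched geometrically: for each of the n parallel lines, take its segment between the two lane-line boundaries and split it into m equal parts; when the boundaries meet at the vanishing point, the equal-division points line up on the rays through it. The lane-line parameterisation below (functions mapping y to x) is an assumption for illustration.

```python
import numpy as np

def block_grid(y_bottom, spacing, n, m, left_x, right_x):
    """Division points of the n parallel lines (straight-road case).

    left_x, right_x : functions y -> x for the two lane-line boundaries
    Returns an (n, m + 1, 2) array; row r holds the m + 1 points
    (both endpoints plus the m - 1 interior equal-division points)
    of the r-th parallel line, so adjacent rows bound n * m blocks.
    """
    rows = []
    for r in range(n):
        y = y_bottom - r * spacing  # lines stacked toward the vanishing point
        xs = np.linspace(left_x(y), right_x(y), m + 1)
        rows.append(np.stack([xs, np.full(m + 1, y)], axis=1))
    return np.stack(rows)
```

With lane lines that meet at a vanishing point, each column of division points is collinear with it, which is exactly the "connect the vanishing point and the equal-division points" construction.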
S34: and determining a target block with a preset target object in the at least one block.
Continuing with fig. 4, the car is the target object; it occupies 6 of the n × m blocks, and those 6 blocks are determined as the target blocks.
S35: and determining the adjacent target blocks as the areas to be detected, and detecting the road conditions of the areas to be detected.
In this embodiment, the controller may connect the adjacent 6 target blocks, determine the target blocks as the areas to be detected, and further detect the road conditions of the areas to be detected. The detection step is described in detail in the above embodiments, and is not described herein again.
In addition, the steps in this embodiment are the same as those in the above embodiment, and are not described herein again.
In this embodiment, the vehicle detection area is divided into a plurality of blocks according to the first parallel straight line and the vanishing point, the block with the preset target object is determined as the target block, and the adjacent target blocks are connected for detection.
Referring to fig. 5 and fig. 6, fig. 5 is a schematic flow chart of a traffic detection method according to another embodiment of the present application, and fig. 6 is a schematic partition diagram of the traffic detection method of fig. 5. The detection method in this embodiment includes the following steps, which can be applied to the case where the lane line is a curve, and the same steps as those in the above embodiment are not described again:
s51: and acquiring a road condition image of a current driving road of the vehicle, and determining a road condition detection area from the road condition image.
S52: and k second parallel straight lines are arranged in the road condition detection area along the horizontal direction.
The controller may set k second parallel straight lines in the horizontal direction in the road condition detection area. When the road condition detection region is a curved road, the second parallel straight line may be set in the horizontal direction.
S53: and dividing each second parallel straight line into i equal parts to obtain i-1 equal division points, and connecting the vanishing point and the equal division points to divide the road condition detection area into k × i blocks.
The controller can divide each second parallel straight line into i equal parts to obtain i-1 equal-division points. The b-th equal-division point (1 ≤ b ≤ i-1) on each second parallel straight line is connected to the vanishing point, forming i-1 intersecting curves through the vanishing point; the k second parallel straight lines and the i-1 intersecting curves then divide the road condition detection area into k × i blocks. As shown in fig. 6, the road condition detection area of the curve is divided into several blocks.
S54: and determining a target block with a preset target object in the at least one block.
Continuing with fig. 6, the car is the target object; it occupies 6 of the k × i blocks, and those 6 blocks are determined as the target blocks.
S55: and determining the adjacent target blocks as the areas to be detected, and detecting the road conditions of the areas to be detected.
In this embodiment, the controller may connect the adjacent 6 target blocks, determine the target blocks as the areas to be detected, and further detect the road conditions of the areas to be detected. The detection step is described in detail in the above embodiments, and is not described herein again.
In addition, the steps in this embodiment are the same as those in the above embodiment, and are not described herein again.
In this embodiment, the vehicle detection area of the curve is divided into a plurality of blocks according to the second parallel straight line and the vanishing point, the block in which the preset target object exists is determined as the target block, and the adjacent target blocks are connected to perform detection.
Based on the detection method, the application also provides a vehicle-mounted detection device. Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a vehicle-mounted detection device according to the present application. The in-vehicle detection apparatus 700 includes an acquisition detection module 71 and a processing module 72 connected to each other.
The acquisition detection module 71 is configured to perform the steps involved in the detection in any of the above embodiments; the processing module 72 is configured to perform the steps related to the processing in any of the embodiments, and the processing module 72 may include a controller configured to perform the steps related to the processing in any of the embodiments.
In some embodiments, the controller may be configured to acquire a road condition image of a current driving road of the vehicle, and determine a road condition detection region from the road condition image; dividing the road condition detection area into at least one block; determining a target block with a preset target object in at least one block; and determining the adjacent target blocks as the areas to be detected, and detecting the road conditions of the areas to be detected.
Based on the detection method, the application also provides a computer storage medium. Referring to fig. 8, fig. 8 is a schematic structural diagram of a computer storage medium according to an embodiment of the present application. The computer storage medium 800 has stored thereon a computer program 81, which computer program 81, when executed by a processor, implements the method of any of the embodiments described above. The steps and principles thereof have been described in detail in the above detection method, and are not described herein again.
Further, the computer storage medium 800 may also be any of various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic tape, or an optical disk.
Based on the detection method, the application also provides a vehicle-mounted terminal. Referring to fig. 9, fig. 9 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present application. The in-vehicle terminal 900 comprises a memory 91, a processor 92 and a camera 93; the camera 93 can be connected with the processor 92, the memory 91 stores a computer program, and the computer program realizes the method of any of the above embodiments when executed by the processor 92. The steps and principles thereof have been described in detail in the above detection method and are not repeated here.
In the present embodiment, the processor 92 may also be referred to as a CPU (Central Processing Unit). The processor 92 may be an integrated circuit chip having signal processing capabilities. The processor 92 may also be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Based on the detection method, the application also provides a vehicle. The vehicle comprises a vehicle body and the vehicle-mounted terminal, and the vehicle-mounted terminal can be mounted on the vehicle body.
It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. In addition, for convenience of description, only a part of structures related to the present application, not all of the structures, are shown in the drawings. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.
Claims (9)
1. A road condition detection method is characterized by comprising the following steps:
acquiring a road condition image of a current driving road of a vehicle, and determining a road condition detection area from the road condition image;
dividing the road condition detection area into at least one block;
determining a target block with a preset target object in the at least one block;
determining adjacent target blocks as an area to be detected, and performing road condition detection on the area to be detected;
wherein the step of determining the road condition detection area from the road condition image comprises:
extracting lane lines and vanishing points from the road condition image, wherein the lane lines are boundary lines of the current driving road, and the vanishing points are intersection points of at least two boundary lines;
wherein an area formed by the lane lines and the vanishing point is the road condition detection area;
the step of extracting the lane lines and the vanishing point from the road condition image comprises:
extracting at least two lane lines from the road condition image;
fitting the at least two lane lines to obtain at least two fitting curves, wherein the intersection point of the at least two fitting curves is the vanishing point;
and calculating the vanishing point according to the at least two fitting curves.
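As an illustrative sketch (not part of the claimed method), the fitting and vanishing-point computation of claim 1 might be approximated by a least-squares line fit per lane boundary followed by an intersection of the two fitted lines; the point format and helper names below are assumptions:

```python
def fit_line(points):
    """Least-squares fit y = a*x + b through (x, y) lane-marking points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def vanishing_point(line1, line2):
    """Intersection of two fitted lines given as (slope, intercept) pairs."""
    a1, b1 = line1
    a2, b2 = line2
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1

# Example: two straight lane boundaries meeting at (200, 200)
left = fit_line([(0, 400), (100, 300), (200, 200)])     # fits y = -x + 400
right = fit_line([(400, 400), (300, 300), (250, 250)])  # fits y = x
print(vanishing_point(left, right))  # → (200.0, 200.0)
```

In practice the "fitting curves" of the claim may be higher-order polynomials for curved roads; a straight-line fit is the simplest case that still yields a unique intersection.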
2. The detecting method according to claim 1, wherein the step of obtaining the road condition image of the current driving road of the vehicle comprises:
and controlling a camera to shoot the current driving road to obtain the road condition image.
3. The method as claimed in claim 2, wherein the step of dividing the road condition detection area into at least one block comprises:
providing n first parallel straight lines in the road condition detection area along the driving direction of the vehicle, wherein the distance between every two adjacent first parallel straight lines is a preset value;
dividing each first parallel straight line into m equal parts to obtain m-1 equal division points, and connecting the vanishing point with the equal division points to divide the road condition detection area into n × m blocks.
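A minimal sketch of the block division of claim 3, assuming the detection area is a triangle bounded by a horizontal base edge and the vanishing point, and that blocks can be represented by their bottom-edge x-coordinates and row bounds (all names and the return format are illustrative assumptions):

```python
def ray_x_at(vp, p, y):
    """x-coordinate where the ray from vanishing point vp through p crosses row y."""
    (vx, vy), (px, py) = vp, p
    return vx + (px - vx) * (y - vy) / (py - vy)

def divide_blocks(vp, base_left, base_right, n, m, spacing):
    """Divide the detection area into n * m blocks.

    n horizontal rows of height `spacing` (the preset value) are stacked above
    the base edge; m columns are bounded by rays from the vanishing point
    through the m-1 equal-division points (plus the two endpoints) of the base.
    Returns one (x_left, x_right, y_bottom, y_top) tuple per block.
    """
    y_base = base_left[1]
    xs = [base_left[0] + j * (base_right[0] - base_left[0]) / m for j in range(m + 1)]
    blocks = []
    for k in range(n):
        y_bot, y_top = y_base - k * spacing, y_base - (k + 1) * spacing
        for j in range(m):
            blocks.append((ray_x_at(vp, (xs[j], y_base), y_bot),
                           ray_x_at(vp, (xs[j + 1], y_base), y_bot),
                           y_bot, y_top))
    return blocks

blocks = divide_blocks(vp=(200, 200), base_left=(0, 400), base_right=(400, 400),
                       n=2, m=4, spacing=50)
print(len(blocks))  # → 8, i.e. n * m
```

Blocks narrow toward the vanishing point, which matches the perspective foreshortening of a road image.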
4. The detecting method according to claim 2, wherein when the lane line is a curved line, the step of dividing the road condition detecting region into at least one block comprises:
providing k second parallel straight lines in the road condition detection area along the horizontal direction;
dividing each second parallel straight line into i equal parts to obtain i-1 equal division points, and connecting the vanishing point with the equal division points to divide the road condition detection area into k × i blocks.
5. The detection method according to claim 3 or 4, wherein the step of determining that a target block of a preset target object exists in the at least one block comprises:
generating a gradient histogram for the at least one block, and determining that a target block with a preset target object exists in the at least one block when the amplitude of the gradient histogram exceeds a preset threshold value.
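One plausible reading of claim 5, sketched below: compute per-pixel gradient magnitudes inside a grayscale block, bin them into a histogram, and flag the block when a high-gradient bin's amplitude (count) exceeds a preset threshold. The bin width, threshold semantics, and function name are assumptions, not the patent's specification:

```python
def block_has_target(block, count_threshold, mag_step=32, bins=8):
    """Gradient-magnitude histogram test for a grayscale block (2D list).

    Flags the block when any non-trivial (high-gradient) bin holds more
    samples than count_threshold; a uniform road surface yields gradients
    near zero and is not flagged.
    """
    hist = [0] * bins
    for i in range(len(block) - 1):
        for j in range(len(block[0]) - 1):
            gx = block[i][j + 1] - block[i][j]   # horizontal finite difference
            gy = block[i + 1][j] - block[i][j]   # vertical finite difference
            mag = (gx * gx + gy * gy) ** 0.5
            hist[min(int(mag / mag_step), bins - 1)] += 1
    return max(hist[1:]) > count_threshold

flat = [[10] * 8 for _ in range(8)]              # uniform road surface
edged = [[0] * 4 + [255] * 4 for _ in range(8)]  # strong vertical edge (object)
print(block_has_target(flat, 3), block_has_target(edged, 3))  # → False True
```

A production system would more likely use an orientation-binned histogram (HOG-style), but the amplitude-versus-threshold decision is the same.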
6. The detection method according to claim 5, wherein the step of detecting the road condition of the area to be detected comprises:
and detecting the road condition of the area to be detected by using a support vector machine method or a deep learning method so as to detect the target object in the area to be detected.
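For the support vector machine branch of claim 6, the inference step reduces to a decision function over per-block features. Below is a minimal linear-SVM decision sketch; the weights, bias, and three-element feature descriptor are hypothetical stand-ins for values that would be trained offline on labeled road-condition samples (training is not shown):

```python
def svm_predict(features, weights, bias):
    """Linear SVM decision: sign of w·x + b.

    weights/bias are assumed pre-trained; +1 means the preset target
    object is judged present in the area to be detected, -1 means clear.
    """
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else -1

# Hypothetical 3-feature block descriptor, e.g. (mean gradient, edge density,
# brightness) -- the actual feature set is not specified by the claim.
weights, bias = [0.8, 1.2, -0.5], -1.0
print(svm_predict([0.9, 0.7, 0.1], weights, bias))  # → 1  (object present)
print(svm_predict([0.1, 0.2, 0.9], weights, bias))  # → -1 (clear road)
```

A kernelized SVM or the deep-learning alternative named in the claim would replace this linear score with a richer decision function, but the per-region classify-then-report flow is unchanged.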
7. A computer storage medium, characterized in that the computer storage medium has stored thereon a computer program which, when being executed by a processor, carries out the method according to any one of claims 1-6.
8. An in-vehicle terminal, characterized in that the in-vehicle terminal comprises a memory and a processor, the memory being connected to the processor, the memory storing a computer program which, when executed by the processor, implements the method according to any of claims 1-6.
9. A vehicle characterized by comprising a vehicle body and the in-vehicle terminal according to claim 8 mounted on the vehicle body.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911072569.8A CN112767723B (en) | 2019-11-05 | 2019-11-05 | Road condition detection method, computer storage device, vehicle-mounted terminal and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112767723A CN112767723A (en) | 2021-05-07 |
CN112767723B true CN112767723B (en) | 2022-04-22 |
Family
ID=75692850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911072569.8A Active CN112767723B (en) | 2019-11-05 | 2019-11-05 | Road condition detection method, computer storage device, vehicle-mounted terminal and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112767723B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006140636A (en) * | 2004-11-10 | 2006-06-01 | Toyota Motor Corp | Obstacle detecting device and method |
CN104166834A (en) * | 2013-05-20 | 2014-11-26 | 株式会社理光 | Pavement detection method and pavement detection device |
CN104380341A (en) * | 2012-06-19 | 2015-02-25 | 市光工业株式会社 | Object detection device for area around vehicle |
CN110334678A (en) * | 2019-07-12 | 2019-10-15 | 哈尔滨理工大学 | A kind of pedestrian detection method of view-based access control model fusion |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5372680B2 (en) * | 2009-09-24 | 2013-12-18 | 日立オートモティブシステムズ株式会社 | Obstacle detection device |
JP6344638B2 (en) * | 2013-03-06 | 2018-06-20 | 株式会社リコー | Object detection apparatus, mobile device control system, and object detection program |
JP6393230B2 (en) * | 2015-04-20 | 2018-09-19 | 株式会社日立製作所 | Object detection method and image search system |
CN109934075A (en) * | 2017-12-19 | 2019-06-25 | 杭州海康威视数字技术股份有限公司 | Accident detection method, apparatus, system and electronic equipment |
CN108875723B (en) * | 2018-01-03 | 2023-01-06 | 北京旷视科技有限公司 | Object detection method, device and system and storage medium |
CN108197590B (en) * | 2018-01-22 | 2020-11-03 | 海信集团有限公司 | Pavement detection method, device, terminal and storage medium |
CN108399360B (en) * | 2018-01-22 | 2021-12-24 | 海信集团有限公司 | Continuous obstacle detection method, device and terminal |
CN109271905B (en) * | 2018-09-03 | 2021-11-19 | 东南大学 | Black smoke vehicle detection method based on single-frame image |
2019-11-05: CN CN201911072569.8A patent CN112767723B/en, status Active
Also Published As
Publication number | Publication date |
---|---|
CN112767723A (en) | 2021-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Son et al. | Real-time illumination invariant lane detection for lane departure warning system | |
Wu et al. | Lane-mark extraction for automobiles under complex conditions | |
US8290265B2 (en) | Method and apparatus for segmenting an object region of interest from an image | |
US9489586B2 (en) | Traffic sign recognizing apparatus and operating method thereof | |
US9818301B2 (en) | Lane correction system, lane correction apparatus and method of correcting lane | |
CN108629292B (en) | Curved lane line detection method and device and terminal | |
US20180075748A1 (en) | Pedestrian recognition apparatus and method | |
US8155381B2 (en) | Vehicle headlight detecting method and apparatus, and region-of-interest segmenting method and apparatus | |
JP6442834B2 (en) | Road surface height shape estimation method and system | |
CN107491738B (en) | Parking space detection method and system, storage medium and electronic equipment | |
KR20200132714A (en) | Method and device for detecting illegal parking, electronic device, and computer-readable medium | |
CN107748882B (en) | Lane line detection method and device | |
CN110929655B (en) | Lane line identification method in driving process, terminal device and storage medium | |
CN104376741A (en) | Parking lot state detection method and system | |
CN109478329B (en) | Image processing method and device | |
CN107527017B (en) | Parking space detection method and system, storage medium and electronic equipment | |
JP2007310706A (en) | Vehicle periphery monitoring device | |
JP2009064175A (en) | Object detection device and object detection method | |
US9747507B2 (en) | Ground plane detection | |
KR101667835B1 (en) | Object localization using vertical symmetry | |
CN111652060A (en) | Laser radar-based height-limiting early warning method and device, electronic equipment and storage medium | |
CN108197590B (en) | Pavement detection method, device, terminal and storage medium | |
CN103116757A (en) | Three-dimension information restoration and extraction method for identifying spilled articles on roads | |
JP2009025910A (en) | Obstacle detection device, obstacle detection system, and obstacle detection method | |
US9600894B2 (en) | Image processing apparatus and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
CP03 | Change of name, title or address |
Address after: 233000 building 4, national financial incubation Industrial Park, 17 Yannan Road, high tech Zone, Bengbu City, Anhui Province | Patentee after: Dafu Technology (Anhui) Co.,Ltd.
Address before: 518104 First, Second and Third Floors of A1, A2, A3 101, A4 of Shajing Street, Shajing Street, Baoan District, Shenzhen City, Guangdong Province | Patentee before: SHENZHEN TATFOOK TECHNOLOGY Co.,Ltd.