CN111695627A - Road condition detection method and device, electronic equipment and readable storage medium - Google Patents

Road condition detection method and device, electronic equipment and readable storage medium

Info

Publication number
CN111695627A
CN111695627A (application CN202010530074.1A)
Authority
CN
China
Prior art keywords
road condition
optical flow
video frame
flow density
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010530074.1A
Other languages
Chinese (zh)
Inventor
阳勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010530074.1A
Publication of CN111695627A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3453Special cost functions, i.e. other than distance or default speed limit of road segments
    • G01C21/3492Special cost functions, i.e. other than distance or default speed limit of road segments employing speed data or traffic data, e.g. real-time or historical
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The application relates to the technical field of vehicle navigation, and discloses a road condition detection method and device, an electronic device, and a readable storage medium, wherein the road condition detection method includes: receiving a road condition video from a video acquisition device; extracting a plurality of video frame images from the road condition video, and acquiring a plurality of optical flow density images based on the plurality of video frame images, the optical flow density images representing the dynamic information formed by the movement of each pixel point in the video frame images relative to an object; and classifying the video frame images and the optical flow density images respectively, and determining a road condition result based on them. The road condition detection method provided by the application does not rely solely on traffic flow speed and can therefore obtain more accurate road condition results.

Description

Road condition detection method and device, electronic equipment and readable storage medium
Technical Field
The application relates to the technical field of vehicle navigation, in particular to a road condition detection method, a road condition detection device, electronic equipment and a readable storage medium.
Background
In map services, it is usually necessary to determine road condition information, such as whether a road is congested, so as to plan a reasonable navigation route for the user and to help a city construct traffic early warnings and schedule its traffic system.
Currently, road condition information is usually analyzed by collecting GPS (Global Positioning System) positioning point information of vehicles on a road, calculating the real-time speed of vehicles on each road section, and determining the congestion condition of a road section by combining the speeds of multiple vehicles on the same section. This approach depends heavily on traffic flow speed, and the road condition results may be inaccurate when the calculated traffic flow speed fluctuates.
Disclosure of Invention
The purpose of the present application is at least to detect road conditions more accurately; to this end, the following technical solutions are proposed:
in a first aspect, a road condition detection method is provided, including:
receiving a road condition video from video acquisition equipment;
extracting a plurality of video frame images from the road condition video, and acquiring a plurality of optical flow density images based on the plurality of video frame images;
the optical flow density image is used for representing dynamic information formed by the movement of each pixel point in the video frame image relative to an object;
and classifying the video frame images and the optical flow density images respectively, and determining a road condition result based on the video frame images and the optical flow density images.
In an optional embodiment of the first aspect, before receiving the video of the road condition from the video capturing device, the method further includes:
acquiring positioning information of a vehicle, and determining driving information of the vehicle based on the acquired positioning information; the driving information includes at least one of a real-time position and a driving speed of the vehicle;
if the driving information of the vehicle meets the preset condition, sending a video acquisition instruction to video acquisition equipment;
receiving the road condition video from the video acquisition equipment, including:
and receiving the road condition video sent by the video acquisition equipment in response to the video acquisition instruction.
In an alternative embodiment of the first aspect, acquiring a plurality of optical flow density images based on a plurality of video frame images comprises:
respectively acquiring the motion speed and the motion direction of each pixel in any two adjacent video frame images in the plurality of video frame images;
and determining the optical flow density image corresponding to the two-frame video frame image based on the motion speed and the motion direction of each pixel in the two adjacent frame video frame images.
In an optional embodiment of the first aspect, the classifying the plurality of video frame images and the plurality of optical flow density images respectively, and determining the road condition result based on the video frame images and the optical flow density images, includes:
acquiring an image feature sequence based on a plurality of video frame images, and acquiring an optical flow density feature sequence based on a plurality of optical flow density images;
classifying the image feature sequence to obtain a first classification probability;
classifying the optical flow density feature sequence to obtain a second classification probability;
determining a road condition result based on the first classification probability and the second classification probability; the road condition result comprises any one of smooth, slow and congested road conditions.
In an alternative embodiment of the first aspect, acquiring a sequence of image features based on a plurality of video frame images and acquiring a sequence of optical flow density features based on a plurality of optical flow density images comprises:
extracting a first image feature of each video frame image in a plurality of video frame images;
sequentially splicing the first image features based on the time sequence of the video frame images to obtain an image feature sequence;
extracting a second image feature of each optical flow density image in the plurality of optical flow density images;
and sequentially splicing the plurality of second image features based on the time sequence of the plurality of optical flow density images to obtain an optical flow density feature sequence.
In an alternative embodiment of the first aspect, extracting the first image feature of each of the plurality of video frame images comprises:
for each video frame image of the plurality of video frame images, inputting the video frame image into a convolutional neural network;
and taking the input features of the classification layer of the convolutional neural network as first image features.
In an optional embodiment of the first aspect, classifying the image feature sequence to obtain a first classification probability and classifying the optical flow density feature sequence to obtain a second classification probability includes:
inputting the image feature sequence into a first classification model to obtain a corresponding first classification probability;
and inputting the optical flow density feature sequence into a second classification model to obtain a corresponding second classification probability.
In an optional embodiment of the first aspect, determining the road condition result based on the first classification probability and the second classification probability includes:
determining road condition probability based on the first classification probability and the second classification probability;
and determining the range of the numerical interval in which the road condition probability is positioned, and determining the road condition result corresponding to the range of the numerical interval.
In an optional embodiment of the first aspect, classifying the image feature sequence to obtain a first classification probability and classifying the optical flow density feature sequence to obtain a second classification probability includes:
inputting the image characteristic sequence into a first classification model to obtain first classification probabilities corresponding to a plurality of candidate road conditions respectively;
and inputting the optical flow density characteristic sequence into a second classification model to obtain second classification probabilities corresponding to a plurality of candidate road conditions respectively.
In an optional embodiment of the first aspect, determining the road condition result based on the first classification probability and the second classification probability includes:
determining road condition probabilities respectively corresponding to the candidate road conditions based on first classification probabilities respectively corresponding to the candidate road conditions and second classification probabilities respectively corresponding to the candidate road conditions;
and taking the candidate road condition corresponding to the maximum road condition probability as a road condition result.
In a second aspect, a traffic detection device is provided, which includes:
the acquisition module is used for receiving the road condition video from the video acquisition equipment;
the extraction module is used for extracting a plurality of video frame images from the road condition video and acquiring a plurality of optical flow density images based on the plurality of video frame images;
the optical flow density image is used for representing dynamic information formed by the movement of each pixel point in the video frame image relative to an object;
and the classification module is used for classifying the video frame images and the optical flow density images respectively and determining a road condition result based on the video frame images and the optical flow density images.
In an optional embodiment of the second aspect, the traffic condition detecting device further includes a sending module, configured to:
acquiring positioning information of a vehicle, and determining driving information of the vehicle based on the acquired positioning information; the driving information includes at least one of a real-time position and a driving speed of the vehicle;
if the driving information of the vehicle meets the preset condition, sending a video acquisition instruction to video acquisition equipment;
the acquisition module is used for receiving the road condition video from the video acquisition equipment:
and receiving the road condition video sent by the video acquisition equipment in response to the video acquisition instruction.
In an alternative embodiment of the second aspect, the extraction module, when acquiring the plurality of optical flow density images based on the plurality of video frame images, is configured to:
respectively acquiring the motion speed and the motion direction of each pixel in any two adjacent video frame images in the plurality of video frame images;
and determining the optical flow density image corresponding to the two-frame video frame image based on the motion speed and the motion direction of each pixel in the two adjacent frame video frame images.
In an optional embodiment of the second aspect, the classification module, when classifying the plurality of video frame images and the plurality of optical flow density images respectively and determining the road condition result based on the video frame images and the optical flow density images, is configured to:
acquiring an image feature sequence based on a plurality of video frame images, and acquiring an optical flow density feature sequence based on a plurality of optical flow density images;
classifying the image feature sequence to obtain a first classification probability;
classifying the optical flow density feature sequence to obtain a second classification probability;
determining a road condition result based on the first classification probability and the second classification probability; the road condition result comprises any one of smooth, slow and congested road conditions.
In an alternative embodiment of the second aspect, the classification module, when acquiring the sequence of image features based on the plurality of video frame images and acquiring the sequence of optical-flow density features based on the plurality of optical-flow density images, is configured to:
extracting a first image feature of each video frame image in a plurality of video frame images;
sequentially splicing the first image features based on the time sequence of the video frame images to obtain an image feature sequence;
extracting a second image feature of each optical flow density image in the plurality of optical flow density images;
and sequentially splicing the plurality of second image features based on the time sequence of the plurality of optical flow density images to obtain an optical flow density feature sequence.
In an alternative embodiment of the second aspect, the classification module, when extracting the first image feature of each of the plurality of video frame images, is configured to:
for each video frame image of the plurality of video frame images, inputting the video frame image into a convolutional neural network;
and taking the input features of the classification layer of the convolutional neural network as first image features.
In an optional embodiment of the second aspect, the classification module, when classifying the image feature sequence to obtain a first classification probability and classifying the optical flow density feature sequence to obtain a second classification probability, is configured to:
inputting the image feature sequence into a first classification model to obtain a corresponding first classification probability;
and inputting the optical flow density feature sequence into a second classification model to obtain a corresponding second classification probability.
In an optional embodiment of the second aspect, the classification module, when determining the road condition result based on the first classification probability and the second classification probability, is configured to:
determining road condition probability based on the first classification probability and the second classification probability;
and determining the range of the numerical interval in which the road condition probability is positioned, and determining the road condition result corresponding to the range of the numerical interval.
In an optional embodiment of the second aspect, the classification module, when classifying the image feature sequence to obtain a first classification probability and classifying the optical flow density feature sequence to obtain a second classification probability, is configured to:
inputting the image characteristic sequence into a first classification model to obtain first classification probabilities corresponding to a plurality of candidate road conditions respectively;
and inputting the optical flow density characteristic sequence into a second classification model to obtain second classification probabilities corresponding to a plurality of candidate road conditions respectively.
In an optional embodiment of the second aspect, the classification module, when determining the road condition result based on the first classification probability and the second classification probability, is configured to:
determining road condition probabilities respectively corresponding to the candidate road conditions based on first classification probabilities respectively corresponding to the candidate road conditions and second classification probabilities respectively corresponding to the candidate road conditions;
and taking the candidate road condition corresponding to the maximum road condition probability as a road condition result.
In a third aspect, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the road condition detection method shown in the first aspect of the present application is implemented.
In a fourth aspect, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the road condition detection method according to the first aspect of the present application.
The technical solutions provided by the present application bring the following beneficial effects:
the road condition result is judged by combining the road condition video and the optical flow density image acquired from the road condition video, and the road condition video starts from the visual perception of human vision and can capture the information of the crowdedness degree of vehicles, the pedestrian bunching and the like in the visual field range; the light stream density image starts from dynamic information expressed by relative motion, captures information such as driving speed, relative speed between vehicles and the like, does not depend on the speed of the traffic stream, and can obtain more accurate road condition results.
Further, when it is determined that the real-time position or driving speed of the vehicle meets the preset condition, that is, when the vehicle may be at a location where an abnormality may occur, in an abnormal traffic state, or in a traffic condition requiring vigilance, a video acquisition instruction is sent to the video acquisition device; the video acquisition device is thus instructed to capture road condition video only when needed, which effectively saves resources.
Furthermore, the first classification model and the second classification model are adopted for classification, so that video information can be selectively memorized, the generalization capability is stronger, and the classification effect is more accurate.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is an application environment diagram of a road condition detection method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a road condition detection method according to an embodiment of the present application;
FIG. 3 is a schematic illustration of an optical flow density image provided in an example of the present application;
FIG. 4 is a schematic illustration of an optical flow density image provided in an example of the present application;
fig. 5 is a schematic flow chart of a road condition detection method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a video frame image and an optical flow density image provided in an example of an embodiment of the present application;
fig. 7 is a schematic flow chart of a road condition detection method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a scheme for extracting features using a convolutional neural network according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a scheme for determining a traffic probability in an example provided in the present application;
fig. 10 is a schematic diagram of a scheme for determining a traffic probability in an example provided in the present application;
FIG. 11 is a schematic flow chart of a method for detecting road conditions according to an example provided herein;
FIG. 12 is a schematic flow chart diagram illustrating a method for detecting road conditions according to an example provided herein;
fig. 13 is a schematic structural diagram of a road condition detecting device according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a road condition detecting device according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device for road condition detection according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any combination of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In map services, real-time traffic road condition information is a basic function: it helps users learn about road congestion, plan travel routes, and arrange activity plans reasonably, and it can help a city construct traffic early warnings and schedule the urban traffic system. Accurate road conditions enable better ETA (Estimated Time of Arrival) services and path planning, saving urban road resources and user time.
The common real-time road condition detection methods include two types:
the first method is to calculate the real-time speed of the vehicle on each road section by collecting the GPS positioning point information of the vehicle on the road, and to fuse the speeds of a plurality of vehicles on the same road section, and to determine the road section congestion condition by the speed. The method has the advantages of simplicity, directness and basic method for producing real-time road conditions by most map manufacturers at present;
the method does not allow errors caused by calculation of the traffic flow speed, has strong dependence on driving behaviors of users, is too complex in links related to production flow, cannot solve the problem of road conditions under specific conditions, and can cause error release of the road conditions due to calculation fluctuation of the traffic flow speed;
in the second method, a traffic management department deploys sensors or coils on the road, and the traffic flow on the road is sensed by the sensors to determine the traffic jam condition. The method has the advantages that all vehicles passing through the specified road point can be collected, and the information is sufficient;
the method has large engineering quantity and narrow coverage area to roads, mainly focuses on expressways and urban expressways, is difficult to relate to other roads, and hardly has reference and utilization to rich visual information.
The application provides a road condition detection method, a road condition detection device, an electronic device and a computer-readable storage medium, which aim to solve the technical problems in the prior art.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
The road condition detection method provided by the present application can be applied to the application environment shown in fig. 1. The vehicle is provided with a video acquisition device 101 capable of capturing road condition video; the video acquisition device 101 is in network communication with a server 102, and the server 102 may also be in network communication with a vehicle-mounted terminal 103 on the vehicle. Specifically, the server 102 receives a road condition video from the video acquisition device, extracts a plurality of video frame images from the road condition video, and acquires a plurality of optical flow density images based on the plurality of video frame images; it classifies the video frame images and the optical flow density images, determines a road condition result based on them, and transmits the road condition result to the vehicle-mounted terminal 103.
Those skilled in the art will understand that the "terminal" used herein may be a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), an MID (Mobile Internet Device), etc.; a "server" may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
It can be understood that fig. 1 shows the application scenario of one example and does not limit the application scenario of the road condition detection method of the present application. In the scenario above, the server performs road condition detection; in other application scenarios, the vehicle-mounted terminal may communicate with the video acquisition device over a network and perform road condition detection itself, or the vehicle-mounted terminal may itself have a video capture function, capturing the road condition video and detecting the road condition on its own.
A possible implementation manner is provided in the embodiment of the present application, and as shown in fig. 2, a road condition detection method is provided, which is described by taking the application of the method to the server in fig. 1 as an example, and may include the following steps:
step S201, receiving a road condition video from a video capture device.
The road condition may refer to the technical condition of the existing roadbed, road surface, structures, and ancillary facilities, and may also include damage to the road surface and ancillary facilities.
Specifically, the video acquisition device may automatically capture road condition video at preset intervals and send it to the server; it may also capture road condition video upon receiving a video acquisition instruction from the server, or upon receiving a video acquisition instruction from the vehicle-mounted terminal or the user.
Step S202, a plurality of video frame images are extracted from the road condition video, and a plurality of optical flow density images are obtained based on the plurality of video frame images.
Here, optical flow is a concept in motion detection that describes the apparent motion of an observed target, surface, or edge caused by motion relative to an observer. The optical flow density image is used to represent the dynamic information formed by the movement of each pixel point in the video frame image relative to an object; for example, it may represent dynamic information relative to the vehicle.
Specifically, after a plurality of video frame images are extracted from the road condition video, corresponding optical flow density images can be acquired according to adjacent video frame images of every two frames.
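As a concrete illustration of this extraction step, a minimal sketch using OpenCV follows; the fixed sampling stride is an assumption for illustration, since the application does not specify how frames are sampled.

```python
import cv2

def extract_frames(video_path, stride=5):
    """Sample every `stride`-th frame from a road condition video."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```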
In a specific implementation, for any pixel point, the optical flow densities of two adjacent frames in the road condition video form a corresponding point, and the optical flow densities accumulated over a plurality of images can represent the motion trajectory of the target object.
As shown in fig. 3, fig. 3 is a schematic view illustrating visualization of optical flow density in an example, taking a person crossing a road as an example, when a vehicle is stationary, relative motion between the vehicle and the person is motion of the person, and a motion trajectory direction of the person is a trajectory direction formed by connecting a plurality of points shown in the figure, i.e., a direction transversely crossing the road.
As shown in fig. 4, fig. 4 is a schematic view of visualization of optical flow density in another example, taking a vehicle driving on a road as an example, the relative motion between the vehicle and the road is along a motion track of the vehicle, which is a track direction formed by connecting a plurality of points shown in the figure, that is, the vehicle driving along the road.
It can be understood that fig. 3 and fig. 4 do not show the motion trajectories of all the pixel points, and only show the motion trajectories of a small number of the pixel points for illustration.
S203, classifying the video frame images and the optical flow density images respectively, and determining a road condition result based on the video frame images and the optical flow density images.
Specifically, a trained classification model may be obtained, and the video frame image and the optical flow density image are classified by the trained classification model, where a specific classification process will be described in detail below.
The road condition result may include any one of smooth, slow and congested road conditions.
Specifically, the road condition corresponding to the road condition video can be determined by classifying the video frame images; the road condition corresponding to the optical flow density image can be determined by classifying the optical flow density image; the road condition corresponding to the road condition video and the road condition corresponding to the optical flow density image are combined, so that the actual road condition information can be more accurately obtained.
The road condition detection method provided by the embodiment judges the road condition result by combining the road condition video and the optical flow density image acquired from the road condition video, and the road condition video starts from the visual perception of human vision, and can capture the information of the crowdedness degree of vehicles, the pedestrian bunching and the like in the visual field range; the light stream density image starts from dynamic information expressed by relative motion, captures information such as driving speed, relative speed between vehicles and the like, does not depend on the speed of the traffic stream, and can obtain more accurate road condition results.
A possible implementation manner is provided in the embodiment of the present application, before the step S201 of receiving the road condition video from the video capture device, the method may further include:
(1) acquiring positioning information of a vehicle, and determining driving information of the vehicle based on the acquired positioning information; the travel information includes at least one of a real-time position and a travel speed of the vehicle.
Specifically, the positioning information of the vehicle may be acquired through the GPS, so as to determine the real-time position of the vehicle, for example, which street the vehicle is located in real time.
The driving speed of the vehicle can also be calculated through the positioning information respectively corresponding to different times, for example, the real-time position of the vehicle at the previous time is obtained, the real-time position of the vehicle at the current time is obtained, the actual route distance between the two positions and the time difference between the previous time and the current time are obtained according to the prestored map information, and the driving speed of the vehicle is calculated.
(2) And if the driving information of the vehicle meets the preset condition, sending a video acquisition instruction to the video acquisition equipment.
The step S201 of receiving the road condition video from the video capturing device may include:
and receiving the road condition video sent by the video acquisition equipment in response to the video acquisition instruction.
The preset condition may be that the vehicle is possibly in an abnormal position, an abnormal traffic state, or a traffic condition requiring vigilance. Meeting the preset condition may include the real-time position of the vehicle matching a preset specific position, for example the vehicle being at an intersection or within 100 meters of a traffic light; the preset condition may also be that the vehicle speed is abnormal, for example lower than a set speed such as 20 km/h, or that the vehicle has stopped abnormally.
As shown in fig. 5, when it is determined that the real-time position or driving speed of the vehicle meets the preset condition, that is, when the vehicle may be at a location where an abnormality may occur, in an abnormal traffic state, or in a traffic condition requiring vigilance, a video acquisition instruction is sent to the video acquisition device. The video acquisition device is thus instructed to capture road condition video only when needed, which effectively saves resources.
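A minimal sketch of this trigger logic follows, assuming the 20 km/h and 100 m thresholds from the examples above. The application derives route distance from prestored map information, so the haversine great-circle distance used here is a simplifying stand-in, and all function and field names are hypothetical.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate distance between two GPS fixes, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371000.0 * 2.0 * asin(sqrt(a))

def driving_speed_kmh(prev_fix, cur_fix):
    """Estimate speed from two (lat, lon, unix_seconds) positioning fixes."""
    dist = haversine_m(prev_fix[0], prev_fix[1], cur_fix[0], cur_fix[1])
    dt = cur_fix[2] - prev_fix[2]
    return dist / dt * 3.6 if dt > 0 else 0.0

def meets_preset_condition(speed_kmh, dist_to_light_m):
    """Abnormally low speed, or within 100 m of a traffic light."""
    return speed_kmh < 20.0 or dist_to_light_m < 100.0

# if meets_preset_condition(...): send a video acquisition instruction
```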
The specific process of acquiring the optical flow density image will be described in further detail below.
A possible implementation manner is provided in the embodiment of the present application, and the acquiring a plurality of optical flow density images based on a plurality of video frame images in step S202 may include:
(1) respectively acquiring the motion speed and the motion direction of each pixel in any two adjacent video frame images in the plurality of video frame images;
(2) and determining an optical flow density image corresponding to the two-frame video frame image based on the motion speed and the motion direction of each pixel in the two adjacent frame video frame images.
Specifically, the road condition video contains rich dynamic information, and the most obvious of the information is relative motion. In order to extract dynamic information, an optical flow density algorithm is adopted to extract the optical flow density between adjacent video frames, and the specific principle is as follows:
Calculate the motion speed and motion direction of each pixel in two adjacent frames. Suppose the position of point A in frame t is (x_1, y_1) and its position in frame t+1 is (x_2, y_2); the motion of point A can then be simply expressed as:

(u_x, v_y) = (x_2, y_2) - (x_1, y_1)    (1)
The optical flow densities formed over a plurality of images are accumulated to represent the motion trajectory of the target object. As shown in fig. 6, the left side is a video frame image obtained from the road condition video (only one video frame image is shown on the left of fig. 6). Every two adjacent video frame images yield a corresponding optical flow density image, and for each pixel point, the optical flow densities of two adjacent frames form a corresponding point. Accumulating the optical flow density images, that is, connecting the points corresponding to the optical flow densities formed by a number of consecutive adjacent video frame images, produces an image such as the accumulation of multiple optical flow density images shown on the right of fig. 6.
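The application does not name a specific optical flow algorithm, so the following sketch uses OpenCV's Farneback dense optical flow as one plausible choice; it computes the per-pixel motion of equation (1) between two adjacent frames and renders speed and direction as an image (direction as hue, speed as brightness, a common visualization convention rather than anything mandated by the text).

```python
import cv2
import numpy as np

def optical_flow_density_image(frame_t, frame_t1):
    """Per-pixel motion between two adjacent frames, rendered as an image."""
    gray_t = cv2.cvtColor(frame_t, cv2.COLOR_BGR2GRAY)
    gray_t1 = cv2.cvtColor(frame_t1, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (u_x, v_y), the displacement of each pixel, as in equation (1)
    flow = cv2.calcOpticalFlowFarneback(gray_t, gray_t1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed, direction = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros_like(frame_t)
    hsv[..., 0] = direction * 180 / np.pi / 2                          # motion direction
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(speed, None, 0, 255, cv2.NORM_MINMAX)  # motion speed
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```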
The above embodiments illustrate specific processes for acquiring optical flow density images, and the following describes specific processes for determining road condition results in further detail with reference to the accompanying drawings and embodiments.
As shown in fig. 7, the step S203 of classifying the plurality of video frame images and the plurality of optical flow density images respectively and determining the road condition result based on the video frame images and the optical flow density images may include:
in step S310, an image feature sequence is acquired based on the plurality of video frame images, and an optical flow density feature sequence is acquired based on the plurality of optical flow density images.
For each video frame image in the plurality of video frame images, first image characteristics corresponding to the video frame image can be obtained, and then an image characteristic sequence is obtained according to the plurality of first image characteristics; for each optical flow density image in the plurality of optical flow density images, a second image feature corresponding to the optical flow density image may be acquired, and then an optical flow density feature sequence may be derived from the plurality of second image features.
Specifically, the step S310 of acquiring an image feature sequence based on a plurality of video frame images and acquiring an optical flow density feature sequence based on a plurality of optical flow density images may include:
(1) a first image feature of each of a plurality of video frame images is extracted.
Specifically, extracting a first image feature of each of the plurality of video frame images may include:
a. for each video frame image in the plurality of video frame images, inputting the video frame image into a convolutional neural network;
b. and taking the input features of the classification layer of the convolutional neural network as first image features.
Specifically, a Convolutional Neural Network (CNN) is a class of feedforward neural networks that involve convolution computations and have a deep structure, and it is one of the representative algorithms of deep learning.
In this embodiment, an Inception v3 network among trained convolutional neural networks may be used. As shown in fig. 8, the Inception v3 structure is illustrated with its middle part omitted. The fully connected classification layer and the softmax layer of the Inception v3 network may be removed; that is, a video frame image is input into the Inception v3 network, and the input of the fully connected classification layer, namely the output of the layer preceding it (a dropout layer), is taken as the first image feature (a sketch of this extraction step is given after this list).
In other embodiments, other convolutional neural networks may be used for feature extraction.
For the training of the convolutional neural network, a sample image and standard image characteristics can be obtained, the sample image can be input into the initial convolutional neural network to obtain corresponding sample characteristics, a loss function is calculated based on the sample characteristics and the standard image characteristics, and parameters of the convolutional neural network are adjusted based on the loss function to obtain the trained convolutional neural network.
(2) And sequentially splicing the first image features based on the time sequence of the video frame images to obtain an image feature sequence.
For example, the first image features obtained from the plurality of video frame images are, in order, x_1, x_2, x_3, ..., x_n, where each x_i may be a vector or a matrix; the image feature sequence is then [x_1, x_2, x_3, ..., x_n].
(3) A second image feature of each of the plurality of optical flow density images is extracted.
Specifically, the second image feature of the extracted optical flow density image is the same as the first image feature of the extracted video frame image, and is not described herein again.
(4) And sequentially splicing the plurality of second image features based on the time sequence of the plurality of optical flow density images to obtain an optical flow density feature sequence.
Specifically, the process of stitching the second image features is the same as the process of stitching the first image features, and is not described herein again.
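As an illustration of steps (1) through (4), the sketch below uses the pretrained InceptionV3 from Keras with its classification head removed, standing in for the feature extraction described above; `pooling="avg"`, the ImageNet weights, and the preprocessing choices are assumptions, and the same extractor is applied to both the video frame images and the optical flow density images.

```python
import cv2
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

# InceptionV3 without its fully connected classification layer and softmax;
# the pooled output plays the role of the input to the removed classifier.
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def feature_sequence(images):
    """One feature vector per image, stacked in time order: [x_1, ..., x_n]."""
    batch = np.stack([
        preprocess_input(
            cv2.cvtColor(cv2.resize(img, (299, 299)), cv2.COLOR_BGR2RGB)
            .astype("float32"))
        for img in images
    ])
    return backbone.predict(batch, verbose=0)   # shape (n, 2048)

# image_feature_sequence = feature_sequence(video_frames)
# flow_feature_sequence  = feature_sequence(flow_density_images)
```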
Step S320, classifying the image feature sequence to obtain a first classification probability, and classifying the optical flow density feature sequence to obtain a second classification probability.
Specifically, the image feature sequence may be input into a trained classification model to obtain the corresponding first classification probability; this may be a single first classification probability, or a plurality of first classification probabilities respectively corresponding to a plurality of candidate road conditions.
Similarly, the optical flow density feature sequence may be input into a trained classification model to obtain the corresponding second classification probability, which may likewise be a single second classification probability or a plurality of second classification probabilities respectively corresponding to the plurality of candidate road conditions.
And step S330, determining a road condition result based on the first classification probability and the second classification probability.
The road condition result comprises any one of smooth traffic, slow traffic and congestion.
Specifically, if a single first classification probability and a single second classification probability are obtained, a final road condition probability is determined based on the two, and the corresponding road condition result is determined from the numerical value of that road condition probability.
Specifically, if the first classification probability and the second classification probability corresponding to the multiple candidate road condition results are obtained, the road condition probability corresponding to each candidate road condition can be determined, so that the final road condition result is determined.
The candidate road conditions may include, for example, smooth, basically smooth, slow, congested, and severely congested.
The specific process of classifying and determining the road condition result may include the following two cases:
the first condition is as follows:
In step S320, classifying the image feature sequence to obtain a first classification probability and classifying the optical flow density feature sequence to obtain a second classification probability may include:
(1) inputting the image feature sequence into a first classification model to obtain a corresponding first classification probability;
(2) and inputting the optical flow density feature sequence into a second classification model to obtain a corresponding second classification probability.
Specifically, the first classification model and the second classification model may have a many-to-one structure, where the input is a sequence and the output is a single value.
The first classification model and the second classification model may be LSTM (Long Short-Term Memory) models; an LSTM can selectively memorize video information, and its stronger generalization ability makes the classification more accurate.
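A minimal many-to-one LSTM classifier consistent with this description, sketched in Keras; the hidden size and the single sigmoid output are assumptions.

```python
from tensorflow.keras import layers, models

def build_many_to_one_lstm(seq_len, feat_dim=2048):
    """Sequence of feature vectors in, single classification probability out."""
    model = models.Sequential([
        layers.LSTM(128, input_shape=(seq_len, feat_dim)),  # keeps only the final state
        layers.Dense(1, activation="sigmoid"),              # one probability per sequence
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```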
At this time, the determining the road condition result based on the first classification probability and the second classification probability in step S330 may include:
(1) determining road condition probability based on the first classification probability and the second classification probability;
(2) and determining the range of the numerical interval in which the road condition probability is positioned, and determining the road condition result corresponding to the range of the numerical interval.
Specifically, the weighted sum of the first classification probability and the second classification probability may be calculated to obtain the road condition probability, for example, the road condition probability is calculated by referring to the following formula:
P = a * P_Appear + (1 - a) * P_flow    (2)

where P_Appear denotes the first classification probability, P_flow denotes the second classification probability, P denotes the road condition probability, and a denotes the preset weight coefficient for the first classification probability.
Specifically, a plurality of numerical value intervals may be preset, and each numerical value interval corresponds to one candidate road condition.
For example, P > 0.60 corresponds to congested, P < 0.48 corresponds to smooth, and 0.48 <= P <= 0.60 corresponds to slow.
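Putting equation (2) and the interval mapping together, a small sketch; the weight a = 0.5 is an assumed value, while the thresholds are those of the example above.

```python
def road_condition_result(p_appear, p_flow, a=0.5):
    """Fuse the two probabilities per equation (2) and map to a result."""
    p = a * p_appear + (1 - a) * p_flow
    if p > 0.60:
        return "congested"
    if p < 0.48:
        return "smooth"
    return "slow"
```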
As shown in fig. 9, the image feature sequence and the optical flow density feature sequence are respectively input into the first classification model and the second classification model, a first classification probability and a second classification probability are respectively obtained, and then the final road condition probability is determined based on the first classification probability and the second classification probability.
Case two:
In step S320, classifying the image feature sequence to obtain a first classification probability and classifying the optical flow density feature sequence to obtain a second classification probability may include:
(1) inputting the image characteristic sequence into a first classification model to obtain first classification probabilities corresponding to a plurality of candidate road conditions respectively;
(2) and inputting the optical flow density characteristic sequence into a second classification model to obtain second classification probabilities corresponding to a plurality of candidate road conditions respectively.
Specifically, the first classification model and the second classification model may have a many-to-many structure, where the input is a sequence and the output is a plurality of values.
At this time, the determining the road condition result based on the first classification probability and the second classification probability in step S330 may include:
(1) determining road condition probabilities respectively corresponding to the candidate road conditions based on first classification probabilities respectively corresponding to the candidate road conditions and second classification probabilities respectively corresponding to the candidate road conditions;
(2) and taking the candidate road condition corresponding to the maximum road condition probability as a road condition result.
Specifically, the weighted sum of the first classification probability and the second classification probability corresponding to each candidate road condition may also be calculated with reference to formula (2) in case one to obtain the road condition probability, and then the candidate road condition corresponding to the maximum road condition probability is used as the road condition result.
For example, the corresponding probabilities of smooth traffic, slow traffic and congestion are finally obtained to be 0.2, 0.6 and 0.2 respectively, and then the final road condition result is slow traffic.
As shown in fig. 10, the image feature sequence and the optical flow density feature sequence are respectively input into the first classification model and the second classification model, to obtain a first classification probability and a second classification probability corresponding to a plurality of candidate road conditions, i.e., the first classification probability and the second classification probability of the candidate road condition 1, the candidate road condition 2, and the candidate road condition 3, respectively, and then the road condition probabilities corresponding to the plurality of candidate road conditions are determined based on the first classification probability and the second classification probability.
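For case two, the per-candidate fusion and selection of the maximum might look like the following sketch; the candidate names and the weight a = 0.5 are assumptions following the examples above.

```python
import numpy as np

CANDIDATES = ["smooth", "slow", "congested"]

def fuse_candidates(p_appear, p_flow, a=0.5):
    """Weighted per-candidate fusion (equation (2)), then take the argmax."""
    p = a * np.asarray(p_appear) + (1 - a) * np.asarray(p_flow)
    return CANDIDATES[int(np.argmax(p))]

# fuse_candidates([0.3, 0.5, 0.2], [0.1, 0.7, 0.2]) -> "slow"
```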
For the training of the first classification model and the second classification model, a plurality of sample images may be obtained, each with a corresponding road condition result. The image feature sequences corresponding to the sample images are extracted and input into an initial classification model, a loss function is calculated based on the classification results and the preset road condition results, and the parameters of the initial classification model are adjusted based on the loss function to obtain the first classification model or the second classification model.
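A sketch of this training loop for the many-to-many case, assuming categorical cross-entropy as the loss function and an Adam optimizer; the epoch count and batch size are illustrative.

```python
from tensorflow.keras import layers, models

def train_classifier(x_train, y_train, seq_len, feat_dim, n_classes=3):
    """Fit a sequence classifier on labeled feature sequences; the loss
    compares predicted probabilities with the preset road condition labels."""
    model = models.Sequential([
        layers.LSTM(128, input_shape=(seq_len, feat_dim)),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(x_train, y_train, epochs=10, batch_size=16)
    return model
```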
In order to better understand the above road condition detecting method, as shown in fig. 11, an example of the road condition detecting method of the present invention is described in detail as follows:
in one example, the road condition detection method provided by the present application may include the following steps:
step S1101 of acquiring positioning information of a vehicle, and determining travel information of the vehicle based on the acquired positioning information;
step S1102, determining whether the driving information of the vehicle meets a preset condition, if so, executing step S1103; if not, executing step S1101;
step S1103, sending a video acquisition instruction to video acquisition equipment;
step S1104, obtaining a road condition video returned by the video acquisition equipment based on the video acquisition instruction;
step S1105, extracting a plurality of video frame images from the road condition video; acquiring a plurality of optical flow density images based on a plurality of video frame images;
step S1106, acquiring an image feature sequence based on a plurality of video frame images; acquiring an optical flow density feature sequence based on a plurality of optical flow density images;
step S1107, classifying the image feature sequence to obtain a first classification probability; classifying the optical flow density feature sequence to obtain a second classification probability;
step S1108, determining road condition probability based on the first classification probability and the second classification probability;
and S1109, determining a road condition result based on the road condition probability.
In order to better understand the above road condition detecting method, as shown in fig. 12, an example of the road condition detecting method of the present invention is described in detail as follows:
in one example, the road condition detection method provided by the present application may include the following steps:
s1, acquiring video frame images of the road condition video, extracting optical flow characteristics, and acquiring optical flow density images based on the video frame images;
s2, extracting the characteristics of the video frame image to obtain an image characteristic sequence;
s3, extracting the features of the optical flow density image to obtain an optical flow density feature sequence;
s4, inputting the image feature sequence into LSTM for classification;
s5, inputting the optical flow density feature sequence into the LSTM for classification;
and S6, determining a road condition result based on the two classification results.
According to the above road condition detection method, the road condition result is determined by combining the road condition video and the optical flow density images acquired from it. The road condition video, starting from human visual perception, can capture information such as the degree of vehicle crowding and pedestrian clustering within the field of view; the optical flow density image, starting from the dynamic information expressed by relative motion, captures information such as driving speed and the relative speed between vehicles. The method does not depend on traffic flow speed and can obtain more accurate road condition results.
Further, when the real-time position or the running speed of the vehicle is judged to meet the preset conditions, namely the vehicle is possibly in a position needing abnormity, an abnormal traffic state or a traffic condition needing vigilance, a video acquisition instruction is sent to the video acquisition equipment, and the video acquisition equipment is only indicated to acquire the road condition video when needed, so that resources can be effectively saved.
Furthermore, the first classification model and the second classification model are adopted for classification, so that video information can be selectively memorized, the generalization capability is stronger, and the classification effect is more accurate.
An embodiment of the present application provides a possible implementation. As shown in fig. 13, a road condition detection device 130 is provided, and the road condition detection device 130 may include an acquisition module 1301, an extraction module 1302 and a classification module 1303, wherein,
an obtaining module 1301, configured to receive a road condition video from a video acquisition device;
an extracting module 1302, configured to extract a plurality of video frame images from a road condition video, and obtain a plurality of optical flow density images based on the plurality of video frame images;
the optical flow density image is used for representing dynamic information formed by the movement of each pixel point in the video frame image relative to an object;
and the classification module 1303 is configured to classify the plurality of video frame images and the plurality of optical flow density images, and determine a road condition result based on the video frame images and the optical flow density images.
An embodiment of the present application provides a possible implementation. As shown in fig. 14, the road condition detection device 130 further includes a sending module 1300 configured to:
acquiring positioning information of a vehicle, and determining driving information of the vehicle based on the acquired positioning information; the driving information includes at least one of a real-time position and a driving speed of the vehicle;
if the driving information of the vehicle meets the preset condition, sending a video acquisition instruction to video acquisition equipment;
the acquisition module, when receiving the road condition video from the video acquisition device, is configured to:
and receiving the road condition video sent by the video acquisition equipment in response to the video acquisition instruction.
An embodiment of the present application provides a possible implementation, in which the extraction module 1302, when acquiring a plurality of optical flow density images based on the plurality of video frame images, is configured to:
respectively acquiring the motion speed and the motion direction of each pixel in any two adjacent video frame images in the plurality of video frame images;
and determining the optical flow density image corresponding to the two adjacent video frame images based on the motion speed and the motion direction of each pixel in the two adjacent video frame images (a sketch of one realization follows).
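The application does not prescribe a particular optical flow algorithm; one common realization of this step, sketched here with OpenCV's Farneback dense flow, derives the per-pixel motion speed and motion direction between two adjacent frames and renders them as an image (direction as hue, speed as brightness):

import cv2
import numpy as np

def optical_flow_density_image(prev_frame, next_frame):
    # Dense optical flow between two adjacent video frame images (BGR inputs).
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed, direction = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # per-pixel magnitude and angle
    hsv = np.zeros_like(prev_frame)
    hsv[..., 0] = direction * 180 / np.pi / 2                       # motion direction -> hue
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(speed, None, 0, 255, cv2.NORM_MINMAX)  # motion speed -> brightness
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)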
An embodiment of the present application provides a possible implementation, in which the classification module 1303, when classifying the plurality of video frame images and the plurality of optical flow density images and determining the road condition result based on them, is configured to:
acquiring an image feature sequence based on a plurality of video frame images, and acquiring an optical flow density feature sequence based on a plurality of optical flow density images;
classifying the image feature sequence to obtain a first classification probability;
classifying the optical flow density feature sequence to obtain a second classification probability;
determining a road condition result based on the first classification probability and the second classification probability; the road condition result comprises any one of smooth, slow and congested road conditions.
An embodiment of the present application provides a possible implementation, in which the classification module 1303, when acquiring the image feature sequence based on the plurality of video frame images and acquiring the optical flow density feature sequence based on the plurality of optical flow density images, is configured to:
extracting a first image feature of each video frame image in a plurality of video frame images;
sequentially splicing the first image features based on the time sequence of the video frame images to obtain an image feature sequence;
extracting a second image feature of each optical flow density image in the plurality of optical flow density images;
and sequentially splicing the plurality of second image features based on the time sequence of the plurality of optical flow density images to obtain an optical flow density feature sequence.
An embodiment of the present application provides a possible implementation, in which the classification module 1303, when extracting the first image feature of each of the plurality of video frame images, is configured to:
for each video frame image in the plurality of video frame images, inputting the video frame image into a convolutional neural network;
and taking the input features of the classification layer of the convolutional neural network as first image features.
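For instance, with a torchvision ResNet-50 (an assumption — the application only requires "a convolutional neural network"; torchvision >= 0.13 is assumed for the weights API), replacing the final fully connected layer with an identity yields exactly the features that are input to the classification layer, and running the frames through it in temporal order gives the image feature sequence:

import torch
import torchvision.models as models

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()    # keep the features that feed the classification layer
resnet.eval()

@torch.no_grad()
def image_feature_sequence(frames):
    # frames: tensor (time, 3, H, W), already preprocessed/normalized and
    # ordered by the time sequence of the video frame images.
    return resnet(frames)          # (time, 2048): per-frame features spliced in temporal order

The optical flow density feature sequence can be built the same way from the optical flow density images, using a second network to extract the second image features.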
An embodiment of the present application provides a possible implementation, in which the classification module 1303, when classifying the image feature sequence to obtain a first classification probability and classifying the optical flow density feature sequence to obtain a second classification probability, is configured to:
inputting the image feature sequence into a first classification model to obtain a corresponding first classification probability;
and inputting the optical flow density feature sequence into a second classification model to obtain a corresponding second classification probability.
An embodiment of the present application provides a possible implementation, in which the classification module 1303, when determining the road condition result based on the first classification probability and the second classification probability, is configured to:
determining road condition probability based on the first classification probability and the second classification probability;
and determining the numerical interval in which the road condition probability falls, and determining the road condition result corresponding to that interval (a sketch of one such mapping follows).
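A minimal sketch of this fusion, assuming a simple weighted average and illustrative interval bounds (neither the weight nor the bounds are specified in this application):

def road_condition_from_probability(p_image, p_flow, w=0.5):
    # Fuse the first and second classification probabilities into a road
    # condition probability, then map it onto numerical intervals.
    p = w * p_image + (1 - w) * p_flow
    if p < 0.33:
        return "smooth"
    if p < 0.66:
        return "slow"
    return "congested"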
An embodiment of the present application provides another possible implementation, in which the classification module 1303, when classifying the image feature sequence to obtain a first classification probability and classifying the optical flow density feature sequence to obtain a second classification probability, is configured to:
inputting the image characteristic sequence into a first classification model to obtain first classification probabilities corresponding to a plurality of candidate road conditions respectively;
and inputting the optical flow density characteristic sequence into a second classification model to obtain second classification probabilities corresponding to a plurality of candidate road conditions respectively.
An embodiment of the present application provides a possible implementation, in which the classification module 1303, when determining the road condition result based on the first classification probability and the second classification probability, is configured to:
determining road condition probabilities respectively corresponding to the candidate road conditions based on first classification probabilities respectively corresponding to the candidate road conditions and second classification probabilities respectively corresponding to the candidate road conditions;
and taking the candidate road condition corresponding to the maximum road condition probability as a road condition result.
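Sketched below under the assumption of equal weighting (the application does not fix the weights): each classifier supplies one probability per candidate road condition, and the candidate with the largest fused probability is taken as the road condition result.

def road_condition_argmax(p_image, p_flow):
    # p_image, p_flow: dicts mapping each candidate road condition to its
    # first/second classification probability.
    fused = {c: 0.5 * p_image[c] + 0.5 * p_flow[c] for c in p_image}
    return max(fused, key=fused.get)    # candidate with the maximum road condition probability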
The above road condition detection device determines the road condition result by combining the road condition video with the optical flow density images acquired from it. The road condition video starts from human visual perception and captures information within the field of view such as the crowdedness of vehicles and the clustering of pedestrians; the optical flow density images start from the dynamic information expressed by relative motion and capture information such as driving speed and the relative speed between vehicles. Because the device does not depend on traffic flow speed alone, a more accurate road condition result can be obtained.
Further, a video acquisition instruction is sent to the video acquisition device only when the real-time position or driving speed of the vehicle meets the preset condition, that is, when the vehicle may be at a location where anomalies occur, in an abnormal traffic state, or in a traffic condition requiring vigilance. Since the video acquisition device is instructed to record the road condition video only when needed, resources are saved effectively.
Furthermore, because the first classification model and the second classification model can selectively memorize video information, they generalize better and classify more accurately.
The road condition detection device of the embodiments of the present disclosure can execute the road condition detection method of the embodiments of the present disclosure, and the implementation principles are similar. The actions executed by each module of the road condition detection device correspond to the steps of the road condition detection method of the embodiments of the present disclosure; for a detailed functional description of each module, reference may be made to the description of the corresponding method shown above, which is not repeated here.
Based on the same principle as the method shown in the embodiments of the present disclosure, an embodiment of the present disclosure also provides an electronic device, which may include but is not limited to: a processor and a memory, the memory storing computer operation instructions and the processor executing the road condition detection method shown in the embodiments by calling those instructions. Compared with the prior art, the road condition detection method in the present application does not depend on traffic flow speed, so a more accurate road condition result can be obtained.
In an alternative embodiment, there is provided an electronic device, as shown in fig. 15, the electronic device 4000 shown in fig. 15 including: a processor 4001 and a memory 4003. Processor 4001 is coupled to memory 4003, such as via bus 4002. Optionally, the electronic device 4000 may further comprise a transceiver 4004. In addition, the transceiver 4004 is not limited to one in practical applications, and the structure of the electronic device 4000 is not limited to the embodiment of the present application.
The Processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that performs a computing function, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 4002 may include a path that carries information between the aforementioned components. The bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 4002 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 15, but this is not intended to represent only one bus or one type of bus.
The Memory 4003 may be a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 4003 is used for storing application codes for executing the scheme of the present application, and the execution is controlled by the processor 4001. Processor 4001 is configured to execute application code stored in memory 4003 to implement what is shown in the foregoing method embodiments.
Among them, electronic devices include but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 15 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The present application provides a computer-readable storage medium, on which a computer program is stored, which, when running on a computer, enables the computer to execute the corresponding content in the foregoing method embodiments. Compared with the prior art, the road condition detection method in the application does not depend on traffic flow speed, and more accurate road condition results can be obtained.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and not necessarily in sequence; they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a module does not limit the module itself; for example, the acquisition module may also be described as a "module that receives the road condition video from the video acquisition device".
The foregoing description is only of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.

Claims (13)

1. A road condition detection method is characterized by comprising the following steps:
receiving a road condition video from video acquisition equipment;
extracting a plurality of video frame images from the road condition video, and acquiring a plurality of optical flow density images based on the plurality of video frame images;
the optical flow density image is used for representing dynamic information formed by the movement of each pixel point in the video frame image relative to an object;
and classifying the video frame images and the optical flow density images respectively, and determining a road condition result based on the video frame images and the optical flow density images.
2. The traffic condition detection method according to claim 1, wherein before receiving the traffic condition video from the video capture device, the method further comprises:
acquiring positioning information of a vehicle, and determining driving information of the vehicle based on the acquired positioning information; the driving information includes at least one of a real-time position and a driving speed of the vehicle;
if the driving information of the vehicle meets a preset condition, sending a video acquisition instruction to the video acquisition equipment;
the receiving of the road condition video from the video acquisition device includes:
and receiving the road condition video sent by the video acquisition equipment in response to the video acquisition instruction.
3. The method as claimed in claim 1, wherein the obtaining of the plurality of optical flow density images based on the plurality of video frame images comprises:
respectively acquiring the motion speed and the motion direction of each pixel in any two adjacent video frame images in the plurality of video frame images;
and determining the optical flow density image corresponding to the two adjacent video frame images based on the motion speed and the motion direction of each pixel in the two adjacent video frame images.
4. The method as claimed in claim 1, wherein the classifying the video frame images and the optical flow density images respectively, and determining the traffic condition result based on the video frame images and the optical flow density images comprises:
acquiring an image feature sequence based on a plurality of video frame images, and acquiring an optical flow density feature sequence based on a plurality of optical flow density images;
classifying the image feature sequence to obtain a first classification probability;
classifying the optical flow density characteristic sequence to obtain a second classification probability;
determining the road condition result based on the first classification probability and the second classification probability; the road condition result comprises any one of smooth traffic, slow traffic and congestion.
5. The method as claimed in claim 4, wherein the step of obtaining the image feature sequence based on the plurality of video frame images and obtaining the optical flow density feature sequence based on the plurality of optical flow density images comprises:
extracting a first image feature of each video frame image in the plurality of video frame images, and sequentially splicing the plurality of first image features based on the time sequence of the plurality of video frame images to obtain an image feature sequence;
and extracting second image features of each optical flow density image in the plurality of optical flow density images, and sequentially splicing the plurality of second image features based on the time sequence of the plurality of optical flow density images to obtain the optical flow density feature sequence.
6. The method as claimed in claim 5, wherein the extracting the first image feature of each of the plurality of video frame images comprises:
for each video frame image of a plurality of video frame images, inputting the video frame image into a convolutional neural network;
and taking the input features of the classification layer of the convolutional neural network as the first image features.
7. The road condition detection method according to claim 4, wherein the classifying of the image feature sequence to obtain a first classification probability and the classifying of the optical flow density feature sequence to obtain a second classification probability comprise:
inputting the image feature sequence into a first classification model to obtain a corresponding first classification probability;
and inputting the optical flow density feature sequence into a second classification model to obtain a corresponding second classification probability.
8. The method as claimed in claim 7, wherein the determining the traffic status result based on the first classification probability and the second classification probability comprises:
determining a road condition probability based on the first classification probability and the second classification probability;
and determining a numerical range in which the road condition probability is positioned, and determining a road condition result corresponding to the numerical range.
9. The road condition detection method according to claim 4, wherein the classifying of the image feature sequence to obtain a first classification probability and the classifying of the optical flow density feature sequence to obtain a second classification probability comprise:
inputting the image feature sequence into a first classification model to obtain first classification probabilities corresponding to a plurality of candidate road conditions respectively;
and inputting the optical flow density characteristic sequence into a second classification model to obtain second classification probabilities corresponding to a plurality of candidate road conditions respectively.
10. The method as claimed in claim 9, wherein the determining the road condition result based on the first classification probability and the second classification probability comprises:
determining road condition probabilities respectively corresponding to the candidate road conditions based on first classification probabilities respectively corresponding to the candidate road conditions and second classification probabilities respectively corresponding to the candidate road conditions;
and taking the candidate road condition corresponding to the maximum road condition probability as the road condition result.
11. A road condition detection device, comprising:
the acquisition module is used for receiving the road condition video from the video acquisition equipment;
the extraction module is used for extracting a plurality of video frame images from the road condition video and acquiring a plurality of optical flow density images based on the plurality of video frame images;
the optical flow density image is used for representing dynamic information formed by the movement of each pixel point in the video frame image relative to an object;
and the classification module is used for classifying the video frame images and the optical flow density images respectively and determining a road condition result based on the video frame images and the optical flow density images.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the road condition detecting method according to any one of claims 1 to 10 when executing the program.
13. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for detecting a road condition as claimed in any one of claims 1 to 10 is implemented.
CN202010530074.1A 2020-06-11 2020-06-11 Road condition detection method and device, electronic equipment and readable storage medium Pending CN111695627A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010530074.1A CN111695627A (en) 2020-06-11 2020-06-11 Road condition detection method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010530074.1A CN111695627A (en) 2020-06-11 2020-06-11 Road condition detection method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN111695627A true CN111695627A (en) 2020-09-22

Family

ID=72480344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010530074.1A Pending CN111695627A (en) 2020-06-11 2020-06-11 Road condition detection method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111695627A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180211117A1 (en) * 2016-12-20 2018-07-26 Jayant Ratti On-demand artificial intelligence and roadway stewardship system
WO2018153211A1 (en) * 2017-02-22 2018-08-30 中兴通讯股份有限公司 Method and apparatus for obtaining traffic condition information, and computer storage medium
CN109753984A (en) * 2017-11-07 2019-05-14 北京京东尚科信息技术有限公司 Video classification methods, device and computer readable storage medium
CN109147331A (en) * 2018-10-11 2019-01-04 青岛大学 A kind of congestion in road condition detection method based on computer vision
CN110889328A (en) * 2019-10-21 2020-03-17 大唐软件技术股份有限公司 Method, device, electronic equipment and storage medium for detecting road traffic condition

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
啊扑 (A Pu): "Faster R-CNN: A Complete Analysis" (in Chinese), pages 251 - 260, Retrieved from the Internet <URL:https://blog.csdn.net/qq_42450404/article/details/88804798> *
张磊 (Zhang Lei): "From Structure and Principle to Implementation: A Complete Analysis of Faster R-CNN (original)" (in Chinese), pages 1 - 10, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/32702387> *
麻花团子 (Mahua Tuanzi): "The R-CNN Family and Related Branches (Main Ideas)" (in Chinese), pages 1 - 10, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/101941541> *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022133946A (en) * 2021-03-02 2022-09-14 トヨタ自動車株式会社 Server, data collection system, program, and data collection method
JP7359175B2 (en) 2021-03-02 2023-10-11 トヨタ自動車株式会社 Server, data collection system, program and data collection method
CN113066285A (en) * 2021-03-15 2021-07-02 北京百度网讯科技有限公司 Road condition information determining method and device, electronic equipment and storage medium
WO2022205632A1 (en) * 2021-03-31 2022-10-06 北京市商汤科技开发有限公司 Target detection method and apparatus, device and storage medium

Similar Documents

Publication Publication Date Title
KR102189262B1 (en) Apparatus and method for collecting traffic information using edge computing
CN111626208B (en) Method and device for detecting small objects
CN111091708B (en) Vehicle track prediction method and device
CN112417953B (en) Road condition detection and map data updating method, device, system and equipment
CN112560999B (en) Target detection model training method and device, electronic equipment and storage medium
CN111695627A (en) Road condition detection method and device, electronic equipment and readable storage medium
CN111862605B (en) Road condition detection method and device, electronic equipment and readable storage medium
KR20180046798A (en) Method and apparatus for real time traffic information provision
CN112307978B (en) Target detection method and device, electronic equipment and readable storage medium
CN113950611B (en) Method and data processing system for predicting road properties
Giyenko et al. Application of convolutional neural networks for visibility estimation of CCTV images
CN110852258A (en) Object detection method, device, equipment and storage medium
KR20190043396A (en) Method and system for generating and providing road weather information by using image data of roads
CN115470884A (en) Platform for perception system development of an autopilot system
JP2019074849A (en) Drive data analyzer
Humayun et al. Smart traffic management system for metropolitan cities of kingdom using cutting edge technologies
Toyungyernsub et al. Dynamics-aware spatiotemporal occupancy prediction in urban environments
CN112732860B (en) Road extraction method, device, readable storage medium and equipment
JP2013239087A (en) Information processing system and moving bodies
CN113160272A (en) Target tracking method and device, electronic equipment and storage medium
CN112288702A (en) Road image detection method based on Internet of vehicles
Das et al. Why slammed the brakes on? auto-annotating driving behaviors from adaptive causal modeling
Rachman et al. Camera Self-Calibration: Deep Learning from Driving Scenes
WO2019228654A1 (en) Method for training a prediction system and system for sequence prediction
CN115909126A (en) Target detection method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination