CN112446344B - Road condition prompting method and device, electronic equipment and computer readable storage medium - Google Patents

Road condition prompting method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN112446344B
CN112446344B (application CN202011422608.5A)
Authority
CN
China
Prior art keywords
road
slippery
image
wet
road image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011422608.5A
Other languages
Chinese (zh)
Other versions
CN112446344A (en)
Inventor
颜立峰
俞益洲
吴子丰
李一鸣
乔昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Original Assignee
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shenrui Bolian Technology Co Ltd, Shenzhen Deepwise Bolian Technology Co Ltd filed Critical Beijing Shenrui Bolian Technology Co Ltd
Priority to CN202011422608.5A priority Critical patent/CN112446344B/en
Publication of CN112446344A publication Critical patent/CN112446344A/en
Application granted granted Critical
Publication of CN112446344B publication Critical patent/CN112446344B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18Status alarms
    • G08B21/24Reminder alarms, e.g. anti-loss alarms

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a road condition prompting method and device, an electronic device, and a computer-readable storage medium. The method includes: acquiring a road image sequence that comprises multiple frames of first road images; sequentially splicing every N adjacent frames of first road images among the multiple frames of first road images to obtain M second road images; acquiring a road slippery feature map corresponding to each second road image; acquiring the slippery degree of a slippery area included in the road and the position information of the slippery area according to each road slippery feature map and the first road image with the same time sequence as that feature map; and generating slippery alarm prompt information according to the slippery degree and the position information of the slippery area. By applying the method provided by the application, slippery alarm prompt information can be generated according to the slippery degree of a slippery area of the road and the position information of that area, thereby helping to safeguard the safe passage of visually impaired people.

Description

Road condition prompting method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a road condition prompting method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of society, the economy, and science and technology, the safe travel of visually impaired people has received increasing attention, and guide devices that assist visually impaired people in traveling have been widely adopted.
At present, most guide devices can only identify waterlogged areas in the road, and cannot identify the slippery obstacles that hinder the passage of visually impaired people. However, surfaces that generally pose no obstacle in dry weather, such as smooth tile pavements, moss-covered pavements, smooth metal pavements, and smooth uphill and downhill slopes, tend to become wet and slippery obstacles that seriously affect passage and can cause visually impaired people to slip and fall in rainy weather. Therefore, how to identify slippery obstacles in the road and ensure the safe passage of visually impaired people has become an urgent problem to be solved.
Disclosure of Invention
To address the above problem, the present application provides the following technical solutions:
a road condition prompting method comprises the following steps:
acquiring a plurality of frames of first road images;
sequentially splicing every adjacent N frames of first road images in the multiple frames of first road images to obtain M second road images; the time sequence of the second road image is as follows: the time sequence of the last frame of first road image in the N frames of first road images; wherein N is an integer greater than 2, and M is an integer greater than 0;
acquiring a road wet-skid characteristic map corresponding to each second road image;
acquiring the slippery degree of a slippery area and the position information of the slippery area included in the road according to each road slippery characteristic diagram and a first road image with the same time sequence as that of the slippery characteristic diagram; the time sequence of the road slippery characteristic diagram is the time sequence of the second road image corresponding to the road slippery characteristic diagram;
and generating wet and slippery alarm prompt information according to the wet and slippery degree and the position information.
Optionally, the splicing, in the method, each adjacent N frames of the first road images in the multiple frames of the first road images in sequence to obtain M second road images includes:
and sequentially splicing the adjacent N frames of first road images into a second road image with the channel number of 3N and the length and width dimensions same as those of the first road image, wherein the channel number of the first road image is 3.
Optionally, the method for acquiring the road wet-skid characteristic map corresponding to each second road image includes:
inputting each second road image into a pre-trained first neural network model to obtain the road wet-skid characteristic diagram corresponding to the second road image;
the training sample of the first neural network model comprises a training image and a labeled image corresponding to the training image, wherein the labeled image corresponding to the training image is an image which is obtained by labeling a road slippery area in the training image as a first identifier in advance.
Optionally, the acquiring, according to each road slippery characteristic map and the first road image with the same time sequence as the time sequence of the slippery characteristic map, the slippery degree of the slippery area included in the road and the position information of the slippery area includes:
splicing the road slippery characteristic diagram and a target first road image aiming at each road slippery characteristic diagram to obtain a third road image, wherein the target first road image of the road slippery characteristic diagram is a first road image with the same time sequence as the road slippery characteristic diagram;
inputting each third road image into a pre-trained long-short term memory unit (LSTM) network to obtain a first wet-skid degree of the wet-skid area corresponding to each third road image and first position information of the wet-skid area;
and determining the slippery degree of the slippery area and the position information of the slippery area included in the road according to the first slippery degree and the first position information corresponding to each third road image.
In the above method, optionally, the LSTM network includes a plurality of sequentially connected LSTM units; the input of an LSTM unit includes the third road image and a first feature output by the previous LSTM unit, and the output of the LSTM unit includes the first wet-skid degree, the first position information, and a second feature.
Optionally, in the foregoing method, the recursive update formula of the LSTM unit is:
ws(t), pos(t), h(t), c(t) = LSTM(IMG(t), f_wet(t), h(t-1), c(t-1))
where f_wet(t) is the third road image of the t-th frame, IMG(t) is the first road image of the t-th frame, ws(t) is the first wet-skid degree corresponding to the third road image of the t-th frame, and pos(t) is the first position information corresponding to the third road image of the t-th frame; h(t) is the hidden layer feature output by the t-th LSTM unit, c(t) is the long memory hidden layer feature output by the t-th LSTM unit, h(t-1) is the hidden layer feature output by the (t-1)-th LSTM unit, and c(t-1) is the long memory hidden layer feature output by the (t-1)-th LSTM unit.
In the method, optionally, the position information of the slippery region includes distance information between the slippery region and an image acquisition device, and polar angle information formed between the slippery region and the image acquisition device, where the image acquisition device is a device for acquiring the road image sequence.
An apparatus for data processing, comprising:
a first acquisition unit, configured to acquire a road image sequence, wherein the road image sequence comprises a plurality of frames of first road images;
the splicing unit is used for sequentially splicing every adjacent N frames of first road images to obtain M second road images; the time sequence of the second road image is as follows: the time sequence of the last frame of first road image in the N frames of first road images; wherein N is an integer greater than 2, and M is an integer greater than 0;
the second acquisition unit is used for acquiring a road wet-skid characteristic map corresponding to each second road image;
a third obtaining unit, configured to obtain a wet-skid degree of a wet-skid area included in the road and position information of the wet-skid area according to each road wet-skid feature map and a first road image having a time sequence identical to that of the wet-skid feature map; the time sequence of the road slippery characteristic diagram is the time sequence of the second road image corresponding to the road slippery characteristic diagram;
and the fourth acquisition unit is used for generating wet and slippery alarm prompt information according to the wet and slippery degree and the position information.
An electronic device, comprising: a processor and a memory for storing a program; the processor is used for running the program to realize the road condition prompting method.
A computer-readable storage medium, wherein instructions are stored in the computer-readable storage medium, and when the instructions are executed on a computer, the instructions cause the computer to execute the method for prompting a road condition.
The method and the device comprise the following steps: acquiring multiframe first road images, sequentially splicing every adjacent N frames of first road images in the multiframe first road images to obtain M second road images, acquiring a road slippery characteristic diagram corresponding to each second road image, acquiring the slippery degree and the position information of a slippery area included by a road according to each road slippery characteristic diagram and the first road images with the same time sequence as that of the slippery characteristic diagram, and generating slippery alarm prompt information according to the slippery degree and the position information of the slippery area of the road. According to the method, the second road image is obtained by splicing the N adjacent frames of the first road images, so that the spliced second road image is integrated with the image information of the first road images in different time sequences and has rich image information. The slippery characteristic diagram is obtained according to the second road image, so that the obtained slippery characteristic diagram has rich slippery characteristic information, and the obtained slippery degree and position information of the slippery area have higher accuracy according to the road slippery characteristic diagram and the first road image with the same time sequence as that of the slippery characteristic diagram. And finally, generating wet and slippery alarm prompt information according to the wet and slippery degree of the wet and slippery area of the road and the position information of the wet and slippery area, thereby ensuring the safe passing of the visually impaired people.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a road condition prompting method provided in an embodiment of the present application;
fig. 2 is a flowchart of a method for acquiring a slippery degree of a slippery area of a road and location information of the slippery area according to an embodiment of the present application;
FIG. 3 is a block diagram of an LSTM network provided by an embodiment of the present application;
FIG. 4 is a block diagram of an LSTM unit provided in an embodiment of the present application;
FIG. 5 is a diagram of a model architecture for data processing according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the present application may be executed by an image processing device with a road slippery area identification function, and may be applied to scenarios in which visually impaired people are assisted with road navigation.
Fig. 1 shows a road condition prompting method according to an embodiment of the present application, which may include the following steps:
s101, acquiring a plurality of frames of first road images.
The multiple frames of first road images may be a continuous road image sequence or a discontinuous road image sequence, and the discontinuous road image sequence may be, for example, a sequence formed by extracting a first frame of first road image, a third frame of first road image, and a fifth frame of first road image from a road image video. The discontinuous road image sequence is advantageous for reducing the data processing amount. For convenience of description, the following description is made in the form of a road image sequence.
The road image sequence is an image sequence obtained by shooting a road area within a preset shooting time, the road image sequence comprises a plurality of frames of first road images, and the first road images are the images of the road area shot at one shooting moment within the shooting time.
The road image sequence can be captured by a video capture function configured by the image processing device.
Because the road image sequence contains characteristic information about the relative motion between the slippery area and the image processing device, acquiring the sequence makes it possible to obtain the position information between the slippery area and the image processing device more accurately than would be possible from a single static road image.
The slippery area refers to an area in the road area, where the slippery degree of the road surface is greater than a threshold value, and the slippery degree may be determined according to the illumination characteristic and the texture characteristic of the road surface, for example, the slippery degree may be determined according to the illumination intensity of the road surface and the smoothness of the texture.
S102, sequentially splicing every adjacent N frames of first road images in the multiple frames of first road images to obtain M second road images.
Specifically, M = T - N + 1, where T is the total number of first road images included in the road image sequence, T > N, N is an integer greater than 2, and M is an integer greater than 0.
For example, suppose the road image sequence includes 8 frames of first road images and N equals 4. The first second road image is obtained by stitching the 1st to 4th frames of first road images, the second second road image is obtained by stitching the 2nd to 5th frames, the third second road image is obtained by stitching the 3rd to 6th frames, and so on; the last second road image is obtained by stitching the 5th to 8th frames, so the total number of stitched second road images is 8 - 4 + 1 = 5.
In this embodiment, the time sequence of any one second road image is the time sequence of the last first road image in the N first road images. For example, the second road image is obtained by stitching the 1 st frame first road image to the 4 th frame first road image, and the time sequence of the second road image is the time sequence of the 4 th frame first road image.
In this embodiment, the first road image is an RGB image, and the number of channels of the image is 3 color channels of red (R), green (G), and blue (B). In this step, N adjacent frames of first road images are spliced to obtain a second road image, specifically: and splicing N frames of adjacent first road images with the channel number of 3 into a second road image with the channel number of 3N and the same length and width dimensions as the first road image. In this embodiment, N is optionally set to 5.
In this embodiment, the second road image is obtained by stitching N adjacent frames of first road images, so the second road image carries rich road image information: it includes not only the image information of each of the N frames of first road images, but also the dynamic feature information of the video segment formed by those N adjacent frames. Feature information of a slippery area, such as its contour and its degree of slipperiness, can therefore be obtained more accurately from the second road image.
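Purely as an illustration of the splicing in S102, the following sketch stacks every N adjacent RGB frames along the channel dimension; the function name, tensor shapes, and the use of PyTorch are assumptions of this example and are not prescribed by the patent.

```python
import torch

def splice_adjacent_frames(first_road_images: torch.Tensor, n: int = 5) -> torch.Tensor:
    """Splice every N adjacent 3-channel first road images into 3N-channel second road images.

    first_road_images: tensor of shape (T, 3, H, W) holding the road image sequence.
    Returns a tensor of shape (M, 3N, H, W) with M = T - N + 1; the m-th second road
    image carries the time sequence of the last (newest) frame in its window.
    """
    t_total = first_road_images.shape[0]
    assert t_total >= n, "the sequence must contain at least N frames"
    windows = [
        first_road_images[m:m + n].reshape(-1, *first_road_images.shape[2:])  # (3N, H, W)
        for m in range(t_total - n + 1)
    ]
    return torch.stack(windows)

# 8 frames with N = 4 give 8 - 4 + 1 = 5 second road images of 12 channels each.
frames = torch.rand(8, 3, 480, 640)
print(splice_adjacent_frames(frames, n=4).shape)  # torch.Size([5, 12, 480, 640])
```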
S103, acquiring a road wet and slippery characteristic diagram corresponding to each second road image.
In this embodiment, each first position point of the road slippery feature map, identified by its coordinates in the length direction and the width direction, carries a slippery score for the corresponding position point in the second road image. A first position point is any position point of the road slippery feature map. The road slippery feature map may be an image whose number of channels is 1 and whose length and width dimensions are the same as those of the second road image. A position point of the second road image corresponds to a first position point of the road slippery feature map when its length coordinate and width coordinate are the same as those of the first position point.
The specific implementation manner of this step may be: and inputting each second road image into a pre-trained first neural network model to obtain a road wet-skid characteristic diagram corresponding to the second road image.
The training sample of the first neural network model comprises a training image and a labeled image corresponding to the training image, wherein the labeled image corresponding to the training image is an image which is obtained by labeling a road slippery area in the training image as a first identifier in advance.
The first neural network model is obtained by training a preset basic model by adopting a training sample, and the basic model can use ResNet, DenseNet, EfficientNet and the like. The trained first neural network model can output a road wet-skid characteristic diagram corresponding to the second road image according to the input second road image.
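A minimal sketch of what the pre-trained first neural network model might look like is given below. The patent only requires a model trained from a base such as ResNet, DenseNet, or EfficientNet on training images whose road slippery areas are labeled with the first identifier, so every layer size, name, and the use of PyTorch here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WetSkidFeatureNet(nn.Module):
    """Illustrative stand-in for the first neural network model: it maps a 3N-channel
    second road image to a 1-channel road wet-skid feature map with the same length
    and width. Layer widths are arbitrary; a real model would start from a ResNet,
    DenseNet, or EfficientNet base model."""

    def __init__(self, in_channels: int = 15):  # 3N channels with N = 5
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),  # one wet-skid score per position point
        )

    def forward(self, second_road_image: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps the per-position wet-skid score in [0, 1].
        return torch.sigmoid(self.body(second_road_image))

# Training would pair each training image with an annotated mask in which the road
# slippery area is labeled with the first identifier (e.g. 1) and the rest with 0.
model = WetSkidFeatureNet()
f_wet = model(torch.rand(2, 15, 480, 640))  # -> (2, 1, 480, 640)
```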
And S104, acquiring the slippery degree of a slippery area and the position information of the slippery area included in the road according to the slippery characteristic diagram of each road and the first road image with the same time sequence as that of the slippery characteristic diagram.
In the present embodiment, the wet-skid degree of a slippery region reflects the degree of water accumulation and the smoothness of that region. The wet-skid degree can be expressed by a wet-skid score, where a higher score indicates a more severe degree of slipperiness. The position information of the slippery area includes distance information between the slippery area and the image acquisition device, and polar angle information formed between the slippery area and the image acquisition device.
The time sequence of a road slippery feature map is the time sequence of the second road image corresponding to it, that is, the second road image from which the feature map was obtained. Acquiring the slippery degree and position information of the slippery area included in the road according to each road slippery feature map and the first road image with the same time sequence as that feature map specifically means acquiring them according to the M road slippery feature maps of different time sequences and the M first road images of different time sequences.
The road slippery characteristic map is an image formed by slippery scores corresponding to each position point in a road area, and the position information between the slippery area and the image processing equipment can be acquired more accurately by combining a plurality of road slippery characteristic maps and a plurality of first road images.
The specific implementation of this embodiment may refer to the flowchart shown in fig. 2.
And S105, generating wet and slippery alarm prompt information according to the wet and slippery degree and the position information of the wet and slippery area.
For example, when the slippery degree of the slippery area is greater than a preset value, the slippery alarm prompt information including the position information of the slippery area is generated, so that the visually impaired people can acquire the information of the slippery area, and the visually impaired people can be assisted to safely pass through the area.
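The alarm step of S105 can be illustrated with a small helper. The threshold value, message wording, and function name below are assumptions of this sketch, since the patent only specifies that the prompt information carries the wet-skid degree and the position information of the slippery area.

```python
from typing import Optional

def generate_wet_skid_alert(wet_skid_degree: float,
                            distance_m: float,
                            polar_angle_deg: float,
                            threshold: float = 0.5) -> Optional[str]:
    """Return an alert string when the wet-skid degree exceeds the preset value,
    otherwise None. Threshold and wording are illustrative only."""
    if wet_skid_degree <= threshold:
        return None
    return (f"Caution: slippery area about {distance_m:.1f} m away, "
            f"{polar_angle_deg:.0f} degrees from straight ahead "
            f"(wet-skid degree {wet_skid_degree:.2f}).")

print(generate_wet_skid_alert(0.8, 2.5, -15.0))
```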
The method and the device comprise the following steps: acquiring multiframe first road images, sequentially splicing every adjacent N frames of first road images in the multiframe first road images to obtain M second road images, acquiring a road slippery characteristic diagram corresponding to each second road image, acquiring the slippery degree and the position information of a slippery area included by a road according to each road slippery characteristic diagram and the first road images with the same time sequence as that of the slippery characteristic diagram, and generating slippery alarm prompt information according to the slippery degree and the position information of the slippery area of the road. According to the method, the second road image is obtained by splicing the N adjacent frames of the first road images, so that the spliced second road image is integrated with the image information of the first road images in different time sequences and has rich image information. The slippery characteristic diagram is obtained according to the second road image, so that the obtained slippery characteristic diagram has rich slippery characteristic information, and the obtained slippery degree and position information of the slippery area have higher accuracy according to the road slippery characteristic diagram and the first road image with the same time sequence as that of the slippery characteristic diagram. And finally, generating wet and slippery alarm prompt information according to the wet and slippery degree of the wet and slippery area of the road and the position information of the wet and slippery area, thereby ensuring the safe passing of the visually impaired people.
Fig. 2 is an implementation method for acquiring the slippery degree of the slippery area of the road and the location information of the slippery area in S104 of the above embodiment, which may include the following steps:
s201, splicing the road slippery characteristic graph and the target first road image according to each road slippery characteristic graph to obtain a third road image.
The target first road image of a road slippery feature map is the first road image with the same time sequence as that feature map. In this embodiment, the length and width of the road slippery feature map are the same as those of the first road image. For example, if the number of channels of the road slippery feature map is K and that of the target first road image is 3, the number of channels of the third road image obtained by stitching is K + 3, and its length and width are the same as those of the road slippery feature map.
In this embodiment, since the total number of the road slippery characteristic maps is M, the number of the third road images obtained by stitching is also M.
S202, inputting each third road image into a long-short term memory unit (LSTM) network trained in advance, and obtaining a first wet-skid degree of a wet-skid area corresponding to each third road image and first position information of the wet-skid area.
Referring to fig. 3, fig. 3 is a block diagram of an LSTM network, which includes a plurality of LSTM units connected in sequence, as shown in fig. 3.
The input of each LSTM unit includes a third road image and a first feature output by the previous LSTM unit, and the output includes a first wet-skid degree, first position information, and a second feature. The first feature input to an LSTM unit comprises the long memory hidden layer feature c(t-1) and the hidden layer feature h(t-1) output by the previous LSTM unit, and the second feature output by the LSTM unit comprises the long memory hidden layer feature c(t) and the hidden layer feature h(t) output by that unit, where t denotes the sequence number of the LSTM unit. The first wet-skid degree is the wet-skid degree of the wet-skid region output by the LSTM unit, and the first position information is the position information of the wet-skid region output by the LSTM unit. As shown in FIG. 3, f_wet(t) is the t-th road wet-skid feature map and IMG(t) is the target first road image of f_wet(t); f_wet(t) and IMG(t) are superposed to obtain a third road image, which is used as the input of an LSTM unit. Similarly, in the figure, f_wet(t-4) is the (t-4)-th road wet-skid feature map and IMG(t-4) is the target first road image of f_wet(t-4), and f_wet(t-3) is the (t-3)-th road wet-skid feature map and IMG(t-3) is the target first road image of f_wet(t-3).
When the total number of LSTM units is M, the third road image input by the LSTM unit has the same rank in the M third road images as the LSTM unit in the M LSTM units, for example, if the third road image is the 2 nd image in the M third road images, the rank in the M LSTM units of the LSTM unit receiving the third road image is also the 2 nd.
Among the M LSTM units included in the LSTM network, each LSTM unit has the same structure. Referring to fig. 4, fig. 4 is a structure diagram of an LSTM unit. Taking the t-th LSTM unit as an example, its input is the third road image obtained by splicing the road wet-skid feature map f_wet(t) with the target first road image IMG(t), together with the long memory hidden layer feature c(t-1) and the hidden layer feature h(t-1) output by the previous LSTM unit; its output is the first wet-skid degree ws(t) of the wet-skid region and the first position information pos(t). Here t is greater than 1; the input of the first LSTM unit is the first third road image.
In the LSTM unit, tanh is the hyperbolic tangent activation function and sigmoid is the sigmoid activation function, which gates how the data is updated. In this embodiment, the recursive update formula of the LSTM unit is:
ws(t), pos(t), h(t), c(t) = LSTM(IMG(t), f_wet(t), h(t-1), c(t-1))
where f_wet(t) is the third road image of the t-th frame, IMG(t) is the first road image of the t-th frame, ws(t) is the first wet-skid degree corresponding to the third road image of the t-th frame, pos(t) is the first position information corresponding to the third road image of the t-th frame, h(t) is the hidden layer feature output by the t-th LSTM unit, c(t) is the long memory hidden layer feature output by the t-th LSTM unit, h(t-1) is the hidden layer feature output by the (t-1)-th LSTM unit, and c(t-1) is the long memory hidden layer feature output by the (t-1)-th LSTM unit.
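To make the recursive update concrete, the following sketch wraps a standard LSTM cell with two output heads for ws(t) and pos(t). It assumes the third road image has already been reduced to a feature vector before entering the cell; the hidden sizes, head layout, and pooling assumption are illustrative and are not prescribed by the patent.

```python
import torch
import torch.nn as nn

class WetSkidLSTMUnit(nn.Module):
    """Sketch of one LSTM unit of Fig. 4: it consumes a feature vector derived from
    the t-th third road image together with h(t-1), c(t-1) from the previous unit,
    and emits ws(t), pos(t), h(t), c(t)."""

    def __init__(self, feature_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        self.cell = nn.LSTMCell(feature_dim, hidden_dim)
        self.ws_head = nn.Linear(hidden_dim, 1)   # first wet-skid degree ws(t)
        self.pos_head = nn.Linear(hidden_dim, 2)  # (distance, polar angle) for pos(t)

    def forward(self, third_road_feature, h_prev, c_prev):
        h, c = self.cell(third_road_feature, (h_prev, c_prev))
        ws = torch.sigmoid(self.ws_head(h)).squeeze(-1)
        pos = self.pos_head(h)
        return ws, pos, h, c

# Roll the unit over M third road images (features assumed to be pre-pooled to 256-d).
unit = WetSkidLSTMUnit()
h, c = torch.zeros(1, 128), torch.zeros(1, 128)
for t in range(5):
    feat = torch.rand(1, 256)  # stands in for the pooled t-th third road image
    ws_t, pos_t, h, c = unit(feat, h, c)
```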
S203, determining the wet and slippery degree and the position information of the wet and slippery area of the road according to the first wet and slippery degree and the first position information corresponding to each third road image.
For example, the average value of the first wet-skid degrees corresponding to each third road image may be used as the wet-skid degree of the wet-skid region, and one target position may be determined from the first position information corresponding to each third road image as the position information of the wet-skid region. The target position may be a central position of an area covered by all the first position information.
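A minimal aggregation helper corresponding to S203 is sketched below; averaging the degrees and taking the mean of the positions as a simple stand-in for the centre of the covered area follow the example strategy above, and the tensor types are assumptions of this sketch.

```python
import torch

def aggregate_wet_skid(ws_list, pos_list):
    """Combine the per-frame LSTM outputs into one result, as in S203: take the mean
    of the first wet-skid degrees as the wet-skid degree, and the mean of the first
    position information as an estimate of the centre of the covered area."""
    wet_skid_degree = torch.stack(ws_list).mean()
    position = torch.stack(pos_list).mean(dim=0)  # (distance, polar angle)
    return wet_skid_degree, position

degree, position = aggregate_wet_skid(
    [torch.tensor(0.70), torch.tensor(0.80), torch.tensor(0.75)],
    [torch.tensor([2.4, -14.0]), torch.tensor([2.6, -16.0]), torch.tensor([2.5, -15.0])],
)
print(degree, position)  # tensor(0.7500) tensor([  2.5000, -15.0000])
```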
According to the scheme provided by the application, the long-term and short-term memory unit LSTM network is a time sequence-based analysis and calculation network, and time sequence characteristics can be extracted from a plurality of images with different time sequences, so that the slippery degree of a slippery area of a road and the position information of the slippery area are more accurately acquired.
Fig. 5 is a model architecture diagram for data processing provided in an embodiment of the present application, which includes a backbone network, a splicing unit, and the LSTM network described in the foregoing embodiment. The backbone network is the network structure used in a deep learning algorithm to extract features from images, and comprises convolution layers, pooling layers, normalization functions, activation functions, and the like. The backbone network can be a currently mainstream neural network such as ResNet, DenseNet, or EfficientNet.
The input of the backbone network is the M second road images described in the foregoing embodiment, and the output is M road wet-skid feature maps, denoted f_wet(t), each with 1 channel and with the same length and width dimensions as the second road images.
The splicing unit is used for splicing the M road wet-skid feature maps with the M target first road images respectively, to obtain the M third road images described in the foregoing embodiment.
The input of the LSTM network is M third road images, and the output is the slippery degree and position information of the slippery area. In this embodiment, the low-dimensional features may be converted into the high-dimensional features through the LSTM network, so as to obtain the position information of the slippery region in the three-dimensional scene.
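The overall data flow of Fig. 5 (backbone, splicing unit, LSTM network) can be summarised in one function. The `backbone` and `lstm_unit` arguments below are stand-ins following the sketches above, and pooling each third road image to a fixed-length vector is an assumption made so a standard LSTM cell can consume it; none of these choices are mandated by the patent.

```python
import torch
import torch.nn.functional as F

def road_condition_pipeline(first_road_images, backbone, lstm_unit, n=5, hidden_dim=128):
    """Illustrative composition of the Fig. 5 architecture. `backbone` maps the stacked
    second road images (M, 3N, H, W) to wet-skid feature maps (M, 1, H, W), and
    `lstm_unit` follows the WetSkidLSTMUnit interface sketched above."""
    t_total = first_road_images.shape[0]
    # Splicing every N adjacent frames channel-wise yields the M second road images.
    second = torch.stack([
        first_road_images[m:m + n].reshape(-1, *first_road_images.shape[2:])
        for m in range(t_total - n + 1)
    ])
    f_wet = backbone(second)                    # M road wet-skid feature maps
    targets = first_road_images[n - 1:]         # target first road images (same time sequence)
    third = torch.cat([f_wet, targets], dim=1)  # M third road images, 1 + 3 channels
    h = torch.zeros(1, hidden_dim)
    c = torch.zeros(1, hidden_dim)
    outputs = []
    for t in range(third.shape[0]):
        # Pool each (4, H, W) third road image to a 4 * 8 * 8 = 256-d feature vector.
        feat = F.adaptive_avg_pool2d(third[t], (8, 8)).flatten().unsqueeze(0)
        ws_t, pos_t, h, c = lstm_unit(feat, h, c)
        outputs.append((ws_t, pos_t))
    return outputs
```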
Fig. 6 is a schematic structural diagram of a data processing apparatus 600 according to an embodiment of the present application, including:
a first acquiring unit 601 configured to acquire a plurality of frames of first road images;
the splicing unit 602 is configured to splice every adjacent N frames of first road images in multiple frames of first road images in sequence to obtain M second road images; the time sequence of the second road image is the time sequence of the last frame of first road image in the N frames of first road images; wherein N is an integer greater than 2, and M is an integer greater than 0;
a second obtaining unit 603, configured to obtain a road wet and slippery feature map corresponding to each second road image;
a third obtaining unit 604, configured to obtain a slippery degree of a slippery area included in each road and position information of the slippery area according to each road slippery characteristic map and a first road image with a time sequence identical to that of the slippery characteristic map; the time sequence of the road slippery characteristic diagram is the time sequence of the second road image corresponding to the road slippery characteristic diagram;
a generating unit 605, configured to generate wet and slippery alarm prompt information according to the wet and slippery degree and the position information.
In the above apparatus, optionally, the stitching unit 602 stitches every adjacent N frames of the multiple frames of the first road images in sequence, and the specific implementation manner of obtaining M second road images is as follows: and sequentially splicing the adjacent N frames of first road images into a second road image with the channel number of 3N and the length and width dimensions same as those of the first road image, wherein the channel number of the first road image is 3.
In the above apparatus, optionally, the specific implementation of the second obtaining unit 603 obtaining the road slippery characteristic map corresponding to each second road image is as follows: inputting each second road image into a pre-trained first neural network model to obtain the road wet-skid characteristic diagram corresponding to the second road image; the training sample of the first neural network model comprises a training image and a marked image corresponding to the training image, wherein the marked image corresponding to the training image is an image which marks a road slippery area in the training image as a first mark in advance.
The third obtaining unit 604 obtains the slippery degree of the slippery area included in the road and the position information of the slippery area according to each road slippery characteristic map and the first road image with the same time sequence as the time sequence of the slippery characteristic map, and the specific implementation manner of the third obtaining unit is that:
splicing the road slippery characteristic graph and a target first road image to obtain a third road image aiming at each road slippery characteristic graph, wherein the target first road image of the road slippery characteristic graph is the first road image with the same time sequence as the road slippery characteristic graph;
inputting each third road image into a pre-trained long-short term memory unit (LSTM) network to obtain a first wet-skid degree of the wet-skid area corresponding to each third road image and first position information of the wet-skid area;
and determining the slippery degree of the slippery area and the position information of the slippery area included in the road according to the first slippery degree and the first position information corresponding to each third road image.
Optionally, the LSTM network includes a plurality of sequentially connected LSTM units, an input of the LSTM unit includes the third road image and a first feature output by a last LSTM unit of the LSTM unit, and an output of the LSTM unit includes the first wet-skid degree, the first location information, and a second feature.
Optionally, in the above apparatus, the formula for recursively updating the LSTM unit is as follows:
ws(t), pos(t), h(t), c(t) = LSTM(IMG(t), f_wet(t), h(t-1), c(t-1))
wherein f_wet(t) is the third road image of the t-th frame, IMG(t) is the first road image of the t-th frame, ws(t) is the first wet-skid degree corresponding to the third road image of the t-th frame, and pos(t) is the first position information corresponding to the third road image of the t-th frame; h(t) is the hidden layer feature output by the t-th LSTM unit, c(t) is the long memory hidden layer feature output by the t-th LSTM unit, h(t-1) is the hidden layer feature output by the (t-1)-th LSTM unit, and c(t-1) is the long memory hidden layer feature output by the (t-1)-th LSTM unit.
Optionally, in the apparatus described above, the position information of the slippery region includes distance information between the slippery region and an image acquisition device, and polar angle information formed between the slippery region and the image acquisition device, where the image acquisition device is a device for acquiring the road image sequence.
The device provided by the embodiment of the present application operates as follows: acquiring multiframe first road images, sequentially splicing every adjacent N frames of first road images in the multiframe first road images to obtain M second road images, acquiring a road slippery characteristic diagram corresponding to each second road image, acquiring the slippery degree and the position information of a slippery area included by a road according to each road slippery characteristic diagram and the first road images with the same time sequence as that of the slippery characteristic diagram, and generating slippery alarm prompt information according to the slippery degree and the position information of the slippery area of the road. According to the method, the second road image is obtained by splicing the N adjacent frames of the first road images, so that the spliced second road image is integrated with the image information of the first road images in different time sequences and has rich image information. The slippery characteristic diagram is obtained according to the second road image, so that the obtained slippery characteristic diagram has rich slippery characteristic information, and the obtained slippery degree and position information of the slippery area have higher accuracy according to the road slippery characteristic diagram and the first road image with the same time sequence as that of the slippery characteristic diagram. And finally, generating wet and slippery alarm prompt information according to the wet and slippery degree of the wet and slippery area of the road and the position information of the wet and slippery area, thereby ensuring the safe passing of the visually impaired people.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application, including: a processor 701 and a memory 702, where the memory 702 is used for storing a program and the processor 701 is used for running the program to implement the road condition prompting method provided by the present application, namely, to execute the following steps:
acquiring a plurality of frames of first road images;
sequentially splicing every adjacent N frames of first road images in the multiple frames of first road images to obtain M second road images; the time sequence of the second road image is the time sequence of the last frame of first road image in the N frames of first road images; wherein N is an integer greater than 2, and M is an integer greater than 0;
acquiring a road wet-skid characteristic map corresponding to each second road image;
acquiring the slippery degree of a slippery area and the position information of the slippery area included in the road according to each road slippery characteristic diagram and a first road image with the same time sequence as that of the slippery characteristic diagram; the time sequence of the road slippery characteristic diagram is the time sequence of the second road image corresponding to the road slippery characteristic diagram;
and generating wet and slippery alarm prompt information according to the wet and slippery degree and the position information.
An embodiment of the present application further provides a computer-readable storage medium in which instructions are stored; when the instructions are executed on a computer, they cause the computer to execute the road condition prompting method provided in the present application, that is, to execute the following steps:
acquiring a plurality of frames of first road images;
sequentially splicing every adjacent N frames of first road images in the frames of first road images to obtain M second road images; the time sequence of the second road image is as follows: the time sequence of the last frame of first road image in the N frames of first road images; wherein N is an integer greater than 2, and M is an integer greater than 0;
acquiring a road wet-skid characteristic map corresponding to each second road image;
acquiring the slippery degree of a slippery area and the position information of the slippery area included by the road according to each road slippery characteristic diagram and a first road image with the same time sequence as that of the slippery characteristic diagram; the time sequence of the road slippery characteristic diagram is the time sequence of the second road image corresponding to the road slippery characteristic diagram;
and generating wet and slippery alarm prompt information according to the wet and slippery degree and the position information.
The functions described in the method of the embodiment of the present application, if implemented in the form of software functional units and sold or used as independent products, may be stored in a storage medium readable by a computing device. Based on such understanding, part of the contribution to the prior art of the embodiments of the present application or part of the technical solution may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A road condition prompting method is characterized by comprising the following steps:
acquiring a plurality of frames of first road images; the multiple frames of first road images are road image sequences; acquiring a road image sequence to enable the road image sequence to contain characteristic information of relative motion between a slippery area and image processing equipment;
sequentially splicing every adjacent N frames of first road images in the multiple frames of first road images to obtain M second road images; the time sequence of the second road image is as follows: the time sequence of the last frame of first road image in the N frames of first road images; wherein N is an integer greater than 2, and M is an integer greater than 0;
acquiring a road wet-skid characteristic map corresponding to each second road image;
acquiring the slippery degree of a slippery area and the position information of the slippery area included in a road according to each road slippery characteristic diagram and a first road image with the same time sequence as that of the slippery characteristic diagram; the time sequence of the road slippery characteristic diagram is the time sequence of the second road image corresponding to the road slippery characteristic diagram;
generating wet and slippery alarm prompt information according to the wet and slippery degree and the position information;
the acquiring the slippery degree of the slippery area and the position information of the slippery area included in the road according to each road slippery characteristic diagram and the first road image with the same time sequence as the time sequence of the slippery characteristic diagram comprises the following steps:
for each road slippery characteristic map, splicing the road slippery characteristic map and a target first road image to obtain a third road image, wherein the target first road image is a first road image with the same time sequence as the road slippery characteristic map;
inputting each third road image into a pre-trained long-short term memory unit (LSTM) network to obtain a first wet-skid degree of the wet-skid area corresponding to each third road image and first position information of the wet-skid area;
determining the slippery degree of the slippery area and the position information of the slippery area included in the road according to the first slippery degree and the first position information corresponding to each third road image;
the LSTM network comprises a plurality of LSTM units which are connected in sequence; the input of the LSTM unit includes the third road image and a first feature layer output by a last of the LSTM units of the LSTM unit, the output of the LSTM unit includes the first degree of hydroplaning, the first location information, and a second feature layer;
the recursive update formula of the LSTM unit is as follows:
ws(t), pos(t), h(t), c(t) = LSTM(IMG(t), f_wet(t), h(t-1), c(t-1))
the f_wet(t) is the third road image of the t-th frame, the IMG(t) is the first road image of the t-th frame, the ws(t) is the first wet-skid degree corresponding to the third road image of the t-th frame, and the pos(t) is the first position information corresponding to the third road image of the t-th frame; the h(t) is the hidden layer output by the t-th LSTM unit, the c(t) is the long memory hidden layer output by the t-th LSTM unit, the h(t-1) is the hidden layer output by the (t-1)-th LSTM unit, and the c(t-1) is the long memory hidden layer output by the (t-1)-th LSTM unit.
2. The method according to claim 1, wherein the sequentially stitching each adjacent N frames of the first road images in the plurality of frames of first road images to obtain M second road images comprises:
and sequentially splicing the adjacent N frames of first road images into a second road image with the channel number of 3N and the length and width dimensions same as those of the first road image, wherein the channel number of the first road image is 3.
3. The method of claim 1, wherein the obtaining of the road hydroplaning feature map corresponding to each second road image comprises:
inputting each second road image into a pre-trained first neural network model to obtain the road slippery characteristic diagram corresponding to the second road image;
the training sample of the first neural network model comprises a training image and a labeled image corresponding to the training image, wherein the labeled image corresponding to the training image is an image which is obtained by labeling a road slippery area in the training image as a first identifier in advance.
4. The method according to claim 1, wherein the position information of the slippery region includes distance information of the slippery region from an image capture device, and polar angle information formed between the slippery region and the image capture device, the image capture device being a device that captures the road image sequence.
5. A road condition prompting device is characterized by comprising:
the first acquisition unit is used for acquiring a plurality of frames of first road images; the multiple frames of first road images are road image sequences; acquiring a road image sequence to enable the road image sequence to contain characteristic information of relative motion between a slippery area and image processing equipment;
the splicing unit is used for sequentially splicing every adjacent N frames of first road images in the multiple frames of first road images to obtain M second road images; the time sequence of the second road image is as follows: the time sequence of the last frame of first road image in the N frames of first road images; wherein N is an integer greater than 2, and M is an integer greater than 0;
the second acquisition unit is used for acquiring a road wet-skid characteristic map corresponding to each second road image;
a third obtaining unit, configured to obtain a wet-skid degree of a wet-skid area included in the road and position information of the wet-skid area according to each road wet-skid feature map and a first road image having a time sequence identical to that of the wet-skid feature map; the time sequence of the road slippery characteristic diagram is the time sequence of the second road image corresponding to the road slippery characteristic diagram;
the acquiring the slippery degree of the slippery area and the position information of the slippery area included in the road according to each road slippery characteristic diagram and the first road image with the same time sequence as the time sequence of the slippery characteristic diagram comprises the following steps:
for each road slippery characteristic map, splicing the road slippery characteristic map and a target first road image to obtain a third road image, wherein the target first road image is a first road image with the same time sequence as the road slippery characteristic map;
inputting each third road image into a pre-trained long-short term memory unit (LSTM) network to obtain a first wet-skid degree of the wet-skid area corresponding to each third road image and first position information of the wet-skid area;
determining the slippery degree of the slippery area and the position information of the slippery area included in the road according to the first slippery degree and the first position information corresponding to each third road image;
the LSTM network comprises a plurality of LSTM units which are connected in sequence; the input of the LSTM unit includes the third road image and a first feature layer output by the previous LSTM unit, and the output of the LSTM unit includes the first wet-skid degree, the first location information, and a second feature layer;
the recursive update formula of the LSTM unit is as follows:
ws(t), pos(t), h(t), c(t) = LSTM(IMG(t), f_wet(t), h(t-1), c(t-1))
the f_wet(t) is the third road image of the t-th frame, the IMG(t) is the first road image of the t-th frame, the ws(t) is the first wet-skid degree corresponding to the third road image of the t-th frame, and the pos(t) is the first position information corresponding to the third road image of the t-th frame; the h(t) is the hidden layer output by the t-th LSTM unit, the c(t) is the long memory hidden layer output by the t-th LSTM unit, the h(t-1) is the hidden layer output by the (t-1)-th LSTM unit, and the c(t-1) is the long memory hidden layer output by the (t-1)-th LSTM unit;
and the fourth acquisition unit is used for generating wet and slippery alarm prompt information according to the wet and slippery degree and the position information.
6. An electronic device, comprising: a processor and a memory for storing a program; the processor is configured to run the program to implement the method for prompting a road condition according to any one of claims 1 to 4.
7. A computer-readable storage medium having stored therein instructions, which when executed on a computer, cause the computer to execute the method for road condition notification according to any one of claims 1-4.
CN202011422608.5A 2020-12-08 2020-12-08 Road condition prompting method and device, electronic equipment and computer readable storage medium Active CN112446344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011422608.5A CN112446344B (en) 2020-12-08 2020-12-08 Road condition prompting method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011422608.5A CN112446344B (en) 2020-12-08 2020-12-08 Road condition prompting method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112446344A CN112446344A (en) 2021-03-05
CN112446344B true CN112446344B (en) 2022-09-16

Family

ID=74740550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011422608.5A Active CN112446344B (en) 2020-12-08 2020-12-08 Road condition prompting method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112446344B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034862B (en) * 2012-12-14 2015-07-15 北京诚达交通科技有限公司 Road snow and rain state automatic identification method based on feature information classification
US9594964B2 (en) * 2014-06-12 2017-03-14 GM Global Technology Operations LLC Vision-based wet road surface detection using texture analysis
CN104200673B (en) * 2014-09-01 2016-04-06 西南交通大学 A kind of road surface slippery situation detection method based on road image
CN111209777A (en) * 2018-11-21 2020-05-29 北京市商汤科技开发有限公司 Lane line detection method and device, electronic device and readable storage medium
CN109800661A (en) * 2018-12-27 2019-05-24 东软睿驰汽车技术(沈阳)有限公司 A kind of road Identification model training method, roads recognition method and device
CN111737524A (en) * 2019-03-19 2020-10-02 上海大学 Information integration method in road abnormity monitoring system
CN111723605A (en) * 2019-03-19 2020-09-29 上海大学 Road icing detection system and method based on video processing
CN110246102B (en) * 2019-06-13 2022-05-31 中国人民解放军陆军炮兵防空兵学院 Method for clearly processing video in rainy days
CN111080593B (en) * 2019-12-07 2023-06-16 上海联影智能医疗科技有限公司 Image processing device, method and storage medium
CN111898581B (en) * 2020-08-12 2024-05-17 成都佳华物链云科技有限公司 Animal detection method, apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN112446344A (en) 2021-03-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant