CN115147795A - Bus station water-splashing prevention method, device, equipment and medium based on image recognition - Google Patents

Bus station water-splashing prevention method, device, equipment and medium based on image recognition

Info

Publication number
CN115147795A
Authority
CN
China
Prior art keywords
contour
vehicle
recognition
image
bus station
Prior art date
Legal status
Pending
Application number
CN202210815393.6A
Other languages
Chinese (zh)
Inventor
何金金
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210815393.6A
Publication of CN115147795A

Classifications

    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06N20/00 Machine learning
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention provides a method, a device, equipment and a medium for preventing water splashing at a bus station based on image recognition. The method comprises the following steps: acquiring a road image during the driving of a vehicle; inputting the road image into a pre-trained contour recognition model and recognizing the road image through the model to obtain a recognition result, wherein the recognition result comprises contour information of a plurality of objects; when the recognition result comprises a target object contour, matching the target object contour with a pre-constructed reference contour to obtain a matching result; and generating a vehicle speed control instruction according to the matching result, and controlling the running speed of the vehicle through the vehicle speed control instruction so as to avoid splashing when the vehicle runs to the ponding area. By performing contour recognition on the road image and matching the recognized contour with the reference contour, the method and the device accurately determine whether a bus station exists ahead on the road, and generate the vehicle speed control instruction according to the matching result so as to achieve the purpose of preventing water splashing.

Description

Bus station splash-proof method, device, equipment and medium based on image recognition
Technical Field
The application relates to the technical field of computer vision, in particular to a bus station water splashing prevention method, device, equipment and medium based on image recognition.
Background
At present, the road surface in front of some bus stations has become uneven under repeated heavy loads, so accumulated water forms there on rainy days; when an automobile drives through such an area at too high a speed, the water is splashed onto passengers waiting at the bus station. Driving through the ponding area without decelerating not only seriously affects the travel experience of the waiting passengers, but may also result in a fine for the driver if a passenger reports the incident, causing unnecessary loss.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present invention provides a method, device, equipment and medium for preventing water splashing at a bus stop based on image recognition, so as to solve the above-mentioned technical problems.
The invention provides a bus station water splashing prevention method based on image recognition, which comprises the following steps:
acquiring a road image in the driving process of a vehicle;
inputting the road image into a pre-trained contour recognition model, and recognizing the road image through the pre-trained contour recognition model to obtain a recognition result, wherein the recognition result comprises contour information of a plurality of objects;
when the identification result comprises a target object contour, matching the target object contour with a pre-constructed reference contour to obtain a matching result;
and generating a vehicle speed control instruction according to the matching result, and controlling the running speed of the vehicle through the vehicle speed control instruction so as to avoid splashing when the vehicle runs to the ponding area.
In an embodiment of the present application, before the road image is input into the pre-trained contour recognition model, the method includes:
intercepting an image of a region to be identified in the road image according to a preset region of interest;
carrying out image segmentation on the to-be-identified area image to obtain road images of a plurality of objects;
and carrying out gray processing and binarization processing on the road images of the plurality of objects to obtain a preprocessed road image.
In an embodiment of the present application, after performing a graying process and a binarization process on the road images of the plurality of objects to obtain a preprocessed road image, the method includes:
acquiring a road environment sample data set, and segmenting the road environment sample data set into a training data set and a verification data set according to a preset segmentation proportion, wherein the training data set and the verification data set comprise road environment sample images and real labels;
inputting the training data set into a pre-constructed contour recognition model for iterative training, and updating parameters of a target function of the pre-constructed contour recognition model to obtain an initial contour recognition model;
inputting the verification data set into the initial contour recognition model, outputting a verification result through the initial contour recognition model, and adjusting the hyper-parameters of the initial contour recognition model according to the verification result to obtain the pre-trained contour recognition model.
In an embodiment of the application, the target object contour comprises a bus station contour and/or a waiting passenger contour, and the pre-constructed reference contour comprises a bus station reference contour and/or a waiting passenger reference contour;
the generating of the vehicle speed control instruction according to the matching result comprises:
if the matching result is that the bus station contour is successfully matched with the bus station reference contour, generating a vehicle deceleration control instruction, and controlling the vehicle to decelerate according to the vehicle deceleration control instruction;
if the matching result is that the contour of the waiting passenger is successfully matched with the reference contour of the waiting passenger, generating a vehicle deceleration control command, and controlling the vehicle to decelerate according to the vehicle deceleration control command;
and if the matching result is that the matching of the bus station contour and the bus station reference contour fails, or the matching result is that the matching of the waiting passenger contour and the waiting passenger reference contour fails, generating a normal vehicle running instruction, and controlling the vehicle to run according to the current speed according to the normal vehicle running instruction.
In an embodiment of the application, the target object contour comprises a bus station signboard contour, and the pre-constructed reference contour comprises a bus station signboard reference contour;
the generating of the vehicle speed control instruction according to the matching result comprises:
if the matching result is that the bus station signboard profile is successfully matched with the bus station signboard reference profile, generating a vehicle deceleration control command, and controlling the vehicle to decelerate according to the vehicle deceleration control command;
and if the matching result is that the matching of the contour of the bus station signboard and the reference contour of the bus station signboard fails, generating a normal vehicle running instruction, and controlling the vehicle to run at the current speed according to the normal vehicle running instruction.
In an embodiment of the application, after the generating the vehicle speed control command according to the matching result, the method includes:
inputting the road image into a pre-trained ponding recognition model, and outputting road ponding information through the pre-trained ponding recognition model, wherein the road ponding information comprises the area of a ponding area and the distance between the vehicle and the ponding area;
and if the area of the ponding area is larger than a preset area threshold value, or the distance between the vehicle and the ponding area is larger than a preset distance threshold value, generating lane change prompt information, wherein the lane change prompt information is used for prompting a driver of the vehicle to start lane change at the current position.
In an embodiment of the application, after the target object contour is matched with the pre-constructed reference contour to obtain the matching result, the method further includes:
acquiring the current position of a vehicle and a navigation map, and determining bus station information in front of the current vehicle position in the navigation map according to a preset query range;
if the matching result is successful and the bus station information is that no bus station exists, or the matching result is failed and the bus station information is that a bus station exists, inputting the road image into a pre-trained contour recognition model for re-recognition to obtain a new recognition result;
matching the new recognition result with a pre-constructed reference contour to obtain a new matching result;
and regenerating a vehicle speed control command according to the new matching result.
In an embodiment of the present application, a bus station splash protection device based on image recognition is provided, the device includes:
the image acquisition module is used for acquiring a road image in the running process of a vehicle;
the contour recognition module is used for inputting the road image into a pre-trained contour recognition model, recognizing the road image through the pre-trained contour recognition model and obtaining a recognition result, wherein the recognition result comprises contour information of a plurality of objects;
the contour matching module is used for matching the target object contour with a pre-constructed reference contour when the identification result comprises the target object contour to obtain a matching result;
and the instruction generating module is used for generating a vehicle speed control instruction according to the matching result, and controlling the vehicle running speed through the vehicle speed control instruction so as to avoid water splashing when the vehicle runs to the ponding area.
In an embodiment of the present application, an electronic device is provided, which includes:
one or more processors;
a storage device to store one or more programs that, when executed by the one or more processors, cause the electronic device to implement the bus stop splash guard method based on image recognition as described above.
In an embodiment of the present application, there is provided a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor of a computer, causes the computer to execute the image recognition-based bus stop splash prevention method as described above.
The invention has the following beneficial effects: a road image is acquired during the driving of the vehicle; the road image is input into a pre-trained contour recognition model, which recognizes the road image to obtain a recognition result; when the recognition result comprises a target object contour, the target object contour is matched with a pre-constructed reference contour to obtain a matching result, so the contour information output by the contour recognition model is verified against the reference contour, avoiding a bus station or a waiting passenger being wrongly recognized due to model parameters; a vehicle speed control instruction is then generated according to the matching result, and the vehicle running speed is controlled through the vehicle speed control instruction to avoid splashing when the vehicle runs to the ponding area. When a bus station or a waiting passenger appears ahead on the road, the vehicle is directly controlled to decelerate without any operation by the driver, which ensures the real-time performance of the splash prevention.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
fig. 1 is a schematic diagram of an implementation environment of a bus stop water-splashing prevention method based on image recognition according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart illustrating a method for bus stop splash protection based on image recognition in an exemplary embodiment of the present application;
FIG. 3 is a flowchart of road image preprocessing after step S210 in the embodiment shown in FIG. 2;
FIG. 4 is a flow chart of model training after step S330 in the embodiment shown in FIG. 3;
FIG. 5 is a block diagram of a bus stop splash guard based on image recognition shown in an exemplary embodiment of the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the disclosure herein, wherein the embodiments of the present invention are described in detail with reference to the accompanying drawings and preferred embodiments. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be understood that the preferred embodiments are illustrative of the invention only and are not limiting upon the scope of the invention.
It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the present invention. The drawings show only the components related to the present invention rather than their actual number, shape and size; in actual implementation the type, quantity and proportion of the components may vary, and the layout of the components may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention, however, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details, and in other embodiments, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present invention.
First, machine vision mainly studies how to simulate human visual function with a computer: an image is obtained by a camera or similar device, converted into a digital image signal and sent to a processing terminal, where software extracts the required information, makes calculations and judgments, identifies the form and motion of three-dimensional scenes and objects in the real world using digital image processing and recognition algorithms, and controls the motion of on-site equipment according to the recognition result.
Image recognition is one of the machine vision technologies. It refers to processing, analyzing and understanding an image with a computer in order to recognize targets and objects of various patterns, and to applying a series of enhancement and reconstruction techniques to low-quality images so as to effectively improve image quality. The process of image recognition generally comprises image data acquisition, data preprocessing, feature extraction and selection, classifier design and classification decision. Contour recognition is a specific application of image recognition: in a digital image containing an object and a background, the influence of the background, the texture inside the object and noise interference is ignored, and certain techniques and methods are used to extract the contour of the object. Contour recognition is an important basis for target detection, shape analysis and target tracking.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specifically studies how a computer can simulate or implement human learning behavior so as to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and learning from instruction.
Based on the strong learning ability of machine learning, a machine learning model can learn from a large amount of historical samples to identify the targets in road images, so that the identified contour information is more accurate and credible. For example, the machine learning model may be a neural-network-based supervised model, such as a binary classification model, trained with a large amount of road environment data; the model adjusts its parameters during training, and the adjusted parameters allow it to accurately identify the bus station contour, the waiting passenger contour and the bus station signboard contour in the road environment.
A Region Of Interest (ROI) is, in machine vision and image processing, a region to be processed that is outlined from the image in the form of a box, circle, ellipse, irregular polygon or the like. In the field of image processing, the region of interest is an image region selected from the image; it is the focus of image recognition and analysis, and it is delineated for further processing.
Fig. 1 is a schematic diagram of an implementation environment of a bus stop water splashing prevention method based on image recognition according to an exemplary embodiment of the present application. As shown in fig. 1, the terminal device 101 includes, but is not limited to, a vehicle-mounted terminal with local computing capability; the terminal device 101 communicates with the cloud 103 through the network 102 and can perform operations on the database, for example extracting reference contour data from the reference contour library or writing reference contour data into it. The vehicle-mounted terminal includes, but is not limited to, a human-machine interaction screen, a processor and a memory. The human-machine interaction screen displays, among other things, the road image contour recognition result and a vehicle deceleration prompt. The processor is used for performing corresponding operations in response to the human-machine interaction, or for processing the acquired data to generate instructions. The memory is used for storing relevant data, such as the reference information of the pre-trained contour recognition model, the reference contour library and road images.
In one embodiment of the application, a 360-degree high-definition camera installed at the front of the vehicle photographs the road ahead to obtain a high-definition road image; the high-definition road image is input into the terminal device 101 for preprocessing to obtain a preprocessed road image; the preprocessed road image is transmitted to the cloud 103, where a pre-trained contour recognition model is stored, and contour recognition is performed on the preprocessed road image in the pre-trained contour recognition model to obtain contour information of a plurality of objects; when the contour information of the plurality of objects comprises the contour information of the target object, the target object contour is compared with a reference contour constructed in advance in the cloud 103 to obtain a comparison result; the comparison result is transmitted to the terminal device 101 through the network 102 to generate a vehicle control instruction, and the vehicle control instruction is transmitted to a brake component of the vehicle to control the running speed of the vehicle.
It should be noted that, because of the limited storage and computing capability of the vehicle-mounted terminal, the contour recognition model may, for example, be trained in advance in the cloud.
For example, when a vehicle drives on a road on a rainy day, the road surface in front of some bus stations has sunk or cracked under load, which easily forms a rainwater accumulation area. Some existing technologies identify and analyze the area and depth of the accumulated water to decide whether to send a deceleration warning to the driver, but some drivers do not decelerate according to the warning, so water splashing still occurs. To solve these problems, the embodiments of the present application respectively propose a bus station splash prevention method based on image recognition, a bus station splash prevention apparatus based on image recognition, an electronic device, a computer-readable storage medium and a computer program, which are described in detail below.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for preventing water splashing at a bus stop based on image recognition according to an exemplary embodiment of the present application. The method may be applied to the implementation environment shown in fig. 1, and is specifically executed by the cloud end 103 in the implementation environment. It should be understood that the method may be applied to other exemplary implementation environments and is specifically executed by devices in other implementation environments, and the embodiment does not limit the implementation environment to which the method is applied.
As shown in fig. 2, in an exemplary embodiment, the image recognition-based bus stop splashwater prevention method includes steps S210 to S240, which are described in detail as follows:
in step S210, a road image during the travel of the vehicle is acquired.
Illustratively, a high-definition image of the road the vehicle is driving on is acquired every 10 s by a 360-degree high-definition camera installed at the front of the vehicle, and the road image is then transmitted to the terminal device 101 for preprocessing.
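For illustration only, a minimal Python sketch of this acquisition step could look as follows. OpenCV and the camera index are assumptions; the patent does not name a specific interface.

```python
import time
import cv2  # OpenCV


def capture_road_images(camera_index=0, interval_s=10):
    """Yield one road image from the front camera every `interval_s` seconds.

    A sketch only: the real vehicle camera interface, its index and the
    10 s interval handling are assumptions, not taken from the patent text.
    """
    cap = cv2.VideoCapture(camera_index)
    try:
        while cap.isOpened():
            ok, frame = cap.read()  # BGR image as a NumPy array
            if ok:
                yield frame         # hand the frame to preprocessing
            time.sleep(interval_s)
    finally:
        cap.release()
```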
In one embodiment of the present application, after acquiring the road image during the driving of the vehicle, the method further includes preprocessing the road image. Referring to fig. 3, fig. 3 is a flowchart of the road image preprocessing after step S210 in the embodiment shown in fig. 2, which is described in detail as follows:
s310, intercepting an image of a region to be identified in the road image according to a preset region of interest.
In this embodiment, the areas of the road image containing bus stations, waiting passengers and bus station signboards are cropped out according to the region of interest, which reduces the time needed for subsequent image processing and recognition, removes noise, and improves the accuracy of the image recognition result.
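A minimal sketch of this cropping step, assuming a rectangular preset region of interest (the coordinates below are placeholders, not values from the patent):

```python
import numpy as np

# Preset region of interest as (x, y, width, height); placeholder values.
ROI = (0, 200, 1280, 400)


def crop_roi(road_image: np.ndarray, roi=ROI) -> np.ndarray:
    """Cut out the image of the region to be identified from the road image."""
    x, y, w, h = roi
    return road_image[y:y + h, x:x + w]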
And S320, carrying out image segmentation on the image of the area to be recognized to obtain road images of a plurality of objects.
Illustratively, in this embodiment, image segmentation is performed on the image within the region of interest to further extract the objects in the region to be identified. The objects to be recognized may be extracted, for example, with a threshold-based, region-based or edge-based segmentation method, resulting in road images of multiple objects.
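As one possible realization of the edge-based option mentioned above, the following sketch uses Canny edges plus external contours to cut out per-object crops; the thresholds and the minimum-area filter are illustrative assumptions.

```python
import cv2
import numpy as np


def segment_objects(roi_image: np.ndarray):
    """Edge-based segmentation sketch (OpenCV 4.x): returns per-object crops."""
    gray = cv2.cvtColor(roi_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # illustrative thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    objects = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 500:                                   # drop tiny noise regions
            objects.append(roi_image[y:y + h, x:x + w])
    return objects
```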
And S330, carrying out gray processing and binarization processing on the road images of the objects to obtain a preprocessed road image.
For example, in order to improve the accuracy of the image recognition result, the image needs to undergo graying and binarization before recognition. In this embodiment, the road images of the plurality of objects are grayed by the weighted average method, i.e. the three components of the image (red, green and blue) are weighted and averaged with different weights to obtain the processed road image; the grayed road image is then binarized by one of the bimodal method, the P-parameter method, the iterative method and the OTSU method, in which pixels whose gray level is above a certain critical gray value are set to the maximum gray value and pixels below it are set to the minimum gray value.
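A sketch of this preprocessing step using the OTSU option; the 0.299/0.587/0.114 weights are the common luma coefficients and are an assumption, since the patent only states that the components are weighted differently.

```python
import cv2
import numpy as np


def gray_and_binarize(object_image: np.ndarray) -> np.ndarray:
    """Weighted-average graying followed by OTSU binarization."""
    b, g, r = cv2.split(object_image.astype(np.float32))   # OpenCV stores BGR
    gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
    # OTSU picks the critical gray value automatically; pixels above it are
    # set to the maximum gray value (255), pixels below it to 0.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```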
In an embodiment of the present application, after the image is preprocessed, a training process of the contour recognition model in the cloud is further included. Referring to fig. 4, fig. 4 is a flowchart of model training after step S330 in the embodiment shown in fig. 3, which is described in detail as follows:
s410, acquiring a road environment sample data set, and dividing the road environment sample data set into a training data set and a verification data set according to a preset division ratio;
s420, inputting the training data set into a pre-constructed contour recognition model for iterative training, and updating parameters of a target function of the pre-constructed contour recognition model to obtain an initial contour recognition model;
s430, inputting the verification data set into the initial contour recognition model, outputting a verification result through the initial contour recognition model, and adjusting the hyper-parameters of the initial contour recognition model according to the verification result to obtain a pre-trained contour recognition model.
In the above steps S410 to S430, the training data set and the verification data set include road environment sample images and real labels, and the model is trained through the images and the labels to update parameters of the objective function in the model, so as to obtain a trained contour recognition model.
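A training skeleton for steps S410 to S430 is sketched below. The use of PyTorch, the network architecture, the cross-entropy objective, the batch size and the learning rate are all assumptions; the patent only specifies the data split, iterative parameter updates and validation-driven hyper-parameter adjustment.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, random_split


def train_contour_model(dataset, model: nn.Module, split_ratio=0.8,
                        epochs=20, lr=1e-3):
    """Sketch of S410-S430: split the sample set, train iteratively, validate.

    `dataset` yields (road_environment_image, label) pairs.
    """
    n_train = int(len(dataset) * split_ratio)                    # S410: split
    train_set, val_set = random_split(dataset,
                                      [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=16)

    criterion = nn.CrossEntropyLoss()                # objective function
    optimizer = optim.Adam(model.parameters(), lr=lr)

    for _ in range(epochs):                          # S420: iterative training
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()                         # update model parameters

    model.eval()                                     # S430: validation pass
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    # The validation accuracy would guide hyper-parameter adjustment.
    return model, correct / max(total, 1)
```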
In step S220, the road image is input into the pre-trained contour recognition model, and the road image is recognized by the pre-trained contour recognition model, so as to obtain a recognition result.
Illustratively, after the road image is input into the pre-trained contour recognition model, contour recognition is performed on the preprocessed road image in the model. In this application, recognition mainly targets the contours of objects in the image, and the recognition result includes, but is not limited to, the bus station contour, the waiting passenger contour and the bus station signboard contour. It can be understood that, besides bus stations, waiting passengers and bus station signboards, the road image also contains common road objects such as trees and guardrails, so the bus station contour, the waiting passenger contour and the bus station signboard contour are screened out according to the labels output by the contour recognition model.
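A minimal sketch of this label-based screening, assuming the model output is a list of labeled contours (the dict layout and class names are assumptions):

```python
TARGET_CLASSES = {"bus_station", "waiting_passenger", "bus_station_signboard"}


def filter_target_contours(recognition_result):
    """Keep only target object contours from the model output.

    `recognition_result` is assumed to be a list of dicts such as
    {"label": "bus_station", "contour": <Nx1x2 point array>}; trees,
    guardrails and other common road objects are discarded.
    """
    return [item for item in recognition_result
            if item["label"] in TARGET_CLASSES]
```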
In step S230, when the identification result includes the target object contour, the target object contour is matched with a pre-constructed reference contour, so as to obtain a matching result.
In an embodiment of the present application, the target object contour includes, but is not limited to, a bus station contour, a waiting passenger contour and a bus station signboard contour. When the recognition result contains a target object contour, the target object contour needs to be matched with a reference contour in the reference contour library stored in the cloud 103. The reference contours in the reference contour library are contours of various types constructed in advance and may include, for example, bus station contour 1, bus station contour 2 and bus station contour 3. Similarity matching is performed between the bus station contour of the target object and bus station contour 1, bus station contour 2 and bus station contour 3; if the similarity with any one of them is within the preset threshold range, the matching succeeds; if the similarity with none of them is within the preset threshold range, the matching fails. The waiting passenger contour and the bus station signboard contour can be matched in the same way, so the target object contour is matched with the contours constructed in advance in the reference contour library to obtain the matching result.
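One way to realize this similarity matching is sketched below using Hu-moment shape matching. The use of cv2.matchShapes and the threshold value are assumptions; the patent only requires a similarity within a preset threshold range.

```python
import cv2

SIMILARITY_THRESHOLD = 0.1   # illustrative value, not from the patent


def match_against_references(target_contour, reference_contours) -> bool:
    """Return True if the target contour matches any reference contour.

    cv2.matchShapes compares two contours via Hu moments and returns a
    distance: the smaller the value, the more similar the shapes.
    """
    for ref in reference_contours:               # e.g. bus station contours 1-3
        distance = cv2.matchShapes(target_contour, ref,
                                   cv2.CONTOURS_MATCH_I1, 0.0)
        if distance < SIMILARITY_THRESHOLD:
            return True
    return False
```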
It should be noted that, when the recognition result is output by the pre-trained contour recognition model, the parameters of the objective function in the model still affect the accuracy of the recognition result. Therefore, a matching step is added in this application: only when the target object contour in the recognition result matches the reference contour is the vehicle speed control instruction generated, which ensures that the bus station, waiting passenger and bus station signboard identified in the road are consistent with the actual situation.
In step S240, a vehicle speed control command is generated according to the matching result, and the vehicle running speed is controlled by the vehicle speed control command to avoid splash formation when the vehicle runs to the ponding area.
In an embodiment of the application, when the target object contour includes a bus stop contour and/or a waiting passenger contour, and the pre-constructed reference contour includes a bus stop reference contour and/or a waiting passenger reference contour, the step of generating the vehicle speed control instruction according to the matching result in step S240 is described in detail as follows:
if the matching result is that the bus station contour is successfully matched with the bus station reference contour, generating a vehicle deceleration control instruction, and controlling the vehicle to decelerate according to the vehicle deceleration control instruction;
if the matching result is that the contour of the waiting passenger is successfully matched with the reference contour of the waiting passenger, generating a vehicle deceleration control instruction, and controlling the vehicle to decelerate according to the vehicle deceleration control instruction;
and if the matching result is that the matching between the bus station contour and the bus station reference contour fails or the matching result is that the matching between the waiting passenger contour and the waiting passenger reference contour fails, generating a normal vehicle running instruction, and controlling the vehicle to run according to the current speed according to the normal vehicle running instruction.
In this embodiment, two matching results are mainly involved: matching success and matching failure. When the matching succeeds, it can be determined that a bus station and/or a waiting passenger really exists on the road ahead, and the vehicle needs to be controlled to decelerate to avoid splashing water when driving through the ponding area; when the matching fails, it can be determined that there is no bus station and/or waiting passenger on the road ahead, and the vehicle can be controlled to run at its normal speed.
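A minimal sketch of the decision logic of this embodiment; the command strings are placeholders, since the patent does not define a command format.

```python
def speed_command(station_matched: bool, passenger_matched: bool) -> str:
    """Generate the vehicle speed control instruction for this embodiment.

    A deceleration command is issued if either the bus station contour or
    the waiting passenger contour matched its reference contour; otherwise
    the vehicle keeps its current speed.
    """
    if station_matched or passenger_matched:
        return "DECELERATE"
    return "KEEP_CURRENT_SPEED"
```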
In an embodiment of the application, when the target object contour includes a bus stop signboard contour and the pre-constructed reference contour includes a bus stop signboard reference contour, the step of generating the vehicle speed control command according to the matching result in step S240 is described in detail as follows:
if the matching result is that the bus station signboard profile is successfully matched with the bus station signboard reference profile, generating a vehicle deceleration control command, and controlling the vehicle to decelerate according to the vehicle deceleration control command;
and if the matching result is that the matching of the contour of the bus station signboard and the reference contour of the bus station signboard fails, generating a normal vehicle running instruction, and controlling the vehicle to run at the current speed according to the normal vehicle running instruction.
This embodiment likewise involves two matching results: matching success and matching failure. When the matching succeeds, it can be determined that a bus station signboard exists on the road ahead and the vehicle is about to enter the ponding area in front of the bus station, so the vehicle needs to be controlled to decelerate and drive slowly through the ponding area; when the matching fails, it can be determined that there is no bus station signboard on the road ahead, i.e. there is no bus station ahead, and the vehicle can be controlled to run at its normal speed.
In an embodiment of the present application, after step S240, a step of controlling the vehicle to change lane is further included, which is described in detail as follows:
inputting a road image into a pre-trained ponding recognition model, and outputting road ponding information through the pre-trained ponding recognition model, wherein the road ponding information comprises the area of a ponding area and the distance between a vehicle and the ponding area;
and if the area of the ponding area is larger than a preset area threshold value, or the distance between the vehicle and the ponding area is larger than a preset distance threshold value, generating lane change prompting information, wherein the lane change prompting information is used for prompting a driver of the vehicle to start lane change at the current position.
In this embodiment, when a target object exists in the road image and a ponding area exists in front of it, the driver of the vehicle can also be reminded to change lanes at a suitable distance to further avoid splashing. For example, the road image is input into the pre-trained ponding recognition model, which outputs the ponding area S and the distance d between the vehicle and the ponding area. When S is larger than the preset threshold, the ponding area is too large and merely decelerating would still cause splashing; when d is larger than the preset threshold, the distance ahead is sufficient for the vehicle to change lanes, which avoids a traffic accident caused by a sudden lane change at an insufficient distance. Therefore, when both S and d are larger than their preset thresholds, the cloud 103 generates the lane change prompt information and reminds the driver, through a visual display or voice prompt on the terminal device 101, to change lanes within a safe distance, so that the accumulated water is not splashed onto passengers waiting at the bus station.
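A sketch of this threshold check, following the worked example above (both conditions required); the threshold values are placeholders, not taken from the patent.

```python
AREA_THRESHOLD_M2 = 2.0        # placeholder values, not from the patent
DISTANCE_THRESHOLD_M = 50.0


def lane_change_prompt(ponding_area_m2: float, distance_m: float) -> bool:
    """Return True when a lane-change prompt should be generated.

    The ponding area S must be too large for deceleration alone, and the
    distance d must leave enough room for a safe lane change.
    """
    return (ponding_area_m2 > AREA_THRESHOLD_M2
            and distance_m > DISTANCE_THRESHOLD_M)
```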
In an embodiment of the present application, after step S240, a process of matching the bus stop actual information may be further included, which is described in detail as follows:
acquiring the current position of a vehicle and a navigation map, and determining bus station information in front of the current vehicle position in the navigation map according to a preset query range;
if the matching result is successful and the bus station information is that no bus station exists, or the matching result is failed and the bus station information is that a bus station exists, inputting the road image into a pre-trained contour recognition model for re-recognition to obtain a new recognition result;
matching the new recognition result with a pre-constructed reference contour to obtain a new matching result;
and regenerating a vehicle speed control command according to the new matching result.
In this embodiment, the matching result may still be wrong due to factors such as image quality and model parameters, so that the target object information in the road image is inconsistent with the actual situation, or the similarity obtained when matching against the reference contours in the reference contour library may not reflect the actual situation. In that case, the bus station information in the current map is used as a reference to further verify the recognition result. For example, the current position of the vehicle is determined from the vehicle positioning information, and then the number, positions, distances and the like of bus stations near the vehicle are determined in the navigation map within a range of 100 m. When the matching result does not agree with the information in the navigation map, the target object contour is re-identified by the contour recognition model and matched again before the vehicle control instruction is generated. In this way, it can be further ensured that the recognition result and the matching result for the target object in the road information are consistent with the actual situation, and wrong control instructions or warnings are avoided.
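A minimal sketch of this cross-check loop; the callables `recognize`, `match` and `generate_command` are hypothetical placeholders standing for the model inference, contour matching and command generation steps described above.

```python
def cross_check_with_map(match_ok: bool, station_in_map: bool,
                         road_image, recognize, match, generate_command):
    """Verify the matching result against the navigation map (a sketch).

    If the matching result contradicts the map (match succeeded but no
    station nearby, or match failed but a station exists), recognition and
    matching are repeated before the speed command is regenerated.
    """
    inconsistent = (match_ok and not station_in_map) or \
                   (not match_ok and station_in_map)
    if inconsistent:
        new_result = recognize(road_image)      # re-recognition
        match_ok = match(new_result)            # re-matching
    return generate_command(match_ok)           # regenerate speed command
```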
Fig. 5 is a block diagram illustrating a bus stop splash protection device based on image recognition according to an exemplary embodiment of the present application. The apparatus may be applied to the implementation environment shown in fig. 1, and is specifically configured in the terminal device 101, or in the cloud terminal 103, or in the terminal device 101 and the cloud terminal 103. The apparatus may also be applied to other exemplary implementation environments, and is specifically configured in other devices, and the embodiment does not limit the implementation environment to which the apparatus is applied.
As shown in fig. 5, the exemplary image recognition-based bus stop splash protection apparatus includes:
the image acquisition module 501 is configured to acquire a road image during a driving process of a vehicle, wherein the road image is acquired by a high-definition camera installed in front of the vehicle at intervals;
the contour recognition module 502 is configured to input the road image into a pre-trained contour recognition model, and recognize the road image through the pre-trained contour recognition model to obtain a recognition result, where the recognition result includes contour information of multiple objects;
a contour matching module 503, configured to match the target object contour with a pre-constructed reference contour when the identification result includes the target object contour, so as to obtain a matching result;
and the instruction generating module 504 is configured to generate a vehicle speed control instruction according to the matching result, and control a vehicle running speed through the vehicle speed control instruction so as to avoid water splashing when the vehicle runs to the ponding area.
It should be noted that the image recognition-based bus station splash prevention apparatus provided in the above embodiment and the image recognition-based bus station splash prevention method provided in the above embodiment belong to the same concept; the specific ways in which the modules and units perform their operations have been described in detail in the method embodiment and are not repeated here. In practical applications, the apparatus provided in the above embodiment may distribute the above functions among different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above, which is not limited herein.
An embodiment of the present application further provides an electronic device, including: one or more processors; a storage device for storing one or more programs, which when executed by the one or more processors, cause the electronic device to implement the image recognition-based bus stop splash protection method provided in the above-described embodiments.
FIG. 6 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application. It should be noted that the computer system 600 of the electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes, such as executing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 602 or a program loaded from a storage portion 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for system operation are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An Input/Output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. The drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the Central Processing Unit (CPU) 601, the various functions defined in the system of the present application are executed.
It should be noted that the computer readable media shown in the embodiments of the present application may be computer readable signal media or computer readable storage media or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may comprise a propagated data signal with a computer-readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the above-mentioned image recognition-based bus stop splash prevention method. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist separately without being incorporated in the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to enable the computer device to execute the bus station water splashing prevention method based on image recognition provided in the various embodiments.
The foregoing embodiments merely illustrate the principles of the present invention and its effects and are not intended to limit the invention. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical idea disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (10)

1. The bus station water splashing prevention method based on image recognition is characterized by comprising the following steps:
acquiring a road image in the driving process of a vehicle;
inputting the road image into a pre-trained contour recognition model, and recognizing the road image through the pre-trained contour recognition model to obtain a recognition result, wherein the recognition result comprises contour information of a plurality of objects;
when the identification result comprises a target object contour, matching the target object contour with a pre-constructed reference contour to obtain a matching result;
and generating a vehicle speed control instruction according to the matching result, and controlling the running speed of the vehicle through the vehicle speed control instruction so as to avoid splashing when the vehicle runs to the ponding area.
2. The bus station splash-proof method based on image recognition as claimed in claim 1, wherein before inputting the road image into a pre-trained image contour recognition model, the method comprises:
intercepting an image of a region to be identified in the road image according to a preset region of interest;
carrying out image segmentation on the to-be-identified area image to obtain road images of a plurality of objects;
and carrying out gray processing and binarization processing on the road images of the plurality of objects to obtain a preprocessed road image.
3. The image recognition-based bus stop splash prevention method according to claim 2, wherein after the road images of the plurality of objects are subjected to graying processing and binarization processing to obtain the preprocessed road images, the method comprises the following steps:
the method comprises the steps of obtaining a road environment sample data set, and dividing the road environment sample data set into a training data set and a verification data set according to a preset dividing proportion, wherein the training data set and the verification data set comprise road environment sample images and real labels;
inputting the training data set into a pre-constructed contour recognition model for iterative training, and updating parameters of a target function of the pre-constructed contour recognition model to obtain an initial contour recognition model;
inputting the verification data set into the initial contour recognition model, outputting a verification result through the initial contour recognition model, and adjusting the hyper-parameters of the initial contour recognition model according to the verification result to obtain the pre-trained contour recognition model.
4. The bus station water-splashing prevention method based on image recognition according to claim 1, wherein the target object contour comprises a bus station contour and/or a waiting passenger contour, and the pre-constructed reference contour comprises a bus station reference contour and/or a waiting passenger reference contour;
the generating of the vehicle speed control instruction according to the matching result comprises:
if the matching result is that the bus station contour is successfully matched with the bus station reference contour, generating a vehicle deceleration control instruction, and controlling the vehicle to decelerate according to the vehicle deceleration control instruction;
if the matching result is that the contour of the waiting passenger is successfully matched with the reference contour of the waiting passenger, generating a vehicle deceleration control instruction, and controlling the vehicle to decelerate according to the vehicle deceleration control instruction;
and if the matching result is that the matching of the bus station contour with the bus station reference contour fails, or the matching result is that the matching of the waiting passenger contour with the waiting passenger reference contour fails, generating a normal vehicle running instruction, and controlling the vehicle to continue running at the current speed according to the normal vehicle running instruction.
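The decision logic of claim 4 can be summarized by a small helper like the one below; the command strings are hypothetical placeholders for whatever signal format the vehicle controller actually uses.

    # Illustrative mapping from claim-4 matching results to speed commands.
    def speed_command_for_station(station_matched: bool, passenger_matched: bool) -> str:
        if station_matched or passenger_matched:
            return "DECELERATE"          # bus station or waiting passenger detected
        return "KEEP_CURRENT_SPEED"      # continue at the current speed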
5. The bus station water-splashing prevention method based on image recognition according to claim 1, wherein the target object contour comprises a bus station signboard contour, and the pre-constructed reference contour comprises a bus station signboard reference contour;
the generating of the vehicle speed control instruction according to the matching result comprises:
if the matching result is that the contour of the bus station signboard is successfully matched with the reference contour of the bus station signboard, generating a vehicle deceleration control instruction, and controlling the vehicle to decelerate according to the vehicle deceleration control instruction;
and if the matching result is that the matching of the contour of the bus station signboard and the reference contour of the bus station signboard fails, generating a normal vehicle running instruction, and controlling the vehicle to run at the current speed according to the normal vehicle running instruction.
6. The bus station water-splashing prevention method based on image recognition as claimed in claim 1, wherein after the vehicle speed control instruction is generated according to the matching result, the method further comprises:
inputting the road image into a pre-trained ponding recognition model, and outputting road ponding information through the pre-trained ponding recognition model, wherein the road ponding information comprises the area of a ponding area and the distance between the vehicle and the ponding area;
and if the area of the ponding area is larger than a preset area threshold value, or the distance between the vehicle and the ponding area is larger than a preset distance threshold value, generating lane change prompt information, wherein the lane change prompt information is used for prompting a driver of the vehicle to start lane change at the current position.
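A minimal sketch of the lane-change prompt condition in claim 6 follows; the area and distance thresholds are assumptions chosen only to make the example concrete, since the claim leaves the concrete values unspecified.

    # Assumed thresholds for the claim-6 lane-change prompt condition.
    AREA_THRESHOLD_M2 = 2.0      # hypothetical minimum ponding area worth avoiding
    DISTANCE_THRESHOLD_M = 30.0  # hypothetical distance leaving room to change lanes

    def lane_change_prompt(ponding_area_m2: float, distance_to_ponding_m: float) -> bool:
        """Return True when lane-change prompt information should be generated."""
        return (ponding_area_m2 > AREA_THRESHOLD_M2
                or distance_to_ponding_m > DISTANCE_THRESHOLD_M)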
7. The bus station water-splashing prevention method based on image recognition as claimed in claim 1, wherein after the target object contour is matched with the pre-constructed reference contour to obtain the matching result, the method further comprises:
acquiring the current position of a vehicle and a navigation map, and determining bus station information in front of the current vehicle position in the navigation map according to a preset query range;
if the matching result indicates success but the bus station information does not exist, or the matching result indicates failure but the bus station information exists, inputting the road image into the pre-trained contour recognition model for re-recognition to obtain a new recognition result;
matching the new recognition result with a pre-constructed reference contour to obtain a new matching result;
and regenerating a vehicle speed control command according to the new matching result.
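The cross-check in claim 7 amounts to re-running recognition whenever the contour matching result and the navigation-map query disagree; the sketch below assumes hypothetical recognize and match callables and is not the application's actual interface.

    # Sketch of the claim-7 consistency check between image matching and map data.
    def cross_check_and_retry(road_image, match_succeeded: bool, station_in_map: bool,
                              recognize, match) -> bool:
        """Re-recognize and re-match when the image result conflicts with the map."""
        if match_succeeded != station_in_map:   # disagreement between image and map
            new_result = recognize(road_image)  # re-recognition of the road image
            return match(new_result)            # new matching result
        return match_succeeded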
8. A bus station water-splashing prevention device based on image recognition, characterized in that the device comprises:
the image acquisition module is used for acquiring a road image in the running process of a vehicle;
the contour recognition module is used for inputting the road image into a pre-trained contour recognition model, recognizing the road image through the pre-trained contour recognition model and obtaining a recognition result, wherein the recognition result comprises contour information of a plurality of objects;
the contour matching module is used for matching the target object contour with a pre-constructed reference contour when the recognition result comprises the target object contour, so as to obtain a matching result;
and the instruction generating module is used for generating a vehicle speed control instruction according to the matching result, and controlling the running speed of the vehicle through the vehicle speed control instruction so as to avoid splashing water when the vehicle passes through a ponding area.
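The module decomposition in claim 8 maps naturally onto a small composition class; the class and attribute names below are illustrative, not taken from the application.

    # Illustrative composition of the four claim-8 modules.
    class BusStationSplashPreventionDevice:
        def __init__(self, image_acquirer, contour_recognizer,
                     contour_matcher, command_generator):
            self.image_acquirer = image_acquirer          # image acquisition module
            self.contour_recognizer = contour_recognizer  # contour recognition module
            self.contour_matcher = contour_matcher        # contour matching module
            self.command_generator = command_generator    # instruction generating module

        def step(self):
            image = self.image_acquirer()                 # acquire a road image
            result = self.contour_recognizer(image)       # recognition result
            matched = self.contour_matcher(result)        # matching result
            return self.command_generator(matched)        # vehicle speed control instruction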
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the bus station water-splashing prevention method based on image recognition as recited in any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the bus station water-splashing prevention method based on image recognition according to any one of claims 1 to 7.
CN202210815393.6A 2022-07-08 2022-07-08 Bus station water-splashing prevention method, device, equipment and medium based on image recognition Pending CN115147795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210815393.6A CN115147795A (en) 2022-07-08 2022-07-08 Bus station water-splashing prevention method, device, equipment and medium based on image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210815393.6A CN115147795A (en) 2022-07-08 2022-07-08 Bus station water-splashing prevention method, device, equipment and medium based on image recognition

Publications (1)

Publication Number Publication Date
CN115147795A 2022-10-04

Family

ID=83412840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210815393.6A Pending CN115147795A (en) 2022-07-08 2022-07-08 Bus station water-splashing prevention method, device, equipment and medium based on image recognition

Country Status (1)

Country Link
CN (1) CN115147795A (en)

Similar Documents

Publication Publication Date Title
Chen et al. High-resolution vehicle trajectory extraction and denoising from aerial videos
US11840239B2 (en) Multiple exposure event determination
Huang et al. Vehicle detection and inter-vehicle distance estimation using single-lens video camera on urban/suburb roads
CN103324930B (en) A kind of registration number character dividing method based on grey level histogram binaryzation
CN108830246B (en) Multi-dimensional motion feature visual extraction method for pedestrians in traffic environment
Kuang et al. Bayes saliency-based object proposal generator for nighttime traffic images
CN112232314A (en) Vehicle control method and device for target detection based on deep learning
KR20210052031A (en) Deep Learning based Traffic Flow Analysis Method and System
Zhang et al. End to end video segmentation for driving: Lane detection for autonomous car
Kavitha et al. Pothole and object detection for an autonomous vehicle using yolo
Al Mamun et al. Lane marking detection using simple encode decode deep learning technique: SegNet
CN113255444A (en) Training method of image recognition model, image recognition method and device
CN114694060B (en) Road casting detection method, electronic equipment and storage medium
CN113297939B (en) Obstacle detection method, obstacle detection system, terminal device and storage medium
CN113033363A (en) Vehicle dense target detection method based on deep learning
CN110555425A (en) Video stream real-time pedestrian detection method
CN116052189A (en) Text recognition method, system and storage medium
CN115147795A (en) Bus station water-splashing prevention method, device, equipment and medium based on image recognition
CN105069410A (en) Unstructured road recognition method and device
CN116977484A (en) Image desensitizing method, device, electronic equipment and storage medium
CN114822044A (en) Driving safety early warning method and device based on tunnel
CN113435350A (en) Traffic marking detection method, device, equipment and medium
CN112861701A (en) Illegal parking identification method and device, electronic equipment and computer readable medium
CN112949595A (en) Improved pedestrian and vehicle safety distance detection algorithm based on YOLOv5
Gizatullin et al. Automatic car license plate detection based on the image weight model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination