CN116612454A - Vehicle image processing method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN116612454A (application number CN202310521048.6A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- type
- image
- information
- specific
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The embodiment of the application provides a vehicle image processing method, a device, computer equipment and a storage medium, relating to the technical field of computers. The vehicle image processing method comprises the following steps: acquiring a traffic image to be detected; performing vehicle region detection on the traffic image to be detected to obtain a vehicle region image corresponding to the traffic image to be detected; performing specific-vehicle type detection on the vehicle region image to obtain the type information of the specific vehicle in the traffic image to be detected; and acquiring the position information of the specific vehicle, and sending processing information to vehicles according to the type information and the position information of the specific vehicle. The method solves the technical problems that existing detection of special-attribute vehicles has low accuracy and cannot support path planning, and achieves the technical effects of improving the accuracy of special-attribute vehicle detection and facilitating subsequent path planning based on the special-attribute vehicle information.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a vehicle image processing method, apparatus, computer device, and storage medium.
Background
With the continuous development of science and technology, intelligent traffic technologies such as automatic driving, assisted driving and intelligent navigation have been increasingly applied in users' daily lives. In these applications, vehicles with specific attributes, such as police cars, fire trucks, engineering rescue vehicles and ambulances, need the road cleared for them when executing emergency tasks, so identifying vehicles with these specific attributes is particularly important.
At present, the related art may employ a target detection model based on deep learning to detect vehicles: a data set is usually first annotated manually, and the target detection model is trained on the annotated data set. However, such data sets lack annotations for vehicles with specific attributes, so the trained target detection model detects these vehicles with poor accuracy, and subsequent path planning based on the specific-attribute vehicles cannot be carried out.
Disclosure of Invention
The embodiment of the application provides a vehicle image processing method, a device, computer equipment and a storage medium.
In a first aspect of an embodiment of the present application, there is provided a vehicle image processing method, including:
Acquiring a traffic image to be detected;
performing vehicle region detection on the traffic image to be detected to obtain a vehicle region image corresponding to the traffic image to be detected;
performing specific-vehicle type detection on the vehicle region image to obtain type information of a specific vehicle in the traffic image to be detected;
and acquiring position information of the specific vehicle, and sending processing information to vehicles according to the type information of the specific vehicle and the position information of the specific vehicle.
In an optional embodiment of the present application, performing specific-vehicle type detection on the vehicle region image to obtain the type information of the specific vehicle in the traffic image to be detected includes:
performing feature extraction on the vehicle region image through a trained feature extraction network to obtain unknown-type vehicle features;
comparing the unknown-type vehicle features with a plurality of pre-stored specific-type vehicle features to obtain the type information of the specific vehicle in the traffic image to be detected, wherein the specific-type vehicle features are obtained by performing feature extraction on specific-vehicle region images through the feature extraction network.
In an optional embodiment of the present application, comparing the unknown-type vehicle features with a plurality of pre-stored specific-type vehicle features to obtain the type information of the specific vehicle in the traffic image to be detected includes:
calculating, based on the unknown-type vehicle features and the plurality of specific-type vehicle features, a distance between the unknown-type vehicle and each specific-type vehicle;
and searching, among the specific-type vehicles, for the vehicle whose distance is minimal and smaller than a preset threshold, and determining the type corresponding to that vehicle as the type information of the specific vehicle in the traffic image to be detected.
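The minimum-distance lookup described above can be sketched as follows (a minimal illustration only; the Euclidean metric, the type names and the threshold value are assumptions, not fixed by the application):

```python
import math

def match_vehicle_type(unknown_feature, feature_library, threshold=0.8):
    """Return the specific-vehicle type whose stored feature is closest to
    `unknown_feature`, provided the minimal distance is below `threshold`;
    otherwise return None (no specific vehicle recognized).

    `feature_library` maps a type name to its pre-stored feature vector.
    """
    best_type, best_dist = None, float("inf")
    for vehicle_type, stored_feature in feature_library.items():
        # Euclidean distance between the two feature vectors.
        dist = math.dist(unknown_feature, stored_feature)
        if dist < best_dist:
            best_type, best_dist = vehicle_type, dist
    return best_type if best_dist < threshold else None
```

The threshold prevents an ordinary vehicle from being forced into the nearest specific type: if no stored feature is close enough, the function reports no match.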
In an optional embodiment of the present application, performing vehicle region detection on the traffic image to be detected to obtain the vehicle region image corresponding to the traffic image to be detected includes:
inputting the traffic image to be detected into a region detection network, and determining N anchor frames in the traffic image to be detected, wherein N is an integer greater than or equal to 1;
performing feature extraction on the region corresponding to each anchor frame to obtain the position offset and the size offset corresponding to each anchor frame in the traffic image to be detected;
determining a preset frame corresponding to each anchor frame based on the position of the anchor frame, the size of the anchor frame, and the position offset and the size offset corresponding to the anchor frame;
and filtering out invalid preset frames among the preset frames corresponding to the N anchor frames, and obtaining the vehicle region image based on the regions corresponding to the remaining frames.
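The anchor-frame decoding and filtering steps above can be sketched as follows (a minimal illustration; the additive-center/multiplicative-size offset encoding and the validity rule are common conventions assumed here, since the application does not fix the exact formulas):

```python
def decode_anchor(anchor, pos_offset, size_offset):
    """Apply a position offset and a size offset to an anchor frame.

    `anchor` is (cx, cy, w, h): center position and size of the anchor.
    The encoding below (center shifted in units of the anchor size, size
    scaled multiplicatively) is an assumption, not taken from the patent.
    """
    cx, cy, w, h = anchor
    dx, dy = pos_offset
    sw, sh = size_offset
    return (cx + dx * w, cy + dy * h, w * sw, h * sh)

def filter_valid(boxes, image_w, image_h, min_size=1.0):
    """Drop 'invalid' preset frames: degenerate boxes and boxes whose
    center falls outside the image (one plausible validity rule)."""
    kept = []
    for cx, cy, w, h in boxes:
        if w >= min_size and h >= min_size and 0 <= cx < image_w and 0 <= cy < image_h:
            kept.append((cx, cy, w, h))
    return kept
```

In practice the filtering stage would also apply non-maximum suppression to merge overlapping frames; the sketch keeps only the geometric validity check.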
In an alternative embodiment of the present application, the training process of the feature extraction network includes:
acquiring historical traffic images, wherein vehicle type information is annotated in the historical traffic images;
inputting the historical traffic images into an initial classification network for classification processing to obtain a predicted vehicle type, wherein the initial classification network comprises a feature extraction layer and a classification layer;
calculating a loss function between the predicted vehicle type and the annotated vehicle type information, and iteratively training the initial classification network with an iterative algorithm so as to minimize the loss function, thereby obtaining a classification network;
and removing the classification layer from the network structure of the classification network to obtain the feature extraction network.
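The head-removal step above can be sketched as follows (a deliberately tiny, framework-free illustration; in practice the feature extraction stage would be a trained convolutional network, and all names here are assumptions):

```python
class ClassificationNetwork:
    """Toy stand-in for a trained classification network: a feature
    extraction stage followed by a classification layer."""

    def __init__(self, feature_layers, classification_layer):
        self.feature_layers = feature_layers          # list of callables
        self.classification_layer = classification_layer

    def extract_features(self, image):
        # Run only the feature layers; each layer feeds the next.
        x = image
        for layer in self.feature_layers:
            x = layer(x)
        return x

    def classify(self, image):
        return self.classification_layer(self.extract_features(image))

def strip_classifier(net):
    """Return just the feature extractor: the trained network with its
    classification layer removed, as described in the training process."""
    return net.extract_features
```

The point of the construction is that the classification task supervises the feature layers during training, but only those layers are kept afterwards, so the resulting extractor produces comparable feature vectors for the feature-library matching.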
In an alternative embodiment of the present application, sending processing information to vehicles according to the type information of the specific vehicle and the position information of the specific vehicle includes:
when the type information of the specific vehicle indicates a first-type vehicle, determining lane information of the first-type vehicle on the road according to the position information of the first-type vehicle;
sending the lane information and prompt information to other vehicles on the road so as to prompt the other vehicles to give way to the first-type vehicle according to the lane information, wherein the other vehicles are vehicles on the road within a preset area range centered on the first-type vehicle;
and monitoring road condition information of the road in real time, and pushing lane recommendation information to the first-type vehicle according to the road condition information.
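The selection of "other vehicles within a preset area range" can be sketched as follows (a minimal illustration using a circular range; the radius value and the Euclidean road-plane distance are assumptions):

```python
import math

def vehicles_to_notify(special_pos, vehicle_positions, radius=200.0):
    """Return the ids of vehicles lying within `radius` meters of the
    first-type (special) vehicle; these are the vehicles that receive
    the lane information and prompt information."""
    return [
        vid
        for vid, pos in vehicle_positions.items()
        if math.dist(special_pos, pos) <= radius
    ]
```

A real deployment would draw the candidate positions from vehicle-to-infrastructure reports or map matching; the sketch only shows the range test itself.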
In an alternative embodiment of the present application, sending processing information to vehicles according to the type information of the specific vehicle and the position information of the specific vehicle includes:
when the type information of the specific vehicle indicates a second-type vehicle, detecting an object to be processed corresponding to the second-type vehicle and the position information of the object to be processed;
and monitoring road condition information of the road in real time, and sending the position information of the object to be processed to the second-type vehicle according to the road condition information and the position information of the object to be processed.
In a second aspect of the embodiment of the present application, there is provided a vehicle image processing apparatus including:
the acquisition module is used for acquiring the traffic image to be detected;
the region detection module is used for performing vehicle region detection on the traffic image to be detected to obtain a vehicle region image corresponding to the traffic image to be detected;
the type detection module is used for performing specific-vehicle type detection on the vehicle region image to obtain the type information of the specific vehicle in the traffic image to be detected;
and the sending module is used for acquiring the position information of the specific vehicle and sending processing information to vehicles according to the type information of the specific vehicle and the position information of the specific vehicle.
In a third aspect of the embodiment of the present application, there is provided a computer device, comprising a memory and a processor, the memory storing a computer program, and the processor implementing the steps of any of the methods described above when executing the computer program.
In a fourth aspect of embodiments of the present application, there is provided a computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the method of any of the above.
According to the vehicle image processing method provided by the embodiment of the application, a traffic image to be detected is acquired; vehicle region detection is performed on it to obtain a corresponding vehicle region image; specific-vehicle type detection is performed on the vehicle region image to obtain the type information of the specific vehicle in the traffic image to be detected; the position information of the specific vehicle is then acquired, and processing information is sent to vehicles according to the type information and position information of the specific vehicle. Compared with the prior art, the technical scheme of the application has the following advantages. On the one hand, after the traffic image to be detected is acquired, performing vehicle region detection on it allows the corresponding vehicle region image to be determined accurately, providing guiding data for the subsequent determination of the type information of the specific vehicle. On the other hand, performing specific-vehicle type detection on the vehicle region image extracts vehicle type features at a finer granularity, so that the type information in the vehicle region image is identified based on more detailed features, which effectively improves the identification accuracy for specific vehicle types. Furthermore, subsequent path planning can be performed according to the type information and position information of the specific vehicle, so that processing information, such as a prompt to give way, can be sent to different vehicles to clear the traffic.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic diagram of a computer device according to an embodiment of the present application;
FIG. 2 is a flow chart of a vehicle image processing method according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for obtaining type information of a specific vehicle in a traffic image to be detected according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for training a feature extraction network according to one embodiment of the application;
FIG. 5 is a schematic view of a vehicle image processing apparatus according to an embodiment of the present application;
Detailed Description
In carrying out the present application, the inventors have found that the accuracy of detecting vehicles of a particular attribute is currently poor.
In view of the above problems, in an embodiment of the present application, a vehicle image processing method is provided to improve accuracy of detecting a specific type of vehicle.
The scheme in the embodiments of the present application can be implemented in various computer languages, for example the object-oriented programming language Java, the interpreted scripting language JavaScript, and the like.
In order to make the technical solutions and advantages of the embodiments of the present application clearer, exemplary embodiments of the present application are described in detail below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some embodiments of the present application, not an exhaustive list of all embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with each other.
The application environment of the vehicle image processing method provided by the embodiment of the application is briefly described below:
Fig. 1 is a schematic structural diagram of an exemplary computer device according to an embodiment of the present application. The computer device may be a terminal. As shown in fig. 1, the computer device includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium may be, for example, a magnetic disk; it stores files (which may be files to be processed or processed files), an operating system, a computer program and the like. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program is executed by the processor to implement a vehicle image processing method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, or keys, a trackball or a touchpad arranged on the shell of the computer device, or an external keyboard, touchpad, mouse or the like.
Optionally, the terminal may be a terminal device in various AI application scenarios. For example, the terminal may be a notebook computer, a tablet computer, a desktop computer, a vehicle-mounted terminal, an intelligent voice interaction device, an intelligent home appliance, a mobile device, an aircraft, etc., and the mobile device may be various types of terminals such as a smart phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device, etc., which are not particularly limited in the embodiments of the present application.
The vehicle image processing scheme provided by the embodiment of the application can be applied to common automatic driving, vehicle navigation, map data acquisition, road data acquisition, intelligent traffic, assisted driving and similar scenarios. In these application scenarios, a real road image generally needs to be collected and then analyzed to obtain the type information of specific vehicles, and subsequent operations are performed based on that information, such as planning a travel route, controlling the automatic driving of a vehicle, or generating and sending prompt information to the vehicle.
Referring to fig. 2, the following embodiment takes the above computer device as the execution subject and applies the vehicle image processing method provided by the embodiment of the present application to it, using as an example the detection of the type information of a specific vehicle in a traffic image to be detected and the sending of processing information to the vehicle. The vehicle image processing method provided by the embodiment of the application comprises the following steps 201 to 204:
Step 201, acquiring a traffic image to be detected.
The traffic image to be detected may be an image obtained by photographing a traffic panorama containing a specific vehicle. It may include specific-type vehicles, common-type vehicles and background information. A specific-type vehicle is a special-purpose vehicle for executing emergency tasks, for example a police car, fire truck, engineering rescue vehicle, supervision vehicle or ambulance; a common-type vehicle is a vehicle with a normal passenger-carrying function, for example an ordinary car or bus; and the background information is the image information in the traffic image to be detected other than the specific-type and common-type vehicles, for example roads, poles, buildings, sky, ground, trees, etc.
The traffic image to be detected may be photographed in various different scenes, for example, images of a picture containing a specific-type vehicle taken on different roads, in different weather, or from different angles.
In the embodiment of the application, an image acquisition device may be called to capture a picture containing a specific-type vehicle so as to obtain the traffic image to be detected; the traffic image to be detected may also be acquired through a cloud, obtained from a database or a blockchain, or imported from an external device.
In one possible implementation, the image acquisition device may be a video camera or a still camera, or a radar device such as a lidar or millimeter-wave radar; it may also be an image acquisition device located in a vehicle, such as a driving recorder, or a camera mounted on the windshield with its lens oriented in the vehicle's driving direction.
It should be noted that, the traffic image to be detected may be in an image sequence format, a three-dimensional point cloud image format, or a video image format, which is not limited in the embodiment of the present application.
Step 202, performing vehicle region detection on the traffic image to be detected to obtain a vehicle region image corresponding to the traffic image to be detected.
After obtaining the traffic image to be detected, the computer device can perform vehicle region detection on it through a preset feature extraction rule to obtain the corresponding vehicle region image. The vehicle region image may include target regions of different types of vehicles, where these target regions are image regions containing specific-type vehicles and image regions containing common-type vehicles. A target region may be rectangular, circular, triangular, etc.
Optionally, the feature extraction rule refers to a feature extraction policy preset for an image to be identified according to an actual application scene, and the feature extraction policy may be a trained region detection model, a general feature extraction algorithm, or the like.
As one implementation, feature extraction processing can be performed on the traffic image to be detected through a region detection model to obtain the target regions of different types of vehicles in the traffic image to be detected. The region detection model is a network structure model that has acquired, through training on sample data, the capability to extract features of different vehicle types. It is a neural network model whose input is the traffic image to be detected and whose output is the target regions of the different types of vehicles in that image, i.e., the vehicle region image; it has the capability to perform image recognition on the traffic image to be detected and to predict the vehicle region image. The region detection model may comprise a multi-layer network structure, where each layer processes the data input to it differently and passes its output to the next network layer, until the last network layer finishes processing and the target regions of the different types of vehicles in the traffic image to be detected are obtained.
It should be noted that, the above-mentioned implementation manner of performing the vehicle region detection on the traffic image to be detected to obtain the vehicle region image corresponding to the traffic image to be detected is merely an example, and the embodiment of the present application is not limited thereto.
In order to reduce the amount of calculation in the image processing process, after the computer device obtains the target regions corresponding to different types of vehicles in the traffic image to be detected, it can directly crop the target regions to obtain the vehicle region images.
And 203, detecting the specific vehicle type of the vehicle area image to obtain the type information of the specific vehicle in the traffic image to be detected.
The vehicle region image is a vehicle image of unknown type; it may be an image containing a specific-type vehicle or an image containing a common-type vehicle. There may be one or more vehicle region images.
After the vehicle region image is acquired, the vehicle features in the vehicle region image can be extracted, and a specific-type vehicle feature library can be obtained, in which the features of each specific vehicle type are stored; for example, the library may include ambulance features, fire truck features, engineering truck features, supervision vehicle features and the like. The vehicle features extracted from the vehicle region image are compared one by one with each pre-stored specific-type vehicle feature, the similarity between the vehicle features and the different specific-type vehicle features can be calculated, and the specific type with the largest similarity is taken as the type information of the specific vehicle in the traffic image to be detected.
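The similarity comparison described above can be sketched as follows (a minimal illustration; cosine similarity is one common choice and is an assumption here, as are the type names):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar_type(vehicle_feature, feature_library):
    """Compare the extracted feature against every pre-stored specific-type
    feature and return the type with the largest similarity."""
    return max(
        feature_library,
        key=lambda t: cosine_similarity(vehicle_feature, feature_library[t]),
    )
```

Unlike the minimum-distance variant with a threshold, this argmax form always returns some type, so a rejection rule would be layered on top in practice.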
The specific type vehicle feature library can be flexibly configured according to the image feature information of the specific type vehicle in the actual application scene, and is constructed by summarizing and arranging the features of different specific vehicle types, element forms, structures and the like.
Step 204, acquiring the position information of the specific vehicle, and transmitting the processing information to the vehicle according to the type information of the specific vehicle and the position information of the specific vehicle.
The position information of the specific vehicle refers to position information of the specific vehicle on an actual road. Specifically, the position information of the specific vehicle in the traffic image to be detected can be acquired first, and then the position information of the specific vehicle in the traffic image to be detected is subjected to coordinate conversion processing according to the mapping relation between the image coordinate system and the world coordinate system, so that the position information of the specific vehicle in the actual road is obtained.
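One common way to realize the described image-to-world coordinate conversion for points on the road plane is a 3x3 homography. The sketch below assumes such a mapping; the calibration values in `H` are hypothetical and would in practice come from camera calibration.

```python
def image_to_world(u, v, H):
    """Map a pixel (u, v) to road-plane coordinates via a 3x3 homography H.

    H is given row-major; the result is dehomogenized by the third row.
    """
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# Hypothetical calibration: 0.05 m per pixel, origin shifted by 10 m
H = [[0.05, 0.0, 10.0],
     [0.0, 0.05, 0.0],
     [0.0, 0.0, 1.0]]
print(image_to_world(400, 200, H))  # (30.0, 10.0)
```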
After the position information of the specific vehicle in the actual road and the type information of the specific vehicle are determined, the processing requirements of different specific vehicles can be acquired, and the processing information is generated and sent to other corresponding vehicles on the road according to the processing requirements of different specific vehicles. The other vehicle may be a vehicle having an association relationship with a specific vehicle, and may be, for example, a vehicle within a predetermined area centered on the specific vehicle.
The processing requirement may include executing an urgent task and requiring other vehicles to give way, or cleaning an object to be processed and acquiring the position information of the object to be processed.
According to the vehicle image processing method provided by the embodiment of the application, the traffic image to be detected is acquired, vehicle area detection is performed on it to obtain the corresponding vehicle area image, specific vehicle type detection is performed on the vehicle area image to obtain the type information of the specific vehicle in the traffic image to be detected, then the position information of the specific vehicle is acquired, and the processing information is sent to the vehicle according to the type information and the position information of the specific vehicle. Compared with the prior art, the technical scheme of the application has two advantages. On the one hand, after the traffic image to be detected is acquired, the vehicle area image corresponding to the traffic image to be detected can be accurately determined by performing vehicle area detection, which provides guiding information for subsequently determining the type information of the specific vehicle. On the other hand, by performing specific vehicle type detection on the vehicle area image, the vehicle type features are extracted at a finer granularity, so that the type information in the vehicle area image is identified based on more detailed features, which can effectively improve the identification accuracy of the specific vehicle type. Further, path planning processing can be performed according to the type information and the position information of the specific vehicle, so as to send processing information to different vehicles, such as a prompt to give way and evacuate traffic.
In an alternative embodiment of the present application, referring to fig. 3, the step 203 of detecting a specific vehicle type of the vehicle area image to obtain type information of a specific vehicle in the traffic image to be detected includes the following steps:
and 301, extracting features of the vehicle region image through a trained feature extraction network to obtain unknown vehicle features.
The feature extraction network is a network structure model with feature extraction capability learned by training sample data. The feature extraction network is a neural network model which is input into a vehicle region image, output into unknown type vehicle features, has the capability of extracting features of the vehicle region image, and can extract the vehicle features. The feature extraction network may include a multi-layer network structure, where the network structures of different layers perform different processing on the data input thereto, and transmit the output result thereof to the next network layer until the last network layer performs processing to obtain the unknown type vehicle feature.
Optionally, the feature extraction network may include a plurality of convolution layers, and the feature extraction processing is performed on the vehicle area image through the convolution layers in the feature extraction network, so as to obtain the unknown type vehicle features. The convolution layers are used for extracting the vehicle features in the vehicle region image. The unknown type vehicle feature refers to a vehicle feature for which the type information of the vehicle is unknown, and may be a specific vehicle feature or a common vehicle feature.
Step 302, comparing the unknown type vehicle features with a plurality of pre-stored specific type vehicle features to obtain the type information of the specific vehicle in the traffic image to be detected, wherein the specific type vehicle features are obtained by performing feature extraction on specific vehicle area images through the feature extraction network.
The plurality of pre-stored specific type vehicle features may be stored in a specific type vehicle feature library. They may be obtained by acquiring specific vehicle region images of known specific vehicle types, performing feature extraction on these images through the feature extraction network to obtain the specific type vehicle features, and then storing each specific type vehicle feature in association with the corresponding specific vehicle type, that is, each specific vehicle type corresponds to its specific type vehicle feature one by one. The pre-stored plurality of specific type vehicle features can be flexibly configured according to the image feature information of specific type vehicles in the actual application scene.
After the unknown type vehicle features are determined, the unknown type vehicle features can be compared with a plurality of pre-stored specific type vehicle features, the matching degree between the unknown type vehicle features and each specific type vehicle feature in the plurality of specific type vehicle features is judged, the specific type vehicle feature with the highest matching degree is searched, and the corresponding type information is determined as the type information of the specific vehicle in the traffic image to be detected.
For example, after the unknown type vehicle feature is obtained, it is compared with the pre-stored a-type vehicle feature, b-type vehicle feature and c-type vehicle feature, and the matching degree between the unknown type vehicle feature and each of them is calculated. Assuming that the matching degree with the a-type vehicle feature is the highest, the a-type is determined as the type information of the specific vehicle in the traffic image to be detected.
In the embodiment, the vehicle region image is subjected to feature extraction through the trained feature extraction network to obtain the unknown type vehicle features, and the unknown type vehicle features are compared with the prestored plurality of specific type vehicle features, so that the prestored plurality of specific type vehicle features can be used as reference information, and the type information of the specific vehicle in the traffic image to be detected can be obtained more accurately.
In an optional embodiment of the present application, the step 302 of comparing the unknown type vehicle feature with a plurality of pre-stored specific type vehicle features to obtain the type information of the specific vehicle in the traffic image to be detected includes the following steps:
The method comprises the steps of: calculating the distance between the unknown type vehicle and each specific type vehicle based on the unknown type vehicle features and the plurality of specific type vehicle features; searching, from the specific type vehicles, for the vehicle whose distance is smaller than a preset threshold and is the minimum; and determining the type corresponding to that vehicle as the type information of the specific vehicle in the traffic image to be detected.
Specifically, based on the unknown type vehicle feature and each specific type vehicle feature in the plurality of specific type vehicle features, the distance between each specific type vehicle and the unknown type vehicle is calculated; the distance may be, for example, the Euclidean distance or the cosine distance between the two features. After each distance is determined, the vehicle features whose distance is smaller than a preset threshold are searched from all the specific type vehicles corresponding to the specific type vehicle features and used as intermediate vehicle features. Then the intermediate vehicle feature with the minimum distance is searched from the intermediate vehicle features, and the vehicle type corresponding to that intermediate vehicle feature is used as the type information of the specific vehicle in the traffic image to be detected.
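The threshold-then-minimum matching just described can be sketched as follows. This is an illustrative sketch only: the feature vectors, library entries, and threshold value are hypothetical, and Euclidean distance is used as one of the distance measures the text allows.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_specific_type(unknown, feature_library, threshold):
    # Keep only the "intermediate" candidates whose distance is below
    # the preset threshold, then return the type with minimum distance.
    candidates = [(euclidean(unknown, feat), vtype)
                  for vtype, feat in feature_library.items()]
    candidates = [c for c in candidates if c[0] < threshold]
    if not candidates:
        return None  # no specific type is close enough
    return min(candidates)[1]

library = {"ambulance": [1.0, 0.0], "fire_truck": [0.0, 1.0]}
print(match_specific_type([0.9, 0.1], library, threshold=0.5))  # ambulance
print(match_specific_type([5.0, 5.0], library, threshold=0.5))  # None
```

Returning `None` when no distance is below the threshold reflects the case where the vehicle is a common type rather than any specific type.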
In the embodiment, by calculating the distance between the unknown type vehicle and each specific type vehicle, the vehicle with the distance smaller than the preset threshold and the smallest distance is searched from the specific type vehicles, and the characteristics of the unknown type vehicle and the characteristics of the specific type vehicle can be compared in a finer granularity, so that the type of the unknown type vehicle is determined based on more comprehensive information, and the type information of the specific vehicle in the traffic image to be detected is accurately obtained.
In an optional embodiment of the present application, the step 202 of detecting the vehicle area from the traffic image to be detected to obtain the vehicle area image corresponding to the traffic image to be detected includes the following steps:
inputting the traffic image to be detected into the area detection network, and determining N anchor frames in the traffic image to be detected, wherein N is an integer greater than or equal to 1; then, performing feature extraction on the area corresponding to each anchor frame to obtain the position offset and the size offset corresponding to each anchor frame in the traffic image to be detected; determining the preset frame corresponding to each anchor frame based on the position of the anchor frame, the size of the anchor frame, and the position offset and the size offset corresponding to the anchor frame; and finally, filtering out the invalid preset frames among the preset frames corresponding to the N anchor frames, and obtaining the vehicle region image based on the areas corresponding to the remaining anchor frames.
It should be noted that, the anchor frame is used for selecting a preset area in the input image of the model, for example, an area where the vehicle is located may be selected, and the anchor frame refers to a plurality of prior frames defined by a preset algorithm with an anchor point as a center, and the shape of the prior frames may be, for example, a rectangle, a triangle, a diamond, a circle, and the like.
The N anchor frames are obtained by training based on historical traffic images and the real class results of the historical traffic images, the historical traffic images comprising marked vehicle areas. When the shape of the prior frame is a rectangle, it may be a plurality of prior frames with different sizes and aspect ratios generated centering on each pixel, the sizes and aspect ratios being obtained by training based on the historical traffic images and their real class results. When the shape of the prior frame is a circle, it may be a plurality of prior frames with different radii generated centering on each pixel, the radii being obtained by training based on the historical traffic images and their real class results.
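For the rectangular case, generating prior frames with several sizes and aspect ratios around one pixel can be sketched as below. The specific sizes and ratios are hypothetical; in practice they would be learned or configured as the text describes.

```python
def anchors_at(cx, cy, sizes, aspect_ratios):
    # Rectangular prior frames centered on one pixel: each combination of
    # size and aspect ratio yields one (x1, y1, x2, y2) box whose area is
    # roughly size**2 and whose width/height ratio equals the aspect ratio.
    boxes = []
    for s in sizes:
        for r in aspect_ratios:
            w = s * r ** 0.5
            h = s / r ** 0.5
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

boxes = anchors_at(100, 100, sizes=[32, 64], aspect_ratios=[0.5, 1.0, 2.0])
print(len(boxes))  # 6 anchors for this pixel
```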
The area detection network is a neural network model whose input is the traffic image to be detected and whose output is the recognition result of the position offset and size offset corresponding to each anchor frame; it has the capability of performing vehicle area detection on the traffic image to be detected and can predict the position offset and the size offset corresponding to each anchor frame. The area detection network establishes the relation between the traffic image to be detected and the position offsets and size offsets of the anchor frames, and its model parameters are in a trained, optimal state.
The area detection network may include, but is not limited to, a convolution layer, a normalization layer, and an activation function, each of which may comprise one layer or multiple layers. The convolution layer is used for extracting edge features and texture features in the traffic image to be detected. The normalization layer is used for normalizing the image features obtained by the convolution layer; for example, the mean can be subtracted and the result divided by the standard deviation to obtain a distribution with zero mean and unit variance, which can prevent gradient explosion and gradient vanishing. The activation function may be a Sigmoid function, a Tanh function, or a ReLU function; for example, performing Sigmoid activation on the normalized feature map maps the result to between 0 and 1.
It is understood that the position offset may include an abscissa offset and an ordinate offset of a center point of the preset frame relative to the anchor frame, and the size offset may include a width offset and a height offset of the preset frame relative to the anchor frame.
After feature extraction is performed on the corresponding area of each anchor frame to obtain the position offset and the size offset corresponding to each anchor frame in the traffic image to be detected, a preset correction algorithm can be adopted to correct the anchor frame according to the position of the anchor frame, the size of the anchor frame and the position offset and the size offset corresponding to the anchor frame, so as to obtain the preset frame corresponding to the anchor frame.
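One widely used correction scheme that matches the description is the standard anchor-box decoding used by detectors such as Faster R-CNN; the sketch below assumes that convention (center shifted proportionally to anchor size, width/height scaled exponentially), which the patent does not mandate.

```python
import math

def correct_anchor(anchor, offsets):
    # anchor: (cx, cy, w, h); offsets: (dx, dy, dw, dh) predicted by the
    # area detection network. Position offsets shift the center relative
    # to the anchor size; size offsets scale width/height exponentially,
    # yielding the preset frame corresponding to the anchor.
    cx, cy, w, h = anchor
    dx, dy, dw, dh = offsets
    return (cx + dx * w, cy + dy * h, w * math.exp(dw), h * math.exp(dh))

print(correct_anchor((50, 50, 20, 10), (0.1, -0.2, 0.0, 0.0)))
# center moves to (52.0, 48.0); size is unchanged
```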
The remaining anchor frame corresponding area refers to the area selected by the anchor frame. An invalid preset frame refers to a preset frame having an excessively large deviation from the real preset frame, where the real preset frame refers to a preset frame that includes only the vehicle region. An invalid preset frame may be, for example, a preset frame that does not completely include the vehicle region, or a preset frame that includes not only the vehicle region but also the background region.
It will be appreciated that a large number of preset frames may be generated at the same target location during the vehicle region detection process, and these preset frames may overlap with each other. Here, a Non-Maximum Suppression (NMS) algorithm is used to remove the redundant invalid preset frames and determine the optimal target preset frame, that is, to determine the vehicle region image based on the areas selected by the remaining anchor frames. The non-maximum suppression algorithm may be, for example, the DIoU-NMS algorithm.
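A minimal sketch of classic IoU-based NMS is given below for illustration; the patent mentions DIoU-NMS as one option, which additionally penalizes center-point distance, but the greedy keep-or-suppress structure is the same. Boxes and scores here are hypothetical.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Greedily keep the highest-scoring box, suppress boxes overlapping it
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 and is suppressed
```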
According to the embodiment of the application, by inputting the traffic image to be detected into the area detection network, the position offset and the size offset corresponding to each anchor frame can be accurately determined, so that the preset frame corresponding to each anchor frame is accurately determined based on the position of the anchor frame, the size of the anchor frame, and the corresponding position offset and size offset, thereby improving the accuracy of determining the vehicle area image.
In an alternative embodiment of the present application, there is also provided a process for training a feature extraction network, as shown in fig. 4, the process including the steps of:
step 401, acquiring a historical traffic image; the historical traffic image is marked with vehicle type information.
Step 402, inputting the historical traffic image into an initial classification network for classification processing to obtain a predicted vehicle type; the initial classification network comprises a feature extraction layer and a classification layer.
Step 403, based on the loss function between the predicted vehicle type and the marked vehicle type information, performing iterative training on the initial classification network by adopting an iterative algorithm according to the minimization of the loss function to obtain the classification network.
And step 404, removing the classification layer structure from the network structure of the classification network to obtain a feature extraction network.
There may be one historical traffic image or a plurality of historical traffic images, and the area including the vehicle may be cut out from the collected road images and used as the historical traffic image. Each historical traffic image may include at least one vehicle, which may be a specific type of vehicle or a common type of vehicle. The historical traffic image is marked with vehicle type information; that is, the vehicles of specific types included in the historical traffic image are marked, such as an automatic cleaning vehicle, an automatic driving retail vehicle, an automatic patrol vehicle and the like, and the vehicles of common types are also marked. In this embodiment, since only the historical traffic images of the areas where the vehicles are located are extracted for training, the required data set is smaller, and the original data set that does not distinguish special vehicles does not need to be changed.
Alternatively, the initial classification network may be a network such as, but not limited to, VGG or ResNet. The initial classification network is a neural network model whose input is a historical traffic image and whose output is the predicted vehicle type; it has the capability of extracting features of the historical traffic image and can predict the vehicle type in the historical traffic image. The initial classification network may be the initial model of the iterative training process, that is, its model parameters are in an initial state, or it may be the model adjusted in a previous iteration of training, that is, its model parameters are in an intermediate state.
The initial classification network may include a feature extraction layer and a classification layer. The historical traffic image is input into the feature extraction layer for feature extraction to obtain a corresponding sample graph, and classification processing is then performed through the classification layer to obtain an output result, the output result being the predicted vehicle type. Then, based on the loss function between the predicted vehicle type and the marked vehicle type information, an iterative algorithm is adopted to iteratively train the initial classification network according to the minimization of the loss function, so as to obtain the classification network. In the process of inputting the historical traffic image into the feature extraction layer for feature extraction, a series of trainable weight values can be adopted to multiply and add the features of the upper layer to obtain the sample graph, and the output result is then obtained through the classification layer.
Optionally, the updating of the parameters in the initial classification network may be updating of matrix parameters such as a weight matrix and a bias matrix in the initial classification network. Wherein the weight matrix and the bias matrix include, but are not limited to, matrix parameters in a feature extraction layer and a classification layer in the initial classification network.
After training to obtain a classification network, the classification layer structure can be removed from the network structure of the classification network, and the feature extraction layer is reserved, so that the feature extraction network is obtained. The feature extraction network is used for extracting features to obtain feature extraction results.
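The final stage of steps 401 to 404, removing the classification layer and keeping only the feature extraction layer, can be illustrated with the toy structure below. The "layers" here are simple callables standing in for real network layers; nothing about this class is from the patent beyond the two-layer split.

```python
class ClassificationNetwork:
    """Toy stand-in for the trained classification network."""

    def __init__(self, feature_layer, classification_layer):
        self.feature_layer = feature_layer                # kept after step 404
        self.classification_layer = classification_layer  # removed in step 404

    def predict(self, image):
        # Full classification path: features first, then classification
        return self.classification_layer(self.feature_layer(image))

    def to_feature_extractor(self):
        # Step 404: drop the classification layer; what remains is the
        # feature extraction network used for step 301.
        return self.feature_layer

# Toy layers: "features" are a sum, "classification" thresholds the sum
net = ClassificationNetwork(
    feature_layer=lambda img: sum(img),
    classification_layer=lambda feat: "specific" if feat > 1 else "common",
)
extract = net.to_feature_extractor()
print(net.predict([1, 1]), extract([1, 1]))  # specific 2
```

In a real framework this corresponds to, for example, truncating the model before its final fully connected layer.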
In this embodiment, the classification network is trained first and the classification layer is then removed to obtain the feature extraction network, so that the trained feature extraction network performs better; further, more accurate feature extraction on the vehicle region image can be realized, and the determined type information of the specific vehicle is more accurate.
In an alternative embodiment of the present application, a specific implementation of the above step 204, that is, sending the processing information to the vehicle according to the type information and the position information of the specific vehicle, includes the following steps:
when the type information of the specific vehicle is the first type vehicle, determining the lane information of the first type vehicle on the road according to the position information of the first type vehicle, and sending the lane information and prompt information to other vehicles on the road so as to prompt the other vehicles to let the first type vehicle pass according to the lane information, wherein the other vehicles are vehicles within a preset area range on the road centering on the first type vehicle; and monitoring the road condition information of the road in real time, and sending lane pushing information to the first type vehicle according to the road condition information.
The first type of vehicle refers to a specific vehicle that needs to perform an emergency task, and also refers to a specific vehicle that needs priority, and may be, for example, an ambulance, police car, fire truck, engineering truck, or the like. The position information of the first type vehicle refers to a specific coordinate position of the first type vehicle in the road, wherein the road may include a plurality of lanes, for example, a first lane, a second lane, a third lane, and the like, and the lane information refers to a lane in which the first type vehicle is located in the road.
Specifically, when the type information of the specific vehicle is the first type vehicle, the position information of the first type vehicle can be acquired, the position information of the first type vehicle in the traffic image to be detected is firstly determined, then the position information of the first type vehicle in the traffic image to be detected is converted into the position information of the first type vehicle on the road according to the mapping relation between the image coordinate system and the world coordinate system, and the lane information of the first type vehicle on the road, namely the lane position, is determined according to the position information of the first type vehicle on the road.
After the lane information of the first type vehicle on the road is determined, other vehicles with an association relationship with the first type vehicle can be obtained, wherein the association relationship refers to vehicles affecting the first type vehicle to travel on the road, for example, the vehicles can be in a preset area range with the first type vehicle as a center. The computer device then sends lane information and prompt information for the first type of vehicle to the other vehicles to prompt the other vehicles to let the first type of vehicle pass according to the lane information. The computer equipment can also monitor road condition information of the road in real time, and then send lane pushing information to the first type of vehicle according to the road condition information, so that the first type of vehicle can drive according to the lane pushing information. The road condition information may include accident detection information and road congestion information.
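Selecting the other vehicles within the preset area centered on the first type vehicle, and building the lane plus prompt message, can be sketched as follows. The circular range, the radius, and the message fields are all hypothetical choices for illustration.

```python
import math

def vehicles_in_range(vehicle_positions, center, radius):
    # "Other vehicles" are those within a preset area centered on the
    # first type vehicle; here the area is a circle of `radius` meters.
    cx, cy = center
    return [vid for vid, (x, y) in vehicle_positions.items()
            if math.hypot(x - cx, y - cy) <= radius]

def build_give_way_message(lane, vehicle_type):
    # Hypothetical format for the lane information + prompt information
    return {"lane": lane,
            "prompt": f"give way to {vehicle_type} in lane {lane}"}

positions = {"car_a": (5.0, 0.0), "car_b": (80.0, 60.0)}
targets = vehicles_in_range(positions, center=(0.0, 0.0), radius=50.0)
print(targets)  # ['car_a']  (car_b is 100 m away)
print(build_give_way_message(2, "ambulance"))
```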
In this embodiment, when the type information of the specific vehicle is the first type vehicle, the corresponding message is sent to other vehicles and the first type vehicle, so that the other vehicles can be prompted to let go for the first type vehicle, and an optimal route is planned for the first type vehicle according to the accident detection condition and the congestion condition of the whole road section, so as to avoid delaying an emergency task.
In an alternative embodiment of the present application, another specific implementation of the above step 204, that is, sending the processing information to the vehicle according to the type information and the position information of the specific vehicle, includes the following steps:
when the type information of the specific vehicle is the second type vehicle, detecting the object to be processed corresponding to the second type vehicle and the position information of the object to be processed, monitoring the road condition information of the road in real time, and sending the position information of the object to be processed to the second type vehicle according to the road condition information and the position information of the object to be processed.
It should be noted that, the second type vehicle refers to a specific vehicle that needs to perform a special task, such as a sprinkler, a garbage disposal vehicle, etc., the object to be processed corresponding to the second type vehicle refers to an object that has an association relationship with the second type vehicle, for example, when the second type vehicle is a garbage disposal vehicle, the object to be processed corresponding to the second type vehicle may refer to garbage, and the position information of the object to be processed refers to the position information of the object to be processed on the road.
Some objects may be occluded by vehicles and thus cannot be seen directly. Therefore, when the type information of the specific vehicle is the second type vehicle (for example, a garbage cleaning vehicle), the object to be processed (for example, garbage) in the traffic image to be detected can be detected through an object detection model, so as to obtain the position information of the object to be processed in the traffic image to be detected, and this position information is then converted into the position information of the object to be processed on the road according to the mapping relation between the image coordinate system and the world coordinate system. The object detection model may be a pre-trained network model for detecting objects.
After the position information of the object to be processed (for example, garbage) is determined, road condition information of a road is monitored in real time, and the position information of the object to be processed is sent to a second type vehicle (for example, a garbage cleaning vehicle) according to the road condition information and the position information of the object to be processed, so that the second type vehicle (for example, the garbage cleaning vehicle) can go to a destination for processing the object to be processed according to the position information of the object to be processed.
In this embodiment, when the type information of the specific vehicle is the second type vehicle, the position information of the object to be processed is detected and sent to the second type vehicle, so that the second type vehicle reaches the destination as soon as possible, time is saved, and the efficiency of processing the object to be processed and executing tasks is improved.
It should be understood that, although the steps in the flowchart are shown in sequence as indicated by the arrows, the steps are not necessarily performed in that sequence. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times; the order in which these sub-steps or stages are performed is not necessarily sequential, and they may be performed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
Referring to fig. 5, an embodiment of the present application provides a vehicle image processing apparatus 500, including:
an acquisition module 510, configured to acquire a traffic image to be detected;
the area detection module 520 is configured to perform vehicle area detection on the traffic image to be detected, so as to obtain the vehicle area image corresponding to the traffic image to be detected;
the type detection module 530 is configured to perform specific vehicle type detection on the vehicle area image, so as to obtain the type information of the specific vehicle in the traffic image to be detected;
a transmitting module 540, configured to acquire the position information of the specific vehicle, and transmit the processing information to the vehicle according to the type information of the specific vehicle and the position information of the specific vehicle.
In an alternative embodiment of the present application, the type detection module 530 is specifically configured to: perform feature extraction on the vehicle region image through the trained feature extraction network to obtain the unknown type vehicle features;
and compare the unknown type vehicle features with a plurality of pre-stored specific type vehicle features to obtain the type information of the specific vehicle in the traffic image to be detected, wherein the specific type vehicle features are obtained by performing feature extraction on specific vehicle area images through the feature extraction network.
In an alternative embodiment of the present application, the type detection module 530 is further configured to: processing the unknown type vehicle features and the plurality of specific type vehicle features based on the vehicle images, calculating a distance between the unknown type vehicle and each specific type vehicle;
and searching for a vehicle with the minimum distance smaller than a preset threshold value from the vehicles of the specific type in the vehicle image processing, and determining the type corresponding to the vehicle in the vehicle image processing as the type information of the specific vehicle in the traffic image to be detected.
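The comparison step amounts to a nearest-neighbour search over the stored per-type features, accepting the closest type only if its distance beats the preset threshold. A sketch under assumptions: Euclidean distance and the dictionary layout are choices of this example, since the patent does not fix a metric or a storage format:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def match_vehicle_type(unknown_feat, type_feats, threshold):
    """Return the specific-vehicle type whose stored feature vector is
    closest to the unknown-type feature, provided that minimum distance
    is below the preset threshold; otherwise return None (no specific
    vehicle recognised)."""
    best_type, best_dist = None, float("inf")
    for vtype, feat in type_feats.items():
        d = dist(unknown_feat, feat)
        if d < best_dist:
            best_type, best_dist = vtype, d
    return best_type if best_dist < threshold else None
```

With stored features for, say, an ambulance and a fire engine, a query feature lying near the ambulance vector is labelled accordingly, while a feature farther than the threshold from every stored vector is rejected outright.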
In an alternative embodiment of the present application, the area detection module 520 is specifically configured to: input the traffic image to be detected into a region detection network, and determine N anchor frames in the traffic image to be detected, where N is an integer greater than or equal to 1;
perform feature extraction on the region corresponding to each anchor frame to obtain the position offset and size offset corresponding to each anchor frame in the traffic image to be detected;
determine a preset frame corresponding to each anchor frame based on the position of the anchor frame, the size of the anchor frame, and the position offset and size offset corresponding to the anchor frame;
and filter out invalid preset frames among the preset frames corresponding to the N anchor frames, and obtain the vehicle region image based on the regions corresponding to the remaining anchor frames.
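One common way to realise the offset-to-box step is the Faster R-CNN style parameterisation, where the position offset shifts the anchor centre proportionally to the anchor size and the size offset scales the anchor exponentially. Both that parameterisation and the confidence-based notion of an "invalid" frame are assumptions of this sketch; the patent does not specify formulas:

```python
from math import exp

def decode_anchor(anchor, offsets):
    """Turn one anchor frame (cx, cy, w, h) and its predicted offsets
    (dx, dy, dw, dh) into a preset frame, Faster R-CNN style."""
    cx, cy, w, h = anchor
    dx, dy, dw, dh = offsets
    return (cx + dx * w, cy + dy * h, w * exp(dw), h * exp(dh))

def filter_preset_frames(frames, scores, score_thresh=0.5):
    """Drop invalid preset frames; here 'invalid' is taken to mean a low
    confidence score, which is one plausible criterion."""
    return [f for f, s in zip(frames, scores) if s >= score_thresh]
```

Zero offsets leave the anchor unchanged, which is a quick sanity check on the decoding.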
In an alternative embodiment of the application, the device is further configured to:
acquiring a historical traffic image, wherein the historical traffic image is marked with vehicle type information;
inputting the historical traffic image into an initial classification network for classification to obtain a predicted vehicle type, wherein the initial classification network includes a feature extraction layer and a classification layer;
calculating a loss function between the predicted vehicle type and the marked vehicle type information, and iteratively training the initial classification network with an iterative algorithm that minimizes the loss function, so as to obtain a classification network;
and removing the classification layer from the network structure of the classification network to obtain the feature extraction network.
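Structurally, the training recipe above is: train feature layers plus a classification head end to end, then drop the head so the remaining layers emit features. A toy structural stand-in, where the two layers are placeholders rather than real neural-network layers and the loss-minimising training loop is elided:

```python
def feature_layer(x):
    """Stand-in for the trained feature extraction layers."""
    return [v * 2.0 for v in x]

def classification_layer(feats):
    """Stand-in for the classification head (argmax over class scores)."""
    return max(range(len(feats)), key=lambda i: feats[i])

def run(network, x):
    """Apply the network's layers in order."""
    for layer in network:
        x = layer(x)
    return x

# The trained classification network: feature layers + classification layer.
classification_network = [feature_layer, classification_layer]

# Removing the classification layer leaves the feature extraction network.
feature_extraction_network = classification_network[:-1]

run(classification_network, [0.1, 0.9])      # predicted class index
run(feature_extraction_network, [0.1, 0.9])  # feature vector
```

The point of the sketch is only the surgery at the end: the same trained stack serves as a classifier with its head and as a feature extractor without it.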
In an alternative embodiment of the present application, the sending module 540 is specifically configured to:
when the type information of the specific vehicle is a first-type vehicle, determine lane information of the first-type vehicle on the road according to the position information of the first-type vehicle;
send the lane information and prompt information to other vehicles on the road, so as to prompt the other vehicles to yield to the first-type vehicle according to the lane information, wherein the other vehicles are vehicles within a preset area range on the road centered on the first-type vehicle;
and monitor road condition information of the road in real time, and send lane recommendation information to the first-type vehicle according to the road condition information.
In an alternative embodiment of the present application, the sending module 540 is further configured to:
when the type information of the specific vehicle is a second-type vehicle, detect an object to be processed corresponding to the second-type vehicle and position information of the object to be processed;
and monitor road condition information of the road in real time, and send the position information of the object to be processed to the second-type vehicle according to the road condition information and the position information of the object to be processed.
For specific limitations of the vehicle image processing apparatus, reference may be made to the limitations of the vehicle image processing method above, which are not repeated here. The modules in the above vehicle image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor in the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 1. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data. The network interface of the computer device is used to communicate with an external terminal through a network connection. In other words, the computer device includes a memory storing a computer program and a processor that, when executing the computer program, implements the steps of any of the vehicle image processing methods described above.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, can implement any of the steps in the vehicle image processing method as above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (10)
1. A vehicle image processing method, characterized by comprising:
acquiring a traffic image to be detected;
detecting the vehicle area of the traffic image to be detected to obtain a vehicle area image corresponding to the traffic image to be detected;
detecting the specific vehicle type of the vehicle area image to obtain the type information of the specific vehicle in the traffic image to be detected;
and acquiring the position information of the specific vehicle, and sending processing information to the vehicle according to the type information of the specific vehicle and the position information of the specific vehicle.
2. The method according to claim 1, wherein performing specific vehicle type detection on the vehicle region image to obtain type information of a specific vehicle in a traffic image to be detected, comprises:
extracting features from the vehicle region image through a trained feature extraction network to obtain unknown-type vehicle features;
comparing the unknown-type vehicle features with a plurality of pre-stored specific-type vehicle features to obtain the type information of the specific vehicle in the traffic image to be detected, wherein the specific-type vehicle features are obtained by extracting features from specific vehicle area images through the feature extraction network.
3. The method according to claim 2, wherein comparing the unknown type of vehicle feature with a plurality of pre-stored specific type of vehicle features to obtain type information of the specific vehicle in the traffic image to be detected, comprises:
calculating a distance between the unknown type of vehicle and each specific type of vehicle based on the unknown type of vehicle features and a plurality of specific type of vehicle features;
and searching the specific type of vehicles for the vehicle whose distance is the minimum and smaller than a preset threshold value, and determining the type corresponding to that vehicle as the type information of the specific vehicle in the traffic image to be detected.
4. The method of claim 1, wherein the detecting the vehicle region from the traffic image to be detected to obtain the vehicle region image corresponding to the traffic image to be detected comprises:
inputting the traffic image to be detected into a region detection network, and determining N anchor frames in the traffic image to be detected, wherein N is an integer greater than or equal to 1;
extracting features from the region corresponding to each anchor frame to obtain the position offset and size offset corresponding to each anchor frame in the traffic image to be detected;
determining a preset frame corresponding to the anchor frame based on the position of the anchor frame, the size of the anchor frame, the position offset corresponding to the anchor frame and the size offset;
and filtering out invalid preset frames among the preset frames corresponding to the N anchor frames, and obtaining the vehicle region image based on the regions corresponding to the remaining anchor frames.
5. The method of claim 2, wherein the training process of the feature extraction network comprises:
acquiring a historical traffic image; the historical traffic image is marked with vehicle type information;
inputting the historical traffic image into an initial classification network for classification processing to obtain a predicted vehicle type; the initial classification network comprises a feature extraction layer and a classification layer;
calculating a loss function between the predicted vehicle type and the marked vehicle type information, and performing iterative training on the initial classification network by adopting an iterative algorithm that minimizes the loss function, so as to obtain a classification network;
and removing the classification layer structure from the network structure of the classification network to obtain the characteristic extraction network.
6. The method according to claim 1, wherein transmitting the processing information to the vehicle based on the type information of the specific vehicle and the position information includes:
when the type information of the specific vehicle is a first type vehicle, determining lane information of the first type vehicle on a road according to the position information of the first type vehicle;
sending the lane information and prompt information to other vehicles on the road, so as to prompt the other vehicles to yield to the first type of vehicle according to the lane information, wherein the other vehicles are vehicles within a preset area range on the road centered on the first type of vehicle;
and monitoring road condition information of the road in real time, and sending lane recommendation information to the first type of vehicle according to the road condition information.
7. The method according to claim 1, wherein transmitting the processing information to the vehicle based on the type information of the specific vehicle and the position information includes:
detecting an object to be processed corresponding to a second type of vehicle and position information of the object to be processed when the type information of the specific vehicle is the second type of vehicle;
and monitoring road condition information of the road in real time, and sending the position information of the object to be processed to the second type of vehicle according to the road condition information and the position information of the object to be processed.
8. A vehicle image processing apparatus characterized by comprising:
the acquisition module is used for acquiring the traffic image to be detected;
the area detection module is used for detecting the vehicle area of the traffic image to be detected to obtain a vehicle area image corresponding to the traffic image to be detected;
the type detection module is used for detecting the type of the specific vehicle in the vehicle area image to obtain the type information of the specific vehicle in the traffic image to be detected;
and the sending module is used for acquiring the position information of the specific vehicle and sending processing information to the vehicle according to the type information of the specific vehicle and the position information of the specific vehicle.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310521048.6A CN116612454A (en) | 2023-05-10 | 2023-05-10 | Vehicle image processing method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116612454A true CN116612454A (en) | 2023-08-18 |
Family
ID=87679251
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310521048.6A Pending CN116612454A (en) | 2023-05-10 | 2023-05-10 | Vehicle image processing method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116612454A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||