CN112287882A - Muck truck attribute identification method and device, electronic equipment and storage medium - Google Patents

Muck truck attribute identification method and device, electronic equipment and storage medium

Info

Publication number
CN112287882A
CN112287882A
Authority
CN
China
Prior art keywords
image
muck
processed
analysis
cargo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011298939.2A
Other languages
Chinese (zh)
Inventor
张翼
李玮
黄志龙
李辰
廖强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Jiahua Chain Cloud Technology Co ltd
Original Assignee
Chengdu Jiahua Chain Cloud Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Jiahua Chain Cloud Technology Co ltd filed Critical Chengdu Jiahua Chain Cloud Technology Co ltd
Priority to CN202011298939.2A priority Critical patent/CN112287882A/en
Publication of CN112287882A publication Critical patent/CN112287882A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63 Scene text, e.g. street names
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a muck truck attribute identification method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring an image to be processed, wherein the image to be processed contains a muck truck object; performing cover analysis on the image to be processed to obtain a cover classification result; if the cover classification result indicates that the muck truck object is not covered, performing cargo analysis on the image to be processed to obtain a cargo result; and if the cargo result indicates that the muck truck object is in a loaded state, determining that the muck truck object does not meet the road-use requirement. According to the embodiments of the application, by analyzing both the covering state and the loading state of the acquired image containing the muck truck, whether the muck truck has the attributes required for road use can be identified without installing additional hardware, which reduces cost.

Description

Muck truck attribute identification method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of image processing, and in particular to a muck truck attribute identification method and device, an electronic device, and a storage medium.
Background
At present, problems such as illegal transport and loose covering of muck trucks remain prominent across provinces and cities nationwide, and muck transport routes are not effectively regulated. Some muck trucks travel on approved routes, but their covers are not sealed tightly, material spillage is serious, road dust is heavy, and urban air quality is affected to a certain extent.
The existing method for identifying the transport state of a muck truck is mainly a sensor-based data-fusion algorithm: non-contact sensors are placed in the truck's hopper or on its top cover, and sensor data fusion is used to identify whether the top cover is effectively closed (covering state) and whether the truck carries cargo (cargo state).
The hardware for identifying the covering state mainly comprises: 1. two radar sensors, used to transmit electromagnetic-wave signals and determine the time difference between transmission and reception; 2. a microcontroller, used to determine, based on the time difference, whether the top covers on both sides of the muck truck are sealed. Therefore, every muck truck in a jurisdiction must be retrofitted with hardware such as radar sensors, a microcontroller, and a communication module, and after installation the hardware requires frequent maintenance and calibration, which is costly.
Disclosure of Invention
An object of the embodiments of the present application is to provide a muck truck attribute identification method and apparatus, an electronic device, and a storage medium, so as to solve the prior-art problem of high cost in identifying muck truck attributes.
In a first aspect, an embodiment of the present application provides a muck truck attribute identification method, comprising: acquiring an image to be processed, wherein the image to be processed contains a muck truck object; performing cover analysis on the image to be processed to obtain a cover classification result; if the cover classification result indicates that the muck truck object is not covered, performing cargo analysis on the image to be processed to obtain a cargo result; and if the cargo result indicates that the muck truck object is in a loaded state, determining that the muck truck object does not meet the road-use requirement.
According to the embodiments of the application, by analyzing both the covering state and the loading state of the acquired image containing the muck truck, whether the muck truck has the attributes required for road use can be identified without installing additional hardware, which reduces cost.
Further, the acquiring of the image to be processed comprises: acquiring video stream data and framing the video stream data to obtain corresponding multiple frames of original images; and identifying the multiple frames of original images with a muck truck identification model to obtain the image to be processed.
Generally, the video streams are mostly collected by image acquisition devices installed on roads. To find muck truck objects in the collected video streams, the streams are first framed into multiple original images, and a muck truck identification model is then used to identify muck trucks among those frames, so that the trucks' attributes can be identified subsequently.
Further, after obtaining the corresponding multiple frames of original images, the method further comprises: calculating the similarity of two adjacent frames among the multiple original images, and if the similarity is greater than a preset threshold, discarding one of the two frames to obtain screened images; the identifying of the multiple frames of original images with the muck truck identification model comprises: identifying the screened images with the muck truck identification model.
According to the embodiments of the application, motion detection across the multiple frames of original images deletes static images, so that only valuable images are retained and the efficiency of muck truck identification is improved.
Further, the calculating of the similarity of the two adjacent frames of original images comprises: preprocessing the two frames of original images respectively to obtain a preprocessed first image and second image; calculating hash values of the first image and the second image respectively to obtain a first hash value corresponding to the first image and a second hash value corresponding to the second image; and calculating the similarity of the two frames of original images from the first hash value and the second hash value.
According to the embodiments of the application, hash values are used to determine the similarity of the two images, realizing motion detection across the multiple frames of original images so that valuable images can be effectively identified.
Further, after determining that the muck truck object does not meet the road-use requirement, the method further comprises: performing license plate detection on the image to be processed to obtain license plate position information corresponding to the muck truck object; cropping the image to be processed according to the license plate position information to obtain a license plate image; and performing text recognition on the license plate image to obtain the license plate information of the muck truck object and reporting the license plate information to a designated terminal.
According to the embodiments of the application, license plate recognition is performed on muck trucks that do not meet the road-use requirement, and the plate information is reported to a designated terminal so that the trucks can be supervised accordingly.
Further, performing cover analysis on the image to be processed to obtain the cover classification result comprises: performing cover analysis on the image to be processed with a cover analysis model to obtain the cover classification result, wherein the cover classification result is used to represent whether the muck truck object is covered.
According to the embodiments of the application, the covering state of the muck truck can be obtained quickly with the cover analysis model, laying a foundation for the subsequent attribute identification of the muck truck.
Further, performing cargo analysis on the image to be processed to obtain the cargo result comprises: performing cargo analysis on the image to be processed with a cargo analysis model to obtain the cargo result, wherein the cargo result is used to represent whether the muck truck object is empty.
According to the embodiments of the application, whether the muck truck is loaded with articles can be obtained quickly with the cargo analysis model, and whether the truck meets the road-use requirement can then be obtained from the covering state and the cargo state.
In a second aspect, an embodiment of the present application provides a muck truck attribute identification device, comprising: an image acquisition module for acquiring an image to be processed, wherein the image to be processed contains a muck truck object; a cover analysis module for performing cover analysis on the image to be processed to obtain a cover classification result; a cargo analysis module for performing cargo analysis on the image to be processed to obtain a cargo result if the cover classification result indicates that the muck truck object is not covered; and an attribute obtaining module for determining that the muck truck object does not meet the road-use requirement if the cargo result indicates that the muck truck object is in a loaded state.
In a third aspect, an embodiment of the present application provides an electronic device, comprising: a processor, a memory, and a bus, the processor and the memory communicating with each other through the bus; the memory stores program instructions executable by the processor, and the processor, when invoking the program instructions, can perform the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium, including: the non-transitory computer readable storage medium stores computer instructions that cause the computer to perform the method of the first aspect.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic flow chart of a method for identifying an attribute of a muck truck according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another muck truck attribute identification method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a muck vehicle attribute identification device provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Fig. 1 is a schematic flow chart of a method for identifying an attribute of a muck vehicle according to an embodiment of the present disclosure, and as shown in fig. 1, the method may be applied to an electronic device, where the electronic device may be a terminal or a server, and the terminal device may specifically be a smart phone, a tablet computer, a Personal Digital Assistant (PDA), or the like. The method comprises the following steps:
step 101: acquiring an image to be processed, wherein the image to be processed comprises a muck car object;
step 102: carrying out straw mat cover analysis on the image to be processed to obtain straw mat cover classification results;
step 103: if the covered classification result indicates that the muck truck object is not covered, carrying cargo analysis is carried out on the image to be processed to obtain a cargo carrying result;
step 104: and if the loading result represents that the muck vehicle object is in a loading state, determining that the muck vehicle object does not meet the requirement of going to the road.
In step 101, because a large number of cameras are installed on today's roads to collect image information of passing vehicles, the image to be processed may be collected by such a camera, or by an image acquisition device specially installed on the road, and then sent to the electronic device. Alternatively, if the electronic device itself has an image acquisition function, it can be installed on the road and capture passing muck trucks to obtain the image to be processed. The image to be processed contains the muck truck object, which facilitates the subsequent identification of its covering state and loading state; the image may contain the complete muck truck object, or only the hopper of the muck truck object.
In step 102, after acquiring the image to be processed, the electronic device performs cover analysis on the muck truck object in the image; the purpose of the cover analysis is to judge whether the top cover on the hopper of the muck truck object is in a covered state. Before the analysis, the electronic device can perform object detection on the image to identify the position of the muck truck object, and then perform cover analysis on the muck truck to obtain the cover classification result.
The cover classification result can be either a covered state or an uncovered state. The covered state means the top cover of the muck truck's hopper is closed; the uncovered state means it is not. When the muck truck object is covered, it meets the road-use requirement regardless of whether cargo is loaded in the hopper, so the subsequent steps are not performed for it. If the muck truck object is not covered, step 103 must be executed to analyze the cargo state.
In step 103, if the cover classification result indicates that the muck truck object is uncovered, that is, its top cover is open, the uncovered muck truck object must be further subjected to cargo analysis to determine whether cargo is loaded in its hopper. Generally, when cargo is loaded in the hopper, the top cover must be closed to prevent the cargo from being scattered; if the hopper is empty, the cover may be left open. The cargo result of the muck truck is obtained by performing cargo analysis on the muck truck object in the image to be processed. It can be understood that the cargo result may be a loaded state or an empty state.
In step 104, the muck truck attribute can be classified as meeting or not meeting the road-use requirement. If the muck truck object is identified as uncovered and in a loaded state, it does not meet the requirement. Conversely, if it is identified as uncovered but empty, it meets the requirement; and if it is identified as covered, it meets the requirement regardless of whether the hopper is empty.
According to the embodiments of the application, by analyzing both the covering state and the loading state of the acquired image containing the muck truck, whether the muck truck has the attributes required for road use can be identified without installing additional hardware, which reduces cost.
On the basis of the above embodiment, the acquiring an image to be processed includes:
acquiring video stream data, and performing framing processing on the video stream data to obtain multiple corresponding frames of original images;
and identifying the multi-frame original image by using a muck truck identification model to obtain the image to be processed.
In a specific implementation, the electronic device may receive video stream data, from which images containing the muck truck object need to be extracted before the subsequent analysis steps. After receiving the video stream data, the electronic device frames it; it can be understood that video streams from different acquisition devices have different formats, so the corresponding framing methods may also differ. In addition, conversion software can be used to convert the video into multiple frames of images.
After the multiple frames of original images are obtained by framing, because not every frame contains a muck truck object, the images containing a muck truck object need to be extracted from the frames as the images to be processed, avoiding the processing of useless images. The embodiments of the present application provide a muck truck identification model: each original frame is input into the model, which outputs whether the frame contains a muck truck. It can be understood that the muck truck identification model can be built on a Single Shot Detector (SSD) model, which is a lightweight target detection model.
The SSD algorithm has the following characteristics:
(1) Candidate-box generation is removed and an anchor mechanism is adopted; the anchor mechanism in effect uses every point as the center of a candidate region.
(2) The category and position of the target are regressed directly. In conventional target-detection algorithms, candidate boxes are generally used to extract the position of the target region by repeatedly sampling the original picture to find the target area, typically with sliding-window downsampling.
(3) Feature maps at different scales are predicted, so that objects of different sizes can be accommodated.
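The anchor mechanism in point (1) can be made concrete with a small sketch: every cell of a feature map serves as the center of a set of default boxes. This is an illustrative NumPy example of anchor-center generation only, not the SSD model used by the patent; the feature-map size and stride are assumed values.

```python
import numpy as np

def anchor_centers(feature_h, feature_w, stride):
    """Return the (x, y) image-space center of every feature-map cell.

    In SSD-style detectors each of these centers anchors several default
    boxes of different scales and aspect ratios.
    """
    # Cell (i, j) maps back to image coordinates ((j + 0.5) * stride, (i + 0.5) * stride).
    ys = (np.arange(feature_h) + 0.5) * stride
    xs = (np.arange(feature_w) + 0.5) * stride
    grid_x, grid_y = np.meshgrid(xs, ys)        # each of shape (H, W)
    return np.stack([grid_x, grid_y], axis=-1)  # shape (H, W, 2)
```

Predicting from feature maps of several strides (point 3) then amounts to calling this for each map, giving coarse centers for large objects and dense centers for small ones.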
In order to improve the accuracy of the muck truck identification model, video stream data from different regions, different scenes, and different illumination conditions can be collected in advance and framed. The SSD model is trained using training images containing muck trucks as positive examples and training images without muck trucks as negative examples, yielding a trained muck truck identification model. In addition, to improve generalization, images of various types of muck trucks can be collected when selecting the positive training images. After training is completed, the predictive capability of the muck truck identification model can be evaluated, for example on a test set with the mean average precision (mAP) method: for each category, the precision P_n is computed at different thresholds, where P_n is, for a single picture, the number of correct predictions for that category divided by the total number of real targets of that category. Averaging over the N thresholds gives the average precision (AP) of each category:

AP = (1/N) * sum_{n=1}^{N} P_n

and averaging the AP over all categories gives the final mAP.
A threshold for the mAP can be preset; if the mAP is greater than the threshold, the trained muck truck identification model meets the accuracy requirement; otherwise, the model must be retrained until the requirement is met.
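The averaging described above can be sketched in a few lines. This follows the simplified definition in the text (mean of per-threshold precisions, then mean over categories); production mAP implementations usually integrate a precision-recall curve instead, so treat this as an illustration only.

```python
def average_precision(precisions):
    """AP for one category: the mean of precisions P_n over the N thresholds."""
    return sum(precisions) / len(precisions)

def mean_average_precision(per_category_precisions):
    """mAP: the average of every category's AP."""
    aps = [average_precision(p) for p in per_category_precisions]
    return sum(aps) / len(aps)
```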
Generally, the video streams are mostly collected by image acquisition devices installed on roads. To find muck truck objects in the collected video streams, the streams are first framed into multiple original images, and a muck truck identification model is then used to identify muck trucks among those frames, so that the trucks' attributes can be identified subsequently.
On the basis of the above embodiment, after obtaining the corresponding multiple frames of original images, the method further includes:
calculating the similarity of two adjacent original images in a plurality of original images, and if the similarity is greater than a preset threshold value, rejecting one of the two original images to obtain a screened image;
the identification of the multi-frame original image by using the muck truck identification model comprises the following steps:
and identifying the screened image by using a muck truck identification model.
In a specific implementation, the same muck truck may appear in multiple original frames, or a muck truck parked at the roadside may appear in many frames. To reduce the amount of computation and avoid repeated evaluation by the model of still pictures or of the same muck truck, highly similar images among the frames can be removed. In practical applications, the electronic device may receive either a video stream covering a period of history or a real-time video stream. For a real-time stream, frame caching is used: when a new frame arrives, the current original frame is compared with the cached original frame, that is, the similarity of the two frames is calculated. If the similarity is greater than a preset threshold, the picture is static and the current frame is discarded; otherwise the picture has moved, the cached frame is updated to the current frame, and screening continues. For a historical video stream, the earlier of two adjacent original frames can serve as the cached frame and the later one as the current frame.
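The caching scheme above can be sketched as a small filter over a frame sequence. `similarity` is a pluggable function returning a value in [0, 1] (e.g. one derived from the perceptual-hash comparison the description goes on to detail); the exact signature is an assumption for illustration.

```python
def screen_frames(frames, similarity, threshold):
    """Keep only frames that differ enough from the last kept (cached) frame.

    similarity(a, b) -> value in [0, 1]; values above `threshold` mean the
    picture is effectively static and the new frame is discarded.
    """
    kept = []
    cached = None
    for frame in frames:
        if cached is None or similarity(frame, cached) <= threshold:
            kept.append(frame)   # picture moved: keep the frame, update the cache
            cached = frame
        # otherwise: static picture, discard the current frame
    return kept
```

For a historical stream the same loop applies, with each frame compared against the previously kept one.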
It can be understood that the similarity between two frames of original images can be calculated by using a perceptual hash algorithm, which is as follows:
the first step is as follows: respectively preprocessing two frames of original images to obtain a first image and a second image after preprocessing;
wherein the pre-treatment step may comprise the following:
(1) Reduce the size. The fastest way to remove high frequencies and detail is to shrink the picture, keeping only its structure and light-dark information; the original picture is reduced to 32x32 (so that the 32x32 DCT in steps (3) and (4) can be applied), discarding picture differences caused by size and aspect ratio.
(2) Simplify colors. Convert the reduced picture to 64-level grayscale, i.e. every pixel takes one of 64 gray values.
(3) Compute the DCT (discrete cosine transform). The DCT decomposes the picture into frequency components; although JPEG uses an 8x8 DCT, a 32x32 DCT is used here.
(4) Reduce the DCT. Although the DCT result is a 32x32 matrix, only the 8x8 matrix in the upper-left corner needs to be kept; this part carries the lowest frequencies of the picture.
(5) Compute the average. Compute the mean of all 64 retained DCT values.
(6) Further reduce the DCT. This is the most important step: based on the 8x8 DCT matrix, form a 64-bit hash of 0s and 1s by setting "1" for every DCT value greater than or equal to the mean and "0" for every value below it. The result does not give the true low-frequency values; it only roughly encodes each frequency's relation to the mean. As long as the overall structure of the picture remains unchanged, the hash value is unchanged, which avoids the influence of gamma correction or color-histogram adjustment.
The second step: calculating hash values of the first image and the second image respectively to obtain a first hash value corresponding to the first image and a second hash value corresponding to the second image;
setting 64bit to 64bit long integer, the order of combination is not important as long as it is guaranteed that all pictures are in the same order. The 32 x 32 DCT is converted to a 32 x 32 image. The comparison results, combined together, form a 64-bit integer, which is the fingerprint of the picture. The order of the combination is not important as long as it is guaranteed that all pictures are in the same order (e.g., left to right, top to bottom, big-endian).
The third step: and calculating the similarity of the two frames of original images according to the first hash value and the second hash value.
Count how many of the 64 data bits differ; in theory this is equivalent to computing the Hamming distance. If no more than 5 bits differ, the two pictures are very similar; if more than 10 differ, they are two different pictures. For between 5 and 10 differing bits, a manual judgment can be made, or a compromise adopted: if no more than 7 bits differ, the two images are considered similar, otherwise not. In a specific application, the decision value can be set according to the actual situation, and the embodiments of the present application do not specifically limit it.
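The three steps can be combined into a compact sketch. The NumPy code below assumes the 32x32 grayscale reduction has already been performed, builds the DCT from the standard orthonormal DCT-II matrix, and compares two fingerprints by Hamming distance; it illustrates the perceptual-hash idea rather than reproducing the patent's exact code.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def phash(gray32):
    """64-bit perceptual hash of a 32x32 grayscale image (2-D float array)."""
    d = dct_matrix(32)
    dct = d @ gray32 @ d.T                 # 2-D DCT of the reduced image
    low = dct[:8, :8]                      # keep the 8x8 lowest-frequency block
    return (low >= low.mean()).flatten()   # 64 booleans: the fingerprint

def hamming(h1, h2):
    """Number of differing bits; <= 5 means very similar, > 10 means different."""
    return int(np.count_nonzero(h1 != h2))
```

As the description notes, the fingerprint depends only on each frequency's relation to the mean, so uniformly rescaling the brightness of a frame leaves the hash unchanged.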
After screening by the above method, the screened images are obtained, and the screened images can then be identified by the muck truck identification model.
According to the method and the device, images in a static (unchanged) state are deleted by dynamically detecting the multiple frames of original images, so that only valuable images are retained and the efficiency of muck truck identification is improved.
On the basis of the above embodiment, after determining that the muck car object does not meet the requirement for going to the road, the method further comprises:
detecting the license plate of the image to be processed to obtain the license plate position information corresponding to the muck vehicle object;
cutting the image to be processed according to the license plate position information to obtain a license plate image;
and performing text recognition on the license plate image to obtain license plate information of the muck vehicle object, and reporting the license plate information to a specified terminal.
In a specific implementation process, if it is detected that the attributes of the muck truck do not meet the on-road requirement, the muck truck needs to be controlled, and the license plate information corresponding to the muck truck object can be identified from the image to be processed. Specifically, the license plate position information corresponding to the muck truck object is identified by using a license plate detection model, and the license plate region is cut out from the image to be processed according to the license plate position information to obtain a license plate image. It can be understood that the image to be processed may include other vehicles in addition to the muck truck object; in order to obtain accurate license plate information corresponding to the muck truck, the muck truck object may first be identified to obtain its position information in the image to be processed, and then the license plate position of the muck truck object may be identified.
After the license plate image is obtained, the specific content of the license plate is identified by using the text identification model, and the license plate information of the muck vehicle object is obtained.
It should be noted that the license plate detection process is only applied when the image to be processed includes the license plate of the muck truck; multiple frames of original images before and after the image to be processed can be acquired to determine the license plate of the muck truck. The license plate detection model can be constructed from an SSD model, and the text recognition model can be constructed from a CRNN network. CRNN is an abbreviation of Convolutional Recurrent Neural Network, a network for end-to-end text recognition. In order to improve the accuracy of the license plate detection model and the text recognition model, both can be trained in advance. For the training of the license plate detection model, a number of vehicle images containing license plates can be collected, and the license plates on the vehicle images are then labeled manually, i.e. each license plate is framed by a rectangular box; the license plate detection model is trained with the labeled vehicle images. For the training of the text recognition model, a number of images containing license plates are likewise collected and labeled, i.e. the specific content of each license plate is annotated, and the text recognition model is then trained with the labeled images.
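The detect-crop-recognize sequence described above can be sketched as follows. The `detector`, `recognizer` and `report` callables are hypothetical stand-ins for the SSD license plate detection model, the CRNN text recognition model and the reporting step of the text; only the cropping logic is concrete.

```python
import numpy as np

def crop_plate(image, bbox):
    """Crop the license plate region from the image to be processed.

    bbox is (x, y, w, h) as a plate detector might return it (an assumed
    convention); coordinates are clamped to the image bounds for safety."""
    x, y, w, h = bbox
    h_img, w_img = image.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(w_img, x + w), min(h_img, y + h)
    return image[y0:y1, x0:x1]

def report_plates(image, detector, recognizer, report):
    """For each detected plate box: crop, recognize text, report the result.

    detector(image) -> list of (x, y, w, h); recognizer(plate_image) -> str;
    report(text) -> None. All three are hypothetical stand-ins."""
    for bbox in detector(image):
        plate_img = crop_plate(image, bbox)
        report(recognizer(plate_img))
```

The stand-in callables make the control flow testable without the actual models; in the text, `report` corresponds to sending the license plate information to the designated terminal.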
According to the method and the device, license plate recognition is performed on muck trucks that do not meet the on-road requirement, and the license plate information is reported to the designated terminal, so that the muck trucks can be controlled accordingly.
On the basis of the above embodiment, performing tarpaulin cover analysis on the image to be processed to obtain a cover classification result includes:
performing tarpaulin cover analysis on the image to be processed by using a tarpaulin cover analysis model to obtain a cover classification result; the cover classification result is used for representing whether the muck truck object is covered with a tarpaulin.
In the specific implementation process, the image to be processed is input into the tarpaulin cover analysis model, and the model outputs whether the muck truck object in the image to be processed is tarpaulin covered. In order to improve the prediction accuracy of the tarpaulin cover analysis model, muck truck images in the covered state can be collected as positive training examples and muck truck images in the uncovered state as negative training examples, and the model is trained on them to obtain the trained tarpaulin cover analysis model. In addition, the prediction capability of the tarpaulin cover analysis model can be evaluated to judge whether the prediction accuracy requirement is met, for example with the F1-score metric. Wherein:
F1 = 2 × P × R / (P + R)
where F1 is the F1-score; P is the precision, with P = TP / (TP + FP); and R is the recall, with R = TP / (TP + FN).
TP is the number of samples of this class that are predicted correctly; FP is the number of samples of other classes that are wrongly predicted as this class; FN is the number of samples of this class that are wrongly predicted as other classes. Whether the F1 value is greater than a preset threshold is then judged; if so, the prediction accuracy of the tarpaulin cover analysis model meets the requirement; otherwise, the model continues to be trained until the F1 value is greater than the preset threshold.
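The F1 evaluation above amounts to a few lines of arithmetic; the sketch below follows the formulas directly. The preset threshold value (0.9 here) is chosen purely for illustration, since the text leaves it unspecified.

```python
def f1_score(tp, fp, fn):
    """F1 metric from the formulas above.

    tp: samples of this class predicted correctly;
    fp: samples of other classes wrongly predicted as this class;
    fn: samples of this class wrongly predicted as other classes."""
    p = tp / (tp + fp)   # precision
    r = tp / (tp + fn)   # recall
    return 2 * p * r / (p + r)

def accuracy_meets_requirement(tp, fp, fn, threshold=0.9):
    # The preset threshold (0.9 here) is an assumption; training would
    # continue until f1_score exceeds it.
    return f1_score(tp, fp, fn) > threshold
```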
According to the embodiment of the application, the cover state of the muck truck can be quickly obtained by using the tarpaulin cover analysis model, which lays a foundation for the attribute identification of the muck truck in the next step.
On the basis of the above embodiment, performing cargo analysis on the image to be processed to obtain a cargo result includes:
performing cargo analysis on the image to be processed by using a cargo analysis model to obtain a cargo result; the cargo result is used for representing whether the muck truck object is empty or loaded.
In a specific implementation process, the image to be processed is input into the cargo analysis model, and the cargo analysis model analyzes the image to be processed to obtain a cargo result. In order to improve the accuracy of the cargo analysis model prediction, the cargo analysis model can be trained in advance, and the specific process is as follows: and collecting images of the muck car loaded with goods in a non-covered state and images of the muck car in a non-covered state and in an unloaded state as training images to train the cargo analysis model, so as to obtain the trained cargo analysis model. It can be understood that the estimation of the prediction capability of the cargo analysis model may refer to an estimation method of the prediction capability of the tarpaulin analysis model, and the embodiments of the present application are not described in detail.
According to the embodiment of the application, whether the muck truck is loaded with articles or not can be quickly obtained through the cargo analysis model, and whether the muck truck meets the requirement of going on the road or not can be further obtained according to the covering state and the cargo state.
Fig. 2 is a schematic flow chart of another muck truck attribute identification method according to an embodiment of the present application. As shown in Fig. 2, the method includes:
step 201: acquiring video stream data; the video stream data may be acquired in various ways, for which reference is made to the above embodiment and which is not described again here; after the video stream data is acquired, it is subjected to framing processing to obtain multiple frames of original images;
step 202: muck truck detection; performing muck truck detection on each frame of image to obtain an image to be processed containing a muck truck object;
step 203: tarpaulin cover analysis; performing tarpaulin cover analysis on the image to be processed by using the tarpaulin cover analysis model to obtain a cover classification result;
step 204: whether tarpaulin covered; if the cover classification result indicates that the muck truck object is not covered, executing step 205; otherwise, returning to step 202;
step 205: cargo analysis; performing cargo analysis on the image to be processed by using the cargo analysis model to obtain a cargo result;
step 206: whether empty; if the cargo result indicates that the muck truck object is loaded with cargo, executing step 207; otherwise, returning to step 202;
step 207: license plate detection; performing license plate detection on the image to be processed to obtain license plate position information corresponding to the muck truck object, and cutting the image to be processed according to the position information to obtain a license plate image; performing text recognition on the license plate image to obtain the license plate information of the muck truck object;
step 208: reporting; reporting the license plate information to a designated terminal.
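The flow of steps 202-208 can be sketched as a single-frame pass. All five callables are hypothetical stand-ins for the models and the reporting step named in the text; the sketch only captures the branching logic of Fig. 2.

```python
def identify_muck_truck(frame, detect_truck, covered, loaded, read_plate, report):
    """One pass of the Fig. 2 flow for a single frame.

    detect_truck(frame) -> truck object or None (step 202);
    covered(truck) -> bool (steps 203-204);
    loaded(truck) -> bool (steps 205-206);
    read_plate(frame) -> plate text (step 207);
    report(plate) -> None (step 208).
    Returns the reported plate, or None if the frame is compliant."""
    truck = detect_truck(frame)
    if truck is None:          # step 202: no muck truck in this frame
        return None
    if covered(truck):         # steps 203-204: tarpaulin covered -> compliant
        return None
    if not loaded(truck):      # steps 205-206: empty -> compliant
        return None
    plate = read_plate(frame)  # step 207: locate and read the license plate
    report(plate)              # step 208: report to the designated terminal
    return plate
```

With stub callables, the function reports a plate only for an uncovered, loaded truck, matching the branches of Fig. 2.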
Fig. 3 is a schematic structural diagram of an attribute identification device for a muck truck according to an embodiment of the present application, where the device may be a module, a program segment, or a code on an electronic device. It should be understood that the apparatus corresponds to the above-mentioned embodiment of the method of fig. 1, and can perform various steps related to the embodiment of the method of fig. 1, and the specific functions of the apparatus can be referred to the description above, and the detailed description is appropriately omitted here to avoid redundancy. The device includes: an image acquisition module 301, a tarpaulin analysis module 302, a cargo analysis module 303 and an attribute acquisition module 304, wherein:
the image acquisition module 301 is configured to acquire an image to be processed, where the image to be processed includes a muck car object; the cover analysis module 302 is used for performing cover analysis on the image to be processed to obtain a cover classification result; the cargo analysis module 303 is configured to perform cargo analysis on the image to be processed to obtain a cargo result if the covered classification result indicates that the muck vehicle object is in an uncovered state; the attribute obtaining module 304 is configured to determine that the muck vehicle object does not meet the requirement for getting on the road if the loading result indicates that the muck vehicle object is in the loading state.
On the basis of the foregoing embodiment, the image acquisition module 301 is specifically configured to:
acquiring video stream data, and performing framing processing on the video stream data to obtain multiple corresponding frames of original images;
and identifying the multi-frame original image by using a muck truck identification model to obtain the image to be processed.
On the basis of the above embodiment, the apparatus further includes a screening module configured to:
calculating the similarity of two adjacent original images in a plurality of original images, and if the similarity is greater than a preset threshold value, rejecting one of the two original images to obtain a screened image;
the identification of the multi-frame original image by using the muck truck identification model comprises the following steps:
and identifying the screened image by using a muck truck identification model.
On the basis of the above embodiment, the screening module is specifically configured to:
respectively preprocessing two frames of original images to obtain a first image and a second image after preprocessing;
respectively calculating hash values of a first image and a second image to obtain a first hash value corresponding to the first image and a second hash value corresponding to the second image;
and calculating the similarity of the two frames of original images according to the first hash value and the second hash value.
On the basis of the above embodiment, the device further includes a license plate detection module for:
detecting the license plate of the image to be processed to obtain the license plate position information corresponding to the muck vehicle object;
cutting the image to be processed according to the license plate position information to obtain a license plate image;
and performing text recognition on the license plate image to obtain license plate information of the muck vehicle object, and reporting the license plate information to a specified terminal.
On the basis of the above embodiment, the tarpaulin cover analysis module 302 is specifically configured to:
perform tarpaulin cover analysis on the image to be processed by using a tarpaulin cover analysis model to obtain a cover classification result; the cover classification result is used for representing whether the muck truck object is covered with a tarpaulin.
On the basis of the above embodiment, the cargo analysis module 303 is specifically configured to:
perform cargo analysis on the image to be processed by using a cargo analysis model to obtain a cargo result; the cargo result is used for representing whether the muck truck object is empty or loaded.
Fig. 4 is a schematic structural diagram of an entity of an electronic device provided in an embodiment of the present application. As shown in Fig. 4, the electronic device includes: a processor (processor) 401, a memory (memory) 402, and a bus 403, wherein:
the processor 401 and the memory 402 complete communication with each other through the bus 403;
the processor 401 is configured to call the program instructions in the memory 402 to execute the methods provided by the above-mentioned method embodiments, for example, including: acquiring an image to be processed, wherein the image to be processed comprises a muck car object; carrying out straw mat cover analysis on the image to be processed to obtain straw mat cover classification results; if the covered classification result indicates that the muck truck object is not covered, carrying cargo analysis is carried out on the image to be processed to obtain a cargo carrying result; and if the loading result represents that the muck vehicle object is in a loading state, determining that the muck vehicle object does not meet the requirement of going to the road.
The processor 401 may be an integrated circuit chip having signal processing capability. The processor 401 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 402 may include, but is not limited to, Random Access Memory (RAM), Read Only Memory (ROM), Programmable Read Only Memory (PROM), Erasable Programmable Read Only Memory (EPROM), Electrically Erasable Programmable Read Only Memory (EEPROM), and the like.
The present embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method provided by the above method embodiments, for example comprising: acquiring an image to be processed, wherein the image to be processed comprises a muck truck object; performing tarpaulin cover analysis on the image to be processed to obtain a cover classification result; if the cover classification result indicates that the muck truck object is not covered, performing cargo analysis on the image to be processed to obtain a cargo result; and if the cargo result indicates that the muck truck object is in a loaded state, determining that the muck truck object does not meet the requirement for going on the road.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the methods provided by the above method embodiments, for example including: acquiring an image to be processed, wherein the image to be processed comprises a muck truck object; performing tarpaulin cover analysis on the image to be processed to obtain a cover classification result; if the cover classification result indicates that the muck truck object is not covered, performing cargo analysis on the image to be processed to obtain a cargo result; and if the cargo result indicates that the muck truck object is in a loaded state, determining that the muck truck object does not meet the requirement for going on the road.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A muck truck attribute identification method is characterized by comprising the following steps:
acquiring an image to be processed, wherein the image to be processed comprises a muck truck object;
performing tarpaulin cover analysis on the image to be processed to obtain a cover classification result;
if the cover classification result indicates that the muck truck object is not covered, performing cargo analysis on the image to be processed to obtain a cargo result;
and if the cargo result indicates that the muck truck object is in a loaded state, determining that the muck truck object does not meet the requirement for going on the road.
2. The method of claim 1, wherein the acquiring the image to be processed comprises:
acquiring video stream data, and performing framing processing on the video stream data to obtain multiple corresponding frames of original images;
and identifying the multi-frame original image by using a muck truck identification model to obtain the image to be processed.
3. The method of claim 2, wherein after obtaining the corresponding plurality of frames of original images, the method further comprises:
calculating the similarity of two adjacent original images in a plurality of original images, and if the similarity is greater than a preset threshold value, rejecting one of the two original images to obtain a screened image;
the identification of the multi-frame original image by using the muck truck identification model comprises the following steps:
and identifying the screened image by using a muck truck identification model.
4. The method according to claim 3, wherein the calculating the similarity between the two adjacent frames of original images comprises:
respectively preprocessing two frames of original images to obtain a first image and a second image after preprocessing;
respectively calculating hash values of a first image and a second image to obtain a first hash value corresponding to the first image and a second hash value corresponding to the second image;
and calculating the similarity of the two frames of original images according to the first hash value and the second hash value.
5. The method of claim 1, wherein after determining that the muck vehicle object does not meet the on-road requirement, the method further comprises:
detecting the license plate of the image to be processed to obtain the license plate position information corresponding to the muck vehicle object;
cutting the image to be processed according to the license plate position information to obtain a license plate image;
and performing text recognition on the license plate image to obtain license plate information of the muck vehicle object, and reporting the license plate information to a specified terminal.
6. The method according to claim 1, wherein performing tarpaulin cover analysis on the image to be processed to obtain a cover classification result comprises:
performing tarpaulin cover analysis on the image to be processed by using a tarpaulin cover analysis model to obtain a cover classification result; the cover classification result is used for representing whether the muck truck object is covered with a tarpaulin.
7. The method of claim 1, wherein performing cargo analysis on the image to be processed to obtain a cargo result comprises:
performing cargo analysis on the image to be processed by using a cargo analysis model to obtain a cargo result; the cargo result is used for representing whether the muck truck object is empty or loaded.
8. A muck truck attribute identification device, characterized by comprising:
the image acquisition module is used for acquiring an image to be processed, wherein the image to be processed comprises a muck truck object;
the tarpaulin cover analysis module is used for performing tarpaulin cover analysis on the image to be processed to obtain a cover classification result;
the cargo analysis module is used for performing cargo analysis on the image to be processed to obtain a cargo result if the cover classification result indicates that the muck truck object is not covered;
and the attribute obtaining module is used for determining that the muck truck object does not meet the requirement for going on the road if the cargo result indicates that the muck truck object is in a loaded state.
9. An electronic device, comprising: a processor, a memory, and a bus, wherein,
the processor and the memory are communicated with each other through the bus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any one of claims 1-7.
10. A non-transitory computer-readable storage medium storing computer instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1-7.
CN202011298939.2A 2020-11-18 2020-11-18 Clay car attribute identification method and device, electronic equipment and storage medium Pending CN112287882A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011298939.2A CN112287882A (en) 2020-11-18 2020-11-18 Clay car attribute identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011298939.2A CN112287882A (en) 2020-11-18 2020-11-18 Clay car attribute identification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112287882A true CN112287882A (en) 2021-01-29

Family

ID=74399639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011298939.2A Pending CN112287882A (en) 2020-11-18 2020-11-18 Clay car attribute identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112287882A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705334A (en) * 2021-07-14 2021-11-26 深圳市有为信息技术发展有限公司 Method and device for supervising engineering muck truck, vehicle-mounted terminal and vehicle

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109416250A (en) * 2017-10-26 2019-03-01 深圳市锐明技术股份有限公司 Carriage status detection method, carriage status detection device and the terminal of haulage vehicle
CN110348388A (en) * 2019-06-24 2019-10-18 贵州黔岸科技有限公司 Image pre-processing method, device, storage medium and system
CN111401162A (en) * 2020-03-05 2020-07-10 上海眼控科技股份有限公司 Illegal auditing method for muck vehicle, electronic device, computer equipment and storage medium
CN111898581A (en) * 2020-08-12 2020-11-06 成都佳华物链云科技有限公司 Animal detection method, device, electronic equipment and readable storage medium
CN111950368A (en) * 2020-07-09 2020-11-17 深圳神目信息技术有限公司 Freight vehicle monitoring method, device, electronic equipment and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109416250A (en) * 2017-10-26 2019-03-01 深圳市锐明技术股份有限公司 Carriage status detection method, carriage status detection device and the terminal of haulage vehicle
CN110348388A (en) * 2019-06-24 2019-10-18 贵州黔岸科技有限公司 Image pre-processing method, device, storage medium and system
CN111401162A (en) * 2020-03-05 2020-07-10 上海眼控科技股份有限公司 Illegal auditing method for muck vehicle, electronic device, computer equipment and storage medium
CN111950368A (en) * 2020-07-09 2020-11-17 深圳神目信息技术有限公司 Freight vehicle monitoring method, device, electronic equipment and medium
CN111898581A (en) * 2020-08-12 2020-11-06 成都佳华物链云科技有限公司 Animal detection method, device, electronic equipment and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张文学等: "《大数据挖掘技术及其在医药领域的应用》", 武汉理工大学出版社, pages: 114 *
蒋荟等: "铁路货运计量安全检测监控系统关键技术", 中国铁道科学, vol. 34, no. 4, pages 137 - 144 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705334A (en) * 2021-07-14 2021-11-26 深圳市有为信息技术发展有限公司 Method and device for supervising engineering muck truck, vehicle-mounted terminal and vehicle

Similar Documents

Publication Publication Date Title
CN109416250B (en) Carriage state detection method and device for transport vehicle and terminal
CN111310645A (en) Overflow bin early warning method, device, equipment and storage medium for cargo accumulation amount
US9460367B2 (en) Method and system for automating an image rejection process
CN111160270B (en) Bridge monitoring method based on intelligent video recognition
CN111325769A (en) Target object detection method and device
CN114565895B (en) Security monitoring system and method based on intelligent society
CN113449632B (en) Vision and radar perception algorithm optimization method and system based on fusion perception and automobile
CN114170580A (en) Highway-oriented abnormal event detection method
CN112329569A (en) Freight vehicle state real-time identification method based on image deep learning system
CN104125436A (en) Early warning method and system for traffic accident detection
CN108764017B (en) Bus passenger flow statistical method, device and system
Bi et al. A new method of target detection based on autonomous radar and camera data fusion
CN110298302B (en) Human body target detection method and related equipment
CN112287882A (en) Clay car attribute identification method and device, electronic equipment and storage medium
CN116052090A (en) Image quality evaluation method, model training method, device, equipment and medium
CN111222394A (en) Muck truck overload detection method, device and system
CN110909599A (en) Detection method, device and system for covering of muck truck
CN111009136A (en) Method, device and system for detecting vehicles with abnormal running speed on highway
Jain et al. Vehicle license plate recognition
CN116091964A (en) High-order video scene analysis method and system
Łubkowski et al. Assessment of quality of identification of data in systems of automatic licence plate recognition
Glasl et al. Video based traffic congestion prediction on an embedded system
CN111402185A (en) Image detection method and device
CN115376106A (en) Vehicle type identification method, device, equipment and medium based on radar map
CN112016514B (en) Traffic sign recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination