CN110598033A - Intelligent self-checking vehicle method and device and computer readable storage medium


Info

Publication number
CN110598033A
CN110598033A (application CN201910761970.6A; granted as CN110598033B)
Authority
CN
China
Prior art keywords
vehicle
image
checking
gradient
intelligent self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910761970.6A
Other languages
Chinese (zh)
Other versions
CN110598033B (en)
Inventor
黎聪明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201910761970.6A
Publication of CN110598033A
Application granted
Publication of CN110598033B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06F — ELECTRIC DIGITAL DATA PROCESSING
                • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
                    • G06F16/50 Information retrieval of still image data
                        • G06F16/51 Indexing; Data structures therefor; Storage structures
                        • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
                            • G06F16/583 Retrieval using metadata automatically derived from the content
                • G06F18/00 Pattern recognition
                    • G06F18/20 Analysing
                        • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
            • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V10/00 Arrangements for image or video recognition or understanding
                    • G06V10/20 Image preprocessing
                        • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
                            • G06V10/267 Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
                    • G06V10/40 Extraction of image or video features
                        • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
                • Y02T10/00 Road transport of goods or passengers
                    • Y02T10/10 Internal combustion engine [ICE] based vehicles
                        • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses an intelligent self-checking vehicle method, which comprises the following steps: generating a label set from the vehicle image set of a vehicle image library; receiving a vehicle-checking image set, preprocessing and segmenting it to obtain a set of directional gradient feature maps of the vehicles, and taking this feature map set as a training set; training a pre-constructed intelligent self-checking vehicle model with the training set and the label set, and outputting the vehicle feature image with the highest matching degree to the training set, thereby completing the training of the model; and identifying a vehicle-checking image uploaded by a user according to the trained intelligent self-checking vehicle model and the vehicle image set of the vehicle image library, and outputting a self-checking result for the uploaded image. The invention also provides an intelligent self-checking vehicle device and a computer readable storage medium. The invention achieves accurate identification of vehicle-checking images.

Description

Intelligent self-checking vehicle method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of big data, in particular to an intelligent self-checking vehicle method and device based on user behaviors and a computer readable storage medium.
Background
In recent years, with the rapid development of science and technology and the improvement of people's living standards, the number of automobiles has grown continuously and road traffic accidents have increased accordingly, which directly affects how the insurance industry reviews and verifies automobile insurance. The daily volume of documents nationwide is large, so the burden on auditors is heavy, industry risk cannot be well controlled, and manual review delays and slows down document issuance while keeping labor costs high.
Disclosure of Invention
The invention provides an intelligent self-checking vehicle method, an intelligent self-checking vehicle device and a computer readable storage medium, and mainly aims to provide an efficient self-checking vehicle method for a user when the user performs vehicle verification.
In order to achieve the purpose, the invention provides an intelligent self-checking vehicle method, which comprises the following steps:
acquiring vehicle images historically stored by a user from a vehicle image library, and establishing labels for the vehicle images in the vehicle image library to generate a label set;
receiving the vehicle-checking image set of the user, and carrying out preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set;
segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value;
inputting the vehicle inspection image uploaded by the user into the trained intelligent self-inspection vehicle model to obtain the characteristic image with the highest matching degree of the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by utilizing a logic comparison program, and outputting the self-inspection result of the vehicle inspection image uploaded by the user.
Optionally, the receiving the vehicle-checking image set of the user, and performing a preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set includes:
converting the car inspection images in the car inspection image set into gray level images through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
Optionally, the segmenting the vehicle in the target vehicle-testing image set by using an edge detection method and a threshold segmentation method to obtain the local key point image of the vehicle includes:
boundary positioning is carried out on the vehicle of the target vehicle-checking image set by using a Canny edge detection method, the amplitude and the direction of the gradient of the vehicle are calculated through the finite difference of first-order partial derivatives, and the amplitude of non-local-maximum points in the gradient of the vehicle is set to zero, so that a refined vehicle edge image is obtained;
the method comprises the steps of segmenting the vehicle edge image by using a double threshold method, amplifying key points in the segmented vehicle edge image by using a region growing method, and connecting the segmented vehicle edge image through edge connection so as to obtain a local key point image of the vehicle.
Optionally, the establishing a directional gradient feature atlas set of the vehicle according to the local key point image of the vehicle includes:
calculating the gradient amplitude G (x, y) and the gradient direction sigma (x, y) of each pixel point (x, y) in the local key point image of the vehicle to form a gradient matrix of the local key point image of the vehicle, and dividing the gradient matrix into small cell units;
calculating the gradient size and direction of each pixel point in the cell unit, counting a gradient direction histogram, and calculating the sum of pixel gradients of each direction channel in the gradient direction histogram;
accumulating the sum of the pixel gradients of each direction channel to form a vector, combining the cell units into a block, normalizing the vector in the block to obtain a characteristic vector, and connecting the characteristic vectors to obtain a direction gradient characteristic map set of the vehicle.
Optionally, the training of the pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value includes:
inputting the training set to an input layer of a convolutional neural network of the intelligent self-checking vehicle model, and extracting a feature vector by performing convolution operation on the training set through presetting a group of filters in the convolutional layer of the convolutional neural network;
and performing pooling operation on the feature vector by using a pooling layer of the convolutional neural network, inputting the pooled feature vector to a full-link layer, and performing normalization processing and calculation on the pooled feature vector through an activation function of the convolutional neural network to obtain the training value.
In addition, in order to achieve the above object, the present invention further provides an intelligent self-checking vehicle device, which includes a memory and a processor, wherein the memory stores an intelligent self-checking vehicle program that can run on the processor, and when the intelligent self-checking vehicle program is executed by the processor, the following steps are implemented:
acquiring vehicle images historically stored by a user from a vehicle image library, and establishing labels for the vehicle images in the vehicle image library to generate a label set;
receiving the vehicle-checking image set of the user, and carrying out preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set;
segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value;
inputting the vehicle inspection image uploaded by the user into the trained intelligent self-inspection vehicle model to obtain the characteristic image with the highest matching degree of the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by utilizing a logic comparison program, and outputting the self-inspection result of the vehicle inspection image uploaded by the user.
Optionally, the receiving the vehicle-checking image set of the user, and performing a preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set includes:
converting the car inspection images in the car inspection image set into gray level images through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
Optionally, the segmenting the vehicle in the target vehicle-testing image set by using an edge detection method and a threshold segmentation method to obtain the local key point image of the vehicle includes:
boundary positioning is carried out on the vehicle of the target vehicle-checking image set by using a Canny edge detection method, the amplitude and the direction of the gradient of the vehicle are calculated through the finite difference of first-order partial derivatives, and the amplitude of non-local-maximum points in the gradient of the vehicle is set to zero, so that a refined vehicle edge image is obtained;
the method comprises the steps of segmenting the vehicle edge image by using a double threshold method, amplifying key points in the segmented vehicle edge image by using a region growing method, and connecting the segmented vehicle edge image through edge connection so as to obtain a local key point image of the vehicle.
Optionally, the establishing a directional gradient feature atlas set of the vehicle according to the local key point image of the vehicle includes:
calculating the gradient amplitude G (x, y) and the gradient direction sigma (x, y) of each pixel point (x, y) in the local key point image of the vehicle to form a gradient matrix of the local key point image of the vehicle, and dividing the gradient matrix into small cell units;
calculating the gradient size and direction of each pixel point in the cell unit, counting a gradient direction histogram, and calculating the sum of pixel gradients of each direction channel in the gradient direction histogram;
accumulating the sum of the pixel gradients of each direction channel to form a vector, combining the cell units into a block, normalizing the vector in the block to obtain a characteristic vector, and connecting the characteristic vectors to obtain a direction gradient characteristic map set of the vehicle.
In addition, to achieve the above object, the present invention also provides a computer readable storage medium having an intelligent self-checking vehicle program stored thereon, which can be executed by one or more processors to implement the steps of the intelligent self-checking vehicle method as described above.
According to the intelligent self-checking vehicle inspection method, the device and the computer readable storage medium, when the user performs vehicle inspection through the vehicle inspection image, the training of the intelligent self-checking vehicle inspection model is completed by combining the acquired vehicle inspection image set and the vehicle image set of the vehicle image library, so that an efficient self-checking vehicle inspection method is provided for the user.
Drawings
Fig. 1 is a schematic flow chart of an intelligent self-checking vehicle method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an internal structure of the intelligent self-checking vehicle device according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of an intelligent self-checking vehicle-checking program in the intelligent self-checking vehicle-checking device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an intelligent self-checking vehicle method. Fig. 1 is a schematic flow chart of an intelligent self-checking vehicle method according to an embodiment of the present invention. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the intelligent self-checking vehicle method includes:
s1, obtaining a vehicle image set stored by a user history from a vehicle image library, and establishing labels for the vehicle image set of the vehicle image library to generate a label set.
In the preferred embodiment of the present invention, the user may be an enterprise related to vehicle insurance, such as Ping An of China. The vehicle images in the invention mainly come from two sources: first, offline collection by Ping An of China sales staff when promoting vehicle insurance; second, online collection during policy signing through the Ping An of China vehicle-insurance app and/or the Ping An of China vehicle-insurance official website. Further, the invention establishes labels for the vehicle images in the vehicle image library, thereby generating a label set. For example, labels for belonging to an insured vehicle and not belonging to an insured vehicle are established according to the license plate authentication of the vehicle.
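As an illustration of how such a label set might be organized, the sketch below maps each stored vehicle image to a label derived from its license-plate registration. This is only an assumed data layout; the record fields, the insured-plate rule and the function name are hypothetical and not taken from the patent.

```python
# Hypothetical sketch: build a label set for a vehicle image library.
# The record layout and the insured-plate rule are illustrative assumptions.
from typing import Dict, List

def build_label_set(image_records: List[dict], insured_plates: set) -> Dict[str, int]:
    """Map each image path to 1 (insured vehicle) or 0 (not an insured vehicle)."""
    labels = {}
    for record in image_records:
        path = record["path"]                 # e.g. "library/img001.jpg"
        plate = record["license_plate"]       # plate the image was registered with
        labels[path] = 1 if plate in insured_plates else 0
    return labels

# Toy usage:
records = [{"path": "lib/img001.jpg", "license_plate": "粤B12345"},
           {"path": "lib/img002.jpg", "license_plate": "沪C67890"}]
label_set = build_label_set(records, insured_plates={"粤B12345"})
print(label_set)   # {'lib/img001.jpg': 1, 'lib/img002.jpg': 0}
```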
And S2, receiving the car-checking image set of the user, and carrying out preprocessing operation on the car-checking image set to obtain a target car-checking image set.
In a preferred embodiment of the present invention, the vehicle-checking image set is mainly derived from vehicle images uploaded by a vehicle owner. The pre-processing operations include contrast enhancement, graying, and noise reduction. The method comprises the steps that a car inspection image in a car inspection image set is converted into a gray image through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
In detail, the specific implementation steps of contrast enhancement, graying processing and noise reduction are as follows:
a. graying treatment:
the histogram equalization is a process of having the same number of pixel points on each gray level, and aims to make the image distributed and homogenized in the whole dynamic variation range of the gray level, improve the brightness distribution state of the image and enhance the visual effect of the image. In the embodiment of the present invention, the histogram equalization processing includes: counting a histogram of the vehicle inspection image set with the improved contrast; calculating new gray scale of the vehicle inspection image after transformation by adopting cumulative distribution function according to the counted histogram; and replacing the old gray scale with the new gray scale, and simultaneously combining the gray scales which are equal or approximate to each other to obtain a balanced vehicle-checking image set. Preferably, the invention converts the car inspection image containing the color image into the gray image by using the proportional methods, wherein the proportional methods are that the three components of the current pixel are respectively R, G and B, and the converted pixel component value Y is obtained by using a color conversion formula, so that the gray image of the color image is obtained. The color conversion formula is:
Y=0.3R+0.59G+0.11B
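As a concrete illustration of the graying step, the following sketch applies the weighted-sum formula Y = 0.3R + 0.59G + 0.11B and then histogram equalization. It assumes OpenCV and NumPy are available; the image path is a placeholder, not part of the patent.

```python
import cv2
import numpy as np

def to_equalized_gray(bgr_image: np.ndarray) -> np.ndarray:
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    gray = 0.3 * r + 0.59 * g + 0.11 * b          # proportional method: Y = 0.3R + 0.59G + 0.11B
    gray = np.clip(gray, 0, 255).astype(np.uint8)
    return cv2.equalizeHist(gray)                 # histogram equalization of the gray image

gray_eq = to_equalized_gray(cv2.imread("vehicle_check.jpg"))  # placeholder file name
```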
b. contrast enhancement:
the contrast refers to the contrast between the brightness maximum and minimum in the imaging system, wherein low contrast increases the difficulty of image processing. In the preferred embodiment of the present invention, a contrast stretching method is used to achieve the purpose of enhancing the contrast of an image by increasing the dynamic range of gray scale. Furthermore, the invention performs gray scale stretching on the specific area according to the piecewise linear transformation function in the contrast stretching method, thereby further improving the contrast of the output image. When contrast stretching is performed, gray value transformation is essentially achieved. The invention realizes gray value conversion by linear stretching, wherein the linear stretching refers to pixel level operation with linear relation between input and output gray values, and a gray conversion formula is as follows:
Db=f(Da)=a*Da+b
where a is the linear slope, b is the intercept on the Y axis, Da represents the gray value of the input image, and Db represents the gray value of the output image. When a > 1, the contrast of the output image is enhanced compared with the original image; when a < 1, the contrast of the output image is weakened compared with the original image.
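A minimal sketch of this linear stretch is given below; the slope and intercept defaults are illustrative assumptions rather than values fixed by the patent.

```python
import numpy as np

def linear_stretch(gray: np.ndarray, a: float = 1.5, b: float = -40.0) -> np.ndarray:
    """Pixel-level linear transform Db = a*Da + b; a > 1 strengthens contrast."""
    stretched = a * gray.astype(np.float32) + b
    return np.clip(stretched, 0, 255).astype(np.uint8)
```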
c. Noise reduction:
the Gaussian filtering is linear smooth filtering, is suitable for eliminating Gaussian noise, and is widely applied to the noise reduction process of image processing. In the invention, each pixel in the image of the vehicle-checking image set is scanned by using a template (or called convolution and mask), and the weighted average gray value of the pixels in the neighborhood determined by the template is used for replacing the value of the central pixel point of the template, so that the N-dimensional space normal distribution equation is as follows:
G(r) = (1 / (2πσ²)^(N/2)) · e^(−r² / (2σ²))
where σ is the standard deviation of the normal distribution (the larger σ is, the more blurred, i.e. smoothed, the image becomes), and r is the blur radius, i.e. the distance from a template element to the center of the template.
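In practice this weighted-average template scan is the familiar Gaussian blur. The one-line sketch below uses OpenCV as a stand-in; the kernel size and σ are illustrative assumptions.

```python
import cv2
import numpy as np

def denoise(gray: np.ndarray, ksize: int = 5, sigma: float = 1.0) -> np.ndarray:
    # Each pixel is replaced by the Gaussian-weighted average of its neighborhood.
    return cv2.GaussianBlur(gray, (ksize, ksize), sigma)
```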
S3, segmenting the vehicle of the target vehicle-checking image set through an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle.
The basic idea of edge detection is to treat as edge points those pixels in an image where the gray level changes in a step-like or roof-like manner, i.e. where the derivative of the gray level is large or extremely large. In the preferred embodiment of the invention, a Canny edge detection method is used to perform boundary positioning on the vehicles of the target vehicle-checking image set: the amplitude and direction of the vehicle gradient are calculated through finite differences of first-order partial derivatives, and the amplitude of non-local-maximum points in the gradient is set to zero, yielding a refined vehicle edge image; the vehicle edge image is then segmented with a double-threshold method, the key points in the segmented vehicle edge image are enlarged with a region growing method, and the segmented vehicle edge image is connected through edge linking, so that a local key point image of the vehicle is obtained.
The basic idea of the region growing method is to group pixels or sub-regions into larger regions according to a predefined criterion, starting from a set of growing points (the growing point can be a single pixel or a small region), merging adjacent pixels or regions with similar properties to the growing point with the growing point to form a new growing point, and repeating the process until the growing point cannot grow. The four corners of the segmented vehicle edge image are taken as seed growing points, the pixel values of the background part of the segmented vehicle edge image are set to be zero, the image of the local key point part of the vehicle is segmented, and the image of the local key point part of the vehicle is amplified.
Furthermore, the preferred embodiment of the present invention presets two thresholds T1 and T2 (T1 < T2) to obtain two threshold edge images N1[i, j] and N2[i, j]. Preferably, the double-threshold method connects the interrupted edges in N2[i, j] into a complete contour: when a break point of an edge is reached, an edge that can be connected is searched for in N1[i, j], and this continues until all discontinuities in N2[i, j] have been connected.
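The sketch below illustrates this step under the assumption that OpenCV's Canny (which internally applies the first-order gradient, non-maximum suppression and the two thresholds T1 < T2) is an acceptable stand-in, and it uses a corner-seeded flood fill as a much-simplified substitute for the region-growing step; all threshold values are assumptions.

```python
import cv2
import numpy as np

def vehicle_edges(gray: np.ndarray, t1: int = 50, t2: int = 150) -> np.ndarray:
    return cv2.Canny(gray, t1, t2)        # refined (thinned) edge image via double thresholds

def suppress_background(segmented: np.ndarray) -> np.ndarray:
    """Grow the background from the four corner seeds and set it to zero,
    keeping only the local key-point regions of the vehicle (much simplified)."""
    h, w = segmented.shape
    mask = np.zeros((h + 2, w + 2), np.uint8)
    out = segmented.copy()
    for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
        cv2.floodFill(out, mask, seed, 0, loDiff=10, upDiff=10)
    return out
```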
S4, establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set.
The directional gradient feature is a feature descriptor used for object detection in computer vision and image processing; it builds features by calculating and counting histograms of gradient directions over local regions of an image. In the preferred embodiment of the invention, a gradient matrix of the local key point image of the vehicle is formed by calculating the gradient magnitude G(x, y) and the gradient direction σ(x, y) of each pixel point (x, y) of the local key point image, where each element of the gradient matrix is a vector whose first component is the gradient magnitude and whose second and third components together represent the gradient direction; the image matrix is divided into small cell units, each cell unit is preset to 4 × 4 pixels, every 2 × 2 cell units form a block, and the angle range from 0° to 180° is evenly divided into 9 direction channels; the gradient magnitude and direction of each pixel point in a cell unit are calculated, a gradient direction histogram containing the 9 direction channels is counted, and the sum of the pixel gradients of each direction channel in the histogram is calculated, giving a group of vectors formed by the accumulated sums of the channels; the cell units are combined into blocks and the vectors within each block are normalized to obtain feature vectors; finally, all the normalized feature vectors are concatenated to form the directional gradient feature map set of the local key point image of the vehicle.
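A hedged sketch of this feature construction using scikit-image's hog is shown below (the patent does not name a library; scikit-image is an assumption). The parameters follow the 4 × 4-pixel cells, 2 × 2-cell blocks and 9 direction channels described above.

```python
import numpy as np
from skimage.feature import hog

def hog_feature_map(keypoint_image: np.ndarray) -> np.ndarray:
    return hog(keypoint_image,
               orientations=9,            # 0°–180° evenly split into 9 direction channels
               pixels_per_cell=(4, 4),    # each cell unit is 4 x 4 pixels
               cells_per_block=(2, 2),    # every 2 x 2 cell units form a block
               block_norm="L2-Hys",       # normalize the vectors inside each block
               feature_vector=True)       # concatenate all normalized block vectors
```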
S5, training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing training the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value.
In a preferred embodiment of the present invention, the intelligent self-checking vehicle model comprises a convolutional neural network. The convolutional neural network is a feedforward neural network, the artificial neurons of the convolutional neural network can respond to surrounding units in a part of coverage range, the basic structure of the convolutional neural network comprises two layers, one layer is a characteristic extraction layer, the input of each neuron is connected with a local receiving domain of the previous layer, and the local characteristics are extracted. Once the local feature is extracted, the position relation between the local feature and other features is determined; the other is a feature mapping layer, each calculation layer of the network is composed of a plurality of feature mappings, each feature mapping is a plane, and the weights of all neurons on the plane are equal.
In a preferred embodiment of the present invention, the convolutional neural network comprises an input layer, a convolutional layer, a pooling layer and an output layer. The input layer of the convolutional neural network model receives the training set, and a preset group of filters in the convolutional layer performs convolution operations on the training set to extract feature vectors, where the filters may be {filter0, filter1}, generating sets of features on similar channels and dissimilar channels respectively; the pooling layer performs a pooling operation on the feature vectors, the pooled feature vectors are input to a fully connected layer, and normalization processing and calculation are performed on them through an activation function to obtain the training value, which is then passed to the output layer. The normalization process "compresses" a K-dimensional vector containing arbitrary real numbers into another K-dimensional real vector in which every element lies in the range (0, 1) and all elements sum to 1.
In the embodiment of the present invention, the activation function is a softmax function, and a calculation formula is as follows:
Oj = e^(Ij) / Σ(m = 1 to t) e^(Im)
where Oj represents the vehicle feature image output value of the j-th neuron of the convolutional neural network output layer, Ij represents the input value of the j-th neuron of the output layer, t represents the total number of neurons in the output layer, and e is Euler's number (an infinite non-repeating decimal).
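A hedged PyTorch sketch of this kind of network is shown below. The layer sizes, the 64 × 64 gray-scale input resolution and the class count are assumptions made only to keep the example self-contained; they are not the patent's architecture.

```python
import torch
import torch.nn as nn

class SelfCheckCNN(nn.Module):
    """Input layer -> convolution (preset filter bank) -> pooling ->
    fully connected layer -> softmax output O_j = e^(I_j) / sum_m e^(I_m)."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # a small preset group of filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
        )
        self.classifier = nn.Linear(8 * 32 * 32, num_classes)  # assumes 64 x 64 gray input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        logits = self.classifier(x)                      # I_j, inputs to the output layer
        return torch.softmax(logits, dim=1)              # O_j, values in (0, 1) summing to 1

model = SelfCheckCNN(num_classes=2)                      # e.g. insured / not insured
```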
In a preferred embodiment of the present invention, the threshold of the predetermined loss function value is 0.01, and the loss function is a least square method:
s = Σ(i = 1 to k) (yi − y'i)²
where s is the error value between the vehicle feature image with the highest matching degree to the input directional gradient feature map and the vehicle image in the vehicle image library, k is the number of directional gradient feature maps, yi is a vehicle image of the vehicle image library, and y'i is the vehicle feature image with the highest matching degree to the input directional gradient feature map.
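A hedged sketch of this stopping rule follows: training continues until the least-squares (mean-squared-error) loss drops below the preset 0.01 threshold. The optimizer, learning rate and epoch cap are assumptions, not values from the patent.

```python
import torch

def train_until_converged(model, loader, threshold: float = 0.01, max_epochs: int = 100):
    criterion = torch.nn.MSELoss()                        # least-squares loss: training value vs. label set
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for features, labels in loader:                   # loader yields (feature tensor, label tensor)
            optimizer.zero_grad()
            loss = criterion(model(features), labels)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / max(len(loader), 1) < threshold:  # stop once below the preset threshold
            break
    return model
```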
S6, inputting the vehicle inspection images uploaded by the user into the trained intelligent self-inspection vehicle model to obtain the feature images with the highest matching degree of the vehicle inspection images, traversing and comparing the feature images with the highest matching degree with the vehicle image library by using a logic comparison program, and outputting the self-inspection results of the vehicle inspection images uploaded by the user.
In a preferred embodiment of the present invention, the logic comparison program is written with MapReduce in Hadoop. MapReduce is a programming model for parallel operations over large-scale data sets (larger than 1 TB). The invention obtains the feature image with the highest matching degree to the vehicle-checking image through the intelligent self-checking vehicle model, and the logic comparison program identifies whether a vehicle image corresponding to that feature image exists in the vehicle image library, so as to output the self-checking result of the vehicle-checking image. The traversal comparison compares the vehicle-checking image with the vehicle images in the vehicle image library one by one. Preferably, the invention performs no further processing on vehicle-checking images that pass the self-check, while vehicle-checking images that fail the self-check are submitted for manual review.
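The patent performs this comparison as a Hadoop MapReduce job; the plain-Python sketch below only mirrors the map/reduce shape of that traversal. The similarity function and the 0.9 threshold are assumptions introduced for illustration.

```python
from functools import reduce

def traverse_compare(best_match_feature, library_features, similarity, threshold: float = 0.9):
    # "map": score the best-matching feature image against every library image, one by one
    scores = [similarity(best_match_feature, lib_feat) for lib_feat in library_features]
    # "reduce": decide whether any library image matches closely enough
    matched = reduce(lambda acc, s: acc or s >= threshold, scores, False)
    return "self-check passed" if matched else "submit for manual review"
```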
The invention further provides an intelligent self-checking vehicle inspection device. Fig. 2 is a schematic view of an internal structure of the intelligent self-checking vehicle inspection device according to an embodiment of the present invention.
In the present embodiment, the smart self-checking vehicle device 1 may be a PC (Personal Computer), a terminal device such as a smart phone, a tablet Computer, and a mobile Computer, or may be a server. The intelligent self-checking vehicle device 1 at least comprises a memory 11, a processor 12, a communication bus 13 and a network interface 14.
The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the intelligent self-verifying vehicle device 1, such as a hard disk of the intelligent self-verifying vehicle device 1. The memory 11 may also be an external storage device of the Smart self-checking vehicle device 1 in other embodiments, such as a plug-in hard disk provided on the Smart self-checking vehicle device 1, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 11 may also include both an internal storage unit and an external storage device of the intelligent self-checking vehicle apparatus 1. The memory 11 may be used to store not only application software installed in the intelligent self-checking vehicle device 1 and various types of data, such as codes of the intelligent self-checking vehicle program 01, but also temporarily store data that has been output or is to be output.
Processor 12, which in some embodiments may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip, is configured to execute program code stored in memory 11 or process data, such as executing smart car self-check program 01.
The communication bus 13 is used to realize connection communication between these components.
The network interface 14 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), typically used to establish a communication link between the apparatus 1 and other electronic devices.
Optionally, the apparatus 1 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the intelligent self-verifying vehicle device 1 and for displaying a visual user interface.
While FIG. 2 only shows the intelligent self-checking vehicle device 1 with the components 11 to 14 and the intelligent self-checking vehicle program 01, those skilled in the art will appreciate that the structure shown in FIG. 2 does not constitute a limitation of the intelligent self-checking vehicle device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
In the embodiment of the apparatus 1 shown in fig. 2, the memory 11 stores therein an intelligent self-checking vehicle program 01; the processor 12 implements the following steps when executing the intelligent self-checking vehicle program 01 stored in the memory 11:
step one, a vehicle image set stored by a user in history is obtained from a vehicle image library, and a label is established for the vehicle image set of the vehicle image library to generate a label set.
In the preferred embodiment of the present invention, the user may be an enterprise related to vehicle insurance, such as Ping An of China. The vehicle images in the invention mainly come from two sources: first, offline collection by Ping An of China sales staff when promoting vehicle insurance; second, online collection during policy signing through the Ping An of China vehicle-insurance app and/or the Ping An of China vehicle-insurance official website. Further, the invention establishes labels for the vehicle images in the vehicle image library, thereby generating a label set. For example, labels for belonging to an insured vehicle and not belonging to an insured vehicle are established according to the license plate authentication of the vehicle.
And step two, receiving the vehicle checking image set of the user, and carrying out preprocessing operation on the vehicle checking image set to obtain a target vehicle checking image set.
In a preferred embodiment of the present invention, the vehicle-checking image set is mainly derived from vehicle images uploaded by a vehicle owner. The pre-processing operations include contrast enhancement, graying, and noise reduction. The method comprises the steps that a car inspection image in a car inspection image set is converted into a gray image through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
In detail, the specific implementation steps of contrast enhancement, graying processing and noise reduction are as follows:
d. graying treatment:
the histogram equalization is a process of having the same number of pixel points on each gray level, and aims to make the image distributed and homogenized in the whole dynamic variation range of the gray level, improve the brightness distribution state of the image and enhance the visual effect of the image. In the embodiment of the present invention, the histogram equalization processing includes: counting a histogram of the vehicle inspection image set with the improved contrast; calculating new gray scale of the vehicle inspection image after transformation by adopting cumulative distribution function according to the counted histogram; and replacing the old gray scale with the new gray scale, and simultaneously combining the gray scales which are equal or approximate to each other to obtain a balanced vehicle-checking image set. Preferably, the invention converts the car inspection image containing the color image into the gray image by using the proportional methods, wherein the proportional methods are that the three components of the current pixel are respectively R, G and B, and the converted pixel component value Y is obtained by using a color conversion formula, so that the gray image of the color image is obtained. The color conversion formula is:
Y=0.3R+0.59G+0.11B
e. contrast enhancement:
the contrast refers to the contrast between the brightness maximum and minimum in the imaging system, wherein low contrast increases the difficulty of image processing. In the preferred embodiment of the present invention, a contrast stretching method is used to achieve the purpose of enhancing the contrast of an image by increasing the dynamic range of gray scale. Furthermore, the invention performs gray scale stretching on the specific area according to the piecewise linear transformation function in the contrast stretching method, thereby further improving the contrast of the output image. When contrast stretching is performed, gray value transformation is essentially achieved. The invention realizes gray value conversion by linear stretching, wherein the linear stretching refers to pixel level operation with linear relation between input and output gray values, and a gray conversion formula is as follows:
Db=f(Da)=a*Da+b
where a is the linear slope, b is the intercept on the Y axis, Da represents the gray value of the input image, and Db represents the gray value of the output image. When a > 1, the contrast of the output image is enhanced compared with the original image; when a < 1, the contrast of the output image is weakened compared with the original image.
f. Noise reduction:
the Gaussian filtering is linear smooth filtering, is suitable for eliminating Gaussian noise, and is widely applied to the noise reduction process of image processing. In the invention, each pixel in the image of the vehicle-checking image set is scanned by using a template (or called convolution and mask), and the weighted average gray value of the pixels in the neighborhood determined by the template is used for replacing the value of the central pixel point of the template, so that the N-dimensional space normal distribution equation is as follows:
G(r) = (1 / (2πσ²)^(N/2)) · e^(−r² / (2σ²))
where σ is the standard deviation of the normal distribution (the larger σ is, the more blurred, i.e. smoothed, the image becomes), and r is the blur radius, i.e. the distance from a template element to the center of the template.
And thirdly, segmenting the vehicles of the target vehicle-checking image set through an edge detection method and a threshold segmentation method to obtain local key point images of the vehicles.
The basic idea of edge detection is to treat as edge points those pixels in an image where the gray level changes in a step-like or roof-like manner, i.e. where the derivative of the gray level is large or extremely large. In the preferred embodiment of the invention, a Canny edge detection method is used to perform boundary positioning on the vehicles of the target vehicle-checking image set: the amplitude and direction of the vehicle gradient are calculated through finite differences of first-order partial derivatives, and the amplitude of non-local-maximum points in the gradient is set to zero, yielding a refined vehicle edge image; the vehicle edge image is then segmented with a double-threshold method, the key points in the segmented vehicle edge image are enlarged with a region growing method, and the segmented vehicle edge image is connected through edge linking, so that a local key point image of the vehicle is obtained.
The basic idea of the region growing method is to group pixels or sub-regions into larger regions according to a predefined criterion, starting from a set of growing points (the growing point can be a single pixel or a small region), merging adjacent pixels or regions with similar properties to the growing point with the growing point to form a new growing point, and repeating the process until the growing point cannot grow. The four corners of the segmented vehicle edge image are taken as seed growing points, the pixel values of the background part of the segmented vehicle edge image are set to be zero, the image of the local key point part of the vehicle is segmented, and the image of the local key point part of the vehicle is amplified.
Furthermore, the preferred embodiment of the present invention presets two thresholds T1 and T2 (T1 < T2) to obtain two threshold edge images N1[i, j] and N2[i, j]. Preferably, the double-threshold method connects the interrupted edges in N2[i, j] into a complete contour: when a break point of an edge is reached, an edge that can be connected is searched for in N1[i, j], and this continues until all discontinuities in N2[i, j] have been connected.
And fourthly, establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set.
The directional gradient feature is a feature descriptor used for object detection in computer vision and image processing; it builds features by calculating and counting histograms of gradient directions over local regions of an image. In the preferred embodiment of the invention, a gradient matrix of the local key point image of the vehicle is formed by calculating the gradient magnitude G(x, y) and the gradient direction σ(x, y) of each pixel point (x, y) of the local key point image, where each element of the gradient matrix is a vector whose first component is the gradient magnitude and whose second and third components together represent the gradient direction; the image matrix is divided into small cell units, each cell unit is preset to 4 × 4 pixels, every 2 × 2 cell units form a block, and the angle range from 0° to 180° is evenly divided into 9 direction channels; the gradient magnitude and direction of each pixel point in a cell unit are calculated, a gradient direction histogram containing the 9 direction channels is counted, and the sum of the pixel gradients of each direction channel in the histogram is calculated, giving a group of vectors formed by the accumulated sums of the channels; the cell units are combined into blocks and the vectors within each block are normalized to obtain feature vectors; finally, all the normalized feature vectors are concatenated to form the directional gradient feature map set of the local key point image of the vehicle.
And fifthly, training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value.
In a preferred embodiment of the present invention, the intelligent self-checking vehicle model comprises a convolutional neural network. The convolutional neural network is a feedforward neural network, the artificial neurons of the convolutional neural network can respond to surrounding units in a part of coverage range, the basic structure of the convolutional neural network comprises two layers, one layer is a characteristic extraction layer, the input of each neuron is connected with a local receiving domain of the previous layer, and the local characteristics are extracted. Once the local feature is extracted, the position relation between the local feature and other features is determined; the other is a feature mapping layer, each calculation layer of the network is composed of a plurality of feature mappings, each feature mapping is a plane, and the weights of all neurons on the plane are equal.
In a preferred embodiment of the present invention, the convolutional neural network comprises an input layer, a convolutional layer, a pooling layer and an output layer. The input layer of the convolutional neural network model receives the training set, and a preset group of filters in the convolutional layer performs convolution operations on the training set to extract feature vectors, where the filters may be {filter0, filter1}, generating sets of features on similar channels and dissimilar channels respectively; the pooling layer performs a pooling operation on the feature vectors, the pooled feature vectors are input to a fully connected layer, and normalization processing and calculation are performed on them through an activation function to obtain the training value, which is then passed to the output layer. The normalization process "compresses" a K-dimensional vector containing arbitrary real numbers into another K-dimensional real vector in which every element lies in the range (0, 1) and all elements sum to 1.
In the embodiment of the present invention, the activation function is a softmax function, and a calculation formula is as follows:
Oj = e^(Ij) / Σ(m = 1 to t) e^(Im)
where Oj represents the vehicle feature image output value of the j-th neuron of the convolutional neural network output layer, Ij represents the input value of the j-th neuron of the output layer, t represents the total number of neurons in the output layer, and e is Euler's number (an infinite non-repeating decimal).
In a preferred embodiment of the present invention, the threshold of the predetermined loss function value is 0.01, and the loss function is a least square method:
s = Σ(i = 1 to k) (yi − y'i)²
where s is the error value between the vehicle feature image with the highest matching degree to the input directional gradient feature map and the vehicle image in the vehicle image library, k is the number of directional gradient feature maps, yi is a vehicle image of the vehicle image library, and y'i is the vehicle feature image with the highest matching degree to the input directional gradient feature map.
Inputting the vehicle inspection image uploaded by the user into the trained intelligent self-checking vehicle inspection model to obtain the characteristic image with the highest matching degree of the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by using a logic comparison program, and outputting the self-checking result of the vehicle inspection image uploaded by the user.
In a preferred embodiment of the present invention, the logic comparison program is written with MapReduce in Hadoop. MapReduce is a programming model for parallel operations over large-scale data sets (larger than 1 TB). The invention obtains the feature image with the highest matching degree to the vehicle-checking image through the intelligent self-checking vehicle model, and the logic comparison program identifies whether a vehicle image corresponding to that feature image exists in the vehicle image library, so as to output the self-checking result of the vehicle-checking image. The traversal comparison compares the vehicle-checking image with the vehicle images in the vehicle image library one by one. Preferably, the invention performs no further processing on vehicle-checking images that pass the self-check, while vehicle-checking images that fail the self-check are submitted for manual review.
Alternatively, in other embodiments, the intelligent self-checking vehicle program may be further divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to implement the present invention.
For example, referring to fig. 3, a schematic diagram of program modules of an intelligent self-checking vehicle program in an embodiment of the intelligent self-checking vehicle device of the present invention is shown; in this embodiment, the intelligent self-checking vehicle program may be divided into an image acquisition module 10, an image processing module 20, a model training module 30, and a result self-checking module 40, which exemplarily:
the image acquisition module 10 is configured to: the method comprises the steps of obtaining vehicle images stored in a user history from a vehicle image library, receiving a vehicle checking image set of the user, and establishing labels for the vehicle images in the vehicle image library to generate a label set.
The image processing module 20 is configured to: preprocessing the vehicle checking image set to obtain a target vehicle checking image set, segmenting vehicles of the target vehicle checking image set through an edge detection method and a threshold segmentation method to obtain local key point images of the vehicles, establishing a direction gradient characteristic atlas of the vehicles according to the local key point images of the vehicles, and taking the direction gradient characteristic atlas as a training set.
The model training module 30 is configured to: training a pre-constructed intelligent self-checking vehicle model by utilizing the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value.
The result self-checking module 40 is configured to: input the vehicle inspection image uploaded by the user into the trained intelligent self-checking vehicle model to obtain the characteristic image with the highest matching degree of the vehicle inspection image, traverse and compare the characteristic image with the highest matching degree with the vehicle image library by utilizing a logic comparison program, and output the self-checking result of the vehicle inspection image uploaded by the user.
The functions or operation steps of the image acquisition module 10, the image processing module 20, the model training module 30, and the result self-checking module 40 when executed are substantially the same as those of the above embodiments, and are not repeated here.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium has an intelligent self-checking vehicle program stored thereon, and the intelligent self-checking vehicle program is executable by one or more processors to implement the following operations:
acquiring vehicle images historically stored by a user from a vehicle image library, and establishing labels for the vehicle images in the vehicle image library to generate a label set;
receiving the vehicle-checking image set of the user, and carrying out preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set;
segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value;
inputting the vehicle-checking image uploaded by the user into the trained intelligent self-checking vehicle model to obtain the feature image with the highest matching degree to the uploaded image, comparing the best-matching feature image against the vehicle image library by traversal using a logic comparison program, and outputting the self-checking result of the vehicle-checking image uploaded by the user.
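The fifth operation above ends training once the loss function value falls below the preset threshold. A minimal sketch of such a loop, assuming a PyTorch model, a cross-entropy loss, and synthetic data in place of the directional gradient feature atlas (all assumptions; the disclosure names neither the framework nor the loss function):

```python
import torch
import torch.nn as nn

def train_until_threshold(model, features, labels, loss_threshold=0.05, max_epochs=200):
    """Train the model until the loss function value is below the preset threshold."""
    criterion = nn.CrossEntropyLoss()            # assumed loss; the text only says "loss function"
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_value = float("inf")
    for _ in range(max_epochs):
        optimizer.zero_grad()
        training_values = model(features)        # the "training value" produced by the model
        loss = criterion(training_values, labels)
        loss.backward()
        optimizer.step()
        loss_value = loss.item()
        if loss_value < loss_threshold:          # preset threshold reached: training is finished
            break
    return loss_value

# Hypothetical usage with synthetic 128-dimensional feature vectors and 10 labels.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
print(train_until_threshold(model, x, y))
```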
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as that of the above embodiments of the intelligent self-checking vehicle device and method, and is not repeated here.
It should be noted that the numbering of the above embodiments of the present invention is for description only and does not indicate the relative merits of the embodiments. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, apparatus, article, or method that includes the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software together with a necessary general-purpose hardware platform, or by hardware alone, although in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above, the software product including instructions for causing a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope. All equivalent structural or process modifications made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of the present invention.

Claims (10)

1. An intelligent self-checking vehicle method, characterized in that the method comprises:
acquiring a vehicle image set historically stored by a user from a vehicle image library, and establishing labels for the vehicle image set of the vehicle image library to generate a label set;
receiving the vehicle-checking image set of the user, and carrying out preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set;
segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value;
inputting the vehicle-checking image uploaded by the user into the trained intelligent self-checking vehicle model to obtain the feature image with the highest matching degree to the uploaded image, comparing the best-matching feature image against the vehicle image library by traversal using a logic comparison program, and outputting the self-checking result of the vehicle-checking image uploaded by the user.
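The first step of claim 1 attaches labels to the images already stored in the vehicle image library. One possible sketch, assuming the library is laid out as one sub-directory per label (a layout chosen for illustration, not specified in the claim):

```python
import os

def build_label_set(library_dir):
    """Walk a vehicle image library organized as <library_dir>/<label>/<image>
    and map every historically stored vehicle image to its label."""
    label_set = {}
    for label in sorted(os.listdir(library_dir)):
        class_dir = os.path.join(library_dir, label)
        if not os.path.isdir(class_dir):
            continue
        for name in sorted(os.listdir(class_dir)):
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                label_set[os.path.join(class_dir, name)] = label
    return label_set

# Hypothetical usage; "vehicle_image_library" is a placeholder path.
if os.path.isdir("vehicle_image_library"):
    print(len(build_label_set("vehicle_image_library")), "labelled images")
```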
2. The intelligent self-checking vehicle method according to claim 1, wherein receiving the vehicle-checking image set of the user and performing a preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set comprises:
converting the car inspection images in the car inspection image set into gray level images through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
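A minimal sketch of the preprocessing recited in claim 2, in Python with OpenCV; converting to grayscale before equalization, the stretching range, and the 5x5 Gaussian kernel are assumptions, since the claim fixes no parameters:

```python
import cv2
import numpy as np

def preprocess(image_bgr, ksize=5):
    """Grayscale + histogram equalization, contrast stretching, Gaussian denoising."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)                          # histogram equalization
    lo, hi = int(equalized.min()), int(equalized.max())
    stretched = (equalized.astype(np.float32) - lo) * (255.0 / max(hi - lo, 1))
    stretched = stretched.astype(np.uint8)                      # contrast stretching to [0, 255]
    return cv2.GaussianBlur(stretched, (ksize, ksize), 0)       # Gaussian filtering (denoising)

# Hypothetical usage on a synthetic image in place of a real vehicle-checking photo.
sample = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
print(preprocess(sample).shape)
```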
3. The intelligent self-checking vehicle method according to claim 1, wherein segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain the local key point image of the vehicle comprises:
edge positioning is carried out on the vehicle of the target vehicle-checking image set by using a Canny edge detection method, the amplitude and the direction of the gradient of the vehicle are calculated through the finite difference of first-order partial derivatives, and the amplitude of non-local maximum value points in the gradient of the vehicle is set to zero, so that a refined vehicle edge image is obtained;
and segmenting the vehicle edge image by using a dual threshold method, amplifying key points in the segmented vehicle edge image by using a region growing method, and connecting the segmented vehicle edge image by edge connection so as to obtain a local key point image of the vehicle.
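A minimal sketch of the segmentation recited in claim 3, assuming OpenCV. cv2.Canny internally performs the first-order gradient computation, non-maximum suppression, and dual-threshold hysteresis described above; dilation, morphological closing, and largest-connected-component selection are used here as simple stand-ins for region growing and edge connection (stand-ins chosen for brevity, not the patent's exact procedure):

```python
import cv2
import numpy as np

def extract_key_point_image(gray, low=50, high=150):
    """Canny edges (gradient + non-maximum suppression + dual thresholds),
    then grow and connect the edge regions, keeping the largest component."""
    edges = cv2.Canny(gray, low, high)                           # refined vehicle edge image
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    grown = cv2.dilate(edges, kernel, iterations=2)              # crude stand-in for region growing
    closed = cv2.morphologyEx(grown, cv2.MORPH_CLOSE, kernel)    # edge connection
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
    if n <= 1:
        return closed
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))    # skip background label 0
    return np.where(labels == largest, 255, 0).astype(np.uint8)

# Hypothetical usage on a synthetic grayscale image.
gray = (np.random.rand(480, 640) * 255).astype(np.uint8)
print(extract_key_point_image(gray).shape)
```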
4. The intelligent self-checking vehicle method according to any one of claims 1 to 3, wherein the establishing of the directional gradient feature atlas of the vehicle according to the local key point image of the vehicle comprises:
calculating the gradient amplitude G(x, y) and the gradient direction σ(x, y) of each pixel point (x, y) in the local key point image of the vehicle to form a gradient matrix of the local key point image of the vehicle, and dividing the gradient matrix into small cell units;
calculating the gradient size and direction of each pixel point in the cell unit, counting a gradient direction histogram, and calculating the sum of pixel gradients of each direction channel in the gradient direction histogram;
accumulating the sum of the pixel gradients of each direction channel to form a vector, combining the cell units into a block, normalizing the vector in the block to obtain a characteristic vector, and connecting the characteristic vectors to obtain a direction gradient characteristic map set of the vehicle.
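The steps of claim 4 are the standard histogram-of-oriented-gradients construction. A compact NumPy sketch follows; the 8x8-pixel cells, 9 orientation bins, and 2x2-cell blocks with L2 normalization are assumed values that the claim does not fix:

```python
import numpy as np

def hog_features(img, cell=8, bins=9, block=2):
    """Gradient amplitude/direction per pixel, per-cell orientation histograms,
    block-wise L2 normalization, then concatenation into one feature vector."""
    img = img.astype(np.float32)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]                  # first-order finite differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    magnitude = np.hypot(gx, gy)                            # G(x, y)
    direction = np.rad2deg(np.arctan2(gy, gx)) % 180        # sigma(x, y), unsigned gradient

    h, w = img.shape
    cy, cx = h // cell, w // cell
    hist = np.zeros((cy, cx, bins), dtype=np.float32)
    bin_width = 180 / bins
    for i in range(cy):
        for j in range(cx):
            m = magnitude[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            d = direction[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = np.minimum((d // bin_width).astype(int), bins - 1)
            for b in range(bins):                           # sum of pixel gradients per channel
                hist[i, j, b] = m[idx == b].sum()

    blocks = []
    for i in range(cy - block + 1):
        for j in range(cx - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            blocks.append(v / (np.linalg.norm(v) + 1e-6))   # block normalization
    return np.concatenate(blocks)                           # concatenated feature vector

print(hog_features(np.random.rand(64, 64) * 255).shape)
```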
5. The intelligent self-checking vehicle method of claim 1, wherein training a pre-constructed intelligent self-checking vehicle model with the training set to obtain a training value comprises:
inputting the training set to an input layer of a convolutional neural network of the intelligent self-checking vehicle model, and extracting a feature vector by performing convolution operation on the training set through presetting a group of filters in the convolutional layer of the convolutional neural network;
and performing pooling operation on the feature vector by using a pooling layer of the convolutional neural network, inputting the pooled feature vector to a full-connection layer, and performing normalization processing and calculation on the pooled feature vector through an activation function of the convolutional neural network to obtain the training value.
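A minimal sketch of the network recited in claim 5, assuming PyTorch; the number of filters, the input size, and the softmax normalization are illustrative assumptions, since the claim names only the convolutional, pooling, fully connected, and activation stages:

```python
import torch
import torch.nn as nn

class SelfCheckCNN(nn.Module):
    """Convolution with a preset group of filters, pooling, a full-connection
    layer, and softmax normalization producing the training value."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # preset group of 16 filters
        self.pool = nn.MaxPool2d(2)                               # pooling layer
        self.fc = nn.Linear(16 * 32 * 32, num_classes)            # full-connection layer
        self.act = nn.Softmax(dim=1)                              # normalization via activation

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = self.pool(x)
        x = x.flatten(start_dim=1)
        return self.act(self.fc(x))                               # training value

# Hypothetical usage on a batch of 64x64 single-channel gradient feature maps.
model = SelfCheckCNN()
batch = torch.randn(4, 1, 64, 64)
print(model(batch).shape)   # torch.Size([4, 10])
```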
6. An intelligent self-checking vehicle inspection device, characterized in that the device comprises a memory and a processor, the memory stores an intelligent self-checking vehicle inspection program which can run on the processor, and when the intelligent self-checking vehicle inspection program is executed by the processor, the following steps are realized:
acquiring vehicle images historically stored by a user from a vehicle image library, and establishing labels for the vehicle images in the vehicle image library to generate a label set;
receiving the vehicle-checking image set of the user, and carrying out preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set;
segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value;
inputting the vehicle-checking image uploaded by the user into the trained intelligent self-checking vehicle model to obtain the feature image with the highest matching degree to the uploaded image, comparing the best-matching feature image against the vehicle image library by traversal using a logic comparison program, and outputting the self-checking result of the vehicle-checking image uploaded by the user.
7. The intelligent self-checking vehicle inspection device according to claim 6, wherein receiving the vehicle-checking image set of the user and performing a preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set comprises:
converting the car inspection images in the car inspection image set into gray level images through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
8. The intelligent self-checking vehicle inspection device according to claim 6, wherein segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain the local key point image of the vehicle comprises:
edge positioning is carried out on the vehicle of the target vehicle-checking image set by using a Canny edge detection method, the amplitude and the direction of the gradient of the vehicle are calculated through the finite difference of first-order partial derivatives, and the amplitude of non-local maximum value points in the gradient of the vehicle is set to zero, so that a refined vehicle edge image is obtained;
the method comprises the steps of segmenting the vehicle edge image by using a double threshold method, amplifying key points in the segmented vehicle edge image by using a region growing method, and connecting the segmented vehicle edge image through edge connection so as to obtain a local key point image of the vehicle.
9. The intelligent self-checking vehicle inspection device according to any one of claims 6 to 8, wherein the establishing of the directional gradient feature atlas of the vehicle according to the local key point image of the vehicle comprises:
calculating the gradient amplitude G(x, y) and the gradient direction σ(x, y) of each pixel point (x, y) in the local key point image of the vehicle to form a gradient matrix of the local key point image of the vehicle, and dividing the gradient matrix into small cell units;
calculating the gradient size and direction of each pixel point in the cell unit, counting a gradient direction histogram, and calculating the sum of pixel gradients of each direction channel in the gradient direction histogram;
accumulating the sum of the pixel gradients of each direction channel to form a vector, combining the cell units into a block, normalizing the vector in the block to obtain a characteristic vector, and connecting the characteristic vectors to obtain a direction gradient characteristic map set of the vehicle.
10. A computer-readable storage medium having stored thereon an intelligent self-checking vehicle program, the intelligent self-checking vehicle program being executable by one or more processors to perform the steps of the intelligent self-checking vehicle method as claimed in any one of claims 1 to 5.
CN201910761970.6A 2019-08-14 2019-08-14 Intelligent self-checking vehicle method and device and computer readable storage medium Active CN110598033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910761970.6A CN110598033B (en) 2019-08-14 2019-08-14 Intelligent self-checking vehicle method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110598033A true CN110598033A (en) 2019-12-20
CN110598033B CN110598033B (en) 2023-03-28

Family

ID=68854650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910761970.6A Active CN110598033B (en) 2019-08-14 2019-08-14 Intelligent self-checking vehicle method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110598033B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110058743A1 (en) * 2009-09-08 2011-03-10 Myers Charles A Image Classification And Information Retrieval Over Wireless Digital Networks And The Internet
CN105787466A (en) * 2016-03-18 2016-07-20 中山大学 Vehicle type fine identification method and system
US20180285386A1 (en) * 2017-03-31 2018-10-04 Alibaba Group Holding Limited Method, apparatus, and electronic devices for searching images
CN107729818A (en) * 2017-09-21 2018-02-23 北京航空航天大学 A kind of multiple features fusion vehicle recognition methods again based on deep learning
CN109101865A (en) * 2018-05-31 2018-12-28 湖北工业大学 A kind of recognition methods again of the pedestrian based on deep learning
CN108805196A (en) * 2018-06-05 2018-11-13 西安交通大学 Auto-increment learning method for image recognition
CN109472262A (en) * 2018-09-25 2019-03-15 平安科技(深圳)有限公司 Licence plate recognition method, device, computer equipment and storage medium
CN110097068A (en) * 2019-01-17 2019-08-06 北京航空航天大学 The recognition methods of similar vehicle and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘占文 (Liu Zhanwen) et al., "Vehicle target segmentation method under weak contrast based on a visual attention mechanism" (基于视觉注意机制的弱对比度下车辆目标分割方法), 《中国公路学报》 (China Journal of Highway and Transport) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353549A (en) * 2020-03-10 2020-06-30 创新奇智(重庆)科技有限公司 Image tag verification method and device, electronic device and storage medium
CN112132812A (en) * 2020-09-24 2020-12-25 平安科技(深圳)有限公司 Certificate checking method and device, electronic equipment and medium
CN112132812B (en) * 2020-09-24 2023-06-30 平安科技(深圳)有限公司 Certificate verification method and device, electronic equipment and medium

Also Published As

Publication number Publication date
CN110598033B (en) 2023-03-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant