CN109784170B - Vehicle risk assessment method, device, equipment and storage medium based on image recognition - Google Patents

Vehicle risk assessment method, device, equipment and storage medium based on image recognition

Info

Publication number
CN109784170B
Authority
CN
China
Prior art keywords
preset
vehicle
image
target
micro
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811528659.9A
Other languages
Chinese (zh)
Other versions
CN109784170A (en)
Inventor
付美蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201811528659.9A
Publication of CN109784170A
Application granted
Publication of CN109784170B


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

The invention discloses a vehicle risk assessment method, device and equipment based on image recognition, and a storage medium. The method comprises the following steps: acquiring a user video image of a reporting user and a scene video image of an accident vehicle through a self-service video call; inputting the scene video image into an accident analysis model for image recognition to obtain vehicle information and vehicle damage information of the accident vehicle; if the vehicle insurance report request is determined to be effective according to the history report information of the accident vehicle, carrying out micro-expression recognition on the user video image by using a micro-expression recognition model to obtain a comprehensive emotion score of the user video image; and if the comprehensive emotion score meets a preset score threshold requirement, determining the damage assessment result of the accident vehicle according to the vehicle damage information. According to the technical scheme, automatic damage assessment is realized without manual intervention, the processing efficiency and timeliness of vehicle insurance claims are improved, the authenticity of the vehicle insurance report is effectively verified, the risk of false video reports is reduced, and the accuracy of insurance processing is improved.

Description

Vehicle risk assessment method, device, equipment and storage medium based on image recognition
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for vehicle risk assessment based on image recognition.
Background
With the continuous improvement of living standards, vehicles have become an indispensable means of daily transportation.
At present, since most vehicles are insured, when a traffic accident occurs the owner usually reports the claim directly by telephone and then has to wait for insurance company staff to carry out an on-site investigation; the cycle is long and easily causes road congestion.
Although the prior art also provides methods for reporting a vehicle insurance claim directly by recording an on-site video, the on-site video still has to be assessed for damage manually online, so the processing efficiency of the report remains low, and long waits can occur in particular when the online agents are busy. Meanwhile, because there is no effective mechanism for checking the authenticity of the on-site video, the risk of false video reports is high, which reduces the accuracy of insurance processing.
Disclosure of Invention
The embodiment of the invention provides a vehicle risk assessment method, device, equipment and storage medium based on image recognition, which are used for solving the problems of low efficiency of the current vehicle risk assessment processing and high risk of false video report.
An image recognition-based vehicle risk assessment method comprises the following steps:
if a car insurance report request sent by a report user through a client is received, acquiring identity information of an insured person contained in the car insurance report request;
carrying out validity verification on the identity information of the insured person, if the verification is successful, starting self-service video call with the client, and acquiring a user video image of the case reporting user and a field video image of an accident vehicle in the self-service video call process;
inputting the field video image into a preset accident analysis model for image recognition to obtain vehicle information and vehicle damage information of the accident vehicle;
inquiring a history report record of the insured person in a preset claim database according to the identity information of the insured person;
inquiring historical report information of the accident vehicle in a historical report record of the insured according to the vehicle information of the accident vehicle, and determining whether the vehicle insurance report request is effective according to the historical report information;
if the car insurance report request is effective, carrying out micro-expression recognition on the user video image by using a preset micro-expression recognition model to obtain the probability of the user video image in each preset micro-expression state;
according to a preset combination calculation mode, carrying out combination calculation on the probability of the user video image in each preset micro-expression state to obtain a comprehensive emotion score of the user video image;
if the comprehensive emotion score meets a preset score threshold requirement, determining a damage assessment result of the accident vehicle according to the vehicle damage information;
and feeding back the vehicle damage information and the damage assessment result of the accident vehicle to the client.
An image recognition-based vehicle risk assessment device, comprising:
the request module is used for acquiring identity information of an insured person contained in a car insurance report request if the car insurance report request sent by a report user through a client is received;
the video call module is used for verifying the validity of the identity information of the insured person, if the verification is successful, the video call module starts self-service video call with the client, and a user video image of the case reporting user and a scene video image of the accident vehicle are obtained in the self-service video call process;
the vehicle damage identification module is used for inputting the field video image into a preset accident analysis model to carry out image identification so as to obtain vehicle information and vehicle damage information of the accident vehicle;
The history inquiry module is used for inquiring the history report record of the insured in a preset claim database according to the identity information of the insured;
the checking module is used for inquiring the history report information of the accident vehicle in the history report record of the insured according to the vehicle information of the accident vehicle and determining whether the vehicle insurance report request is effective according to the history report information;
the micro-expression recognition module is used for carrying out micro-expression recognition on the user video image by using a preset micro-expression recognition model if the vehicle insurance case report request is effective, so as to obtain the probability of the user video image in each preset micro-expression state;
the score calculation module is used for carrying out combination calculation on the probability of the user video image in each preset micro-expression state according to a preset combination calculation mode to obtain the comprehensive emotion score of the user video image;
the damage assessment module is used for determining damage assessment results of the accident vehicles according to the vehicle damage information if the comprehensive emotion score meets a preset score threshold requirement;
and the sending module is used for feeding back the vehicle damage information and the damage assessment result of the accident vehicle to the client.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the image recognition based vehicle risk assessment method described above when the computer program is executed.
A computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of the image recognition-based vehicle risk assessment method described above.
According to the vehicle risk assessment method, device, equipment and storage medium based on image recognition, the reporting user is guided, during a self-service video call, to shoot a user video image and a scene video image of the accident vehicle; a preset accident analysis model performs image recognition on the scene video image to obtain vehicle information and vehicle damage information of the accident vehicle, so that automatic damage assessment is achieved without manual intervention and the processing efficiency and timeliness of vehicle insurance claims are improved. On this basis, the history report records are obtained through the vehicle information and used to determine whether the report is effective; at the same time, micro-expression recognition is performed on the user video image, a comprehensive emotion score of the reporting user is calculated from the recognition result, and it is judged whether the comprehensive emotion score meets the score threshold requirement. Only when the report is effective and the comprehensive emotion score meets the score threshold requirement is the vehicle insurance report request determined to be a genuine report request. By comprehensively and quantitatively evaluating the micro-expressions of the reporting user together with the history report information of the accident vehicle to determine the authenticity of the report request, the authenticity of the vehicle insurance report is effectively identified, the risk of false video reports is reduced, and the accuracy of insurance processing is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an application environment of a vehicle risk assessment method based on image recognition according to an embodiment of the present invention;
FIG. 2 is a flow chart of a vehicle risk assessment method based on image recognition in an embodiment of the invention;
FIG. 3 is a flowchart of step S3 in a vehicle risk assessment method based on image recognition according to an embodiment of the present invention;
FIG. 4 is a flowchart of step S7 of the risk assessment method based on image recognition according to an embodiment of the present invention;
FIG. 5 is a flowchart of determining damage assessment results of an accident vehicle according to vehicle damage information in step S8 in a vehicle risk damage assessment method based on image recognition according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a vehicle risk assessment device based on image recognition according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The vehicle risk assessment method based on image recognition can be applied to an application environment shown in fig. 1, wherein the application environment comprises a server and a client, the server and the client are connected through a network, the network can be a wired network or a wireless network, the client specifically comprises but is not limited to various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices, and the server can be specifically realized by using an independent server or a server cluster formed by a plurality of servers. The case reporting user initiates a car insurance case report through the client, shoots a user video image and a field video image, and sends the user video image and the field video image to the server, and the server analyzes and processes the user video image and the field video image, judges the authenticity of the car insurance case report and determines the damage assessment result of the accident car.
In an embodiment, as shown in fig. 2, a vehicle risk assessment method based on image recognition is provided, and the method is applied to the server in fig. 1 for illustration, and is described in detail as follows:
s1: and if a case report request sent by the case report user through the client is received, acquiring the identity information of the insured person contained in the case report request.
Specifically, when a traffic accident occurs to the vehicle of the reporting user, the reporting user can send a car insurance report request to the server through an application interface provided by the client: the reporting user enters the identity information of the insured person in the application interface, and after clicking submit, the client sends the car insurance report request containing the identity information of the insured person to the server.
The identity information of the insured can be a mobile phone number of the insured, an identity card number of the insured, or other information capable of uniquely identifying the insured.
If the service end receives the car insurance report request, the identity information of the insured person is extracted from the car insurance report request.
S2: and carrying out validity check on the identity information of the insured, if the verification is successful, starting self-service video call with the client, and acquiring a user video image of the case reporting user and a field video image of the accident vehicle in the self-service video call process.
Specifically, the process of the server side for verifying the validity of the identity information obtained in the step S1 includes:
inquiring whether the identity information of the insured exists in a preset car insurance information database. The preset vehicle insurance information database stores vehicle insurance information which comprises insurance applicant information, insured person information, vehicle information, insurance item information, insurance validity period and the like and has successfully purchased the vehicle insurance.
If the identity information of the insured person is found in the preset vehicle insurance information database, the insurance validity period in the vehicle insurance information corresponding to the insured person is further acquired. If the current time falls within the insurance validity period, it is confirmed that the insured person has successfully taken out insurance which is still within its validity period, that is, the verification succeeds; if the current time does not fall within the insurance validity period, it is confirmed that although the insured person took out insurance, the validity period has expired, that is, the verification fails.
If the identity information of the insured person is not found in the preset vehicle insurance information database, it is confirmed that the insured person has not taken out insurance, that is, the verification fails.
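A minimal Python sketch of the three-outcome validity check described above; the policy store and field names (policy_db, valid_from, valid_to) are illustrative assumptions, not part of the disclosure:

```python
from datetime import date
from typing import Optional

# Illustrative policy store keyed by the insured person's identity information
# (e.g. ID-card number); the structure is assumed for this example only.
policy_db = {
    "440300199001010000": {"valid_from": date(2024, 1, 1), "valid_to": date(2024, 12, 31)},
}

def check_identity(insured_id: str, today: Optional[date] = None) -> str:
    """Return 'ok', 'expired' or 'not_insured', mirroring the three outcomes above."""
    today = today or date.today()
    record = policy_db.get(insured_id)
    if record is None:
        return "not_insured"   # identity not found: verification fails
    if record["valid_from"] <= today <= record["valid_to"]:
        return "ok"            # insured and within the insurance validity period
    return "expired"           # insured, but outside the validity period

print(check_identity("440300199001010000", date(2024, 6, 1)))  # -> ok
```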
If the result of the validity check is that the check is successful, the server starts self-service video call with the client, and guides the case reporting user to record a user video image containing a face image of the case reporting user and a scene video image of an accident vehicle through a preset voice command in the self-service video call process.
If the result of the validity check is that the check fails, the server returns the prompt information that the identity information of the insured person is illegal to the client and refuses the request of reporting the case of the car insurance.
Further, in the self-service video call process, the server side can specifically randomly select a question from a pre-stored question pool, send the selected question to the client side in a voice mode, ask a case report user of the client side to answer the question, and display a face image of the user in a video frame preset by an application interface of the client side while answering the question, so that the client side sends a user video image in the video frame to the server side.
After the report user finishes the answer of the questions, the server sends a video recording instruction of the accident scene to the report user of the client in a voice mode, the report user records the scene video image of the accident vehicle according to the video recording instruction, and the client sends the scene video image to the server after clicking and submitting.
The preset question pool includes preset questions related to the car insurance report, for example, asking the reporting user to state the location of the scene, the current time, or the number of accident vehicles.
The video recording instruction may include a recording requirement for a live video image of the accident vehicle, and inform the reporting user in a voice manner, for example, the recording requirement may include recording at least a panoramic image and a close-up image including the head of the accident vehicle, a panoramic image and a close-up image including the tail of the accident vehicle, and the like.
S3: and inputting the field video image into a preset accident analysis model for image recognition to obtain the vehicle information and the vehicle damage information of the accident vehicle.
Specifically, a preset accident analysis model is used to identify vehicle information and vehicle damage information of an accident vehicle. The vehicle information includes attribute information, such as license plate number, vehicle type, and vehicle color, that identifies the characteristics of the accident vehicle. The vehicle damage information includes attribute information identifying a wound surface of the accident vehicle, such as a damage position, a damage area, and the like.
The preset accident analysis model can be specifically a pre-trained convolutional neural network model based on deep learning and an image information recognition model, wherein the convolutional neural network model is used for recognizing vehicle damage information, and the image information recognition model is used for recognizing vehicle information.
S4: inquiring the history report record of the insured in a preset claim database according to the identity information of the insured.
Specifically, a preset claim database is pre-stored with historical record of the insuring vehicle of the insured, and the historical record of the insuring vehicle corresponding to the identity information can be searched in the claim database according to the identity information of the insured.
Each history report record contains the vehicle information of the report vehicle, the report time information, and other report information.
S5: according to the vehicle information of the accident vehicle, inquiring the history report information of the accident vehicle in the history report record of the insured, and determining whether the vehicle insurance report request is effective according to the history report information.
Specifically, in the history report records obtained in step S4, whether the vehicle information of the accident vehicle exists is queried. If so, the number of history report records of the accident vehicle within a preset time period, that is, the number of times the accident vehicle has been reported within the preset time period, is counted according to the report time information in the history report records containing the vehicle information of the accident vehicle.
If the number of reports is greater than or equal to a preset number-of-times threshold, it is confirmed that the accident vehicle has been reported frequently within the preset time period and that there is a possibility of false reporting, so the vehicle insurance report request is confirmed to be invalid, and the reporting user is notified through the client to file the report manually.
If the number of times of reporting the case is smaller than the preset number of times threshold, the accident vehicle is confirmed to have no frequent report in the preset time period, so that the vehicle insurance report request is confirmed to be effective.
The preset time period may be within one month, three months, half year, or the like, and may specifically be set according to the needs of practical applications, and a preset correspondence may exist between the preset time period and a preset frequency threshold, where the longer the preset time period is, the larger the frequency threshold is, and the shorter the preset time period is, the smaller the threshold is.
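A minimal sketch of this frequency check; the record structure, window length and threshold values are illustrative assumptions:

```python
from datetime import datetime, timedelta

def report_is_valid(history, plate, window_days=90, max_reports=3, now=None):
    """Count the accident vehicle's historical reports (matched by license plate)
    within the preset window; the request is treated as invalid once the count
    reaches the preset number-of-times threshold."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    count = sum(1 for rec in history
                if rec["plate"] == plate and rec["report_time"] >= cutoff)
    return count < max_reports

history = [
    {"plate": "粤B12345", "report_time": datetime(2024, 5, 1)},
    {"plate": "粤B12345", "report_time": datetime(2024, 5, 20)},
]
print(report_is_valid(history, "粤B12345", now=datetime(2024, 6, 1)))  # True: 2 < 3
```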
S6: if the car insurance report request is effective, carrying out micro-expression recognition on the user video image by using a preset micro-expression recognition model to obtain the probability of the user video image in each preset micro-expression state.
Specifically, if the vehicle insurance report request is determined to be effective in step S5, the micro-expression recognition model is used to perform micro-expression recognition on the user video image of the reporting user obtained in step S2.
Specifically, the preset micro-expression recognition model extracts video frames from the input user video image to obtain a preset number of target frame images, captures the expression features of the reporting user in the target frame images, and recognizes the micro-expression states of the reporting user from these features, so as to obtain the basic probability of each target frame image in each of N preset micro-expression states, where N is a positive integer.
The preset micro-expression states include various micro-expression states such as happiness, amusement and the like, for example: smiling, delight, confusion, anger, surprise, fatigue and so on, up to 54 nuanced micro-expressions.
It should be noted that the micro-expression recognition model may be a neural network recognition model based on deep learning, or may be a local emotion recognition model based on a local binary pattern (Local Binary Pattern, LBP), which may specifically be selected according to the needs of practical applications, and is not limited herein.
And averaging or weighted averaging the probability of each preset micro-expression state in each target frame image to obtain the comprehensive probability of each preset micro-expression state, wherein the comprehensive probability is the probability of the user video image in the micro-expression state.
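A minimal sketch of turning per-frame probabilities into the video-level (comprehensive) probabilities by plain or weighted averaging; the model that produces the per-frame probabilities is assumed to exist already:

```python
import numpy as np

def video_level_probabilities(frame_probs, frame_weights=None):
    """frame_probs: (num_frames, N) array of per-frame probabilities over the
    N preset micro-expression states. Returns the N video-level probabilities
    by plain or weighted averaging across the target frames."""
    frame_probs = np.asarray(frame_probs, dtype=float)
    if frame_weights is None:
        return frame_probs.mean(axis=0)
    w = np.asarray(frame_weights, dtype=float)
    return (frame_probs * w[:, None]).sum(axis=0) / w.sum()

# Three target frames, four example micro-expression states
probs = video_level_probabilities([[0.1, 0.6, 0.2, 0.1],
                                   [0.2, 0.5, 0.2, 0.1],
                                   [0.0, 0.7, 0.2, 0.1]])
print(probs)  # -> [0.1, 0.6, 0.2, 0.1]
```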
S7: and carrying out combination calculation on the probability of the user video image in each preset micro-expression state according to a preset combination calculation mode to obtain the comprehensive emotion score of the user video image.
Specifically, the probabilities obtained in step S6 for each micro-expression state, that is, the N probabilities, are combined. The specific process of combining the N probabilities according to the preset combination calculation mode may be:
calculating the average value of the M probabilities among the N probabilities that are greater than a preset probability threshold, and taking the obtained average value as the comprehensive emotion score of the user video image, where M is a positive integer and M is less than or equal to N.
The preset combination calculation mode may also be to calculate a weighted average value for the N probabilities, and use the obtained weighted average value as the comprehensive emotion score of the video image of the user.
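A small sketch of the two combination modes named above (mean of the probabilities above a threshold, or a weighted mean of all N probabilities); the threshold and weight values are illustrative:

```python
def combined_emotion_score(probs, mode="threshold_mean", prob_threshold=0.3, weights=None):
    """Combine the N per-state probabilities into one comprehensive emotion score."""
    if mode == "threshold_mean":
        selected = [p for p in probs if p > prob_threshold]  # the M probabilities above the threshold
        return sum(selected) / len(selected) if selected else 0.0
    if mode == "weighted_mean":
        weights = weights or [1.0] * len(probs)
        return sum(p * w for p, w in zip(probs, weights)) / sum(weights)
    raise ValueError("unknown combination mode")

print(combined_emotion_score([0.1, 0.6, 0.4, 0.2]))  # mean of 0.6 and 0.4 -> 0.5
```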
S8: and if the comprehensive emotion score meets the preset score threshold requirement, determining the damage assessment result of the accident vehicle according to the vehicle damage information.
Specifically, on the basis of confirming in step S5 that the car insurance report request is effective, it is further judged whether the comprehensive emotion score obtained in step S7 meets the preset score threshold requirement; if it does, the car insurance report request is confirmed to be a genuine report request and is allowed to continue to be processed.
And after confirming that the vehicle insurance report request is a real report request, determining an assessment result of the accident vehicle according to the vehicle damage information, wherein the assessment result comprises specific claim amount.
In a specific embodiment, according to a preset correspondence between vehicle damage information and claim amounts, the claim amount corresponding to the vehicle damage information identified in the current car insurance report request is obtained, and the obtained claim amount is used as the damage assessment result of the current car insurance report request.
S9: and feeding back the vehicle damage information and the damage assessment result of the accident vehicle to the client.
Specifically, the vehicle damage information of the accident vehicle identified in the current vehicle insurance report request and the damage assessment result determined in step S8 are fed back to the client, where the vehicle damage information includes the damage position and the damage level, or the damage position and the target damage level.
And the case report user of the client checks the damage assessment result through a display interface of the client, if the case report user confirms that the case report is correct, clicks a completion icon to complete the case report request of the current car insurance, returns a completion instruction to the server, and after receiving the completion instruction, the server stores the car information, the case report time information and related information in the case report request of the current car insurance as a history case report record into a preset claim database for inquiry.
In this embodiment, the reporting user is guided, during a self-service video call, to shoot a user video image and a scene video image of the accident vehicle; a preset accident analysis model performs image recognition on the scene video image to obtain vehicle information and vehicle damage information of the accident vehicle, so that automatic damage assessment is achieved without manual intervention and the processing efficiency and timeliness of vehicle insurance claims are improved. On this basis, the history report records are obtained through the vehicle information and used to determine whether the report is effective; at the same time, micro-expression recognition is performed on the user video image, a comprehensive emotion score of the reporting user is calculated from the recognition result, and it is judged whether the comprehensive emotion score meets the score threshold requirement. Only when the report is effective and the comprehensive emotion score meets the score threshold requirement is the car insurance report request determined to be a genuine report request. By comprehensively and quantitatively evaluating the micro-expressions of the reporting user together with the history report information of the accident vehicle to determine the authenticity of the report request, the authenticity of the report is effectively identified, the risk of false video reports is reduced, and the accuracy of insurance processing is improved.
In one embodiment, the vehicle damage information includes a damage location and a damage level.
The damaged position is the relative position of the wound surface of the accident vehicle on the accident vehicle, for example, the damaged position can be specifically defined as a left front door position, a right front door position, a left front wheel position, a right front wheel position, a front bumper position and the like; the damage grade is the damage degree of the wound surface of the accident vehicle, for example, the damage grade can be specifically defined as light collision, serious collision, complete damage and the like, or can be further defined as primary damage, secondary damage, tertiary damage and the like.
It should be noted that, the definition of the damaged position and the damaged level may be set in advance, which may be specifically set according to the needs of the actual application, and is not limited herein.
Further, as shown in fig. 3, in step S3, the on-site video image is input into a preset accident analysis model for image recognition to obtain vehicle information and vehicle damage information of an accident vehicle, which specifically includes the following steps:
s31: and extracting video frames of the field video images according to a preset extraction mode to obtain video frame images.
Specifically, the preset extraction mode may be to extract one frame per preset frame number at intervals, or may be to randomly extract a plurality of frames, which may be specifically set according to the needs of practical applications, and is not limited herein.
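A minimal OpenCV-based sketch of the interval extraction mode described above; the interval value is illustrative:

```python
import cv2

def extract_frames(video_path, every_n_frames=30):
    """Extract one frame per preset interval from the on-site video,
    mirroring the interval-based extraction mode described above."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

# frames = extract_frames("scene_video.mp4", every_n_frames=30)
```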
S32: and carrying out image content identification on the video frame image by using a preset image information identification model to acquire the vehicle information of the accident vehicle.
In this embodiment, the preset image information recognition model is used to perform image content recognition on the input video frame image, where the recognized image content includes a license plate number of the vehicle, a brand type of the vehicle, a color of the vehicle, and the like.
For the license plate number, the preset image information recognition model traverses the pixel points in the video frame images to determine the video frame images that contain license plate pixels, acquires the image area where the license plate pixels are located, and crops that area out of the video frame image. The cropped area is denoised, converted to grayscale and binarized to obtain a binarized license plate image, which is then cut into individual characters. Each cut character image is matched for similarity against preset standard character templates, the character of the successfully matched standard template is taken as the character in that character image, and the recognized characters of all the character images are combined to obtain the license plate number.
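A simplified Python/OpenCV sketch of these plate-reading steps (denoise, grayscale, binarize, cut characters, template matching); it assumes the plate region has already been cropped and that char_templates holds pre-stored binary standard character templates, both illustrative assumptions rather than the patented implementation:

```python
import cv2

def read_plate(plate_region, char_templates):
    """plate_region: cropped BGR plate image; char_templates: dict mapping a
    character to its binary (uint8) standard template of fixed size."""
    gray = cv2.cvtColor(plate_region, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)                       # denoising
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: b[0])  # left to right
    plate_chars = []
    for x, y, w, h in boxes:
        char_img = binary[y:y + h, x:x + w]
        best_char, best_score = None, -1.0
        for char, template in char_templates.items():
            resized = cv2.resize(char_img, (template.shape[1], template.shape[0]))
            score = cv2.matchTemplate(resized, template, cv2.TM_CCOEFF_NORMED)[0][0]
            if score > best_score:
                best_char, best_score = char, score
        plate_chars.append(best_char)
    return "".join(c for c in plate_chars if c)
```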
For the brand and type of the vehicle, which include the brand logo and the vehicle model, the preset image information recognition model traverses the pixel points in the video frame images to determine the video frame images that contain the vehicle logo, and acquires from them the image area where the logo is located as well as the vehicle contour. The logo in the image area is matched against pre-stored standard brand logos, and the successfully matched standard logo is taken as the brand logo of the vehicle. Then, according to the preset contours of the different vehicle models corresponding to each standard brand logo, the contour of each vehicle model corresponding to the identified brand logo is matched against the identified vehicle contour, and the vehicle model corresponding to the successfully matched contour is taken as the model of the vehicle.
And obtaining the color of the vehicle according to the color corresponding to the pixel value of the pixel point in the vehicle contour on the basis of obtaining the vehicle contour.
S33: and performing image preprocessing on the video frame image to obtain a target image.
Specifically, the image preprocessing includes defogging, reflection removal, collision-point separation, rotation or translation and the like; after each video frame image is preprocessed, a target image corresponding to that frame is obtained.
S34: and identifying damaged parts of the target image by using a preset neural network model, and determining damaged positions and damaged grades of the accident vehicle.
Specifically, the preset neural network model may be a pre-trained convolutional neural network model based on deep learning, and may specifically be a ResNet-50 deep convolutional model or the like.
And inputting the target image into a preset neural network model, and identifying damaged parts of each frame of target image in the neural network model to obtain damaged positions of accident vehicles in each frame of target image, and wound depths and deformation degrees of damaged wound surfaces at the damaged positions.
If the frame number of the target image is n frames, counting all damaged positions obtained by the n frames of the target image, obtaining the occurrence times of each damaged position, and determining the damaged position with the times greater than a preset threshold value as the damaged position of the accident vehicle.
For example, the target image has 3 frames, the damaged position in the first frame includes a left front door position and a right front door position, the damaged position in the second frame includes a left front door position and a left front wheel position, the damaged position in the third frame includes a left front door position, the number of occurrences of the left front door position is 3 times, the number of occurrences of the right front door position is 1 time, the number of occurrences of the left front wheel position is 1 time, and if the preset threshold is 2, the damaged position of the accident vehicle is the left front door position.
It should be noted that, the preset threshold and the frame number of the target image may be in a proportional mapping relationship, and it is understood that the more the frame number of the target image is, the larger the preset threshold is, whereas the fewer the frame number of the target image is, the smaller the preset threshold is.
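A small sketch of the cross-frame voting rule above, keeping only positions whose occurrence count exceeds the preset threshold; the names and the threshold value are illustrative:

```python
from collections import Counter

def vote_damage_positions(per_frame_positions, threshold):
    """per_frame_positions: one list of identified damage positions per target frame.
    A position is kept only if it occurs in strictly more than `threshold` frames."""
    counts = Counter(pos for frame in per_frame_positions for pos in set(frame))
    return [pos for pos, c in counts.items() if c > threshold]

frames = [["left front door", "right front door"],
          ["left front door", "left front wheel"],
          ["left front door"]]
print(vote_damage_positions(frames, threshold=2))  # -> ['left front door'] (3 occurrences > 2)
```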
For the determined damage position of the accident vehicle, the wound depths of the damaged surface at that position in each target frame image are combined, either by taking the average value or by weighted averaging, to obtain the comprehensive wound depth of the damaged surface; at the same time, the deformation degrees of the damaged surface at that position in each target frame image are combined in the same way to obtain the comprehensive deformation degree of the damaged surface.
According to a preset corresponding relation between the depth and deformation degree of the wound surface and the damage grade, determining the damage grade corresponding to the comprehensive depth and the comprehensive deformation degree of the wound surface, and determining the obtained damage grade as the damage grade of the accident vehicle.
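A sketch of combining the per-frame wound depths and deformation degrees and mapping the combined values to a damage grade; the cut-off values and grade names are assumptions for illustration, not values from the disclosure:

```python
import numpy as np

def damage_grade(depths, deformations, weights=None):
    """Combine per-frame wound depths and deformation degrees (plain or weighted
    average) at the confirmed damage position, then map the combined values to a
    damage grade via illustrative preset thresholds."""
    depth = np.average(depths, weights=weights)
    deform = np.average(deformations, weights=weights)
    if depth < 0.3 and deform < 0.3:
        return "primary damage"
    if depth < 0.6 and deform < 0.6:
        return "secondary damage"
    return "tertiary damage"

print(damage_grade([0.2, 0.25, 0.3], [0.4, 0.5, 0.45]))  # -> secondary damage
```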
In this embodiment, the preset image information recognition model is used to perform image content recognition on the video frame images to acquire the vehicle information of the accident vehicle, and at the same time the preset neural network model is used to identify damaged parts and determine the damage position and damage grade of the accident vehicle, so that the vehicle information, damage position and damage grade of the accident vehicle are accurately identified from the on-site video images shot by the reporting user, which improves the accuracy of the processing result of the car insurance report.
In an embodiment, as shown in fig. 4, in step S7, the probabilities of the user video image in each preset micro-expression state are combined according to a preset combination calculation mode to obtain the comprehensive emotion score of the user video image, which specifically includes the following steps:
s71: and acquiring the probability of each preset target micro-expression state from the probability of the user video image in each preset micro-expression state as target probability, wherein the preset target micro-expression state is the micro-expression state used for judging the authenticity of the newspaper in each preset micro-expression state.
Specifically, the preset target micro-expression state is a subset of a state set formed by each preset micro-expression state, each target micro-expression state in the subset is used for judging the authenticity of the newspaper, and the preset target micro-expression state can be set by each insurance provider according to the needs, for example, the preset target micro-expression state can comprise micro-expression states representing emotions such as a suspected emotion, a right look-aside emotion or a heart deficiency.
For example, if the user video image has probabilities for 10 preset micro-expression states, each probability corresponding to one micro-expression state and denoted A1, A2, A3, A4, A5, A6, A7, A8, A9 and A10, and the preset target micro-expression states are those corresponding to A2, A3 and A7, then the probabilities A2, A3 and A7 are obtained from the 10 probabilities and taken as the target probabilities.
S72: according to a preset corresponding relation between the probabilities and the micro-expression scores, acquiring the micro-expression score corresponding to each target probability, and taking the acquired micro-expression score as a target score.
In this embodiment, the preset correspondence between the probabilities and the micro-expression scores may be a directly proportional linear relationship, that is, the larger the probability, the higher the corresponding micro-expression score, and the smaller the probability, the lower the corresponding micro-expression score, with the probability and the micro-expression score in a linear correspondence.
The preset correspondence between the probabilities and the micro-expression scores may also be a piecewise correspondence, that is, the probabilities are divided into segments, each segment of probabilities corresponds to one micro-expression score, and the segments and their micro-expression scores are in a directly proportional relationship. For example, when the probability is between 0% and 60%, the corresponding micro-expression score is 30 points; when the probability is between 60% and 70%, the corresponding micro-expression score is 60 points; when the probability is between 70% and 90%, the corresponding micro-expression score is 80 points; and when the probability is between 90% and 100%, the corresponding micro-expression score is 90 points.
Specifically, according to a preset corresponding relation between the probabilities and the micro-expression scores, determining the micro-expression score corresponding to each target probability, and taking the micro-expression score as the target score corresponding to the target probability.
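A sketch of the piecewise probability-to-score mapping, using the example segment boundaries given above:

```python
def micro_expression_score(probability):
    """Map a target probability to a micro-expression score using the example
    segmentation given above (0-60% -> 30, 60-70% -> 60, 70-90% -> 80, 90-100% -> 90)."""
    if probability < 0.60:
        return 30
    if probability < 0.70:
        return 60
    if probability < 0.90:
        return 80
    return 90

target_scores = [micro_expression_score(p) for p in (0.55, 0.65, 0.92)]
print(target_scores)  # -> [30, 60, 90]
```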
S73: according to the target score corresponding to each target probability, calculating the comprehensive emotion score of the user video image according to the following formula:
wherein P is the comprehensive emotion score, m is the number of target probabilities, g i For the target score corresponding to the ith target probability, f i And (3) a preset target weight corresponding to the ith target probability, wherein k is a preset score adjustment coefficient, and delta is a preset basic score.
Specifically, the weighted average is carried out on the target score corresponding to each target probability, and the weighted average numerical value is regulated and optimized by using the score regulating coefficient and the basic score, so that the comprehensive emotion score of the user video image is obtained.
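A sketch of the score formula above, as reconstructed from the variable definitions (weighted average of the target scores, adjusted by k and δ); the numeric values are illustrative:

```python
def comprehensive_emotion_score(target_scores, target_weights, k=1.0, delta=0.0):
    """P = k * (sum_i f_i * g_i / sum_i f_i) + delta: weighted average of the
    target scores adjusted by the score adjustment coefficient k and base score delta."""
    weighted = sum(f * g for f, g in zip(target_weights, target_scores))
    return k * weighted / sum(target_weights) + delta

print(comprehensive_emotion_score([30, 60, 90], [0.5, 0.3, 0.2], k=1.0, delta=5.0))
# -> 1.0 * (15 + 18 + 18) / 1.0 + 5.0 = 56.0
```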
In this embodiment, when the comprehensive emotion score of the user video image is calculated from the combination of its probabilities in each preset micro-expression state, the probabilities of the target micro-expression states used for judging the authenticity of the report are first obtained from the probabilities of all the micro-expression states and converted into the corresponding micro-expression scores, and then a weighted average of these micro-expression scores is calculated, so that the obtained comprehensive emotion score fully and accurately reflects the emotional state of the reporting user and provides an accurate basis for judging the authenticity of the report request.
In one embodiment, as shown in fig. 5, in step S8, the damage assessment result of the accident vehicle is determined according to the vehicle damage information, and specifically includes the following steps:
s81: obtaining the probability of each preset related micro-expression state from the probability of each preset micro-expression state of the user video image, and obtaining K related probabilities, wherein the preset related micro-expression state is the micro-expression state used for judging the severity of damage in each preset micro-expression state, and K is the number of the related micro-expression states.
Specifically, the probability of the related micro-expression state is obtained from the probability of the user video image obtained in the step S6 in each preset micro-expression state, so as to obtain K related probabilities.
The preset related micro-expression state is a preset micro-expression state related to an accident scene and used for judging the severity of damage, and can be set by each insurance provider according to needs, for example, the preset related micro-expression state can be specifically set into a micro-expression state for expressing anxiety emotion or tension emotion.
For example, if the user video image has probabilities for 10 preset micro-expression states, each probability corresponding to one micro-expression state and denoted A1, A2, A3, A4, A5, A6, A7, A8, A9 and A10, and the preset related micro-expression states are those corresponding to A5 and A6, then the probabilities A5 and A6 are obtained from the 10 probabilities and taken as the related probabilities.
S82: and calculating a weighted average value of K related probabilities according to the preset related weight of each related micro-expression state, and taking the product of the weighted average value and a preset adjustment parameter as a severity coefficient corresponding to the video image of the user.
Specifically, a weight value is preset for each related micro-expression state in advance, as a related weight, the related weight of each related probability and the related micro-expression state corresponding to the related probability is subjected to weighted average calculation to obtain a weighted average value of K related probabilities, and the weighted average value is multiplied by a preset adjustment parameter to obtain a severity coefficient corresponding to a user video image. The preset adjusting parameters are used for dynamically adjusting the severity coefficient.
Further, the severity coefficient corresponding to the user video image is calculated using the following formula:

η = λ · ( Σ_{j=1}^{K} y_j · α_j ) / ( Σ_{j=1}^{K} y_j )

wherein η is the severity coefficient corresponding to the user video image, K is the number of related probabilities, α_j is the value of the j-th related probability, y_j is the related weight of the related micro-expression state corresponding to the j-th related probability, and λ is the preset adjustment coefficient.
The severity coefficient identifies the degree of a specific emotion of the reporting user, where the specific emotion is the emotion corresponding to the related micro-expression states. For example, if the specific emotion is tension, a larger severity coefficient represents a more tense reporting user. It can be understood that the more severe the accident, the more tense or fearful the reporting user's emotion tends to be. Therefore, the damage grade of the accident vehicle determined in step S34 can be further corrected according to the severity coefficient, so as to avoid the situation in which the identified damage grade is inaccurate because the on-site video images shot by the reporting user do not meet the recognition requirements.
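A sketch of the severity-coefficient formula above; the probabilities, weights and adjustment parameter are illustrative:

```python
def severity_coefficient(related_probs, related_weights, adjustment=1.0):
    """eta = lambda * (sum_j y_j * alpha_j / sum_j y_j): weighted average of the K
    related-state probabilities scaled by the preset adjustment parameter."""
    weighted_avg = (sum(y * a for y, a in zip(related_weights, related_probs))
                    / sum(related_weights))
    return adjustment * weighted_avg

print(severity_coefficient([0.7, 0.4], [0.6, 0.4], adjustment=1.2))
# -> 1.2 * (0.42 + 0.16) / 1.0 = 0.696
```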
S83: and correcting the damage grade of the accident vehicle according to the severity coefficient to obtain a target damage grade.
Specifically, on the basis of the damage grade of the accident vehicle determined in step S34, the damage grade is corrected according to the severity coefficient obtained in step S82. The specific correction manner may be to adjust the damage grade directly according to the severity coefficient, as detailed below (a sketch follows this list):
determining the coefficient range in which the severity coefficient falls according to a preset correspondence between coefficient ranges and damage grades, and taking the damage grade corresponding to that coefficient range as the grade to be compared;
if the grade to be compared is higher than the damage grade of the accident vehicle, correcting the damage grade of the accident vehicle to the grade to be compared, that is, the target damage grade is the grade to be compared;
if the grade to be compared is not higher than the damage grade of the accident vehicle, not correcting the damage grade, that is, the target damage grade remains the damage grade of the accident vehicle.
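A sketch of the grade-correction rule described in the list above; the coefficient ranges and grade names are illustrative assumptions:

```python
GRADES = ["primary damage", "secondary damage", "tertiary damage"]

def grade_from_coefficient(severity):
    """Map the severity coefficient to the grade to be compared via an
    illustrative preset coefficient-range correspondence."""
    if severity < 0.4:
        return "primary damage"
    if severity < 0.7:
        return "secondary damage"
    return "tertiary damage"

def correct_damage_grade(identified_grade, severity):
    """Keep the higher of the identified grade and the grade derived from the
    severity coefficient, as described in the list above."""
    candidate = grade_from_coefficient(severity)
    if GRADES.index(candidate) > GRADES.index(identified_grade):
        return candidate        # correct upward to the grade to be compared
    return identified_grade     # otherwise keep the identified damage grade

print(correct_damage_grade("primary damage", 0.696))  # -> secondary damage
```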
Further, when the grade to be compared is higher than the damage grade of the accident vehicle, the damage grade can also be redetermined as follows:
the server side sends an instruction for shooting the close-up photo of the damaged position to the client side, so that a case report user shoots and uploads the close-up photo of the damaged position according to the instruction;
After the server receives the close-up photograph of the damage position, it identifies the damaged parts using the preset neural network model in the same way as in step S34, redetermines the damage grade of the accident vehicle, and takes the redetermined damage grade as the corrected target damage grade.
S84: and determining the damage assessment result of the accident vehicle according to the damage position and the target damage level of the accident vehicle.
Specifically, according to the preset corresponding relation between the damaged position and the damaged grade and the claim settlement amount, the claim settlement amount corresponding to the damaged position and the target damaged grade of the accident vehicle identified in the current car insurance claim request is obtained, and the obtained claim settlement amount is used as the damage assessment result of the current car insurance claim request.
In this embodiment, when determining the damage assessment result of the accident vehicle according to the vehicle damage information, the probability of the related micro-expression state for judging the damage severity is obtained from the probability of each micro-expression state of the user video image, and the probability of each related micro-expression state is weighted and averaged to obtain the severity coefficient of the user video image, and the severity coefficient is used to identify the degree of the specific emotion of the reporting user, so as to correct the damage grade of the accident vehicle according to the severity coefficient, ensure the accuracy of the damage grade, and further improve the accuracy of the damage assessment result.
In one embodiment, after step S7, the method further includes a rejection process for the vehicle insurance report request, which is described in detail below:
if the car insurance report request is invalid or the comprehensive emotion score does not meet the preset score threshold requirement, rejecting the car insurance report request and returning the prompt information of insufficient authenticity of the current report information to the client.
Specifically, if it is determined in step S5 that the current car insurance report request is invalid because the number of reports is greater than or equal to the preset number-of-times threshold, the current request is judged to carry a possibility of false reporting; the request is therefore rejected, a prompt indicating that the current report information is not sufficiently authentic is returned to the client, and the reporting user is notified to file the report manually.
If the number of reports in step S5 is smaller than the preset number-of-times threshold, the current car insurance report request is determined to be effective; however, if the comprehensive emotion score obtained in step S7 does not meet the preset score threshold requirement, it is determined that the current request still carries a risk of false reporting, so the processing of the request is terminated, a prompt indicating that the current report information is not sufficiently authentic is returned to the client, and the reporting user is notified to file the report manually.
In this embodiment, when the vehicle insurance report request is confirmed to be invalid, or the vehicle insurance report request is valid but the comprehensive emotion score does not meet the preset score threshold requirement, the server side refuses the vehicle insurance report request and returns the prompt information of insufficient authenticity of the current report information to the client side, so that the report user reselects the manual channel to perform report processing, thereby realizing effective identification of the false report request, reducing the risk of false video report, and improving the accuracy of insurance processing.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In an embodiment, a vehicle risk assessment device based on image recognition is provided, and the device corresponds one-to-one to the vehicle risk assessment method based on image recognition in the above embodiments. As shown in fig. 6, the image recognition-based vehicle risk assessment device includes a request module 61, a video call module 62, a vehicle damage identification module 63, a history inquiry module 64, a verification module 65, a micro-expression recognition module 66, a score calculation module 67, a damage assessment module 68 and a sending module 69. The functional modules are described in detail as follows:
A request module 61, configured to obtain identity information of an insured person included in a case report request if the case report request sent by a case report user through a client is received;
the video call module 62 is configured to perform validity verification on identity information of the insured person, and if the verification is successful, start self-service video call with the client, and acquire a user video image of the case reporting user and a field video image of the accident vehicle during the self-service video call;
the vehicle damage identification module 63 is configured to input a field video image into a preset accident analysis model for image identification, so as to obtain vehicle information and vehicle damage information of an accident vehicle;
the history inquiry module 64 is configured to inquire a history report record of the insured in a preset claim database according to the identity information of the insured;
the checking module 65 is configured to query the history report information of the accident vehicle in the history report record of the insured according to the vehicle information of the accident vehicle, and determine whether the vehicle insurance report request is valid according to the history report information;
the micro-expression recognition module 66 is configured to perform micro-expression recognition on the user video image by using a preset micro-expression recognition model if the car insurance report request is valid, so as to obtain the probability of the user video image in each preset micro-expression state;
the score calculation module 67 is configured to perform a combination calculation on the probabilities of the user video image in each preset micro-expression state according to a preset combination calculation mode, so as to obtain the comprehensive emotion score of the user video image;
the damage assessment module 68 is configured to determine a damage assessment result of the accident vehicle according to the vehicle damage information if the comprehensive emotion score meets a preset score threshold requirement;
and the sending module 69 is used for feeding back the vehicle damage information and the damage assessment result of the accident vehicle to the client.
Further, the vehicle damage information includes a damage position and a damage level, and the vehicle damage identification module 63 includes:
the extraction sub-module 631 is configured to perform video frame extraction on the field video image according to a preset extraction mode, so as to obtain a video frame image;
a content recognition sub-module 632, configured to perform image content recognition on the video frame image using a preset image information recognition model, and obtain vehicle information of the accident vehicle;
the preprocessing sub-module 633 is configured to perform image preprocessing on the video frame image to obtain a target image;
the damage identification sub-module 634 is configured to identify a damaged portion of the target image by using a preset neural network model, and determine a damaged position and a damaged level of the accident vehicle.
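As a non-authoritative sketch of sub-modules 631 to 634, the Python fragment below assumes a fixed-interval extraction mode and a simple scale-and-pad preprocessing step, uses a stub in place of the preset neural network model (whose architecture is not disclosed here), and also shows the per-frame aggregation described in the method embodiment and the claims (counting how often each damaged position appears, keeping positions above a preset occurrence threshold, and averaging wound depth and deformation degree). All numeric values are illustrative.

```python
# Frame extraction, preprocessing, per-frame damage identification (stubbed)
# and aggregation across the N target frames; all values are illustrative.
from collections import Counter
import numpy as np

def extract_frames(video_frames, interval=10):
    """Sub-module 631: keep every `interval`-th frame (one possible preset extraction mode)."""
    return video_frames[::interval]

def preprocess(frame, size=(224, 224)):
    """Sub-module 633: scale pixel values to [0, 1] and crop/pad to a fixed size."""
    out = np.zeros((*size, frame.shape[2]), dtype=np.float32)
    h, w = frame.shape[:2]
    out[:min(h, size[0]), :min(w, size[1])] = frame[:size[0], :size[1]] / 255.0
    return out

def identify_damage(target_image):
    """Sub-module 634 stand-in: a trained model would return position, wound depth, deformation."""
    return {"position": "front bumper", "wound_depth": 1.8, "deformation": 0.35}

def aggregate(per_frame_results, min_occurrences=3):
    """Keep positions seen more than `min_occurrences` times, then average depth and deformation."""
    counts = Counter(r["position"] for r in per_frame_results)
    targets = {}
    for pos, n in counts.items():
        if n > min_occurrences:
            hits = [r for r in per_frame_results if r["position"] == pos]
            targets[pos] = {
                "avg_wound_depth": sum(r["wound_depth"] for r in hits) / len(hits),
                "avg_deformation": sum(r["deformation"] for r in hits) / len(hits),
            }
    return targets

video = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(60)]
results = [identify_damage(preprocess(f)) for f in extract_frames(video)]
print(aggregate(results))
```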
Further, the score calculation module 67 includes:
the first screening sub-module 671 is configured to acquire, from the probabilities of the user video image in each preset micro-expression state, the probability of each preset target micro-expression state as a target probability, where the preset target micro-expression state is a micro-expression state used, among the preset micro-expression states, for judging the authenticity of the report;
the matching sub-module 672 is configured to obtain a microexpressive score corresponding to each target probability according to a preset correspondence between the probabilities and the microexpressive scores, and take the obtained microexpressive score as a target score;
a calculating sub-module 673, configured to calculate, according to the target score corresponding to each target probability, a comprehensive emotion score of the user video image according to the following formula:
wherein P is the comprehensive emotion score, m is the number of target probabilities, g_i is the target score corresponding to the i-th target probability, f_i is the preset target weight corresponding to the i-th target probability, k is a preset score adjustment coefficient, and delta is a preset basic score.
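The exact combination formula is not reproduced in this text. The sketch below therefore assumes one plausible reading of the variable definitions above, namely a weighted sum of the target scores scaled by k and offset by the basic score delta (P = k * sum(f_i * g_i) + delta); the numeric values are illustrative only.

```python
# A minimal sketch of the combined-score step under the assumed weighted-sum form.
def emotion_score(target_scores, target_weights, k=1.0, delta=50.0):
    """P = k * sum(f_i * g_i) + delta over the m target probabilities (assumed form)."""
    assert len(target_scores) == len(target_weights)
    return k * sum(f * g for f, g in zip(target_weights, target_scores)) + delta

# g_i looked up from the preset probability-to-score correspondence,
# f_i preset per target micro-expression state (values illustrative).
print(emotion_score(target_scores=[8.0, 5.0, 2.0], target_weights=[0.5, 0.3, 0.2]))
```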
Further, the damage assessment module 68 includes:
the second screening sub-module 681 is configured to acquire, from the probabilities of the user video image in each preset micro-expression state, the probability of each preset related micro-expression state, so as to obtain K related probabilities, where the preset related micro-expression state is a micro-expression state used, among the preset micro-expression states, for judging the severity of damage, and K is the number of related micro-expression states;
The coefficient calculation sub-module 682 is configured to calculate a weighted average of the K related probabilities according to the preset related weight of each related micro-expression state, and to take the product of the weighted average and a preset adjustment parameter as the severity coefficient corresponding to the user video image;
a correction sub-module 683, configured to correct the damaged level of the accident vehicle according to the severity coefficient, so as to obtain a target damaged level;
the result determination sub-module 684 is configured to determine the damage assessment result of the accident vehicle according to the damaged position and the target damaged level of the accident vehicle.
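A minimal sketch of sub-modules 681 to 684 follows, combining the weighted average of the K related probabilities with the coefficient-range lookup and grade comparison described in the claims. The coefficient ranges, weights and adjustment parameter are illustrative assumptions, not values disclosed in this embodiment.

```python
# Severity coefficient from related micro-expression probabilities, mapped to a
# candidate damage grade and used to correct the current grade (values illustrative).
def severity_coefficient(related_probs, related_weights, adjustment=1.0):
    """Weighted average of the K related probabilities times a preset adjustment parameter."""
    weighted_avg = sum(p * w for p, w in zip(related_probs, related_weights)) / sum(related_weights)
    return weighted_avg * adjustment

def grade_from_coefficient(coefficient, ranges=((0.0, 0.3, 1), (0.3, 0.6, 2), (0.6, 1.01, 3))):
    """Map the severity coefficient to a damage grade via preset coefficient ranges."""
    for low, high, grade in ranges:
        if low <= coefficient < high:
            return grade
    return ranges[-1][2]

def corrected_grade(current_grade, coefficient):
    """Raise the grade only when the grade to be compared exceeds the current damaged grade."""
    candidate = grade_from_coefficient(coefficient)
    return max(current_grade, candidate)

coef = severity_coefficient([0.6, 0.4, 0.7], [0.5, 0.3, 0.2], adjustment=1.0)
print(coef, corrected_grade(current_grade=2, coefficient=coef))
```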
Further, the vehicle risk assessment device based on image recognition further comprises:
the rejection processing module 60 is configured to, if the vehicle insurance report request is invalid or the comprehensive emotion score does not meet the preset score threshold requirement, reject the vehicle insurance report request and return to the client a prompt message indicating that the authenticity of the current report information is insufficient.
For specific limitations of the image recognition-based vehicle risk assessment device, reference may be made to the above limitations of the image recognition-based vehicle risk assessment method, which are not repeated here. The modules in the above vehicle risk assessment device based on image recognition may be implemented wholly or partly by software, by hardware, or by a combination thereof. Each of the above modules may be embedded in, or independent of, a processor of the computer device in hardware form, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a vehicle risk assessment method based on image recognition.
In an embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the vehicle risk assessment method based on image recognition in the foregoing embodiment, for example steps S1 to S9 shown in fig. 2. Alternatively, when executing the computer program, the processor implements the functions of the modules/units of the vehicle risk assessment device based on image recognition in the above embodiment, such as the functions of modules 61 to 69 shown in fig. 6. To avoid repetition, details are not repeated here.
In an embodiment, a computer readable storage medium is provided, on which a computer program is stored, where the computer program when executed by a processor implements the method for assessing vehicle risk based on image recognition in the above method embodiment, or where the computer program when executed by a processor implements the functions of each module/unit in the device for assessing vehicle risk based on image recognition in the above device embodiment. To avoid repetition, no further description is provided here.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is given by way of example; in practical applications, the above functions may be allocated to different functional units and modules as required, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included within the scope of the present invention.

Claims (9)

1. A vehicle risk assessment method based on image recognition, characterized by comprising the following steps:
if a car insurance report request sent by a report user through a client is received, acquiring identity information of an insured person contained in the car insurance report request;
carrying out validity verification on the identity information of the insured person; if the verification is successful, starting a self-service video call with the client, acquiring a user video image of the case reporting user and a scene video image of an accident vehicle during the self-service video call, extracting a video frame image from the scene video image, identifying vehicle information of the accident vehicle in the video frame image, and carrying out image preprocessing on the video frame image to obtain N frames of target images;
respectively inputting each frame of target image into a preset neural network model to obtain vehicle information, damaged positions of the accident vehicle and wound depths and deformation degrees of damaged wound surfaces at the damaged positions in each frame of target image;
counting the occurrence times of each damaged position in the N frames of target images, and determining the damaged position with the occurrence times larger than a preset threshold value as the target damaged position of the accident vehicle;
performing average calculation on the wound depths of the target damaged position in the N frames of target images to obtain a comprehensive wound depth, performing average calculation on the deformation degrees of the target damaged position in the N frames of target images to obtain a comprehensive deformation degree, determining the damage grade corresponding to the comprehensive wound depth and the comprehensive deformation degree, and taking the target damaged position and the damage grade as vehicle damage information;
inquiring a history report record of the insured person in a preset claim database according to the identity information of the insured person;
inquiring historical report information of the accident vehicle in a historical report record of the insured according to the vehicle information of the accident vehicle, and determining whether the vehicle insurance report request is effective according to the historical report information;
if the vehicle insurance report request is effective, carrying out micro-expression recognition on the user video image by using a preset micro-expression recognition model to obtain the probability of the user video image in each preset micro-expression state;
according to a preset combination calculation mode, carrying out combination calculation on the probability of the user video image in each preset micro expression state to obtain the comprehensive emotion score of the user video image;
If the comprehensive emotion score meets a preset score threshold requirement, acquiring the probability of each preset related micro-expression state from the probability of the user video image in each preset micro-expression state to obtain K related probabilities, wherein the preset related micro-expression state is a micro-expression state used for judging damage severity in each preset micro-expression state, and K is the number of the related micro-expression states;
calculating a weighted average value of K related probabilities according to preset related weights of each related microexpressive state, and taking the product of the weighted average value and preset adjustment parameters as a severity coefficient corresponding to the user video image;
according to a preset correspondence between coefficient ranges and damaged grades, taking the damaged grade corresponding to the coefficient range in which the severity coefficient falls as a grade to be compared;
if the grade to be compared is higher than the damaged grade of the accident vehicle, correcting the damaged grade of the accident vehicle to the grade to be compared, and if the grade to be compared is not higher than the damaged grade of the accident vehicle, not correcting the damaged grade of the accident vehicle;
determining an assessment result of the accident vehicle according to the damaged position of the accident vehicle and the target damaged grade;
And feeding back the vehicle damage information and the damage assessment result of the accident vehicle to the client.
2. The image recognition-based vehicle risk assessment method according to claim 1, wherein the extracting a video frame image from the scene video image and identifying vehicle information of the accident vehicle in the video frame image comprises:
according to a preset extraction mode, extracting video frames from the scene video image to obtain a video frame image;
and carrying out image content identification on the video frame image by using a preset image information identification model to acquire the vehicle information of the accident vehicle.
3. The method for vehicle risk assessment based on image recognition according to claim 1, wherein the performing a combination calculation on the probability of the user video image in each preset microexpressive state according to a preset combination calculation mode to obtain a comprehensive emotion score of the user video image comprises:
acquiring the probability of each preset target micro-expression state from the probability of the user video image in each preset micro-expression state as a target probability, wherein the preset target micro-expression state is a micro-expression state used for judging the authenticity of the report in each preset micro-expression state;
According to a preset corresponding relation between the probability and the micro-expression score, acquiring a micro-expression score corresponding to each target probability, and taking the acquired micro-expression score as a target score;
according to the target score corresponding to each target probability, calculating the comprehensive emotion score of the user video image according to the following formula:
wherein P is the comprehensive emotion score, m is the number of target probabilities, g_i is the target score corresponding to the i-th target probability, f_i is the preset target weight corresponding to the i-th target probability, k is a preset score adjustment coefficient, and delta is a preset basic score.
4. The vehicle risk assessment method based on image recognition according to any one of claims 1 to 3, wherein after the performing a combination calculation on the probability of the user video image in each preset micro-expression state according to a preset combination calculation mode to obtain the comprehensive emotion score of the user video image, the method further comprises:
if the vehicle insurance report request is invalid or the comprehensive emotion score does not meet the preset score threshold requirement, rejecting the vehicle insurance report request and returning, to the client, a prompt message indicating insufficient authenticity of the current report information.
5. A vehicle risk assessment device based on image recognition, characterized in that the vehicle risk assessment device based on image recognition comprises:
the request module is used for acquiring identity information of an insured person contained in a car insurance report request if the car insurance report request sent by a report user through a client is received;
the video call module is used for verifying the identity information of the insured person; if the verification is successful, starting a self-service video call with the client, acquiring a user video image of the case reporting user and a scene video image of an accident vehicle during the self-service video call, extracting a video frame image from the scene video image, identifying the vehicle information of the accident vehicle in the video frame image, and carrying out image preprocessing on the video frame image to obtain N frames of target images;
the damaged identification module is used for respectively inputting each frame of target image into a preset neural network model to obtain vehicle information of the accident vehicle, damaged positions, wound depths of damaged wound surfaces at the damaged positions and deformation degrees of the damaged wound surfaces in each frame of target image;
the position statistics module is used for counting the occurrence times of each damaged position in the N frames of target images, and determining the damaged position with the occurrence times larger than a preset threshold value as the target damaged position of the accident vehicle;
The grade determination module is used for carrying out mean value calculation on the wound surface depths of the target damaged position in the N frames of target images to obtain a comprehensive wound surface depth, carrying out mean value calculation on the deformation degrees of the target damaged position in the N frames of target images to obtain a comprehensive deformation degree, determining the damaged grade corresponding to the comprehensive wound surface depth and the comprehensive deformation degree, and taking the target damaged position and the damaged grade as vehicle damage information;
the history inquiry module is used for inquiring the history report record of the insured in a preset claim database according to the identity information of the insured;
the checking module is used for inquiring the history report information of the accident vehicle in the history report record of the insured according to the vehicle information of the accident vehicle and determining whether the vehicle insurance report request is effective according to the history report information;
the micro-expression recognition module is used for carrying out micro-expression recognition on the user video image by using a preset micro-expression recognition model if the vehicle insurance case report request is effective, so as to obtain the probability of the user video image in each preset micro-expression state;
The score calculation module is used for carrying out combination calculation on the probability of the user video image in each preset micro-expression state according to a preset combination calculation mode to obtain the comprehensive emotion score of the user video image;
the damage assessment module is used for acquiring the probability of each preset related micro-expression state from the probability of the user video image in each preset micro-expression state if the comprehensive emotion score meets the preset score threshold requirement to obtain K related probabilities, wherein the preset related micro-expression states are micro-expression states used for judging damage severity in each preset micro-expression state, and K is the number of the related micro-expression states;
calculating a weighted average value of K related probabilities according to preset related weights of each related microexpressive state, and taking the product of the weighted average value and preset adjustment parameters as a severity coefficient corresponding to the user video image;
according to a preset correspondence between coefficient ranges and damaged grades, taking the damaged grade corresponding to the coefficient range in which the severity coefficient falls as a grade to be compared;
if the grade to be compared is higher than the damaged grade of the accident vehicle, correcting the damaged grade of the accident vehicle to the grade to be compared, and if the grade to be compared is not higher than the damaged grade of the accident vehicle, not correcting the damaged grade of the accident vehicle;
Determining an assessment result of the accident vehicle according to the damaged position of the accident vehicle and the target damaged grade;
and the sending module is used for feeding back the vehicle damage information and the damage assessment result of the accident vehicle to the client.
6. The image recognition-based vehicle risk assessment device according to claim 5, wherein said video call module comprises:
the extraction sub-module is used for extracting video frames from the scene video image according to a preset extraction mode to obtain a video frame image;
and the content recognition sub-module is used for carrying out image content recognition on the video frame image by using a preset image information recognition model to acquire the vehicle information of the accident vehicle.
7. The image recognition-based vehicle risk assessment device of claim 5, wherein the score calculation module comprises:
the first screening sub-module is used for acquiring the probability of each preset target micro-expression state from the probability of the user video image in each preset micro-expression state as a target probability, wherein the preset target micro-expression state is a micro-expression state used for judging the authenticity of the report in each preset micro-expression state;
The matching sub-module is used for acquiring the micro-expression score corresponding to each target probability according to the preset corresponding relation between the probability and the micro-expression score, and taking the acquired micro-expression score as a target score;
the calculating sub-module is used for calculating the comprehensive emotion score of the user video image according to the target score corresponding to each target probability and the following formula:
wherein P is the comprehensive emotion score, m is the number of target probabilities, g_i is the target score corresponding to the i-th target probability, f_i is the preset target weight corresponding to the i-th target probability, k is a preset score adjustment coefficient, and delta is a preset basic score.
8. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the image recognition-based vehicle risk assessment method according to any one of claims 1 to 4 when the computer program is executed.
9. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the image recognition-based vehicle risk assessment method according to any one of claims 1 to 4.
CN201811528659.9A 2018-12-13 2018-12-13 Vehicle risk assessment method, device, equipment and storage medium based on image recognition Active CN109784170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811528659.9A CN109784170B (en) 2018-12-13 2018-12-13 Vehicle risk assessment method, device, equipment and storage medium based on image recognition

Publications (2)

Publication Number Publication Date
CN109784170A CN109784170A (en) 2019-05-21
CN109784170B true CN109784170B (en) 2023-11-17

Family

ID=66495821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811528659.9A Active CN109784170B (en) 2018-12-13 2018-12-13 Vehicle risk assessment method, device, equipment and storage medium based on image recognition

Country Status (1)

Country Link
CN (1) CN109784170B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472656B (en) * 2019-07-03 2023-09-05 平安科技(深圳)有限公司 Vehicle image classification method, device, computer equipment and storage medium
CN110502998B (en) * 2019-07-23 2023-01-31 平安科技(深圳)有限公司 Vehicle damage assessment method, device, equipment and storage medium
CN110728582B (en) * 2019-09-05 2022-03-08 德联易控科技(北京)有限公司 Information processing method, device, storage medium and processor
CN111192150B (en) * 2019-12-23 2023-07-25 中国平安财产保险股份有限公司 Method, device, equipment and storage medium for processing vehicle danger-giving agent service
CN111060507B (en) * 2019-12-24 2021-05-04 北京嘀嘀无限科技发展有限公司 Vehicle verification method and device
CN111259848A (en) * 2020-01-22 2020-06-09 苏州华兴源创科技股份有限公司 Vehicle loss assessment method, vehicle loss assessment system, computer equipment and medium
CN111444323A (en) * 2020-02-25 2020-07-24 平安国际智慧城市科技股份有限公司 Accident information rapid acquisition method and device, computer equipment and storage medium
CN111311424B (en) * 2020-03-06 2023-08-22 德联易控科技(北京)有限公司 Data analysis method, device, storage medium and processor in vehicle insurance accident
CN111444808A (en) * 2020-03-20 2020-07-24 平安国际智慧城市科技股份有限公司 Image-based accident liability assignment method and device, computer equipment and storage medium
CN111612104B (en) * 2020-06-30 2021-04-13 爱保科技有限公司 Vehicle loss assessment image acquisition method, device, medium and electronic equipment
CN111860338A (en) * 2020-07-22 2020-10-30 支付宝(杭州)信息技术有限公司 Case processing method, client, platform and electronic equipment
CN111861240A (en) * 2020-07-27 2020-10-30 深圳前海微众银行股份有限公司 Suspicious user identification method, device, equipment and readable storage medium
CN112036283A (en) * 2020-08-25 2020-12-04 湖北经济学院 Intelligent vehicle damage assessment image identification method
CN112053250A (en) * 2020-09-03 2020-12-08 中国平安财产保险股份有限公司 Video-based vehicle insurance self-service claim settlement method, device and equipment
CN112712498A (en) * 2020-12-25 2021-04-27 北京百度网讯科技有限公司 Vehicle damage assessment method and device executed by mobile terminal, mobile terminal and medium
CN114170030B (en) * 2021-12-08 2023-09-26 北京百度网讯科技有限公司 Method, apparatus, electronic device and medium for remote damage assessment of vehicle

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563893A (en) * 2017-08-31 2018-01-09 济南浪潮高新科技投资发展有限公司 A kind of vehicle insurance Claims Resolution method, client, server and system
CN108335193A (en) * 2018-01-12 2018-07-27 深圳壹账通智能科技有限公司 Whole process credit methods, device, equipment and readable storage medium storing program for executing
CN108510194B (en) * 2018-03-30 2022-11-29 平安科技(深圳)有限公司 Wind control model training method, risk identification method, device, equipment and medium
CN108537160A (en) * 2018-03-30 2018-09-14 平安科技(深圳)有限公司 Risk Identification Method, device, equipment based on micro- expression and medium
CN108491821A (en) * 2018-04-02 2018-09-04 深圳市亚来科技有限公司 Vehicle insurance accident discrimination method, system and storage medium based on image procossing and deep learning
CN108765131B (en) * 2018-05-18 2023-06-09 深圳壹账通智能科技有限公司 Micro-expression-based credit auditing method, device, terminal and readable storage medium

Also Published As

Publication number Publication date
CN109784170A (en) 2019-05-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant