CN117455689A - Vehicle damage assessment method and device, electronic equipment and storage medium - Google Patents

Vehicle damage assessment method and device, electronic equipment and storage medium

Info

Publication number
CN117455689A
Authority
CN
China
Prior art keywords
vehicle
information
dimensional
damage
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311351890.6A
Other languages
Chinese (zh)
Inventor
夏修理
张兴
王伟
陈宇
童超
李鹏
林涛
郭勇
高贺
刘宏
肖维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Resources Intelligent Computing Technology Guangdong Co ltd
China Resources Digital Technology Co Ltd
Original Assignee
China Resources Intelligent Computing Technology Guangdong Co ltd
China Resources Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Resources Intelligent Computing Technology Guangdong Co ltd and China Resources Digital Technology Co Ltd
Priority to CN202311351890.6A
Publication of CN117455689A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08: Insurance
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/0455: Auto-encoder networks; Encoder-decoder networks
    • G06N3/08: Learning methods
    • G06N3/092: Reinforcement learning
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/30: Noise filtering
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/62: Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625: License plates
    • G06V20/64: Three-dimensional objects
    • G06V20/95: Pattern authentication; Markers therefor; Forgery detection
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40: Document-oriented image-based pattern recognition
    • G06V30/42: Document-oriented image-based pattern recognition based on the type of document
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection
    • G06V2201/08: Detecting or categorising vehicles
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Technology Law (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

The embodiment of the application provides a vehicle damage assessment method and device, an electronic device and a storage medium, belonging to the technical field of artificial intelligence. The method comprises the following steps: performing vehicle component recognition based on three-dimensional data of a target vehicle to obtain a three-dimensional recognition result; performing vehicle component recognition based on a vehicle image of the target vehicle to obtain a two-dimensional recognition result; obtaining person-vehicle information based on object information of a target object and a preset insurance business database; and inputting the three-dimensional recognition result, the two-dimensional recognition result and the person-vehicle information into a preset vehicle damage assessment model to perform vehicle damage assessment. The embodiments of the application can improve the accuracy of vehicle damage assessment.

Description

Vehicle damage assessment method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of vehicle damage assessment technologies, and in particular, to a vehicle damage assessment method and apparatus, an electronic device, and a storage medium.
Background
Vehicle damage assessment may refer to the process by which an insurance company evaluates a vehicle and estimates the loss after an accident has occurred. Currently, this process relies on the work of the insurance company's claims adjusters, i.e. on manual operation, which leads to long damage assessment cycles.
In the related art, in order to solve the above problem, vehicle damage assessment may be performed based on deep-learning image recognition and detection techniques. However, when damage recognition is performed with a two-dimensional vision algorithm, the content that such an algorithm can recognize is limited, which easily leads to inaccurate recognition results. Therefore, how to provide a vehicle damage assessment method that improves the accuracy of vehicle damage assessment is a technical problem to be solved.
Disclosure of Invention
The main purpose of the embodiments of the present application is to provide a vehicle damage assessment method and device, an electronic device and a storage medium, aiming to improve the accuracy of vehicle damage assessment.
To achieve the above object, a first aspect of an embodiment of the present application provides a vehicle damage assessment method, including:
performing vehicle component recognition based on three-dimensional data of a target vehicle to obtain a three-dimensional recognition result;
performing vehicle component recognition based on a vehicle image of the target vehicle to obtain a two-dimensional recognition result;
obtaining person-vehicle information based on object information of a target object and a preset insurance business database;
and inputting the three-dimensional recognition result, the two-dimensional recognition result and the person-vehicle information into a preset vehicle damage assessment model to perform vehicle damage assessment.
In some embodiments, the performing vehicle component recognition based on the vehicle image of the target vehicle to obtain a two-dimensional recognition result includes:
acquiring an original accident image and recognition prompt data;
performing vehicle segmentation on the original accident image according to the recognition prompt data to obtain the vehicle image of the target vehicle;
and performing component recognition based on the vehicle image of the target vehicle to obtain the two-dimensional recognition result.
In some embodiments, the person-vehicle information includes vehicle information, policy information, certificate information and license plate information, and the object information includes a face image, a certificate and a person-vehicle group photo image;
the obtaining the person-vehicle information based on the object information of the target object and the preset insurance business database includes:
obtaining the vehicle information and the policy information based on the face image and the insurance business database;
performing information recognition based on the certificate to obtain the certificate information;
and performing license plate recognition based on the person-vehicle group photo image to obtain the license plate information.
In some embodiments, the inputting the three-dimensional recognition result, the two-dimensional recognition result and the person-vehicle information into a preset vehicle damage assessment model to perform vehicle damage assessment includes:
comparing the three-dimensional recognition result with the vehicle information to obtain first damage data;
comparing the two-dimensional recognition result with the vehicle information to obtain second damage data;
obtaining three-dimensional damage data according to a first preset weight and the first damage data, and obtaining two-dimensional damage data according to a second preset weight and the second damage data;
obtaining total damage data according to the two-dimensional damage data and the three-dimensional damage data;
and inputting the vehicle information, the policy information, the total damage data, the certificate information and the license plate information into the vehicle damage assessment model to perform vehicle damage assessment.
In some embodiments, before the inputting the three-dimensional recognition result, the two-dimensional recognition result and the person-vehicle information into a preset vehicle damage assessment model to perform vehicle damage assessment, the method further includes:
inputting the vehicle image, the certificate and the person-vehicle group photo image into a preset image tampering detection model to perform image tampering detection, so as to obtain a tampering result;
and inputting the tampering result, the license plate information and the vehicle information into a preset fraud assessment model to perform fraud detection, so as to obtain a fraud probability.
In some embodiments, the inputting the vehicle information, the policy information, the total damage data, the certificate information and the license plate information into the vehicle damage assessment model to perform vehicle damage assessment includes:
if the fraud probability is greater than or equal to a preset probability threshold, switching to a manual processing mode;
and if the fraud probability is less than the preset probability threshold, inputting the vehicle information, the policy information, the total damage data, the certificate information and the license plate information into the vehicle damage assessment model to perform vehicle damage assessment.
In some embodiments, the method further comprises:
obtaining a claim amount according to the vehicle damage assessment result, and switching to a manual processing mode if the claim amount is greater than or equal to a preset amount;
and if the claim amount is less than the preset amount, performing a claim settlement operation according to the claim amount.
In some embodiments, the method further comprises:
displaying the claim amount, the fraud probability, the three-dimensional recognition result and the two-dimensional recognition result, and acquiring feedback data;
and performing model optimization on the vehicle damage assessment model and the fraud assessment model according to the feedback data.
To achieve the above object, a second aspect of the embodiments of the present application proposes a vehicle damage assessment device, the device including:
the three-dimensional recognition module is used for performing vehicle component recognition based on three-dimensional data of a target vehicle to obtain a three-dimensional recognition result;
the two-dimensional recognition module is used for performing vehicle component recognition based on a vehicle image of the target vehicle to obtain a two-dimensional recognition result;
the person-vehicle information acquisition module is used for obtaining person-vehicle information based on object information of a target object and a preset insurance business database;
and the vehicle damage assessment module is used for inputting the three-dimensional recognition result, the two-dimensional recognition result and the person-vehicle information into a preset vehicle damage assessment model to perform vehicle damage assessment.
To achieve the above object, a third aspect of the embodiments of the present application proposes an electronic device, comprising a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling connection and communication between the processor and the memory, wherein the program, when executed by the processor, implements the method according to the first aspect.
To achieve the above object, a fourth aspect of the embodiments of the present application proposes a storage medium, which is a computer-readable storage medium storing one or more programs executable by one or more processors to implement the method described in the first aspect.
According to the vehicle damage assessment method and device, electronic device and storage medium provided by the embodiments of the application, a two-dimensional recognition result is obtained from a vehicle image of a target vehicle, and a three-dimensional recognition result is obtained from three-dimensional data of the target vehicle. The three-dimensional recognition result therefore compensates for the limited content of the two-dimensional recognition result. In addition, because damage is assessed from the person-vehicle information, the three-dimensional recognition result and the two-dimensional recognition result together, assessment errors can be reduced and the accuracy of vehicle damage assessment improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a vehicle damage assessment method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a 3D object detection model provided by an embodiment of the present application;
FIG. 3 is a flowchart of step S102 in FIG. 1;
FIG. 4 is a flowchart of step S103 in FIG. 1;
FIG. 5 is a schematic diagram of multi-modal data recognition provided by an embodiment of the present application;
FIG. 6 is a flowchart of another embodiment of the vehicle damage assessment method provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an image tampering detection model provided by an embodiment of the present application;
FIG. 8 is a flowchart of step S104 in FIG. 1;
FIG. 9 is an overall flowchart of the vehicle damage assessment method provided by an embodiment of the present application;
FIG. 10 is a flowchart of another embodiment of the vehicle damage assessment method provided by an embodiment of the present application;
FIG. 11 is a flowchart of another embodiment of the vehicle damage assessment method provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a model optimization process provided by an embodiment of the present application;
FIG. 13 is a schematic structural diagram of a vehicle damage assessment device provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Vehicle damage assessment may refer to the process by which an insurance company evaluates a vehicle and estimates the loss after an accident has occurred. Currently, this process relies on the work of the insurance company's claims adjusters, i.e. on manual operation, which leads to long damage assessment cycles.
In the related art, in order to solve the above-described problems, vehicle damage determination may be performed based on an image recognition detection technique of deep learning. The method mainly comprises the following steps:
(1) A user or a surveyor uploads an accident scene photo;
(2) Damage recognition is performed on the accident scene photo based on a two-dimensional vision algorithm, for example to determine the damaged parts and area of the vehicle;
(3) Vehicle damage assessment is performed according to the damage recognition result.
The above method has the following disadvantages:
(1) An algorithm based on two-dimensional vision has difficulty acquiring spatial information, so damage recognition results such as the damaged volume are inaccurate;
(2) Only the damage to the vehicle is considered, while the vehicle insurance information, the vehicle model, the cause of the accident and the like are not considered, so the workload of the claims adjuster is not reduced;
(3) The vehicle damage assessment model is rigid, cannot be adjusted according to feedback from users or claims adjusters, and has poor adaptability.
Based on the above, the embodiment of the application provides a vehicle damage assessment method and device, electronic equipment and storage medium, and aims to improve accuracy of vehicle damage assessment.
The method and apparatus for vehicle damage assessment, electronic device and storage medium provided in the embodiments of the present application are specifically described through the following embodiments, and the method for vehicle damage assessment in the embodiments of the present application is first described.
The embodiments of the present application can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The embodiments of the present application provide a vehicle damage assessment method, which relates to the technical field of artificial intelligence. The vehicle damage assessment method provided by the embodiments of the present application can be applied to a terminal, a server, or software running in a terminal or server. In some embodiments, the terminal may be a smart phone, a tablet, a notebook, a desktop computer, etc.; the server can be configured as an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data and artificial intelligence platforms; the software may be an application that implements the vehicle damage assessment method, but is not limited to the above forms.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It should be noted that, in each specific embodiment of the present application, when related processing is required according to user information, user behavior data, user history data, user location information, and other data related to user identity or characteristics, permission or consent of the user is obtained first, and the collection, use, processing, and the like of these data comply with related laws and regulations and standards. In addition, when the embodiment of the application needs to acquire the sensitive personal information of the user, the independent permission or independent consent of the user is acquired through a popup window or a jump to a confirmation page or the like, and after the independent permission or independent consent of the user is explicitly acquired, necessary user related data for enabling the embodiment of the application to normally operate is acquired.
Fig. 1 is an optional flowchart of a vehicle damage assessment method provided in an embodiment of the present application, where the method in fig. 1 may include, but is not limited to, steps S101 to S104.
Step S101, performing vehicle component recognition based on three-dimensional data of a target vehicle to obtain a three-dimensional recognition result;
Step S102, performing vehicle component recognition based on a vehicle image of the target vehicle to obtain a two-dimensional recognition result;
Step S103, obtaining person-vehicle information based on object information of a target object and a preset insurance business database;
Step S104, inputting the three-dimensional recognition result, the two-dimensional recognition result and the person-vehicle information into a preset vehicle damage assessment model to perform vehicle damage assessment.
In steps S101 to S104 of some embodiments, a two-dimensional recognition result is obtained from a vehicle image of the target vehicle, and a three-dimensional recognition result is obtained from three-dimensional data of the target vehicle. The three-dimensional recognition result can therefore compensate for the limited content of the two-dimensional recognition result. In addition, because vehicle damage is assessed from the person-vehicle information, the three-dimensional recognition result and the two-dimensional recognition result together, assessment errors can be reduced and the accuracy of vehicle damage assessment improved.
In step S101 of some embodiments, the target vehicle may refer to a vehicle for which damage assessment is required. 3D point cloud data (i.e. three-dimensional data) of the target vehicle can be acquired based on structured light, and the 3D point cloud data is input into a preset 3D object detection model for recognition to obtain a three-dimensional recognition result. Taking a rear-end collision as an example, the three-dimensional recognition result may include the depression depth, the volume of the damaged portion of the vehicle, the deformation of damaged parts, and the like. Based on the 3D point cloud technology, not only the names of vehicle components but also their three-dimensional information (such as volume information) can be recognized, which improves the accuracy of vehicle recognition.
Referring to fig. 2, the 3D object detection model may include a backbone network, a fusion network and a detection network. The backbone network comprises 3 feature-space downsampling modules and 1 spatial pyramid pooling layer, and is used to extract 3D point cloud backbone features from the three-dimensional data. The fusion network is used to collect feature maps from different levels of the backbone network and fuse them after upsampling. The detection network covers 3 types of prediction targets together with 3D information detection; it adopts a hierarchical prediction structure, and the final prediction (i.e. the three-dimensional recognition result) is obtained after several predictions are weighted and averaged.
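For illustration only, a minimal sketch of how such a three-stage point-cloud detector could be organised is shown below; the layer widths, module names and the use of PyTorch on a rasterised bird's-eye-view input are assumptions made for the example and are not the architecture claimed by the embodiments.

```python
# Illustrative sketch only: module widths and layer choices are assumptions,
# not the architecture described in the embodiments.
import torch
import torch.nn as nn


class Backbone3D(nn.Module):
    """Three downsampling stages followed by a spatial pyramid pooling layer."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.stages = nn.ModuleList()
        channels = [in_channels, 32, 64, 128]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            self.stages.append(nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),  # downsample
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            ))
        # Spatial pyramid pooling over the deepest feature map.
        self.spp = nn.ModuleList([nn.MaxPool2d(k, stride=1, padding=k // 2)
                                  for k in (5, 9, 13)])

    def forward(self, bev):  # bev: point cloud rasterised to a BEV pseudo-image
        feats = []
        x = bev
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        feats[-1] = torch.cat([x] + [pool(x) for pool in self.spp], dim=1)
        return feats  # multi-level features for the fusion network


class Detector3D(nn.Module):
    """Fuses multi-level features and averages hierarchical predictions."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.backbone = Backbone3D()
        self.fuse = nn.ModuleList([nn.Conv2d(c, 64, 1) for c in (32, 64, 512)])
        # One head per level: class scores plus 3D box parameters (x, y, z, w, l, h, yaw).
        self.heads = nn.ModuleList([nn.Conv2d(64, num_classes + 7, 1) for _ in range(3)])

    def forward(self, bev):
        feats = self.backbone(bev)
        size = feats[0].shape[-2:]
        fused = [nn.functional.interpolate(conv(x), size=size, mode="bilinear",
                                           align_corners=False)
                 for conv, x in zip(self.fuse, feats)]
        preds = [head(x) for head, x in zip(self.heads, fused)]
        # Weighted average of the hierarchical predictions (equal weights assumed here).
        return torch.stack(preds).mean(dim=0)
```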
In step S102 of some embodiments, the vehicle image may refer to a two-dimensional image of the target vehicle; the accident scene photo uploaded by the target object or another object may be taken as the vehicle image. The vehicle is recognized based on a preset 2D object detection model to obtain a two-dimensional recognition result. Still taking a rear-end collision as an example, the two-dimensional recognition result may include the dented area.
Referring to fig. 3, in some embodiments, step S102 includes, but is not limited to including, step S301 through step S303.
Step S301, obtaining an original accident image and identification prompt data;
step S302, vehicle segmentation is carried out on the original accident image according to the recognition prompt data, and a vehicle image of the target vehicle is obtained;
step S303, performing component recognition based on the vehicle image of the target vehicle, and obtaining a two-dimensional recognition result.
In step S301 of some embodiments, the original accident image refers to the original image uploaded by the target object; it may contain noise such as vehicles other than the target vehicle and other objects. The recognition prompt data may refer to data, provided by the target object or another object, that indicates the location of the target vehicle or other information in the original accident image. For example, the area of the target vehicle in the original accident image may be obtained through an operation in which the target object touches a certain area of the original accident image. Taking a cursor as an example, the touch operation may be one in which the target object moves the cursor over the original accident image and stops it in a certain area. Alternatively, the target object may frame a certain area of the original accident image using a rectangular box; this is not specifically limited in the embodiments of the present application.
In step S302 of some embodiments, an area of the target vehicle on the original accident image may be determined based on the recognition prompt data, and the original accident image is segmented based on the area, so as to obtain a vehicle image of the target vehicle.
In step S303 of some embodiments, a vehicle image is input to a preset 2D object detection model for recognition, so as to obtain a two-dimensional recognition result.
The advantage of steps S301 to S303 is that noise information in the vehicle image of the target vehicle can be reduced, which lessens its interference with component recognition and thus improves the accuracy of the two-dimensional recognition result.
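For illustration only, a hedged sketch of this prompt-guided segmentation (steps S301 to S302) is given below; it assumes the recognition prompt data arrives as either a click point or a rectangular box in pixel coordinates, which is a simplification of the touch operation described above.

```python
# Illustrative sketch: assumes the prompt is a click point or a bounding box in
# pixel coordinates; the embodiments do not fix a concrete prompt format.
from dataclasses import dataclass
from typing import Optional, Tuple

import numpy as np


@dataclass
class RecognitionPrompt:
    click: Optional[Tuple[int, int]] = None          # (x, y) where the cursor stopped
    box: Optional[Tuple[int, int, int, int]] = None  # (x1, y1, x2, y2) framed region


def segment_target_vehicle(original_image: np.ndarray,
                           prompt: RecognitionPrompt,
                           margin: int = 32) -> np.ndarray:
    """Crop the original accident image to the region indicated by the prompt."""
    h, w = original_image.shape[:2]
    if prompt.box is not None:
        x1, y1, x2, y2 = prompt.box
    elif prompt.click is not None:
        # Without a detector in the loop, fall back to a fixed window around the click.
        cx, cy = prompt.click
        half = min(h, w) // 4
        x1, y1, x2, y2 = cx - half, cy - half, cx + half, cy + half
    else:
        raise ValueError("A click point or a box prompt is required")
    # Clamp the (optionally padded) region to the image bounds and crop.
    x1 = max(0, x1 - margin); y1 = max(0, y1 - margin)
    x2 = min(w, x2 + margin); y2 = min(h, y2 + margin)
    return original_image[y1:y2, x1:x2]
```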
It can be understood that, in the step of obtaining the three-dimensional recognition result, the interference data may be reduced based on the methods from step S301 to step S303, so as to improve the accuracy of the three-dimensional recognition result, which is not particularly limited in this embodiment of the present application.
In step S103 of some embodiments, the target object may refer to an object for which vehicle damage assessment is required. The insurance business database is a preset database used for storing insurance information of objects, where the insurance information includes vehicle-related information. The information stored in the insurance business database can be matched against the object information of the target object to obtain the person-vehicle information. It is understood that the person-vehicle information includes information about the person and information about the vehicle corresponding to the target object.
Referring to fig. 4, in some embodiments, the person-vehicle information includes vehicle information, policy information, certificate information and license plate information, and the object information includes a face image, a certificate and a person-vehicle group photo image. Step S103 includes, but is not limited to, steps S401 to S403.
Step S401, obtaining the vehicle information and the policy information based on the face image and the insurance business database;
Step S402, performing information recognition based on the certificate to obtain the certificate information;
Step S403, performing license plate recognition based on the person-vehicle group photo image to obtain the license plate information.
In step S401 of some embodiments, referring to fig. 5, the face image uploaded by the target object is acquired and used as input data of a preset face recognition module to obtain identity information containing the target object's face. The identity information obtained by the face recognition module is compared with the information in the insurance business database to obtain the vehicle information and the policy information. The vehicle information may include the vehicle model, the factory condition of its various parts, and the like. The policy information may include the insured content, the payout content, and the like.
In step S402 of some embodiments, the certificate may refer to an item that verifies the identity information of the target object, such as a driver's license or an identity card. The certificate is input into a preset OCR (Optical Character Recognition) model for recognition, so that the information in the certificate is extracted and the certificate information is obtained.
In step S403 of some embodiments, the person-vehicle group photo image refers to an image containing both the target object and the target vehicle. The person-vehicle group photo image uploaded by the target object is acquired and input into a preset re-identification model for license plate recognition to obtain the license plate information of the target vehicle. It is to be understood that the re-identification model may perform license plate recognition based on OCR or other technologies, which is not specifically limited in the embodiments of the present application. It can also be appreciated that the person-vehicle group photo image helps reduce cases in which the person filing the claim and the insured vehicle do not match.
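For illustration only, the sketch below shows how the three recognition paths of steps S401 to S403 could be combined into one person-vehicle record; the recognizer interfaces (face_recognizer, ocr_model, plate_reader) and the database schema are assumptions made for the example, not interfaces defined by the embodiments.

```python
# Illustrative orchestration sketch: the recognizer interfaces and the database
# schema are assumptions made for the example.
from dataclasses import dataclass


@dataclass
class PersonVehicleInfo:
    vehicle_info: dict      # vehicle model, factory part conditions, ...
    policy_info: dict       # insured content and payout content
    certificate_info: dict  # fields extracted from the driver's license / ID card
    license_plate: str


def build_person_vehicle_info(face_image, certificate_image, group_photo,
                              face_recognizer, ocr_model, plate_reader,
                              insurance_db) -> PersonVehicleInfo:
    # Step S401: face -> identity -> vehicle and policy records in the insurance DB.
    identity = face_recognizer.identify(face_image)
    record = insurance_db.lookup_by_identity(identity)

    # Step S402: OCR over the certificate image.
    certificate_info = ocr_model.extract_fields(certificate_image)

    # Step S403: re-identification / plate recognition on the person-vehicle photo.
    license_plate = plate_reader.read_plate(group_photo)

    return PersonVehicleInfo(vehicle_info=record["vehicle"],
                             policy_info=record["policy"],
                             certificate_info=certificate_info,
                             license_plate=license_plate)
```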
Referring to fig. 5 and 6, in some embodiments, prior to step S104, the method provided by the embodiments of the present application further includes, but is not limited to, steps S601 to S602.
Step S601, inputting the vehicle image, the certificate and the person-vehicle group photo image into a preset image tampering detection model to perform image tampering detection and obtain a tampering result;
Step S602, inputting the tampering result, the license plate information and the vehicle information into a preset fraud assessment model to perform fraud detection and obtain a fraud probability.
In step S601 of some embodiments, the image tampering detection model is a preset model for judging whether an image has been tampered with. It is understood that in the embodiments of the present application, operations such as stretching the vehicle image to enlarge the damaged area of a vehicle component may be regarded as tampering operations. The vehicle image, the image of the certificate and the person-vehicle group photo image are used as input data of the image tampering detection model, so that image tampering detection is performed by the model and a corresponding tampering result is obtained.
It will be appreciated that, referring to fig. 7, the image tampering detection model may include a high-frequency feature extraction module, an object encoder and an image decoder. Because visual artifacts are difficult to detect in the RGB domain and mostly appear in the high-frequency band of the frequency domain, in the embodiments of the present application a Discrete Cosine Transform (DCT) may be applied to the image, and the high-frequency components (such as the spectrogram in fig. 7) are obtained by filtering in the high-frequency feature extraction module. The RGB-domain image and the high-frequency component are used separately as input data of the object encoder, which encodes them and extracts features. The extracted feature maps can be divided into image blocks of equal size, and the blocks are fused to obtain a fused feature map. It will be appreciated that the object encoder may use a cross-attention model to build a dependency A_i between different regions of the image and extract the corresponding features based on A_i. The dependency A_i can be calculated according to the following formula (1), where F_i denotes the frequency-domain features of the i-th image block, T_i denotes the RGB-domain features of the i-th image block, C denotes a hyperparameter, and W_eq is a preset weight matrix.
The image block decoder extracts features from different depths of the decoder (such as 1/2, 1/3, 1/8, 1/16, 1/32, etc.) and feeds the extracted features into the image tampering detection head to obtain the image tampering probability (i.e. the tampering result). The last layer of the image block decoder outputs a binary map of the tampered part of the image.
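For illustration only, the snippet below extracts a high-frequency component with a discrete cosine transform and a simple low-frequency mask, as a stand-in for the high-frequency feature extraction module described above; the cut-off fraction and the use of SciPy are assumptions, since the filter itself is not specified here.

```python
# Illustrative DCT high-pass filtering sketch; the cut-off fraction is an assumption.
import numpy as np
from scipy.fft import dctn, idctn


def high_frequency_component(gray_image: np.ndarray, cut_fraction: float = 0.25) -> np.ndarray:
    """Suppress low-frequency DCT coefficients and return the high-frequency residue."""
    coeffs = dctn(gray_image.astype(np.float64), norm="ortho")
    h, w = coeffs.shape
    # Zero out the low-frequency block in the top-left corner of the spectrum.
    coeffs[: int(h * cut_fraction), : int(w * cut_fraction)] = 0.0
    return idctn(coeffs, norm="ortho")


# The RGB image and its high-frequency map would then be encoded separately and
# fused block-wise, as described for the object encoder above.
```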
In step S602 of some embodiments, the fraud assessment model is a preset model for assessing whether fraud exists. In the embodiments of the present application, the following cases may be regarded as indicating a high fraud probability: first, the license plate information is inconsistent with the record in the insurance business database; second, face recognition fails, i.e. the face image of the target object does not match the face image stored in the insurance business database; third, the vehicle model obtained from the vehicle image does not match the vehicle model stored in the insurance business database; fourth, the tampering result indicates a high probability that the corresponding image has been tampered with. It will be appreciated that the above four cases are merely exemplary, and the embodiments of the present application do not specifically limit how fraud is determined. As can be seen from the above description, when the tampering result, the license plate information and the vehicle information are used as input data of the fraud assessment model, a corresponding fraud probability can be obtained.
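For illustration only, a hedged sketch of how the four exemplary indicators could be folded into a single score is shown below; the equal weighting and the 0 to 1 scale are assumptions and do not reproduce the learned fraud assessment model.

```python
# Illustrative scoring sketch only: the fraud assessment model in the embodiments is
# a learned model; the equal indicator weights here are assumptions.
def fraud_probability(plate_matches_db: bool,
                      face_verified: bool,
                      model_matches_db: bool,
                      tamper_probability: float) -> float:
    """Combine the four exemplary indicators into a rough fraud probability in [0, 1]."""
    indicators = [
        0.0 if plate_matches_db else 1.0,   # case 1: plate inconsistent with the DB
        0.0 if face_verified else 1.0,      # case 2: face verification failed
        0.0 if model_matches_db else 1.0,   # case 3: vehicle model mismatch
        tamper_probability,                 # case 4: image tampering likelihood
    ]
    return sum(indicators) / len(indicators)
```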
In step S104 of some embodiments, the person-vehicle information, the three-dimensional recognition result and the two-dimensional recognition result are integrated, and the integrated data is input into a preset vehicle damage assessment model to assess the damage to the vehicle and obtain a vehicle damage assessment result. It is understood that the vehicle damage assessment result may include the damage condition of the target vehicle, the claim amount determined based on the damage condition and the policy information, and the like.
Referring to fig. 8, in some embodiments, step S104 includes, but is not limited to including, step S801 to step S805.
Step S801, comparing the three-dimensional recognition result with the vehicle information to obtain first damage data;
Step S802, comparing the two-dimensional recognition result with the vehicle information to obtain second damage data;
Step S803, obtaining three-dimensional damage data according to a first preset weight and the first damage data, and obtaining two-dimensional damage data according to a second preset weight and the second damage data;
Step S804, obtaining total damage data according to the two-dimensional damage data and the three-dimensional damage data;
Step S805, inputting the vehicle information, the policy information, the total damage data, the certificate information and the license plate information into the vehicle damage assessment model to perform vehicle damage assessment.
In steps S801 to S802 of some embodiments, the vehicle information may include factory information of the vehicle, such as the vehicle model, length, width and height, component dimensions, component models, and the like. The three-dimensional recognition result is compared with the vehicle information to determine the damage to vehicle components and obtain the first damage data. Correspondingly, the two-dimensional recognition result is compared with the vehicle information to obtain the second damage data. The first damage data (or the second damage data) may include scratches, cracks, dents, wrinkles, perforations, detached components, and the like.
In steps S803 to S804 of some embodiments, the first preset weight is a preset weight corresponding to the first damage data, and its specific value may be set adaptively according to the actual situation, which is not specifically limited in the embodiments of the present application. Likewise, the second preset weight is a preset weight corresponding to the second damage data, and its specific value may be set adaptively according to the actual situation. The first preset weight is multiplied by the first damage data to obtain the three-dimensional damage data, and the second preset weight is multiplied by the second damage data to obtain the two-dimensional damage data. Based on the two-dimensional damage data, the three-dimensional damage data and the following formula (2), the total damage data p_total can be obtained:
p_total = (w_3d · p_3d + b_3d)^α × (w_2d · p_2d + b_2d)^β ... (2)
where α and β denote hyperparameters, w_3d denotes the first preset weight, w_2d denotes the second preset weight, p_3d denotes the first damage data, p_2d denotes the second damage data, and b_3d and b_2d are bias terms.
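For illustration only, formula (2) can be transcribed into code as follows; the bias terms and the hyperparameters α and β take placeholder defaults, since their values are left open above.

```python
# Direct transcription of formula (2); the default values are placeholders.
def total_damage(p_3d: float, p_2d: float,
                 w_3d: float, w_2d: float,
                 b_3d: float = 0.0, b_2d: float = 0.0,
                 alpha: float = 1.0, beta: float = 1.0) -> float:
    """p_total = (w_3d * p_3d + b_3d) ** alpha * (w_2d * p_2d + b_2d) ** beta."""
    return (w_3d * p_3d + b_3d) ** alpha * (w_2d * p_2d + b_2d) ** beta
```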
In step S805 of some embodiments, the vehicle information, the policy information, the total damage data p_total, the certificate information and the license plate information are integrated, and the integrated data is input into the preset vehicle damage assessment model to assess the vehicle and obtain a vehicle damage assessment result.
The advantage of steps S801 to S805 is that total damage data representing the joint probability of the first damage data and the second damage data can be obtained, so the accuracy of vehicle damage assessment improves when the assessment is performed based on the total damage data.
Referring to fig. 9, in some embodiments, step S805 includes, but is not limited to, including the steps of:
if the fraud probability is greater than or equal to a preset probability threshold, switching to a manual processing mode;
if the fraud probability is smaller than the preset probability threshold, the vehicle information, the policy information, the total damage data, the certificate information and the license plate information are input into a vehicle damage assessment model to conduct vehicle damage assessment.
In some embodiments, the preset probability threshold is a preset threshold for judging whether fraud exists, and its specific value may be set adaptively according to the actual situation, which is not specifically limited in the embodiments of the present application. When the fraud probability output by the fraud assessment model is greater than or equal to the preset probability threshold, the probability that fraud exists is high, and the process can be switched to a manual processing mode for re-evaluation and review. When the fraud probability is less than the preset probability threshold, the probability of fraud is low; at this point the vehicle information, policy information, total damage data, certificate information, license plate information and the like can be used as input data of the vehicle damage assessment model to perform vehicle damage assessment.
The benefit of this embodiment is that it reduces cases in which vehicle damage is assessed on the basis of fraudulent information.
Referring to fig. 9 and 10, in some embodiments, the method provided by embodiments of the present application further includes, but is not limited to including, step S1001 to step S1002.
Step S1001, obtaining the claim amount according to the vehicle damage assessment result, and switching to a manual processing mode if the claim amount is greater than or equal to a preset amount;
Step S1002, if the claim amount is less than the preset amount, performing the claim settlement operation according to the claim amount.
In steps S1001 to S1002 of some embodiments, the claim amount may be determined based on the vehicle damage assessment result, the vehicle insurance application information stored in the insurance business database, and the like. The claim amount is compared with the preset amount; if the claim amount is greater than or equal to the preset amount, it is regarded as a large claim, and the process switches to a manual processing mode for re-evaluation and verification in order to ensure the accuracy of the settlement. When the claim amount is less than the preset amount, it can be regarded as a small claim, and the corresponding claim settlement operation can be performed according to the claim amount. It is understood that the specific value of the preset amount may be set adaptively according to the actual situation and is not specifically limited in the embodiments of the present application; for example, it may be set to 8720 yuan.
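For illustration only, the two thresholds can be read as a simple routing rule around the vehicle damage assessment model, sketched below; the threshold defaults and the model interface are assumptions made for the example (the 8720-yuan figure is the example value mentioned above).

```python
# Illustrative routing sketch around the assessment models; the thresholds and the
# model interface are assumptions (8720 yuan is the example value from the text).
def settle_claim(fraud_probability: float,
                 assessment_inputs: dict,
                 damage_model,
                 probability_threshold: float = 0.5,
                 amount_threshold: float = 8720.0) -> str:
    if fraud_probability >= probability_threshold:
        return "manual_review"                      # suspected fraud: hand over for manual processing
    result = damage_model.assess(**assessment_inputs)
    if result.claim_amount >= amount_threshold:
        return "manual_review"                      # large claim: re-evaluate manually
    return f"auto_settle:{result.claim_amount}"     # small claim: settle automatically
```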
Referring to fig. 9 and 11, in some embodiments, the method provided by the embodiments of the present application further includes, but is not limited to including, step S1101 to step S1102.
Step S1101, displaying the claim amount, the fraud probability, the three-dimensional recognition result and the two-dimensional recognition result, and acquiring feedback data;
Step S1102, performing model optimization on the vehicle damage assessment model and the fraud assessment model according to the feedback data.
In step S1101 of some embodiments, the claim amount, the fraud probability, the three-dimensional recognition result, the two-dimensional recognition result and the like may be displayed, so that the target object, the claims adjuster and others can understand the vehicle damage assessment process from the displayed information. Feedback data generated by the target object, the claims adjuster and others based on the displayed information is then acquired; the feedback data can reflect whether the damage assessment process was erroneous as well as the damage assessment preferences of the object.
In step S1102 of some embodiments, model optimization may be performed on the vehicle damage assessment model and the fraud assessment model based on the feedback data, so as to reduce model rigidity and improve the accuracy of vehicle damage assessment.
It will be appreciated that model optimization of the vehicle damage assessment model and the fraud assessment model may be performed based on reinforcement learning. Specifically, the vehicle damage assessment model can serve as the agent and the feedback data as the environment, and vehicle damage assessment results are generated automatically. The weights of the models (the vehicle damage assessment model and the fraud assessment model) are increased (or decreased) according to the feedback data and the vehicle damage assessment result. The strategy used to adjust the weights is to keep the deviation between the vehicle damage assessment result and the feedback data within a fluctuation range, which can be derived from the feedback data of different objects. The magnitude a of the weight-adjustment action can be calculated according to the following equation (3):
a = π(S_risk, S_loss) ... (3)
where π denotes a policy abstraction function, and S_risk and S_loss denote the deviation states between the fraud assessment result and the feedback data and between the vehicle damage assessment result and the feedback data, respectively.
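For illustration only, a minimal sketch of the weight-adjustment loop implied by equation (3) is given below; the concrete policy function and the fluctuation band are assumptions, since only the form a = π(S_risk, S_loss) is fixed above.

```python
# Illustrative reinforcement-style adjustment loop; the policy and the fluctuation
# band are assumptions around the form a = pi(S_risk, S_loss) in equation (3).
def adjust_weights(weights: dict, s_risk: float, s_loss: float,
                   band: float = 0.05, step: float = 0.01) -> dict:
    """Nudge model weights so that the deviation stays inside the fluctuation band."""
    def policy(risk_dev: float, loss_dev: float) -> float:
        # pi(S_risk, S_loss): shrink weights when deviation exceeds the band, else grow.
        worst = max(abs(risk_dev), abs(loss_dev))
        return -step if worst > band else step

    a = policy(s_risk, s_loss)
    return {name: w * (1.0 + a) for name, w in weights.items()}
```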
Specifically, referring to fig. 12, the model optimization process may include the steps of:
(1) Pre-train an initial model (such as the vehicle damage assessment model) using historical data, and freeze the shallow layers of the pre-trained model to obtain the model to be tuned. The historical data includes historically accumulated records of traffic accident scenes, accident content, claim amounts, whether fraud existed, and vehicle damage assessment results.
(2) Train a reward model according to the feedback data, with the aim of learning a reward function R_θ that is maximized, where θ denotes the parameter distribution of the tuning model.
(3) Fine-tune the tuning model based on reinforcement learning. Specifically, a KL-divergence penalty term R_KL is used to keep the predictions of the tuning model reasonable, and the reward term R_θ is used to incorporate object preferences into the tuning model. The final reward term R is calculated as shown in formulas (4) and (5) below.
R = R_θ - λ_kl · R_KL ... (4)
where λ_kl denotes the weight coefficient of the penalty term, N denotes the number of network layers, and π_θ denotes the parameterized probability distribution of the tuning model; the parameterized probability distribution of the initial model also enters formula (5).
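For illustration only, the combined reward of formula (4) can be written out as below; the KL estimate between the two parameterized distributions stands in for formula (5), whose exact definition is not reproduced here, and the function signature is an assumption made for the example.

```python
# Illustrative reward combination per formula (4); the KL estimate stands in for
# formula (5), whose exact definition is not reproduced in this text.
import torch
import torch.nn.functional as F


def final_reward(reward_theta: torch.Tensor,
                 logits_tuned: torch.Tensor,
                 logits_initial: torch.Tensor,
                 lambda_kl: float = 0.1) -> torch.Tensor:
    """R = R_theta - lambda_kl * R_KL, with R_KL the KL divergence between the
    tuning model's distribution and the frozen initial model's distribution."""
    log_p_initial = F.log_softmax(logits_initial, dim=-1)
    p_tuned = F.softmax(logits_tuned, dim=-1)
    # F.kl_div(log Q, P) computes KL(P || Q): here KL(pi_theta || pi_initial),
    # penalising the tuning model for drifting too far from the initial model.
    r_kl = F.kl_div(log_p_initial, p_tuned, reduction="batchmean")
    return reward_theta - lambda_kl * r_kl
```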
According to the vehicle damage assessment method provided by the embodiments of the application, vehicle damage assessment is performed on multi-modal data, which reduces the risk of vehicle insurance fraud and the labor cost while improving the accuracy of vehicle damage assessment. By combining the three-dimensional recognition result and the two-dimensional recognition result, richer vehicle information can be obtained and the risk of damage misjudgment is reduced. Model optimization of the vehicle damage assessment model and the fraud assessment model through feedback data enables both models to output more accurate results and reduces model rigidity.
Referring to fig. 13, an embodiment of the present application further provides a vehicle damage assessment device, which may implement the vehicle damage assessment method, where the device includes:
the three-dimensional recognition module 1301 is configured to perform vehicle component recognition based on three-dimensional data of the target vehicle, so as to obtain a three-dimensional recognition result;
the two-dimensional recognition module 1302 is configured to perform vehicle component recognition based on a vehicle image of the target vehicle, so as to obtain a two-dimensional recognition result;
the person-vehicle information acquisition module 1303 is configured to obtain person-vehicle information based on object information of a target object and a preset insurance business database;
the vehicle damage assessment module 1304 is configured to input the three-dimensional recognition result, the two-dimensional recognition result and the person-vehicle information into a preset vehicle damage assessment model to perform vehicle damage assessment.
The specific implementation of the vehicle damage assessment device is basically the same as the specific embodiment of the vehicle damage assessment method, and will not be described herein.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the vehicle damage assessment method when executing the computer program. The electronic equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 14, fig. 14 illustrates a hardware structure of an electronic device of another embodiment, the electronic device including:
the processor 1401, which may be implemented by a general-purpose CPU (central processing unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, is configured to execute related programs to implement the technical solutions provided by the embodiments of the present application;
the memory 1402 may be implemented in the form of read-only memory (ROM), static storage, dynamic storage, or random access memory (RAM). The memory 1402 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present application are implemented in software or firmware, the relevant program code is stored in the memory 1402 and invoked by the processor 1401 to execute the vehicle damage assessment method of the embodiments of the present application;
the input/output interface 1403 is configured to implement information input and output;
the communication interface 1404 is configured to implement communication interaction between this device and other devices, either in a wired manner (e.g. USB, network cable) or in a wireless manner (e.g. mobile network, WIFI, Bluetooth);
the bus 1405 transfers information between the components of the device (e.g. the processor 1401, the memory 1402, the input/output interface 1403 and the communication interface 1404);
wherein the processor 1401, the memory 1402, the input/output interface 1403 and the communication interface 1404 establish communication connections with one another within the device via the bus 1405.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program, and the computer program implements the vehicle damage assessment method when executed by a processor.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described herein are intended to describe the technical solutions of the embodiments of the present application more clearly and do not limit those technical solutions; as those skilled in the art will appreciate, with the evolution of technology and the emergence of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the technical solutions shown in the figures do not limit the embodiments of the present application, and an implementation may include more or fewer steps than shown, combine certain steps, or use different steps.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" and similar expressions mean any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and including multiple instructions that cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing a program, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application are described above with reference to the accompanying drawings; this does not limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (11)

1. A method of vehicle damage assessment, the method comprising:
performing vehicle component recognition based on three-dimensional data of a target vehicle to obtain a three-dimensional recognition result;
performing vehicle component recognition based on a vehicle image of the target vehicle to obtain a two-dimensional recognition result;
obtaining person-vehicle information based on object information of a target object and a preset insurance business database;
and inputting the three-dimensional recognition result, the two-dimensional recognition result, and the person-vehicle information into a preset vehicle damage assessment model for vehicle damage assessment.
2. The method according to claim 1, wherein the performing vehicle component recognition based on the vehicle image of the target vehicle to obtain the two-dimensional recognition result comprises:
acquiring an original accident image and recognition prompt data;
performing vehicle segmentation on the original accident image according to the recognition prompt data to obtain the vehicle image of the target vehicle;
and performing component recognition based on the vehicle image of the target vehicle to obtain the two-dimensional recognition result.
3. The method of claim 1, wherein the person-vehicle information comprises vehicle information, policy information, certificate information, and license plate information, and the object information comprises a face image, a certificate, and a person-vehicle group photo;
The obtaining of the person-vehicle information based on the object information of the target object and the preset insurance business database comprises the following steps:
obtaining the vehicle information and the policy information based on the face image and the insurance business database;
performing information recognition based on the certificate to obtain the certificate information;
and performing license plate recognition based on the person-vehicle group photo to obtain the license plate information.
4. The method according to claim 3, wherein the inputting the three-dimensional recognition result, the two-dimensional recognition result, and the person-vehicle information into a preset vehicle damage assessment model for vehicle damage assessment comprises:
comparing the three-dimensional recognition result with the vehicle information to obtain first damage data;
comparing the two-dimensional recognition result with the vehicle information to obtain second damage data;
obtaining three-dimensional damage data according to a first preset weight and the first damage data, and obtaining two-dimensional damage data according to a second preset weight and the second damage data;
obtaining total damage data according to the two-dimensional damage data and the three-dimensional damage data;
and inputting the vehicle information, the policy information, the total damage data, the certificate information and the license plate information into the vehicle damage assessment model to assess the damage of the vehicle.
5. The method of claim 4, wherein, in addition to the inputting of the three-dimensional recognition result, the two-dimensional recognition result, and the person-vehicle information into the preset vehicle damage assessment model for vehicle damage assessment, the method further comprises:
inputting the vehicle image, the certificate, and the person-vehicle group photo into a preset image tampering detection model for image tampering detection to obtain a tampering result;
and inputting the tampering result, the license plate information, and the vehicle information into a preset fraud evaluation model for fraud detection to obtain a fraud probability.
6. The method of claim 5, wherein the inputting the vehicle information, the policy information, the total damage data, the certificate information, and the license plate information into the vehicle damage assessment model for vehicle damage assessment comprises:
if the fraud probability is greater than or equal to a preset probability threshold, switching to a manual processing mode;
and if the fraud probability is smaller than the preset probability threshold, inputting the vehicle information, the policy information, the total damage data, the certificate information and the license plate information into the vehicle damage assessment model to carry out vehicle damage assessment.
7. The method of claim 6, wherein the method further comprises:
obtaining the claim amount according to the vehicle damage assessment result, and switching to a manual processing mode if the claim amount is greater than or equal to a preset amount;
and if the claim settlement amount is smaller than the preset amount, carrying out claim settlement operation according to the claim settlement amount.
8. The method of claim 7, wherein the method further comprises:
displaying the claim amount, the fraud probability, the three-dimensional recognition result, and the two-dimensional recognition result, and acquiring feedback data;
and carrying out model optimization on the vehicle damage assessment model and the fraud assessment model according to the feedback data.
9. A vehicle damage assessment device, the device comprising:
the three-dimensional recognition module is used for performing vehicle component recognition based on three-dimensional data of a target vehicle to obtain a three-dimensional recognition result;
the two-dimensional recognition module is used for performing vehicle component recognition based on a vehicle image of the target vehicle to obtain a two-dimensional recognition result;
the person-vehicle information acquisition module is used for obtaining person-vehicle information based on object information of a target object and a preset insurance business database;
and the vehicle damage assessment module is used for inputting the three-dimensional recognition result, the two-dimensional recognition result, and the person-vehicle information into a preset vehicle damage assessment model for vehicle damage assessment.
10. A computer device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the vehicle damage assessment method according to any one of claims 1 to 8 when executing a program stored on the memory.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the vehicle damage assessment method according to any one of claims 1 to 8.
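For orientation only, the processing recited in claims 4 to 8 can be restated as the following Python sketch. The weights, thresholds, dictionary keys, and model interfaces (for example `W_3D`, `PROB_THRESHOLD`, `fraud_model`) are illustrative assumptions and are not taken from the application; the sketch is not part of the claimed subject matter.

```python
# Reading aid only: a simplified restatement of the flow in claims 4-8 under
# assumed interfaces. Weights, thresholds, and model signatures are illustrative
# assumptions, not values from the application.
from typing import Any, Callable, Dict

W_3D = 0.6                   # first preset weight (assumed value)
W_2D = 0.4                   # second preset weight (assumed value)
PROB_THRESHOLD = 0.8         # preset fraud probability threshold (assumed value)
AMOUNT_THRESHOLD = 50_000.0  # preset claim amount for manual review (assumed value)


def compare(recognition_result: Dict, vehicle_info: Dict) -> float:
    # Placeholder comparison: count recognized component states that deviate
    # from the recorded vehicle information; a real system would emit richer data.
    return float(sum(1 for part, state in recognition_result.items()
                     if vehicle_info.get(part) != state))


def assess_claim(recog_3d: Dict, recog_2d: Dict, info: Dict, images: Any,
                 damage_model: Callable[..., Dict],
                 fraud_model: Callable[..., float],
                 tamper_model: Callable[[Any], Any]) -> Dict:
    # Claim 4: compare each recognition result with the vehicle information,
    # apply the preset weights, and sum the results into total damage data.
    first_damage = compare(recog_3d, info["vehicle"])
    second_damage = compare(recog_2d, info["vehicle"])
    total_damage = W_3D * first_damage + W_2D * second_damage

    # Claim 5: image tampering detection feeds the fraud evaluation model.
    tamper_result = tamper_model(images)
    fraud_prob = fraud_model(tamper_result, info["license_plate"], info["vehicle"])

    # Claim 6: switch to manual processing when the fraud probability is high.
    if fraud_prob >= PROB_THRESHOLD:
        return {"route": "manual", "reason": "fraud probability"}

    result = damage_model(info["vehicle"], info["policy"], total_damage,
                          info["certificate"], info["license_plate"])

    # Claim 7: large claim amounts are also routed to manual processing;
    # otherwise the claim is settled automatically. Claim 8 would additionally
    # collect reviewer feedback to optimize the two models (not shown here).
    if result["claim_amount"] >= AMOUNT_THRESHOLD:
        return {"route": "manual", "reason": "claim amount"}
    return {"route": "auto_settle", "amount": result["claim_amount"]}
```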
CN202311351890.6A 2023-10-18 2023-10-18 Vehicle damage assessment method and device, electronic equipment and storage medium Pending CN117455689A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311351890.6A CN117455689A (en) 2023-10-18 2023-10-18 Vehicle damage assessment method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311351890.6A CN117455689A (en) 2023-10-18 2023-10-18 Vehicle damage assessment method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117455689A true CN117455689A (en) 2024-01-26

Family

ID=89592114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311351890.6A Pending CN117455689A (en) 2023-10-18 2023-10-18 Vehicle damage assessment method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117455689A (en)

Similar Documents

Publication Publication Date Title
KR102418446B1 (en) Picture-based vehicle damage assessment method and apparatus, and electronic device
KR102270499B1 (en) Image-based vehicle damage determination method and device, and electronic device
JP6873237B2 (en) Image-based vehicle damage assessment methods, equipment, and systems, as well as electronic devices
CN115294665B (en) Information processing method, system, device and storage medium
CN110728218A (en) Dangerous driving behavior early warning method and device, electronic equipment and storage medium
CN111310770A (en) Target detection method and device
CN114550051A (en) Vehicle loss detection method and device, computer equipment and storage medium
CN117455689A (en) Vehicle damage assessment method and device, electronic equipment and storage medium
CN116778534B (en) Image processing method, device, equipment and medium
EP4390871A1 (en) Video anonymization method and apparatus, electronic device, and storage medium
CN116977942A (en) Method and device for detecting vehicle, electronic equipment and storage medium
CN117058119A (en) Door lock identification method, device, system, equipment and storage medium
CN115205808A (en) End-to-end lane line detection method, system, equipment and medium
CN116958523A (en) Image target detection method, device, apparatus, storage medium and program product
CN118196738A (en) Lane line detection method and device, electronic equipment and storage medium
CN118629027A (en) Monocular three-dimensional target detection method and monocular three-dimensional target detection system for fusion object imaging
CN116977754A (en) Image processing method, image processing device, computer device, storage medium, and program product
CN116110134A (en) Living body detection method and system
CN118657977A (en) Living body detection method, living body detection device, electronic device, storage medium and program product
CN117132810A (en) Target detection method, model training method, device, equipment and storage medium
CN117036720A (en) Image data processing method, device, equipment and storage medium
CN118247169A (en) Financial account information image desensitization method and device
CN115187850A (en) Information checking method, device, equipment and medium
CN116311172A (en) Training method, device, equipment and storage medium of 3D target detection model
CN116863157A (en) Sample generation method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination