CN113627252A - Vehicle damage assessment method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN113627252A
Authority
CN
China
Prior art keywords
vehicle
image
images
identified
relative position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110765308.5A
Other languages
Chinese (zh)
Inventor
钟恕
冯旭
王磊
章毅
罗顺风
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Hangzhou Youxing Technology Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Hangzhou Youxing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Hangzhou Youxing Technology Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN202110765308.5A
Publication of CN113627252A
Legal status: Pending

Classifications

    • G06F18/24: Pattern recognition; classification techniques
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06Q40/08: Finance; Insurance
    • G06Q50/40

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a vehicle damage assessment method, a vehicle damage assessment apparatus, a storage medium and an electronic device. The vehicle damage assessment method comprises the following steps: receiving a plurality of images to be recognized containing a target vehicle; inputting all the images to be recognized into a pre-established vehicle detection model component to recognize the boundary parameters of the vehicle in each image to be recognized; intercepting the vehicle image from each image to be recognized based on the boundary parameters; inputting all the intercepted vehicle images into an image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be recognized; identifying the part category and part position of each image to be recognized through the trained part network based on the relative position parameters; and inputting the part categories and part positions into a pre-established damage type recognition model component to determine the damaged parts and damaged areas of the vehicle corresponding to the images to be recognized. The present application eliminates the need to prepare damage assessment information manually, thereby improving vehicle damage assessment efficiency.

Description

Vehicle damage assessment method and device, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a vehicle damage assessment method and device, a storage medium and electronic equipment.
Background
Ride-hailing has become one of the important modes of travel. Ride-hailing vehicles in long-term operation are prone to damage and uncleanliness, which degrades the passenger experience and harms the travel company's brand. Regular inspection of vehicle condition therefore plays a very important role.
Manual vehicle inspection is inefficient and slow. In the prior art, vehicle parameter information is obtained in advance and compared to locate vehicle damage; alternatively, image recognition technology is used to identify the damage directly, but the damaged part must be photographed specifically, and the whole vehicle cannot be searched for damage. In addition, when a driver shoots and uploads images, the captured vehicle images may be incomplete, taken from the wrong direction, or deliberately framed to hide damaged parts, which interferes with the investigation of the vehicle's damage condition.
Therefore, a technical solution that can identify the angle of each captured image and locate damaged parts across the whole vehicle is highly desirable, so as to improve vehicle damage assessment efficiency.
Disclosure of Invention
In order to solve the technical problem, the invention provides a vehicle damage assessment method, which comprises the following steps:
receiving a plurality of images to be identified containing a target vehicle;
inputting all the images to be recognized into a pre-established vehicle detection model component so as to recognize the boundary parameters of the vehicle in each image to be recognized;
respectively intercepting vehicle images in each image to be identified based on the boundary parameters;
inputting all the intercepted vehicle images into an image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be identified;
identifying the part category and the part position of each image to be identified through the trained part network based on the relative position parameters;
and inputting the part categories and part positions into a pre-established damage type recognition model component to determine the damaged parts and damaged areas of the vehicles corresponding to the multiple images to be recognized.
Further, before inputting all the images to be recognized into the pre-established vehicle detection model component to recognize the boundary parameters of the vehicle in each image to be recognized, the method further comprises the following steps:
judging whether all the images to be identified contain vehicle images or not;
when all the images to be recognized comprise vehicle images, all the images to be recognized are input into a pre-established vehicle detection model component so as to recognize boundary parameters of vehicles in each image to be recognized;
when at least one image to be identified does not contain a vehicle image, a first re-shooting instruction is sent to a sending end of the images to be identified, so that the sending end can re-shoot the image to be identified containing the vehicle image.
Further, after the step of inputting all the intercepted vehicle images into the image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be recognized, the method further comprises the following steps:
judging whether the shooting positions of all the images to be recognized and the relative position parameters of the vehicles meet the preset relative position requirements or not;
when the shooting positions of all the images to be recognized and the relative position parameters of the vehicle meet the requirements of preset relative positions, recognizing the part category and the part position of each image to be recognized through a trained part network based on the relative position parameters;
and when the relative position parameter of the shooting position of at least one image to be recognized and the vehicle does not meet the requirement of the preset relative position, sending a second re-shooting instruction which can meet the requirement of the preset relative position to the sending end, so that the sending end can re-shoot the image to be recognized which can meet the requirement of the preset relative position.
Further, the identifying the part category and the part position of each image to be identified through the trained part network based on the relative position parameters includes:
inputting the vehicle images corresponding to different relative position parameters into a pre-established part identification model to obtain part types contained in each vehicle image and part positions corresponding to the part types;
and labeling each vehicle image according to the part types and the part positions to obtain labeled images.
In another aspect, the present invention provides a vehicle damage assessment apparatus comprising:
the image receiving module is configured to receive a plurality of images to be recognized containing the target vehicle;
the boundary determining module is configured to input all the images to be recognized into a pre-established vehicle detection model component to recognize the boundary parameters of the vehicle in each image to be recognized;
the vehicle image intercepting module is configured to intercept the vehicle image from each image to be recognized based on the boundary parameters;
the relative position parameter determining module is configured to input all the intercepted vehicle images into an image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be recognized;
the part determining module is configured to identify the part category and part position of each image to be recognized through the trained part network based on the relative position parameters;
and the damage assessment module is configured to input the part categories and part positions into a pre-established damage type recognition model component to determine the damaged parts and damaged areas of the vehicles corresponding to the images to be recognized.
Further, the apparatus further includes:
the image judging module is configured to judge whether all the images to be recognized contain vehicle images;
the boundary determining module is further configured to input all the images to be recognized into the pre-established vehicle detection model component to recognize the boundary parameters of the vehicle in each image to be recognized, when all the images to be recognized contain vehicle images;
and the first instruction sending module is configured to send a first re-shooting instruction to the sending end of the plurality of images to be recognized when at least one image to be recognized does not contain a vehicle image, so that the sending end re-shoots an image to be recognized containing the vehicle image.
Further, the apparatus further includes:
the position judging module is configured to judge whether the shooting positions and vehicle relative position parameters of all the images to be recognized meet the preset relative position requirement;
the part determining module is further configured to identify the part category and part position of each image to be recognized through the trained part network based on the relative position parameters, when the shooting positions and vehicle relative position parameters of all the images to be recognized meet the preset relative position requirement;
and the second instruction sending module is configured to send a second re-shooting instruction to the sending end when the shooting position and vehicle relative position parameter of at least one image to be recognized do not meet the preset relative position requirement, so that the sending end re-shoots an image to be recognized that meets the preset relative position requirement.
Further, the part determining module includes:
the category and position determining unit is configured to input the vehicle images corresponding to different relative position parameters into a pre-established part identification model to obtain the part categories contained in each vehicle image and the part positions corresponding to the part categories;
and the labeled image determining unit is configured to label each vehicle image according to the part categories and part positions to obtain labeled images.
In another aspect, the present invention provides a computer readable storage medium, having at least one instruction or at least one program stored therein, the at least one instruction or at least one program being loaded and executed by a processor to implement the vehicle damage assessment method as described above.
In yet another aspect, the present invention provides an electronic device comprising at least one processor, and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the vehicle damage assessment method as described above by executing the instructions stored by the memory.
The invention provides a vehicle damage assessment method, apparatus, storage medium and electronic device, which have the following beneficial effects:
A plurality of images to be recognized containing a target vehicle are received, and all the images to be recognized are input into a pre-established vehicle detection model component to recognize the boundary parameters of the vehicle in each image; this improves the accuracy of image recognition and avoids interference with the calculation result from image regions other than the vehicle. The vehicle image is intercepted from each image to be recognized based on the boundary parameters, and all the intercepted vehicle images are input into an image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be recognized; this effectively improves the quality of the uploaded pictures and ensures that the driver uploads complete vehicle images. The part category and part position of each image to be recognized are then identified through the trained part network based on the relative position parameters, and the part categories and part positions are input into a pre-established damage type recognition model component to determine the damaged parts and damaged areas of the vehicles corresponding to the images to be recognized. The damage positions and damaged areas that need repair can be displayed intuitively by the damage type recognition model, making it convenient for users or damage assessors to view the damaged parts; assessors no longer need to circle and mark damage on three views of the vehicle, damage assessment information does not need to be prepared manually, and vehicle damage assessment efficiency is thereby improved.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings used in the description of the embodiment or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic flow chart of a vehicle damage assessment method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram of another vehicle damage assessment method provided by the embodiments of the present application;
fig. 3 is a schematic structural diagram of a vehicle damage assessment device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
In the figures: 510, image receiving module; 520, boundary determining module; 530, vehicle image intercepting module; 540, relative position parameter determining module; 550, part determining module; 560, damage assessment module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
As shown in fig. 1, fig. 1 is a schematic flow chart of a vehicle damage assessment method provided in an embodiment of the present application, where an execution subject of the method may be a client (a sending end) that uploads a plurality of images to be identified or a server that manages vehicles, and the method includes:
s102, receiving a plurality of images to be identified containing the target vehicle.
In a specific implementation process, a vehicle tag may be generated based on the user's operations when renting a vehicle. For example, before renting, the user may select the vehicle corresponding to the desired vehicle tag, where the vehicle tag may be a preset vehicle model setting, with different vehicle models corresponding to different vehicle tags. That is, the vehicle tag can determine the type of vehicle.
The image to be recognized can be an image shot by a user by using a client, and after the user uses a vehicle, the image to be recognized can be used for recognizing whether parts of the vehicle are damaged or not in the process of returning the vehicle. The image to be recognized may include a corresponding vehicle image. The outline of the whole vehicle should be in the image to be identified.
In some possible embodiments, fig. 2 is a schematic flowchart of another vehicle damage assessment method provided in an embodiment of the present application, and as shown in fig. 2, the method further includes:
s202, judging whether all the images to be identified contain vehicle images or not;
s204, when all the images to be recognized comprise vehicle images, all the images to be recognized are input into a pre-established vehicle detection model component so as to recognize boundary parameters of vehicles in each image to be recognized;
s206, when at least one image to be identified does not contain the vehicle image, sending a first re-shooting instruction to a sending end of the images to be identified so that the sending end can re-shoot the image to be identified containing the vehicle image.
In a specific implementation process, when the execution subject is the client and at least one image to be recognized does not contain a vehicle image, the client generates a corresponding first re-shooting instruction to prompt the user to re-shoot an image to be recognized containing the vehicle image. Only when all the images to be recognized contain vehicle images are they input into the pre-established vehicle detection model component to recognize the boundary parameters of the vehicle in each image. When the execution subject is the server and at least one image to be recognized does not contain a vehicle image, the server sends a first re-shooting instruction to the sending end of the images so that the sending end re-shoots an image to be recognized containing the vehicle image.
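The gating logic of steps S202 to S206 can be sketched in plain Python. The `check_vehicle_presence` helper and its per-image detection-tuple format are illustrative assumptions, not part of the patent:

```python
def check_vehicle_presence(detections):
    """detections: one list of (label, confidence, box) tuples per uploaded
    image. Returns the indices of images that need a first re-shooting
    instruction because no vehicle was detected in them."""
    reshoot = []
    for idx, dets in enumerate(detections):
        has_vehicle = any(label == "vehicle" for label, _conf, _box in dets)
        if not has_vehicle:
            reshoot.append(idx)
    return reshoot

# Example: the second image contains no detections, so it alone triggers
# the re-shooting instruction.
dets = [
    [("vehicle", 0.93, (40, 60, 580, 420))],
    [],                                        # nothing detected
    [("vehicle", 0.88, (10, 30, 600, 400))],
]
print(check_vehicle_presence(dets))  # -> [1]
```

The boundary-parameter recognition itself would run only on the images that pass this check.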
It is understood that the boundary parameter may be a position parameter of the outer contour of the vehicle in the image to be recognized.
Specifically, the vehicle detection model component may be established according to the following manner:
acquiring a plurality of groups of first images and vehicle detection frame lines corresponding to the first images;
establishing the vehicle detection model component, wherein the vehicle detection model component comprises a plurality of model parameters;
and taking the first image as input data of the vehicle detection model assembly, taking a vehicle detection frame line corresponding to the first image as output data of the vehicle detection model assembly, and adjusting the model parameters of the vehicle detection model assembly until the vehicle detection model assembly meets preset requirements.
It is understood that the algorithm adopted by the vehicle detection model component is not particularly limited in the embodiments of the present specification and may be set according to actual needs, for example by adopting the YOLOv5 algorithm.
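The "preset requirements" for ending training are not specified in the text; for detection models such as YOLOv5, a standard criterion is the intersection-over-union (IoU) between the predicted and annotated vehicle detection frame lines. A minimal sketch under that assumption (the `iou` helper and box format are not from the patent):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Two boxes overlapping in half of each: IoU = 5000 / 15000.
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))  # ≈ 0.333
```

Model parameters would then be adjusted until, e.g., the mean IoU over a validation set exceeds a chosen threshold.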
S104, inputting all the images to be recognized into a pre-established vehicle detection model component to recognize the boundary parameters of the vehicle in each image to be recognized.
S106, respectively intercepting the vehicle image in each image to be identified based on the boundary parameters.
In a specific implementation process, the vehicle image can be obtained by intercepting the image to be recognized according to the boundary parameters of the vehicle's outer contour within the image, thereby reducing interference with vehicle damage assessment from regions of the image other than the vehicle.
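Step S106 amounts to a crop of the image by the boundary parameters. A sketch on a nested-list "image" with an assumed (x1, y1, x2, y2) box format (a real system would crop a pixel array):

```python
def crop_vehicle(image, box):
    """Crop the vehicle region out of an image represented as rows of pixels,
    using boundary parameters box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

# Toy 6x4 "image" whose pixels record their own (row, col) coordinates.
image = [[(r, c) for c in range(6)] for r in range(4)]
patch = crop_vehicle(image, (1, 1, 4, 3))
print(len(patch), len(patch[0]))  # -> 2 3
```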
And S108, inputting all the intercepted vehicle images into an image classification algorithm model component to obtain the shooting position of each image to be identified and the relative position parameter of the vehicle.
In a specific implementation process, the shooting position and vehicle relative position parameter of an image to be recognized may be the angle of the camera device relative to the vehicle when the image was shot, such as directly in front, directly behind, 45 degrees to the left front, 45 degrees to the right rear, and so on.
In a specific implementation, the image classification algorithm model component may be built according to the following manner:
acquiring a plurality of groups of first intercepted image sets, wherein each group of first intercepted image sets comprises at least 4 first intercepted images in different directions, and each first intercepted image is marked with corresponding shooting position and vehicle relative position parameters;
establishing the image classification algorithm model component, wherein the image classification algorithm model component comprises a plurality of model parameters;
and taking the first captured image set as input data of the image classification algorithm model component, taking the corresponding shooting position and vehicle relative position parameters as output data of the image classification algorithm model component, and adjusting the model parameters of the image classification algorithm model component until the image classification algorithm model component meets preset requirements.
It is understood that the algorithm adopted by the image classification algorithm model component is not specifically limited in the embodiments of the present specification, and may be set according to actual needs, for example, the MobileNetV3 algorithm is adopted.
In some possible embodiments, after inputting all the intercepted vehicle images into the image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be recognized, the method further includes:
judging whether the shooting positions of all the images to be recognized and the relative position parameters of the vehicles meet the preset relative position requirements or not;
when the shooting positions of all the images to be recognized and the relative position parameters of the vehicle meet the requirements of preset relative positions, recognizing the part category and the part position of each image to be recognized through a trained part network based on the relative position parameters;
and when the relative position parameter of the shooting position of at least one image to be recognized and the vehicle does not meet the requirement of the preset relative position, sending a second re-shooting instruction which can meet the requirement of the preset relative position to the sending end, so that the sending end can re-shoot the image to be recognized which can meet the requirement of the preset relative position.
In a specific implementation process, when the execution subject is the client and the shooting position and vehicle relative position parameter of at least one image to be recognized do not meet the preset relative position requirement, the client generates a corresponding second re-shooting instruction to prompt the user to re-shoot an image to be recognized that meets the preset relative position requirement. Only when the shooting positions and vehicle relative position parameters of all the images to be recognized meet the preset relative position requirement are the part category and part position of each image identified through the trained part network based on the relative position parameters.
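The preset relative position requirement is not enumerated in the text; a plausible reading is that the classifier's view labels must together cover a fixed set of required views of the vehicle. A sketch under that assumption (the view names, required set, and `missing_views` helper are invented for illustration):

```python
# Assumed set of views that together show the whole vehicle.
REQUIRED_VIEWS = {"front", "rear", "left-front-45", "right-rear-45"}

def missing_views(predicted_views):
    """predicted_views: the classifier's view label for each uploaded image.
    Returns the required views not covered, i.e. what the second
    re-shooting instruction would ask for (empty means the requirement
    is met)."""
    return sorted(REQUIRED_VIEWS - set(predicted_views))

print(missing_views(["front", "rear", "left-front-45"]))  # -> ['right-rear-45']
```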
And S110, identifying the part category and the part position of each image to be identified through the trained part network based on the relative position parameters.
In a specific implementation process, the identifying the part category and the part position of each image to be identified through the trained part network based on the relative position parameters includes:
inputting the vehicle images corresponding to different relative position parameters into a pre-established part identification model to obtain part types contained in each vehicle image and part positions corresponding to the part types;
and labeling each vehicle image according to the part types and the part positions to obtain labeled images.
In a specific implementation process, the part identification model can be established according to the following modes:
acquiring a plurality of groups of first intercepted image sets marked with position parameters, wherein each group of first intercepted image sets comprises at least 4 first intercepted images in different directions, and each first intercepted image is marked with part types and part positions marked according to the position parameters;
establishing the part identification model, wherein the part identification model comprises a plurality of model parameters;
and taking the first captured image set marked with the position parameters as input data of the part identification model, taking the corresponding part type and part position as output data of the part identification model, and adjusting the model parameters of the part identification model until the part identification model meets preset requirements.
It is understood that the algorithm used by the part identification model is not specifically limited in the embodiments of the present specification and may be set according to actual needs, for example by adopting the YOLOv5 algorithm.
Wherein, the part categories may include: the bumper, the grille, the headlights and the hood.
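The labeling described above can be pictured as attaching (part category, part position) pairs to each vehicle image. The record layout and helper below are assumptions for illustration, not the patent's data format:

```python
def label_image(image_id, detections):
    """Build a labeled record for one vehicle image from the part network's
    output: a list of (category, box) pairs."""
    return {
        "image": image_id,
        "parts": [{"category": cat, "box": box} for cat, box in detections],
    }

annotated = label_image("front.jpg", [("bumper", (12, 300, 620, 410)),
                                      ("headlight", (40, 220, 160, 290))])
print([p["category"] for p in annotated["parts"]])  # -> ['bumper', 'headlight']
```

Such labeled images are what step S112 consumes to localize damage per part.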
And S112, inputting the part categories and part positions into a pre-established damage type recognition model component to determine the damaged parts and damaged areas of the vehicles corresponding to the multiple images to be recognized.
In a specific implementation process, the image corresponding to a part can be cropped out, and the damage type is then identified through a damage type identification algorithm; common damage types include three categories: scraping, denting and cracking. The damage type identification algorithm can have the following two stages. The first stage is a training stage: vehicle images are collected, the damage positions and types are annotated, and a damage type recognition model is trained with the annotated data; the model may adopt the instance segmentation method Mask R-CNN and can simultaneously output multiple damage positions and the areas of the damaged parts. The second stage is an application stage: the model obtained in the first-stage training is used to identify the damaged parts and areas.
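Mask R-CNN, mentioned above, yields a per-instance binary mask; the damaged area can then be taken as the mask's pixel count, scaled to physical units if the pixel size is known. A minimal sketch on a toy mask (the helper and its interface are assumptions):

```python
def damage_area(mask, pixel_area=1.0):
    """mask: nested list of 0/1 values from an instance segmentation model.
    Returns the damaged area in units of pixel_area per pixel."""
    return sum(sum(row) for row in mask) * pixel_area

# Toy binary mask for a scrape covering six pixels.
scrape_mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
]
print(damage_area(scrape_mask))  # -> 6.0
```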
In summary, a plurality of images to be recognized containing a target vehicle are received, and all the images to be recognized are input into a pre-established vehicle detection model component to recognize the boundary parameters of the vehicle in each image; this improves the accuracy of image recognition and avoids interference with the calculation result from image regions other than the vehicle. The vehicle image is intercepted from each image to be recognized based on the boundary parameters, and all the intercepted vehicle images are input into an image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be recognized; this effectively improves the quality of the uploaded pictures and ensures that the driver uploads complete vehicle images. The part category and part position of each image to be recognized are then identified through the trained part network based on the relative position parameters, and the part categories and part positions are input into a pre-established damage type recognition model component to determine the damaged parts and damaged areas of the vehicles corresponding to the multiple images to be recognized. The damage positions and damaged areas that need repair can be displayed intuitively by the damage type recognition model, making it convenient for users or damage assessors to view the damaged parts; assessors no longer need to circle and mark damage on three views of the vehicle, damage assessment information does not need to be prepared manually, and vehicle damage assessment efficiency is thereby improved.
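The full S104 to S112 pipeline can be sketched as plain function composition, with each trained component passed in as a callable. All names and the toy stand-ins below are illustrative assumptions, not the patent's implementation:

```python
def crop(image, box):
    """Crop a nested-list image by boundary parameters (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def assess_damage(images, detect, classify_view, detect_parts, identify_damage):
    """Run the S104-S112 pipeline over all images to be recognized."""
    results = []
    for img in images:
        box = detect(img)                       # S104: boundary parameters
        vehicle = crop(img, box)                # S106: intercepted vehicle image
        view = classify_view(vehicle)           # S108: relative position parameter
        parts = detect_parts(vehicle, view)     # S110: part category and position
        results.append(identify_damage(parts))  # S112: damaged part and area
    return results

# Toy stand-ins for the trained model components:
img = [[0] * 4 for _ in range(4)]
out = assess_damage(
    [img],
    detect=lambda im: (0, 0, 4, 4),
    classify_view=lambda v: "front",
    detect_parts=lambda v, view: [("bumper", (0, 2, 4, 4))],
    identify_damage=lambda parts: {"part": parts[0][0], "area": 6.0},
)
print(out)  # -> [{'part': 'bumper', 'area': 6.0}]
```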
In another aspect, fig. 3 is a schematic structural diagram of a vehicle damage assessment device according to an embodiment of the present invention. As shown in fig. 3, the present invention provides a vehicle damage assessment device, comprising:
an image receiving module 510, configured to receive a plurality of images to be recognized containing a target vehicle;
a boundary determining module 520, configured to input all the images to be recognized into a pre-established vehicle detection model component to identify the boundary parameters of the vehicle in each image to be recognized;
a vehicle image intercepting module 530, configured to intercept the vehicle image in each image to be recognized based on the boundary parameters;
a relative position parameter determining module 540, configured to input all the intercepted vehicle images into an image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be recognized;
a part determining module 550, configured to identify the part category and part position of each image to be recognized through the trained part network based on the relative position parameters;
and a damage determining module 560, configured to input the part categories and part positions into a pre-established damage type identification model component to determine the damaged parts and damaged areas of the vehicle corresponding to the plurality of images to be recognized.
On the basis of the above embodiments, in an embodiment of the present specification, the apparatus further includes:
an image judging module 610, configured to judge whether each of the images to be recognized contains a vehicle image;
the boundary determining module 520 is further configured to, when all the images to be recognized contain vehicle images, input all the images to be recognized into the pre-established vehicle detection model component to identify the boundary parameters of the vehicle in each image to be recognized;
and a first instruction sending module 620, configured to, when at least one image to be recognized does not contain a vehicle image, send a first re-shooting instruction to the sending end of the plurality of images to be recognized, so that the sending end re-shoots an image to be recognized that contains a vehicle image.
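The image-judging and first-instruction modules above can be sketched as a single validation step; `contains_vehicle` is a hypothetical predicate backed by the vehicle detection model component, and the instruction format is an illustrative assumption:

```python
def first_reshoot_check(images, contains_vehicle):
    """Return a re-shoot instruction for images lacking a vehicle, else proceed."""
    missing = [i for i, img in enumerate(images) if not contains_vehicle(img)]
    if missing:
        # first re-shooting instruction: only the offending images are re-taken
        return {"instruction": "reshoot", "image_indices": missing}
    return {"instruction": "proceed", "image_indices": []}
```

The sending end would map the returned indices back to its capture UI and prompt the driver to re-shoot only those frames.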
On the basis of the above embodiments, in an embodiment of the present specification, the apparatus further includes:
a position judging module 710, configured to judge whether the shooting position and vehicle relative position parameters of each image to be recognized satisfy a preset relative position requirement;
the part determining module 550 is further configured to, when the shooting positions and vehicle relative position parameters of all the images to be recognized satisfy the preset relative position requirement, identify the part category and part position of each image to be recognized through the trained part network based on the relative position parameters;
and a second instruction sending module 720, configured to, when the shooting position and vehicle relative position parameters of at least one image to be recognized do not satisfy the preset relative position requirement, send a second re-shooting instruction to the sending end, so that the sending end re-shoots an image to be recognized that satisfies the preset relative position requirement.
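The position-judging and second-instruction modules can be sketched by modelling the preset relative position requirement as a required set of viewpoints that the uploaded images must jointly cover; this set is an assumption for illustration, since the patent leaves the concrete requirement unspecified:

```python
# Assumed preset relative position requirement: four views must be covered.
REQUIRED_VIEWS = {"front", "rear", "left", "right"}

def second_reshoot_check(predicted_views):
    """Return the viewpoints still missing from the upload.

    predicted_views: shooting positions predicted by the image classification
    algorithm model component, one per uploaded image.
    An empty result means the requirement is satisfied and processing proceeds.
    """
    return sorted(REQUIRED_VIEWS - set(predicted_views))
```

A non-empty result would be packaged into the second re-shooting instruction so the sending end knows which viewpoints to capture again.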
On the basis of the above embodiments, in an embodiment of the present specification, the part determining module 550 includes:
a category and position determining unit, configured to input the vehicle images corresponding to different relative position parameters into a pre-established part identification model to obtain the part categories contained in each vehicle image and the part positions corresponding to the part categories;
and a labeled image determining unit, configured to label each vehicle image according to the part categories and part positions to obtain labeled images.
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, only the division into the above functional modules is illustrated as an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not repeated here.
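The two units above can be sketched as attaching part recognition results to each vehicle image as a labeled record; the field names below are illustrative assumptions:

```python
def label_image(image_id, parts):
    """Build a labeled-image record from part recognition results.

    parts: iterable of (category, (x0, y0, x1, y1)) pairs, as would be
    produced by the part identification model for one vehicle image.
    """
    return {
        "image_id": image_id,
        "annotations": [{"category": c, "bbox": list(b)} for c, b in parts],
    }
```

The labeled record is what the damage type identification component would consume downstream to locate damage within named parts.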
In another aspect, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 4, the present invention provides a device for vehicle damage assessment, the device comprising a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the vehicle damage assessment method described above.
The above functions, if implemented in the form of software functional units and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
An embodiment of the present invention further provides a storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set may be executed by a processor of an electronic device to implement the vehicle damage assessment method described above.
Optionally, in an embodiment of the present invention, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus, the electronic device and the storage medium embodiment, since they are substantially similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the partial description of the method embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same and similar parts among the embodiments may be referred to each other. The implementation principle and resulting technical effect of the method provided by the embodiment of the present invention are the same as those of the system embodiment; for brevity, where the method embodiment is not mentioned, reference may be made to the corresponding content in the system embodiment.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions of some of their technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of vehicle damage assessment, comprising:
receiving a plurality of images to be identified containing a target vehicle;
inputting all the images to be recognized into a pre-established vehicle detection model component so as to recognize the boundary parameters of the vehicle in each image to be recognized;
respectively intercepting vehicle images in each image to be identified based on the boundary parameters;
inputting all the intercepted vehicle images into an image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be identified;
identifying the part category and the part position of each image to be identified through the trained part network based on the relative position parameters;
and inputting the part types and the part positions into a pre-established damage type identification model assembly to determine damage parts and damage areas of the vehicles corresponding to the multiple images to be identified.
2. The vehicle damage assessment method according to claim 1, wherein said inputting all the images to be identified into a pre-established vehicle detection model component to identify the boundary parameters of the vehicle in each image to be identified further comprises:
judging whether all the images to be identified contain vehicle images or not;
when all the images to be recognized comprise vehicle images, all the images to be recognized are input into a pre-established vehicle detection model component so as to recognize boundary parameters of vehicles in each image to be recognized;
when at least one image to be identified does not contain a vehicle image, a first re-shooting instruction is sent to a sending end of the images to be identified, so that the sending end can re-shoot the image to be identified containing the vehicle image.
3. The vehicle damage assessment method according to claim 1, wherein after inputting all the intercepted vehicle images into the image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be identified, the method further comprises:
judging whether the shooting positions of all the images to be recognized and the relative position parameters of the vehicles meet the preset relative position requirements or not;
when the shooting positions of all the images to be recognized and the relative position parameters of the vehicle meet the requirements of preset relative positions, recognizing the part category and the part position of each image to be recognized through a trained part network based on the relative position parameters;
and when the relative position parameter of the shooting position of at least one image to be recognized and the vehicle does not meet the requirement of the preset relative position, sending a second re-shooting instruction which can meet the requirement of the preset relative position to the sending end, so that the sending end can re-shoot the image to be recognized which can meet the requirement of the preset relative position.
4. The vehicle damage assessment method according to claim 3, wherein said identifying the part category and the part position of each image to be identified through the trained part network based on the relative position parameters comprises:
inputting the vehicle images corresponding to different relative position parameters into a pre-established part identification model to obtain part types contained in each vehicle image and part positions corresponding to the part types;
and labeling each vehicle image according to the part types and the part positions to obtain labeled images.
5. A vehicle damage assessment device, comprising:
the image receiving module is configured to receive a plurality of images to be identified containing the target vehicle;
the boundary determining module is configured to input all the images to be recognized into a pre-established vehicle detection model component so as to recognize boundary parameters of the vehicle in each image to be recognized;
a vehicle image intercepting module configured to perform respective interception of a vehicle image in each of the images to be recognized based on the boundary parameters;
the relative position parameter determining module is configured to input all the intercepted vehicle images into an image classification algorithm model component to obtain the shooting position and vehicle relative position parameters of each image to be identified;
the part determining module is configured to identify the part category and the part position of each image to be identified through the trained part network based on the relative position parameters;
and the damage assessment module is configured to input the part types and the part positions into a damage type identification model assembly established in advance to determine damage parts and damage areas of the vehicles corresponding to the images to be identified.
6. The vehicle damage assessment device according to claim 5, further comprising:
the image judging module is configured to judge whether all the images to be identified contain the vehicle images;
the boundary determining module is further configured to input all the images to be recognized into a pre-established vehicle detection model component when all the images to be recognized contain vehicle images so as to recognize boundary parameters of vehicles in each image to be recognized;
the first instruction sending module is configured to execute sending of a first re-shooting instruction to sending ends of the multiple images to be identified when at least one of the images to be identified does not contain a vehicle image, so that the sending ends re-shoot the images to be identified containing the vehicle image.
7. The vehicle damage assessment device according to claim 5, further comprising:
the position judging module is configured to execute judgment on whether the shooting positions of all the images to be recognized and the relative position parameters of the vehicles meet the preset relative position requirement or not;
the part determining module is further configured to identify the part type and the part position of each image to be identified through a trained part network based on the relative position parameters when the shooting positions of all the images to be identified and the relative position parameters of the vehicle meet the preset relative position requirement;
and the second instruction sending module is configured to execute sending a second re-shooting instruction capable of meeting the preset relative position requirement to the sending end when the shooting position of at least one image to be recognized and the vehicle relative position parameter do not meet the preset relative position requirement, so that the sending end re-shoots the image to be recognized capable of meeting the preset relative position requirement.
8. The vehicle damage assessment device of claim 7, wherein said part determining module comprises:
the category and position determining unit is configured to input vehicle images corresponding to different relative position parameters into a pre-established part identification model to obtain part categories contained in each vehicle image and part positions corresponding to the part categories;
and the marked image determining unit is configured to mark each vehicle image according to the part category and the part position to obtain a marked image.
9. A computer readable storage medium having stored therein at least one instruction or at least one program, the at least one instruction or the at least one program being loaded and executed by a processor to implement the vehicle damage assessment method according to any one of claims 1-4.
10. An electronic device comprising at least one processor, and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the vehicle damage assessment method of any one of claims 1 to 4 by executing the instructions stored in the memory.
CN202110765308.5A 2021-07-07 2021-07-07 Vehicle damage assessment method and device, storage medium and electronic equipment Pending CN113627252A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110765308.5A CN113627252A (en) 2021-07-07 2021-07-07 Vehicle damage assessment method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110765308.5A CN113627252A (en) 2021-07-07 2021-07-07 Vehicle damage assessment method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113627252A true CN113627252A (en) 2021-11-09

Family

ID=78379171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110765308.5A Pending CN113627252A (en) 2021-07-07 2021-07-07 Vehicle damage assessment method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113627252A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719188A (en) * 2016-01-22 2016-06-29 平安科技(深圳)有限公司 Method and server for achieving insurance claim anti-fraud based on consistency of multiple pictures
CN108446618A (en) * 2018-03-09 2018-08-24 平安科技(深圳)有限公司 Car damage identification method, device, electronic equipment and storage medium
WO2019169688A1 (en) * 2018-03-09 2019-09-12 平安科技(深圳)有限公司 Vehicle loss assessment method and apparatus, electronic device, and storage medium
CN108734702A (en) * 2018-04-26 2018-11-02 平安科技(深圳)有限公司 Vehicle damages determination method, server and storage medium
CN109325488A (en) * 2018-08-31 2019-02-12 阿里巴巴集团控股有限公司 For assisting the method, device and equipment of car damage identification image taking
CN110674788A (en) * 2019-10-09 2020-01-10 北京百度网讯科技有限公司 Vehicle damage assessment method and device

Similar Documents

Publication Publication Date Title
CN107392218B (en) Vehicle loss assessment method and device based on image and electronic equipment
CN111160302A (en) Obstacle information identification method and device based on automatic driving environment
CN109002820B (en) License plate recognition method and device and related equipment
CN111667011A (en) Damage detection model training method, damage detection model training device, damage detection method, damage detection device, damage detection equipment and damage detection medium
CN111626123A (en) Video data processing method and device, computer equipment and storage medium
CN108921026A (en) Recognition methods, device, computer equipment and the storage medium of animal identification
CN110532883B (en) Improvement of on-line tracking algorithm by off-line tracking algorithm
CN109271908B (en) Vehicle loss detection method, device and equipment
CN110807491A (en) License plate image definition model training method, definition detection method and device
CN109215119B (en) Method and device for establishing three-dimensional model of damaged vehicle
CN111160275B (en) Pedestrian re-recognition model training method, device, computer equipment and storage medium
CN110287936B (en) Image detection method, device, equipment and storage medium
DE112019000093T5 (en) Discrimination device and machine learning method
CN113516661A (en) Defect detection method and device based on feature fusion
CN112818821B (en) Human face acquisition source detection method and device based on visible light and infrared light
CN110728215A (en) Face living body detection method and device based on infrared image
US11120308B2 (en) Vehicle damage detection method based on image analysis, electronic device and storage medium
CN108399609B (en) Three-dimensional point cloud data repairing method and device and robot
CN113505781A (en) Target detection method and device, electronic equipment and readable storage medium
CN115830399A (en) Classification model training method, apparatus, device, storage medium, and program product
CN113706513A (en) Vehicle damage image analysis method, device, equipment and medium based on image detection
CN111401438B (en) Image sorting method, device and system
CN113627252A (en) Vehicle damage assessment method and device, storage medium and electronic equipment
JPWO2015068417A1 (en) Image collation system, image collation method and program
CN112241705A (en) Target detection model training method and target detection method based on classification regression

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination