CN113033517A - Vehicle damage assessment image acquisition method and device and storage medium - Google Patents

Vehicle damage assessment image acquisition method and device and storage medium

Info

Publication number
CN113033517A
CN113033517A (application CN202110568258.1A; granted as CN113033517B)
Authority
CN
China
Prior art keywords
image
video stream
scene
current frame
close
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110568258.1A
Other languages
Chinese (zh)
Other versions
CN113033517B
Inventor
Wang Pengyuan (王朋远)
Liu Hailong (刘海龙)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aibao Technology Co ltd
Original Assignee
Aibao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aibao Technology Co ltd filed Critical Aibao Technology Co ltd
Priority to CN202110568258.1A
Publication of CN113033517A
Application granted
Publication of CN113033517B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/40 — Scenes; Scene-specific elements in video content
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 — Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 — Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/24155 — Bayesian classification
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/30 — Noise filtering
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 — Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 — Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention relate to the field of vehicle insurance claim settlement and disclose a vehicle damage assessment image acquisition method, device and storage medium. The method acquires the current frame image of a video stream to be processed in real time. If the current frame image is a medium view and its sharpness meets the requirement, the image is saved and the user is prompted to acquire a close view image; when the current frame image in the video stream changes to a close view, that image is saved and the stored medium view and close view images are used as the damage assessment images. If the current frame in the video stream is a close view and its sharpness meets the requirement, the image is saved and the user is prompted to acquire a medium view image; when the current frame image changes to a medium view, that image is saved and the stored close view and medium view images are used as the damage assessment images. If the frame is a long view or other, acquisition of the video stream to be processed continues. The method is mainly intended for intelligent terminals. It effectively improves the working efficiency of the damage assessment step, accelerates the processing of claim cases and improves the customer's claim settlement experience; it also improves the accuracy of damage assessment, saving claim losses and reducing costs for insurance companies.

Description

Vehicle damage assessment image acquisition method and device and storage medium
Technical Field
Embodiments of the invention relate to the technical field of vehicle insurance claim settlement, and in particular to a vehicle damage assessment image acquisition method, device and storage medium.
Background
Insurance companies currently use two vehicle damage assessment schemes. The first is manual damage assessment: an assessor surveys the vehicle's damage at the accident scene, relies on experience to list the repair and replacement items and amounts one by one, and keeps damage images of each part of the vehicle for damage verification and filing. The second is intelligent damage assessment: no assessor needs to visit the accident scene in person; a party to the accident simply photographs each damaged part of the vehicle one by one with a mobile phone camera and uploads all the damage images to a server over the network, where an AI (artificial intelligence) damage assessment program evaluates the images and outputs the repair and replacement items and amounts.
In existing intelligent damage assessment schemes, images are generally taken by an assessor or a party to the accident photographing each damaged part according to experience, which causes the following problems:
The photographer is usually not a professional and shoots the damaged parts one by one according to personal habit. All images in a case are uploaded to the server for AI damage assessment; when the AI damage assessment program evaluates the case it must first sort the images by type, then locate the damaged parts, and finally assess the damage to each part. This increases the complexity of the AI damage assessment step and reduces the efficiency of intelligent damage assessment.
Furthermore, when the AI program assesses damage it needs at least one medium view image of the damaged part to locate the part on the vehicle and prevent misidentification, and it additionally needs a close view image to identify the details of the damage severity.
Disclosure of Invention
Embodiments of the invention aim to provide a vehicle damage assessment image acquisition method, device and storage medium that effectively improve the working efficiency of the damage assessment step, accelerate the processing of claim cases and improve the customer's claim settlement experience, while also improving the accuracy of damage assessment, saving claim losses and reducing costs for insurance companies.
To solve the above technical problem, embodiments of the present invention provide the following solutions.
In one aspect, a vehicle damage assessment image acquisition method comprises the following steps:
acquiring a video stream to be processed;
acquiring a current frame image in the video stream to be processed in real time, and determining the image type of the current frame image, wherein the image types include medium view, close view, long view and others;
if the current frame image in the video stream to be processed is a medium view and its sharpness c is greater than a given threshold c0, saving the current medium view image, starting a timer T1, prompting the user to move closer to the damaged part to acquire a close view image, and starting to track a first key area in the image; when the scene classification result of the current frame image in the video stream changes from medium view to close view, saving the current close view image and ending image acquisition, and taking the saved medium view image and close view image as the damage assessment images;
if the current frame in the video stream is a close view and its sharpness c is greater than the given threshold c0, saving the current close view image, starting a timer T2, prompting the user to move away from the damaged part to acquire a medium view image, and starting to track a second key area in the image; when the scene classification result of the current frame image in the video stream changes from close view to medium view, saving the current medium view image and ending image acquisition, and taking the saved close view image and medium view image as the damage assessment images;
and if the frame is a long view or other, continuing to acquire the video stream to be processed.
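The branching in the steps above amounts to a small two-phase state machine: first wait for a sharp medium or close view, then wait for the complementary view. A minimal sketch in Python, where `classify`, `sharpness` and `threshold` are hypothetical stand-ins for the patent's scene classifier and sharpness metric (key-area tracking, timers and voice prompts are omitted):

```python
from enum import Enum, auto

class Scene(Enum):
    MEDIUM = auto()   # mid-range view of the damaged part
    CLOSE = auto()    # close-up view
    LONG = auto()     # distant view
    OTHER = auto()

def acquire_damage_images(frames, classify, sharpness, threshold):
    """Walk a frame stream and return a (medium_view, close_view) pair.

    `classify` maps a frame to a Scene, `sharpness` maps a frame to a
    float, and `threshold` is the minimum acceptable sharpness; all
    three are placeholders for the models the patent leaves open.
    """
    saved_scene, saved_frame = None, None
    for frame in frames:
        scene = classify(frame)
        if saved_scene is None:
            # Phase 1: wait for a sharp medium or close frame and save it.
            if scene in (Scene.MEDIUM, Scene.CLOSE) and sharpness(frame) > threshold:
                saved_scene, saved_frame = scene, frame
        else:
            # Phase 2: wait for the complementary scene type, also sharp.
            want = Scene.CLOSE if saved_scene is Scene.MEDIUM else Scene.MEDIUM
            if scene is want and sharpness(frame) > threshold:
                medium = saved_frame if saved_scene is Scene.MEDIUM else frame
                close = frame if saved_scene is Scene.MEDIUM else saved_frame
                return medium, close
    return None  # no qualifying pair found in this stream
```

Either ordering (medium first or close first) yields the same medium/close pair, mirroring the two symmetric branches of the claim.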
Further optionally, tracking the first key area in the image includes:
calculating the accumulated tracking error E between adjacent frames in the video stream starting from the current frame;
while the tracking error e between adjacent frames remains less than a given first threshold e1, the tracking succeeds; continuing the tracking and prompting the user to approach the damaged part;
when e becomes greater than e1, the tracking fails; deleting the saved medium view image and continuing to acquire the video stream to be processed.
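The accumulate-and-abort check above can be sketched as follows. The per-frame error values and the threshold are hypothetical inputs, since the patent does not fix the matcher that produces them:

```python
def track_key_area(frame_errors, per_frame_limit):
    """Accumulate per-frame tracking errors e into E, aborting on the
    first e that exceeds `per_frame_limit` (tracking failure, which in
    the patent triggers deletion of the saved image).

    Returns (accumulated_error, ok).
    """
    accumulated = 0.0
    for e in frame_errors:
        if e > per_frame_limit:
            return accumulated, False  # tracking failed
        accumulated += e
    return accumulated, True  # tracking succeeded for the whole run
```

The same routine serves both key areas; only the threshold (e1 vs. e2) and the resulting voice prompt differ.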
Further optionally, tracking the second key area in the image includes:
calculating the accumulated tracking error E between adjacent frames in the video stream starting from the current frame;
while the tracking error e between adjacent frames remains less than a given second threshold e2, the tracking succeeds; continuing the tracking and prompting the user to move away from the damaged part;
when e becomes greater than e2, the tracking fails; deleting the saved close view image and continuing to acquire the video stream to be processed.
Further optionally, when the scene classification result of the current frame image in the video stream changes from medium view to close view, saving the current close view image and ending image acquisition includes:
when the scene classification result of the current frame image in the video stream changes from medium view to close view, the sharpness c is greater than the given sharpness threshold c0, and the accumulated tracking error E is greater than the given second threshold e2, saving the current close view image and ending image acquisition;
if the sharpness c is less than the sharpness threshold c0, or the accumulated tracking error E is less than the second threshold e2, continuing to guide the user by voice to approach the damaged part;
if no qualifying close view image is acquired within the time threshold T0 after timer T1 starts, continuing to acquire the video stream to be processed.
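The three-way decision above (save, keep guiding, or time out and restart) can be sketched as a single predicate. All parameter names are placeholders for the patent's sharpness threshold c0, second threshold e2 and time threshold T0:

```python
def decide_on_transition(scene_changed, sharpness_c, sharpness_min,
                         accumulated_error, error_min,
                         elapsed, time_limit):
    """Decide the outcome once a medium-to-close transition is awaited.

    Returns 'restart' when the timer ran out, 'save' when the
    transition happened with sufficient sharpness and enough
    accumulated motion toward the part, else 'guide' to keep
    voice-guiding the user.
    """
    if elapsed > time_limit:
        return "restart"  # go back to acquiring the video stream
    if scene_changed and sharpness_c > sharpness_min and accumulated_error > error_min:
        return "save"     # save the close view image, end acquisition
    return "guide"        # keep prompting the user to move closer
```

The mirrored close-to-medium case uses the same predicate with timer T2 and a "move away" prompt.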
Further optionally, when the scene classification result of the current frame image in the video stream changes from close view to medium view, saving the current medium view image and ending image acquisition includes:
when the scene classification result of the current frame image in the video stream changes from close view to medium view, the sharpness c is greater than the given sharpness threshold c0, and the accumulated tracking error E is greater than the given second threshold e2, saving the current medium view image and ending image acquisition;
if the sharpness c is less than the sharpness threshold c0, or the accumulated tracking error E is less than the second threshold e2, continuing to guide the user by voice to move away from the damaged part;
if no qualifying medium view image is acquired within the time threshold T0 after timer T2 starts, continuing to acquire the video stream to be processed.
Further optionally, the vehicle damage assessment image acquisition method further includes:
determining which orientation of the vehicle the current frame belongs to, and storing the result for each frame.
In another aspect, an embodiment of the present invention further provides a vehicle damage assessment image acquisition apparatus, including:
a to-be-processed information acquisition module, configured to acquire a video stream to be processed;
a scene classification module, configured to acquire a current frame image in the video stream to be processed in real time and determine the image type of the current frame image, wherein the image types include medium view, close view, long view and others;
a medium view processing module, configured to, if the current frame image in the video stream to be processed is a medium view and its sharpness c is greater than a given threshold c0, save the current medium view image, start a timer T1, prompt the user to move closer to the damaged part to acquire a close view image and start tracking a first key area in the image; and, when the scene classification result of the current frame image in the video stream changes from medium view to close view, save the current close view image, end image acquisition, and take the saved medium view image and close view image as the damage assessment images;
a close view processing module, configured to, if the current frame in the video stream is a close view and its sharpness c is greater than the given threshold c0, save the current close view image, start a timer T2, prompt the user to move away from the damaged part to acquire a medium view image and start tracking a second key area in the image; and, when the scene classification result of the current frame image in the video stream changes from close view to medium view, save the current medium view image, end image acquisition, and take the saved close view image and medium view image as the damage assessment images.
in another aspect, an embodiment of the present invention further provides a terminal server, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle damage assessment image acquisition method described above.
In still another aspect, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the vehicle damage assessment image acquisition method described above.
Compared with the prior art, the program guides the user through the acquisition process, so anyone can complete acquisition without training, which greatly saves labor cost. The acquired medium view and close view image pairs are automatically grouped by damaged part, making AI damage assessment more accurate than with images collected ad hoc by assessors. This effectively improves the working efficiency of the damage assessment step, accelerates the processing of claim cases and improves the customer's claim settlement experience; it also improves the accuracy of damage assessment, saving claim losses and reducing costs for insurance companies.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
FIG. 1 is a schematic flow chart of a vehicle damage assessment image acquisition method according to an embodiment of the invention;
FIG. 2 is a functional structure diagram of a vehicle damage assessment image acquisition device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an image sharpness detection process according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of thirteen image orientations in an embodiment of the invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in order to provide a thorough understanding of the present application; however, the claimed technical solution can be implemented without these details, and with various changes and modifications based on the following embodiments. The division into embodiments is for convenience of description and does not limit the specific implementation of the invention; the embodiments may be combined and cross-referenced where not contradictory.
A first embodiment of the present invention relates to a method for acquiring a damage assessment image of a vehicle, the flow of which is shown in fig. 1, and the method specifically includes the following steps:
101. and acquiring a video stream to be processed.
102. And acquiring a current frame image in the video stream to be processed in real time.
103. And judging the image type of the current frame image.
104. Calculating the sharpness of the current frame image and determining the vehicle orientation in the image.
If the current frame image in the video stream to be processed is a medium view and its sharpness c is greater than a given threshold c0, go to step 105; if the current frame in the video stream is a close view and its sharpness c is greater than the given threshold c0, go to step 106; if it is a long view or other, go to step 101.
The image types include medium view, close view, long view and others.
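The patent does not name the metric behind the sharpness value c (Fig. 3 only outlines a detection flow). A common choice for this kind of focus check is the variance of the Laplacian, shown here purely as an illustrative stand-in:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the discrete Laplacian response of a grayscale image,
    a widely used focus/sharpness measure. This is an assumed metric,
    not the one the patent specifies. `gray` is a 2-D intensity array;
    higher return values indicate sharper images.
    """
    g = np.asarray(gray, dtype=float)
    # 4-neighbour Laplacian evaluated on the interior pixels only.
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return float(lap.var())
```

A frame would pass the sharpness test when `laplacian_variance(frame) > c0` for some calibrated threshold c0; a uniform (fully defocused) image scores exactly zero.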
105. Saving the current medium view image, starting timer T1, prompting the user to move closer to the damaged part to acquire a close view image and starting to track a first key area in the image; when the scene classification result of the current frame image in the video stream changes from medium view to close view, saving the current close view image and ending image acquisition, taking the saved medium view image and close view image as the damage assessment images, and executing step 107.
The vehicle orientation determination results are stored starting from the moment timer T1 starts; the orientation determination over frames 1 to t is computed as s = (s1 + s2 + … + st)/t, and the maximum component of s determines the vehicle orientation of the damaged part for the entire video.
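Read literally, the formula averages the per-frame orientation score vectors and picks the largest component. A minimal sketch, assuming each frame yields one score per candidate orientation (e.g. the thirteen orientations of Fig. 4):

```python
def aggregate_orientation(per_frame_scores):
    """Average per-frame orientation score vectors s1..st (one entry per
    candidate orientation) and return the index of the largest averaged
    score -- a direct reading of s = (s1 + s2 + ... + st) / t followed
    by taking the maximum of s. The vector layout is an assumption.
    """
    t = len(per_frame_scores)
    n = len(per_frame_scores[0])
    s = [sum(frame[i] for frame in per_frame_scores) / t for i in range(n)]
    return max(range(n), key=lambda i: s[i])
```

Averaging before taking the maximum smooths out single-frame misclassifications, so one bad frame cannot flip the orientation assigned to the whole video.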
In some alternative embodiments, tracking the first key region in the image may be achieved by, but is not limited to, the following processes:
1051a. Calculating the accumulated tracking error E between adjacent frames in the video stream starting from the current frame.
1052a. While the tracking error e between adjacent frames remains less than the given first threshold e1, the tracking succeeds; continuing the tracking and prompting the user to approach the damaged part.
1053a. When e becomes greater than e1, the tracking fails; deleting the saved medium view image and continuing to acquire the video stream to be processed.
In some optional embodiments, saving the current close view image and ending image acquisition when the scene classification result of the current frame image in the video stream changes from medium view to close view may be implemented by, but is not limited to, the following process:
1051b. When the scene classification result of the current frame image in the video stream changes from medium view to close view, the sharpness c is greater than the given sharpness threshold c0, and the accumulated tracking error E is greater than the given second threshold e2, saving the current close view image and ending image acquisition.
1052b. If the sharpness c is less than the sharpness threshold c0, or the accumulated tracking error E is less than the second threshold e2, continuing to guide the user by voice to approach the damaged part.
1053b. If no qualifying close view image is acquired within the time threshold T0 after timer T1 starts, continuing to acquire the video stream to be processed.
106. Saving the current close view image, starting timer T2, prompting the user to move away from the damaged part to acquire a medium view image and starting to track a second key area in the image; when the scene classification result of the current frame image in the video stream changes from close view to medium view, saving the current medium view image and ending image acquisition, taking the saved close view image and medium view image as the damage assessment images, and executing step 107.
In some alternative embodiments, tracking the second key region in the image may be achieved by, but is not limited to, the following processes:
1061a. Calculating the accumulated tracking error E between adjacent frames in the video stream starting from the current frame.
1062a. While the tracking error e between adjacent frames remains less than the given second threshold e2, the tracking succeeds; continuing the tracking and prompting the user to move away from the damaged part.
1063a. When e becomes greater than e2, the tracking fails; deleting the saved close view image and continuing to acquire the video stream to be processed.
In some optional embodiments, saving the current medium view image and ending image acquisition when the scene classification result of the current frame image in the video stream changes from close view to medium view may be implemented by, but is not limited to, the following process:
1061b. When the scene classification result of the current frame image in the video stream changes from close view to medium view, the sharpness c is greater than the given sharpness threshold c0, and the accumulated tracking error E is greater than the given second threshold e2, saving the current medium view image and ending image acquisition.
1062b. If the sharpness c is less than the sharpness threshold c0, or the accumulated tracking error E is less than the second threshold e2, continuing to guide the user by voice to move away from the damaged part.
1063b. If no qualifying medium view image is acquired within the time threshold T0 after timer T2 starts, continuing to acquire the video stream to be processed.
107. Transmitting the determined damage assessment images and the vehicle orientation to a server.
With the vehicle damage assessment image acquisition method provided by the embodiment of the invention, the program guides the user through the acquisition process, so anyone can acquire the images without training, which greatly saves labor cost. The acquired medium view and close view image pairs are automatically grouped by damaged part, making AI damage assessment more accurate than with images collected ad hoc by assessors. This effectively improves the working efficiency of the damage assessment step, accelerates the processing of claim cases and improves the customer's claim settlement experience; it also improves the accuracy of damage assessment, saving claim losses and reducing costs for insurance companies.
The steps of the above method are divided for clarity of description; in implementation they may be combined into one step, or a step may be split into several, and any division that preserves the same logical relationship falls within the protection scope of this patent. Adding insignificant modifications to the algorithms or processes, or introducing insignificant design changes without altering the core design, likewise falls within the protection scope of this patent.
An embodiment of the present invention provides a vehicle damage assessment image acquisition apparatus, as shown in fig. 2, including:
a to-be-processed information acquisition module 201, configured to acquire a video stream to be processed;
a scene classification module 202, configured to acquire a current frame image in the video stream to be processed in real time and determine the image type of the current frame image, wherein the image types include medium view, close view, long view and others;
a medium view processing module 203, configured to, if the current frame image in the video stream to be processed is a medium view and its sharpness c is greater than a given threshold c0, save the current medium view image, start a timer T1, prompt the user to move closer to the damaged part to acquire a close view image and start tracking a first key area in the image; and, when the scene classification result of the current frame image in the video stream changes from medium view to close view, save the current close view image, end image acquisition, and take the saved medium view image and close view image as the damage assessment images;
a close view processing module 204, configured to, if the current frame in the video stream is a close view and its sharpness c is greater than the given threshold c0, save the current close view image, start a timer T2, prompt the user to move away from the damaged part to acquire a medium view image and start tracking a second key area in the image; and, when the scene classification result of the current frame image in the video stream changes from close view to medium view, save the current medium view image, end image acquisition, and take the saved close view image and medium view image as the damage assessment images.
An embodiment of the present invention further provides a server, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle damage assessment image acquisition method described above.
An embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the above-mentioned vehicle damage assessment image acquisition method.
With the vehicle damage assessment image acquisition apparatus, server, and computer-readable storage medium provided by the embodiments of the invention, the acquisition process is guided by the program, so anyone can complete the capture without training, greatly reducing labor cost. The medium-scene and close-scene image pairs collected by the medium scene processing module and the close scene processing module are automatically grouped by damaged part, making AI damage assessment more accurate than with images captured by loss adjusters. This improves the efficiency of the damage assessment step, speeds up claim turnaround, and improves the customer claim-settlement experience; it also improves the accuracy of damage assessment, reducing claim losses and costs for insurance companies.
It should be understood that this embodiment is an example of the apparatus corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that each module referred to in this embodiment is a logical module; in practical applications, one logical unit may be one physical unit, a part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, elements not closely related to solving the technical problems proposed by the present invention are not introduced in this embodiment, but this does not mean that no other elements are present in this embodiment.
Where the memory and processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting together one or more of the various circuits of the processor and the memory. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
Those skilled in the art will understand that all or part of the steps in the method according to the above embodiments may be implemented by a program instructing related hardware to complete, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, etc.) or a processor (processor) to execute all or part of the steps in the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
For the reader's convenience, a specific embodiment of the vehicle damage assessment image acquisition apparatus is provided below. The apparatus comprises six parts; its module division differs slightly from that of the embodiment above, but the implementation solves the corresponding technical problem based on the same concept. The apparatus includes:
1. The scene classification module, which judges whether the image captured by the camera at the current moment is a vehicle long scene, medium scene, close scene, or other scene;
The scene classification module obtains the state of the image in the camera video stream in real time — close scene, medium scene, long scene, or other — and this state is used to direct the user's next action. It is implemented by collecting images from actual cases, sorting and labeling them into four categories (close, medium, long, and other, i.e., non-vehicle images; other implementations may add a super-close category), and training a multi-class neural network on these four categories of images. The network structure can be MobileNet, SqueezeNet, or ShuffleNet (lightweight convolutional neural network, CNN, architectures), or alternatively ResNet, VGG, or GoogLeNet; since the classification of video-stream images in this embodiment must run in real time on a mobile phone, the first three architectures are more suitable. The network yields the probability that the image belongs to each category, from which information such as the distance between the camera's optical center and the damaged part of the vehicle is derived and provided to the guidance module, which guides the user through the whole image acquisition.
In some optional embodiments, the scene classification may also be performed by extracting traditional features, such as LBP features, and then performing feature classification using a support vector machine.
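As an illustration of the traditional-feature alternative, the following is a minimal sketch of an LBP histogram extractor in plain NumPy; the SVM stage (e.g. scikit-learn's SVC) is omitted, and the 8-neighbour, 256-bin variant shown here is an assumption, since the patent does not fix the LBP parameters.

```python
import numpy as np

def lbp_histogram(gray):
    """Compute a normalised 256-bin LBP histogram for a grayscale image.

    Each interior pixel is compared against its 8 neighbours; the resulting
    8-bit pattern indexes a histogram that serves as the feature vector
    fed to a downstream classifier such as an SVM.
    """
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Neighbour offsets in clockwise order, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()  # normalise so images of any size are comparable
```

On a perfectly flat image every neighbour equals its centre pixel, so every code is 255 and the histogram collapses into that single bin.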
2. The image sharpness detection module, which obtains the sharpness of the current frame image in the video stream;
The image sharpness detection module judges how sharp an image is by the variance of the image after the Laplace transform. The Laplacian is the simplest isotropic differential operator and is rotation invariant; the Laplace transform of a two-dimensional image function f(x, y) is the isotropic second derivative, defined as:

∇²f = ∂²f/∂x² + ∂²f/∂y²

which is equivalent to convolving the image with the following kernel:

[ 0  1  0 ]
[ 1 -4  1 ]
[ 0  1  0 ]
To eliminate the influence of noise in the image, as shown in fig. 3, the image may first be denoised by Gaussian filtering. Since the Laplacian is the second derivative of the image, it responds to rapid changes in gray value and is therefore commonly used for edge detection. A normal image has clear boundaries, hence a large variance after the Laplace computation; a blurred image carries little boundary information and has a small variance. The Laplacian can therefore be used to judge image sharpness.
As shown in fig. 3, for each image in the video stream, Gaussian filtering removes noise, the Laplace transform is applied to the denoised image, and the variance of the result is taken as the sharpness σ of the image. A sharpness threshold σ_th is set: an image whose sharpness σ is greater than σ_th is a normal image, while one whose variance is less than or equal to σ_th is a blurred image. This module obtains the blur level of the video-stream images in real time and passes the sharpness to the guidance module, which guides the user through the whole image acquisition.
In some alternative embodiments, image sharpness may instead be measured with a Brenner gradient function or a Tenengrad gradient function.
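The Laplacian-variance sharpness check described above can be sketched as follows. The 3×3 kernel matches the one given earlier; the convolution is written out in plain NumPy for self-containment (in practice cv2.GaussianBlur followed by cv2.Laplacian(gray, cv2.CV_64F).var() does the same job), and the threshold value is an assumed placeholder, not one taken from the patent.

```python
import numpy as np

LAPLACE_KERNEL = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=np.float64)

def laplacian_variance(gray):
    """Sharpness score sigma: variance of the Laplacian response over the
    image interior, computed with an explicit 3x3 convolution."""
    g = np.asarray(gray, dtype=np.float64)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACE_KERNEL[dy, dx] * g[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def is_sharp(gray, threshold=100.0):
    # threshold plays the role of sigma_th; 100.0 is an assumed value
    # that would be tuned on real footage
    return laplacian_variance(gray) > threshold
```

A flat image has zero Laplacian response everywhere (variance 0, blurred), while a high-contrast checkerboard produces a large variance (sharp), matching the edge-based reasoning above.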
3. The tracking module, which tracks the damaged part of the vehicle in the image and ensures that the images captured within one acquisition pass all show the same damage;
The main function of the tracking module is to track key points in the video by sparse optical flow. To keep tracking real-time, a batch of lattice points P_t is generated uniformly in the central subregion of the image at time t (extending out from the image center by half the subregion's height above and below and half its width to either side) and used as feature points. These points are tracked forward to frame t+1 with a Lucas-Kanade tracker, giving P_{t+1}, and then tracked backward from frame t+1 to frame t, giving P'_t. The Euclidean distance between P_t and P'_t is taken as the tracking error of each point; the half of the points with the smallest Euclidean distance are kept as the optimal tracking points, and the average error e of the optimal tracking points is taken as the tracking error between two consecutive video frames. A tracking-error threshold e_1 is set; when the tracking error e between consecutive frames is greater than e_1, the tracking is considered to have failed. In some alternative embodiments, tracking methods such as KCF (kernelized correlation filter) may be used instead.
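The forward-backward error selection described above can be sketched as follows, in NumPy only. The Lucas-Kanade tracking itself (e.g. OpenCV's calcOpticalFlowPyrLK, run once forward and once backward) is assumed to have produced the point sets; the exact extent of the central lattice subregion is ambiguous in the text, so a half-size centered window is assumed here.

```python
import numpy as np

def tracking_error(pts_t, pts_back):
    """Per-frame tracking error e: average forward-backward error over the
    best-tracked half of the points.

    pts_t    : (N, 2) original points at frame t.
    pts_back : (N, 2) the same points after tracking t -> t+1 -> t.
    Points whose round trip drifted least are treated as the reliable
    ('optimal') ones, mirroring the keep-the-best-half rule in the text.
    """
    err = np.linalg.norm(np.asarray(pts_t, float) -
                         np.asarray(pts_back, float), axis=1)
    keep = max(1, len(err) // 2)
    return np.sort(err)[:keep].mean()

def grid_points(h, w, step=16):
    """Uniform lattice of seed points in a centered subregion of an h x w
    image.  A half-size central window is an assumption; the patent text
    does not pin down the exact extent."""
    ys = np.arange(h // 4, 3 * h // 4, step)
    xs = np.arange(w // 4, 3 * w // 4, step)
    return np.array([(x, y) for y in ys for x in xs], dtype=np.float32)
```

Tracking would then be declared failed whenever tracking_error(...) exceeds the first threshold e_1, and the per-frame errors would be summed into the accumulated error E used by the decision module.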
4. The orientation judgment module, which judges which of thirteen orientations of the vehicle the current frame belongs to, and saves the per-frame results for deciding the position of the damaged part;
The orientation judgment module judges the orientation of the current frame image in the video stream. As shown in fig. 4, there are thirteen orientations. One implementation uses LBP features with an SVM classifier; a neural network or a Bayesian classifier can also be used. On top of the per-frame classification, the module weights the classification results of all frames in which the key points are tracked, according to a set rule, to improve the accuracy of the vehicle orientation assigned to the damage images in the whole video stream. The weighting rule is:

P_k = Σ_t w_t · p_t(k),  k = 0, 1, …, 12

where w_t is the weight of frame t in the whole video stream, p_t(k) is the probability that frame t is classified as orientation k, and P_0 represents the probability that the orientation of the damage scene captured by the whole video is class 0; likewise P_1 through P_12 represent the probabilities that the orientation of the entire video is class 1 through class 12. The class with the highest probability P_k is taken as the orientation of the current damage video and transmitted to the back-end server by the communication module. This addresses the problem that the back end, receiving only two images, easily misidentifies the position when the same vehicle has several similar components such as headlights and fenders: the front end can judge from many frames and is therefore far more accurate than back-end processing, whereas transmitting all frames to the back end would overload the transmission. During acquisition, the orientation of every frame is judged and recorded between the moment the first image is saved (when timing starts) and the moment the second image is saved (when timing stops); after timing ends, the orientation with the highest probability among all saved results is selected as the vehicle orientation corresponding to the damage images captured this time.
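The video-level fusion rule above can be sketched as follows; the function works for any number of classes (thirteen in the patent), and the choice of per-frame weights (uniform by default) is an assumption, since the patent leaves the weighting rule's coefficients unspecified.

```python
import numpy as np

def video_orientation(frame_probs, weights=None):
    """Fuse per-frame orientation probabilities into one video-level result.

    frame_probs : (T, K) array; row t holds the classifier probabilities
                  p_t(0..K-1) for frame t (K = 13 in the patent).
    weights     : optional per-frame weights w_t (e.g. favouring sharper
                  frames); uniform if omitted.  Computes
                  P_k = sum_t w_t * p_t(k) and returns the argmax class.
    """
    p = np.asarray(frame_probs, dtype=np.float64)
    if weights is None:
        w = np.full(p.shape[0], 1.0 / p.shape[0])
    else:
        w = np.asarray(weights, dtype=np.float64)
        w = w / w.sum()                      # normalise so P_k sums to 1
    fused = w @ p                            # P_0 .. P_(K-1)
    return int(fused.argmax()), fused
```

Only the winning class index (plus, optionally, its probability) needs to be sent to the back end, which is what keeps the transmission load low.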
5. The decision module, which gives real-time feedback and guides the user to capture images of every damage in a case;
The decision module's main function is to guide the user through the whole image acquisition process based on the outputs of the scene classification module, the image sharpness detection module, and the tracking module. It can be implemented by, but is not limited to, the following process:
step 1, prompt the user to aim at the damaged part of the vehicle to start image acquisition, and jump to step 2;
step 2, start scene classification; if the scene classification result of the current frame is medium scene, jump to step 3; if it is close scene, jump to step 4; if it is long scene or other, jump to step 1;
step 3, if the current frame in the video stream is a medium scene and its sharpness σ is greater than the given threshold σ_th, save the current medium-scene image, start timer T, judge the vehicle orientation of every frame in the video stream and save the results, prompt the user to move closer to the damaged part to capture a close-scene image, and begin tracking the key area in the image. Starting from the current frame, compute the accumulated tracking error E between adjacent frames in the video stream. As long as the tracking error e between adjacent frames remains below the given first threshold e_1, tracking is considered successful; when e exceeds e_1, tracking has failed, so delete the medium-scene image and orientation results saved at the beginning of step 3 and jump to step 1. If tracking keeps succeeding, continue prompting the user to approach the damaged part. When the scene classification result of the current frame in the video stream changes from medium scene to close scene, the sharpness σ is greater than the given sharpness threshold σ_th, and the accumulated tracking error E is greater than the given second threshold E_2, save the current close-scene image and end image acquisition: the saved medium-scene and close-scene images become the damage assessment images, and the orientation with the highest probability among all saved vehicle orientation results becomes the orientation of the damage assessment images. If the sharpness σ is less than the sharpness threshold σ_th, or the accumulated tracking error E is less than the second threshold E_2, keep guiding the user by voice to approach the damaged part until a close-scene image satisfying the conditions is captured; if no such close-scene image is captured before the accumulated time T reaches the time threshold T_th, delete the medium-scene image and orientation results saved at the beginning of step 3 and jump to step 1.
step 4, if the current frame in the video stream is a close scene and its sharpness σ is greater than the given threshold σ_th, save the current close-scene image, start timer T, judge the vehicle orientation of every frame in the video stream and save the results, prompt the user to move away from the damaged part to capture a medium-scene image, and begin tracking the key area in the image. Starting from the current frame, compute the accumulated tracking error E between adjacent frames in the video stream. As long as the tracking error e between adjacent frames remains below the given first threshold e_1, tracking is considered successful; when e exceeds e_1, tracking has failed, so delete the saved close-scene image and orientation results and jump to step 1. If tracking keeps succeeding, continue prompting the user to move away from the damaged part. When the scene classification result of the current frame in the video stream changes from close scene to medium scene, the sharpness σ is greater than the given sharpness threshold σ_th, and the accumulated tracking error E is greater than the given second threshold E_2, save the current medium-scene image and end image acquisition: the saved close-scene and medium-scene images become the damage assessment images, and the orientation with the highest probability among all saved vehicle orientation results becomes the orientation of the damage assessment images. If the sharpness σ is less than the sharpness threshold σ_th, or the accumulated tracking error E is less than the second threshold E_2, keep guiding the user by voice to move away from the damaged part until a medium-scene image satisfying the conditions is captured; if no such medium-scene image is captured before the accumulated time T reaches the time threshold T_th, delete the close-scene image and orientation results saved at the beginning of step 4 and jump to step 1.
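The four steps above can be condensed into a small state machine; the following is a hypothetical sketch, in which scene labels, sharpness values, and per-frame tracking errors are supplied by the other modules, and all threshold values are assumed placeholders. Timing and orientation recording are omitted to keep the sketch short.

```python
from dataclasses import dataclass, field

@dataclass
class CaptureGuide:
    """Condensed seek -> tracking -> done flow for one medium/close pair."""
    sharp_min: float = 100.0   # sigma_th (assumed)
    fb_err_max: float = 2.0    # first threshold e_1: per-frame error (assumed)
    acc_err_min: float = 30.0  # second threshold E_2: required motion (assumed)
    state: str = "seek"
    start_scene: str = ""
    saved: list = field(default_factory=list)
    acc_err: float = 0.0

    def feed(self, scene, sharpness, fb_err=0.0):
        if self.state == "seek":
            # Step 2/3/4 entry: a sharp medium or close frame starts a pass.
            if scene in ("medium", "close") and sharpness > self.sharp_min:
                self.saved = [(scene, sharpness)]
                self.start_scene, self.state = scene, "tracking"
                self.acc_err = 0.0
        elif self.state == "tracking":
            if fb_err > self.fb_err_max:
                # Tracking lost: discard the saved image, back to step 1.
                self.saved, self.state = [], "seek"
            else:
                self.acc_err += fb_err
                target = "close" if self.start_scene == "medium" else "medium"
                if (scene == target and sharpness > self.sharp_min
                        and self.acc_err > self.acc_err_min):
                    self.saved.append((scene, sharpness))
                    self.state = "done"   # medium+close pair captured
        return self.state
```

Feeding it a sharp medium frame, then frames with small tracking errors while the user moves in, and finally a sharp close frame drives it from "seek" through "tracking" to "done" with both images saved.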
6. The communication module, which uploads the archived images to the server through the network.
The communication module archives the images collected by the first five modules and sends them to the server for AI (artificial intelligence) damage assessment or manual damage assessment. The archive format is a tree structure: images are filed according to the three levels case -> (damage + orientation) -> medium/close scene, and then sent to the server using the phone's communication facilities.
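The three-level archive tree can be sketched as follows; the field names and key format are illustrative assumptions, not taken from the patent, and the actual upload (phone networking) is out of scope here, so the tree is simply serialised to JSON.

```python
import json

def archive(case_id, captures):
    """Arrange captured image pairs into the tree
    case -> (damage + orientation) -> {medium, close}.

    captures: list of dicts such as
      {"damage": "damage_01", "orientation": 3,
       "medium": "m.jpg", "close": "c.jpg"}
    (hypothetical field names for illustration).
    """
    tree = {"case": case_id, "damages": {}}
    for cap in captures:
        # One node per (damage, orientation) pair, holding both views.
        key = f'{cap["damage"]}_orient{cap["orientation"]}'
        tree["damages"][key] = {"medium": cap["medium"],
                                "close": cap["close"]}
    return tree

# Serialise for upload; the transport itself is device-specific.
payload = json.dumps(archive("case_001", [
    {"damage": "damage_01", "orientation": 3,
     "medium": "m.jpg", "close": "c.jpg"},
]))
```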
The image-capture logic is designed by combining a scene classification algorithm, image sharpness judgment, target tracking, and video-level vehicle orientation analysis; the logic may change somewhat when the scene categories change (e.g., distinguishing super-close, close, medium, far, and super-far) or when the orientation categories change (the thirteen large classes of this patent, 37 small classes, finer subdivisions, or merged classes).
Compared with the prior art, the technical scheme provided by the embodiment of the invention has at least two advantages over manual image capture. First, the acquisition process is guided by the program, so anyone can complete the capture without training, greatly reducing labor cost. Second, the scheme captures medium-scene and close-scene image pairs automatically grouped by damaged part, which makes AI damage assessment more accurate than with images captured by loss adjusters.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (8)

1. A vehicle damage assessment image acquisition method is used for an intelligent terminal, and comprises the following steps:
acquiring a video stream to be processed;
acquiring a current frame image in the video stream to be processed in real time, and judging the image type of the current frame image, wherein the image types include medium scene, close scene, long scene, and others;
if the current frame image in the video stream to be processed is a medium scene and its sharpness σ is greater than a given threshold σ_th, saving the current medium-scene image, starting timer T1, prompting the user to move closer to the damaged part to capture a close-scene image, and beginning to track a first key area in the image; when the scene classification result of the current frame in the video stream changes from medium scene to close scene, saving the current close-scene image and ending image acquisition, and taking the saved medium-scene image and close-scene image as the damage assessment images;
if the current frame in the video stream is a close scene and its sharpness σ is greater than a given threshold σ_th, saving the current close-scene image, starting timer T2, prompting the user to move away from the damaged part to capture a medium-scene image, and beginning to track a second key area in the image; when the scene classification result of the current frame in the video stream changes from close scene to medium scene, saving the current medium-scene image and ending image acquisition, and taking the saved close-scene image and medium-scene image as the damage assessment images;
and if the current frame is a long scene or other, continuing to acquire the video stream to be processed.
2. The vehicle damage assessment image acquisition method according to claim 1, wherein tracking the first key area in the image comprises:
calculating an accumulated tracking error E between adjacent frames in the video stream starting from the current frame;
when the tracking error e between adjacent frames remains less than a given first threshold e_1, considering the tracking successful, continuing the tracking, and prompting the user to approach the damaged part;
when e is greater than e_1, considering the tracking failed, deleting the saved medium-scene image, and continuing to acquire the video stream to be processed.
3. The vehicle damage assessment image acquisition method according to claim 1, wherein tracking the second key area in the image comprises:
calculating an accumulated tracking error E between adjacent frames in the video stream starting from the current frame;
when the tracking error e between adjacent frames remains less than the given first threshold e_1, considering the tracking successful, continuing the tracking, and prompting the user to move away from the damaged part;
when e is greater than e_1, considering the tracking failed, deleting the saved close-scene image, and continuing to acquire the video stream to be processed.
4. The vehicle damage assessment image acquisition method according to claim 1, wherein, when the scene classification result of the current frame image in the video stream changes from medium scene to close scene, saving the current close-scene image and ending image acquisition comprises:
when the scene classification result of the current frame image in the video stream changes from medium scene to close scene, its sharpness σ is greater than a given sharpness threshold σ_th, and the accumulated tracking error E is greater than a given second threshold E_2, saving the current close-scene image and ending image acquisition;
if the sharpness σ is less than the sharpness threshold σ_th, or the accumulated tracking error E is less than the second threshold E_2, continuing to guide the user by voice to approach the damaged part;
if no close-scene image satisfying the above conditions is captured before the accumulated time T1 reaches a time threshold T_th, continuing to acquire the video stream to be processed.
5. The vehicle damage assessment image acquisition method according to claim 1, wherein, when the scene classification result of the current frame image in the video stream changes from close scene to medium scene, saving the current medium-scene image and ending image acquisition comprises:
when the scene classification result of the current frame image in the video stream changes from close scene to medium scene, its sharpness σ is greater than a given sharpness threshold σ_th, and the accumulated tracking error E is greater than a given second threshold E_2, saving the current medium-scene image and ending image acquisition;
if the sharpness σ is less than the sharpness threshold σ_th, or the accumulated tracking error E is less than the second threshold E_2, continuing to guide the user by voice to move away from the damaged part;
if no medium-scene image satisfying the above conditions is captured before the accumulated time T2 reaches a time threshold T_th, continuing to acquire the video stream to be processed.
6. A vehicle damage assessment image acquisition apparatus, characterized by comprising:
the information acquisition module to be processed is used for acquiring a video stream to be processed;
the scene classification module, configured to acquire a current frame image in the video stream to be processed in real time and judge the image type of the current frame image, wherein the image types include medium scene, close scene, long scene, and others;
the medium scene processing module, configured to: if the current frame image in the video stream to be processed is a medium scene and its sharpness σ is greater than a given threshold σ_th, save the current medium-scene image, start timer T1, prompt the user to move closer to the damaged part to capture a close-scene image, and begin tracking a first key area in the image; when the scene classification result of the current frame in the video stream changes from medium scene to close scene, save the current close-scene image and end image acquisition, taking the saved medium-scene image and close-scene image as the damage assessment images;
the close scene processing module, configured to: if the current frame in the video stream is a close scene and its sharpness σ is greater than a given threshold σ_th, save the current close-scene image, start timer T2, prompt the user to move away from the damaged part to capture a medium-scene image, and begin tracking a second key area in the image; when the scene classification result of the current frame in the video stream changes from close scene to medium scene, save the current medium-scene image and end image acquisition, taking the saved close-scene image and medium-scene image as the damage assessment images.
7. A server, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle damage assessment image acquisition method of any one of claims 1 to 5.
8. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the vehicle damage assessment image acquisition method according to any one of claims 1 to 5.
CN202110568258.1A 2021-05-25 2021-05-25 Vehicle damage assessment image acquisition method and device and storage medium Active CN113033517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110568258.1A CN113033517B (en) 2021-05-25 2021-05-25 Vehicle damage assessment image acquisition method and device and storage medium

Publications (2)

Publication Number Publication Date
CN113033517A true CN113033517A (en) 2021-06-25
CN113033517B CN113033517B (en) 2021-08-10


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114005105A (en) * 2021-12-30 2022-02-01 青岛以萨数据技术有限公司 Driving behavior detection method and device and electronic equipment


Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20070291104A1 (en) * 2006-06-07 2007-12-20 Wavetronex, Inc. Systems and methods of capturing high-resolution images of objects
CN107194323A (en) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 Car damage identification image acquiring method, device, server and terminal device
CN107368776A (en) * 2017-04-28 2017-11-21 阿里巴巴集团控股有限公司 Car damage identification image acquiring method, device, server and terminal device
CN111797689A (en) * 2017-04-28 2020-10-20 创新先进技术有限公司 Vehicle loss assessment image acquisition method and device, server and client
CN111914692A (en) * 2017-04-28 2020-11-10 创新先进技术有限公司 Vehicle loss assessment image acquisition method and device


Also Published As

Publication number Publication date
CN113033517B (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN108388879B (en) Target detection method, device and storage medium
CN112862702B (en) Image enhancement method, device, equipment and storage medium
CN111461170A (en) Vehicle image detection method and device, computer equipment and storage medium
Psyllos et al. M-SIFT: A new method for Vehicle Logo Recognition
US20200349187A1 (en) Method and apparatus for data retrieval in a lightfield database
CN111507145A (en) Method, system and device for detecting barrier at storage position of embedded vehicle-mounted all-round looking system
CN111160481B (en) Adas target detection method and system based on deep learning
CN112287911A (en) Data labeling method, device, equipment and storage medium
CN113033517B (en) Vehicle damage assessment image acquisition method and device and storage medium
CN110826415A (en) Method and device for re-identifying vehicles in scene image
CN111010507B (en) Camera auto-focusing method and apparatus, analysis instrument, and storage medium
CN114299363A (en) Training method of image processing model, image classification method and device
CN111191481B (en) Vehicle identification method and system
CN115527050A (en) Image feature matching method, computer device and readable storage medium
US20170199900A1 (en) Server and method for providing city street search service
CN112949423B (en) Object recognition method, object recognition device and robot
CN115272284A (en) Power transmission line defect identification method based on image quality evaluation
CN113313086B (en) Feature vector conversion model processing method, device, server and storage medium
CN114511753A (en) Target detection model updating method, device, equipment and storage medium
CN113469135A (en) Method and device for determining object identity information, storage medium and electronic device
CN113628113A (en) Image splicing method and related equipment thereof
CN113379669A (en) Method, device and equipment for beverage cabinet image recognition
CN110569865A (en) Method and device for recognizing vehicle body direction
CN111860051A (en) Vehicle-based loop detection method and device and vehicle-mounted terminal
CN116797953B (en) Remote sensing data processing system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant