CN110570513A - Method and device for displaying vehicle damage information - Google Patents

Method and device for displaying vehicle damage information

Info

Publication number: CN110570513A (application CN201810942694.9A, granted as CN110570513B)
Authority: CN (China)
Prior art keywords: damage, information, three-dimensional model, vehicle, displaying
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 王萌 (Wang Meng)
Current assignee: Advanced New Technologies Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee / applicant: Alibaba Group Holding Ltd.
Priority: CN201810942694.9A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08: Insurance
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects


Abstract

The embodiments of the present specification provide a method and a device for displaying vehicle damage information, wherein the method comprises the following steps: acquiring at least one image of a vehicle including vehicle damage information; generating a three-dimensional model of the vehicle based on the at least one image; generating, based on the at least one image, respective texture images at a plurality of surface locations of the three-dimensional model, wherein the plurality of surface locations are determined based on the at least one image; performing texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model; and displaying the mapped three-dimensional model to display the damage information.

Description

Method and device for displaying vehicle damage information
Technical Field
The embodiments of the present specification relate to the technical field of image processing, and in particular to a method and a device for displaying vehicle damage information.
Background
In a conventional car insurance claim settlement scenario, the insurance company needs to send professional survey and damage assessment personnel to the accident site to survey and assess the damage in the field, produce a vehicle repair plan and a compensation amount, photograph the scene, and archive the damage assessment photographs so that damage verification personnel can verify the damage and check the price. Because manual survey and damage assessment are required, the insurance company must invest substantial labor costs as well as the cost of professional training. From the experience of ordinary users, the claim settlement period is long: the user must wait for the surveyor to photograph the scene, for the damage assessor to assess the damage at the repair shop, and for the damage verifier to verify the damage in the back office.
With the development of the Internet, a claim settlement scheme has appeared in which the customer takes photographs or video at the scene and transmits them to a damage assessor at a remote operation center, who, together with a damage verifier, assesses the damage and settles the claim by manually inspecting the images. In this scheme, the returned data is usually a set of static planar images taken on site, and the remote operator must inspect the images and manually extract the information related to the vehicle damage.
Therefore, a more effective scheme for displaying vehicle damage information is needed.
Disclosure of Invention
The embodiments of the present specification aim to provide a more effective scheme for displaying vehicle damage information so as to remedy the deficiencies in the prior art.
To achieve the above object, one aspect of the present specification provides a method for displaying vehicle damage information, including:
acquiring at least one image of a vehicle including vehicle damage information;
generating a three-dimensional model of the vehicle based on the at least one image;
generating, based on the at least one image, respective texture images at a plurality of surface locations of the three-dimensional model, wherein the plurality of surface locations are determined based on the at least one image;
performing texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
displaying the mapped three-dimensional model to display the damage information.
In one embodiment, the method for displaying the vehicle damage information further comprises, after acquiring the at least one image of the vehicle including the vehicle damage information, acquiring vehicle model information of the vehicle and acquiring a pre-modeled three-dimensional model based on the vehicle model information, wherein generating the three-dimensional model of the vehicle based on the at least one image comprises modifying the pre-modeled three-dimensional model based on the at least one image to generate the three-dimensional model of the vehicle.
In one embodiment, the method for displaying the vehicle damage information further comprises, after texture mapping is performed at the plurality of surface locations with their respective texture images, obtaining vehicle damage information based on the at least one image and adding information to the mapped three-dimensional model based on the vehicle damage information.
In one embodiment, in the method for displaying the vehicle damage information, obtaining the vehicle damage information based on the at least one image includes performing vehicle damage detection and identification based on the at least one image using a predetermined algorithm to predict at least one of the following information: the damaged part, the damage location, the damage type, and the damage degree.
In one embodiment, in the method for displaying the vehicle damage information, adding information to the mapped three-dimensional model based on the vehicle damage information comprises at least one of:
adding information for highlighting the damage location on the mapped three-dimensional model; and
adding information relating to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the method for displaying the vehicle damage information, adding information for highlighting the damage location on the mapped three-dimensional model comprises highlighting at the damage location.
In one embodiment, in the method for displaying the vehicle damage information, adding information relating to at least one of the damaged part, the damage type, and the damage degree comprises displaying, in different colors on the mapped three-dimensional model, any of: different damaged parts, different damage types, and different damage degrees.
In one embodiment, in the method for displaying the vehicle damage information, adding information relating to at least one of the damaged part, the damage type, and the damage degree comprises adding textual information relating to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the method for displaying the vehicle damage information, the vehicle damage information comprises a plurality of damage locations, wherein displaying the mapped three-dimensional model comprises automatically displaying each damage location of the mapped three-dimensional model in turn based on the distribution of the plurality of damage locations.
In one embodiment, in the method for displaying the vehicle damage information, the plurality of damage locations includes a first damage location corresponding to a first damage, wherein automatically displaying each damage location of the mapped three-dimensional model in turn includes displaying the first damage at the first damage location from an optimal viewing angle, the optimal viewing angle including a first angle and a first distance, wherein the first angle is an angle directly facing the first damage and the first distance is determined based on the location and the degree of the first damage.
In one embodiment, in the method for displaying the vehicle damage information, displaying the mapped three-dimensional model comprises interactively displaying the mapped three-dimensional model.
In one embodiment, in the method for displaying the vehicle damage information, displaying the mapped three-dimensional model includes displaying the mapped three-dimensional model through any one of the following display devices: a planar display, an immersive panoramic display, a VR/AR head-mounted display, and a holographic projection device.
Another aspect of the present specification provides an apparatus for displaying vehicle damage information, including:
a first acquisition unit configured to acquire at least one image of a vehicle including vehicle damage information;
a first generating unit configured to generate a three-dimensional model of the vehicle based on the at least one image;
a second generating unit configured to generate, based on the at least one image, respective texture images at a plurality of surface positions of the three-dimensional model, wherein the plurality of surface positions are determined based on the at least one image;
a second obtaining unit configured to perform texture mapping at the plurality of surface positions with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
a presentation unit configured to present the mapped three-dimensional model to present the damage information.
In one embodiment, the apparatus for displaying the vehicle damage information further comprises: a third acquisition unit configured to acquire vehicle model information of the vehicle after the at least one image of the vehicle including the vehicle damage information is acquired; and a fourth acquisition unit configured to acquire a pre-modeled three-dimensional model based on the vehicle model information, wherein the first generating unit is further configured to modify the pre-modeled three-dimensional model based on the at least one image to generate the three-dimensional model of the vehicle.
In one embodiment, the apparatus for displaying the vehicle damage information further includes a fifth obtaining unit configured to obtain vehicle damage information based on the at least one image after texture mapping is performed at the plurality of surface positions with their respective texture images, and an adding unit configured to add information to the mapped three-dimensional model based on the vehicle damage information.
In one embodiment, in the apparatus for displaying the vehicle damage information, the fifth obtaining unit is further configured to perform vehicle damage detection and identification based on the at least one image using a predetermined algorithm to predict at least one of the following information: the damaged part, the damage location, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying the vehicle damage information, the adding unit includes at least one of the following subunits:
a first adding subunit configured to add information for highlighting the damage location on the mapped three-dimensional model; and
a second adding subunit configured to add information relating to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying the vehicle damage information, the first adding subunit is further configured to highlight at the damage location.
In one embodiment, in the apparatus for displaying the vehicle damage information, the second adding subunit is further configured to display, in different colors on the mapped three-dimensional model, any of: different damaged parts, different damage types, and different damage degrees.
In one embodiment, in the apparatus for displaying the vehicle damage information, the second adding subunit is further configured to add, on the mapped three-dimensional model, textual information relating to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying the vehicle damage information, the vehicle damage information includes a plurality of damage locations, wherein the presentation unit is further configured to automatically present each damage location of the mapped three-dimensional model in turn based on the distribution of the plurality of damage locations.
In one embodiment, in the apparatus for displaying the vehicle damage information, the plurality of damage locations includes a first damage location corresponding to a first damage, wherein the presentation unit is further configured to present the first damage at the first damage location from an optimal viewing angle, the optimal viewing angle including a first angle and a first distance, wherein the first angle is an angle directly facing the first damage and the first distance is determined based on the location and the degree of the first damage.
In one embodiment, in the apparatus for displaying the vehicle damage information, the presentation unit is further configured to interactively present the mapped three-dimensional model.
In one embodiment, in the apparatus for displaying the vehicle damage information, the presentation unit is further configured to present the mapped three-dimensional model through any one of the following display devices: a planar display, an immersive panoramic display, a VR/AR head-mounted display, and a holographic projection device.
Another aspect of the present specification provides a computing device including a memory and a processor, wherein the memory stores executable code and the processor, when executing the executable code, implements any one of the above methods for displaying vehicle damage information.
According to the vehicle damage information display scheme of the embodiments of the present specification, a three-dimensional model of the vehicle is reconstructed based on the images uploaded by the user, and the images including the damage are transformed and mapped onto the model, so that damage assessment and verification personnel can inspect the damage more easily. Further, if damage assessment is performed automatically by an algorithm, the assessment result can be displayed on the model in an enhanced manner, which further reduces the difficulty of the verification work. This lowers the skill requirements on damage assessment and verification personnel and saves labor costs, while also reducing the error rate and improving working efficiency.
Drawings
The embodiments of the present specification may be made clearer by describing them with reference to the accompanying drawings:
Fig. 1 shows a schematic diagram of a system 100 for presenting vehicle damage information according to an embodiment of the present specification;
Fig. 2 shows a flowchart of a method of presenting vehicle damage information according to an embodiment of the present specification;
Fig. 3 shows a scratch damage on a component of a vehicle;
Fig. 4 shows a long shot of the scratch damage of Fig. 3;
Fig. 5 shows the presentation after texture mapping of the three-dimensional model of the accident vehicle based on Figs. 3 and 4;
Fig. 6 shows a flowchart of a method of presenting vehicle damage information according to another embodiment of the present specification;
Fig. 7 shows an apparatus 700 for displaying vehicle damage information according to an embodiment of the present specification.
Detailed Description
The embodiments of the present specification will be described below with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of a system 100 for presenting vehicle damage information according to an embodiment of the present specification. As shown in Fig. 1, the system 100 includes a modeling module 11, a texture generation module 12, a vehicle damage detection module 13, and a display module 14. The system 100 is, for example, a server of an insurance company. After a user (e.g., the insured vehicle owner) uploads at least one vehicle damage image or video (hereinafter collectively referred to as damage images) to the system 100, the system 100 first inputs the damage images into the modeling module 11, where a three-dimensional model of the damaged vehicle is generated based on them. The system 100 also inputs the damage images into the texture generation module 12, where, based on the vehicle surface positions shown in the damage images, the at least one image is mapped to the surface of the three-dimensional model so as to generate textures at the corresponding surface positions of the model. In addition, the system 100 inputs the damage images into the vehicle damage detection module 13, where vehicle damage detection and identification are performed on the damage images using an existing algorithm to predict the vehicle damage information. The textures generated by the texture generation module 12 may then be attached to the three-dimensional model generated by the modeling module 11, and information related to the damage information may be added for display in the display module 14. In the display module 14, the three-dimensional model can be displayed interactively or presented automatically according to the damage locations.
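The data flow through modules 11 to 14 can be sketched as follows. Every function here is a placeholder stand-in, an assumption for illustration rather than an API from the specification; real implementations would wrap three-dimensional reconstruction, texture generation, a trained damage detector, and a renderer.

```python
def build_model(images):                 # modeling module 11
    return {"mesh": f"model_from_{len(images)}_images"}

def make_textures(images, model):        # texture generation module 12
    return [f"tex_{i}" for i in range(len(images))]

def detect_damage(images):               # vehicle damage detection module 13
    return {"locations": [], "types": []}

def render(model, textures, damage):     # display module 14
    return {"model": model, "textures": textures, "damage": damage}

def run_pipeline(damage_images, detect=True):
    """Run the uploaded damage images through the four modules of system 100."""
    model = build_model(damage_images)
    textures = make_textures(damage_images, model)
    # Module 13 is optional: its output is only used to enhance the model.
    damage = detect_damage(damage_images) if detect else None
    return render(model, textures, damage)
```

The `detect` flag mirrors the point made below that the damage detection module is not essential to the basic display scheme.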
Through the system 100, the damage condition of the whole vehicle can be displayed to damage assessment or verification personnel comprehensively, intuitively, and accurately, helping them assess the damage, verify the claim, and so on, rapidly and accurately.
It is to be understood that the system 100 shown in Fig. 1 is merely exemplary, and systems according to embodiments of the present specification are not limited to the configuration shown in Fig. 1. For example, the vehicle damage detection module 13 is not essential; the damage information it detects is only used to enhance the three-dimensional model.
Fig. 2 shows a flowchart of a method for displaying vehicle damage information according to an embodiment of the present specification. The method comprises the following steps:
at step S202, acquiring at least one image of the vehicle including the vehicle damage information;
at step S204, generating a three-dimensional model of the vehicle based on the at least one image;
at step S206, generating respective texture images at a plurality of surface locations of the three-dimensional model based on the at least one image, wherein the plurality of surface locations are determined based on the at least one image;
at step S208, performing texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
at step S210, displaying the mapped three-dimensional model to show the vehicle damage information.
First, at step S202, at least one image of the vehicle including the vehicle damage information is acquired. The image may be a still image or a frame taken from a video. Typically, the at least one image is obtained by the insured vehicle owner uploading at least one damage photograph or video of the accident vehicle over a network. In one example, a user (e.g., the accident vehicle owner) uploads two photographs of the vehicle damage, as shown in Figs. 3 and 4, as the at least one image, where Fig. 3 shows a scratch damage present on a part of the vehicle and Fig. 4 shows a long shot of the scratch damage. Alternatively, the at least one image may be obtained by a surveyor of the insurance company taking photographs or video at the scene of the accident. In one embodiment, if the volume of photographs or videos uploaded by the owner is too large, they can be fed into an existing model to coarsely select the images most relevant to the accident as the at least one image.
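The coarse selection step above can be sketched as a simple top-k filter. The relevance scores are assumed to come from a pre-trained relevance model, which is not part of this sketch; the function name and thresholds are illustrative assumptions.

```python
def select_relevant_images(scored_images, k=10, min_score=0.5):
    """Coarsely select the k images most relevant to the accident.

    scored_images: list of (image_id, relevance_score) pairs, where the
    score is assumed to be produced by an existing relevance model.
    """
    # Drop images the model considers unrelated to the accident.
    kept = [(img, s) for img, s in scored_images if s >= min_score]
    # Keep the highest-scoring images first, at most k of them.
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return [img for img, _ in kept[:k]]
```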
At step S204, a three-dimensional model of the vehicle is generated based on the at least one image. Here, the modeling may be performed based on the at least one image using any of the various three-dimensional modeling techniques known in the art. In one embodiment, an accurately pre-modeled three-dimensional model of the vehicle type may be obtained based on vehicle model information of the accident vehicle, provided by the user or identified by an algorithm from the at least one image. The three-dimensional model may then be modified based on the at least one image. For example, it may be determined from the at least one image that the left front door of the accident vehicle has a dent; in this case, the corresponding position of the front door on the existing three-dimensional model may be modified into a recessed structure. In addition, the pre-modeled three-dimensional model may include the surface texture corresponding to the vehicle type, so that the vehicle can be displayed more realistically.
At step S206, respective texture images at a plurality of surface locations of the three-dimensional model are generated based on the at least one image, wherein the plurality of surface locations are determined based on the at least one image. As will be understood by those skilled in the art, before texture mapping is performed on a three-dimensional model, the surface of the model is generally divided into a plurality of adjacent small patches, for example triangular patches, and texture mapping is then performed by obtaining the texture corresponding to each patch from a planar photograph corresponding to the model. Likewise, in the embodiments of the present specification, based on the vehicle surface positions shown in the at least one image, the at least one image is segmented along the triangular patches to obtain the small image corresponding to each triangular patch at the corresponding position on the three-dimensional model. These small images are then transformed based on the angle, position, etc. of the camera, thereby generating the textures of the corresponding triangular patches on the three-dimensional model. It is to be understood that, in the embodiments of the present specification, the method of generating the texture of the three-dimensional model from the images is not limited to the above; various texture generation methods available to those skilled in the art may be employed.
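The geometric core of this step, finding the region of a photograph that corresponds to a given triangular patch, is a standard pinhole-camera projection. A minimal sketch, assuming known camera intrinsics K and extrinsics (R, t); the function name is illustrative:

```python
import numpy as np

def project_patch(vertices, K, R, t):
    """Project a triangular patch of the model into a photograph.

    vertices: (3, 3) array, one model-space vertex per row.
    K: (3, 3) camera intrinsics; R (3, 3) and t (3,) camera extrinsics.
    Returns the (3, 2) pixel coordinates delimiting the region of the
    photo that becomes this patch's texture.
    """
    cam = vertices @ R.T + t           # model space -> camera space
    img = cam @ K.T                    # pinhole projection
    return img[:, :2] / img[:, 2:3]    # perspective divide -> pixels
```

For example, with an identity pose and a focal length of 100 pixels, a small triangle one meter in front of the camera projects to pixels near the principal point; the pixel triangle then indexes the source photo to cut out the patch's texture.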
At step S208, texture mapping is performed at the plurality of surface locations with their respective texture images, respectively, to obtain the mapped three-dimensional model. As described for step S206, each texture image corresponds to a specific triangular patch (i.e., surface position), so that, according to this correspondence, each texture image is mapped onto its triangular patch. The vehicle damage information contained in the at least one image is thereby mapped onto the three-dimensional model, where it can be displayed visually.
At step S210, the mapped three-dimensional model is displayed to show the vehicle damage information. In the embodiments of the present specification, the mapped three-dimensional model may be displayed by any one of the following display devices: a planar display, an immersive panoramic display, a VR/AR head-mounted display, or a holographic projection device. In one embodiment, the mapped three-dimensional model may be presented interactively, i.e., a user (a damage assessor or a damage verifier) may rotate and scale the three-dimensional model by touch or with a mouse on a display, etc., in order to find the damage on the vehicle, judge its extent, confirm the assessment result, and so on.
Fig. 5 schematically shows the presentation after texture mapping of the three-dimensional model of the accident vehicle based on Figs. 3 and 4. The effect shown in Fig. 5 is only schematic; an actual texture-mapped three-dimensional model is more natural and realistic. As shown in Fig. 3, a slight scratch damage on a part can be shown clearly only in a close-up taken near the part. In the prior art this poses a problem: from the isolated image, the damage assessor cannot tell on which part of the vehicle the damage is located. The assessor therefore needs to combine other related images, for example Fig. 4, to imagine the relative positions of the vehicle and the camera when the image was taken, i.e., that the area at which the camera was aimed is near the left rear wheel of the vehicle, and thereby recognize that the damage occurred on the "left rear fender" of the vehicle. This requires the damage assessment personnel to have strong spatial reasoning ability and a certain accumulation of experience. Consequently, the prior-art claim settlement scheme is costly, inefficient, and error-prone.
With the embodiments of the present specification, a damage verifier can observe a three-dimensional model of the vehicle such as the one shown in Fig. 5, in which the texture map on the model is partly or wholly obtained by transforming the images uploaded by the user. When observing the vehicle model, the damage assessment and verification personnel can freely translate, rotate, and scale it to change the viewing angle. The location and the component of the damage can thus be identified without relying on imaginative reasoning, leaving only the type and the degree of the damage to be judged.
Fig. 6 shows a flowchart of a method of presenting vehicle damage information according to another embodiment of the present specification. As shown in Fig. 6, the method comprises the following steps:
at step S602, acquiring at least one image of the vehicle including the vehicle damage information;
at step S604, generating a three-dimensional model of the vehicle based on the at least one image;
at step S606, generating respective texture images at a plurality of surface locations of the three-dimensional model based on the at least one image, wherein the plurality of surface locations are determined based on the at least one image;
at step S608, performing texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model;
at step S610, acquiring vehicle damage information based on the at least one image;
at step S612, adding information to the mapped three-dimensional model based on the vehicle damage information to obtain an enhanced three-dimensional model; and
at step S614, displaying the enhanced three-dimensional model to show the vehicle damage information.
In the method shown in Fig. 6, the implementation of steps S602 to S608 is the same as that of steps S202 to S208 in Fig. 2 and is not described again here.
At step S610, vehicle damage information is acquired based on the at least one image. In one embodiment, vehicle damage detection and identification are performed on the at least one image using a predetermined algorithm to predict at least one of the following: the damaged part, the damage location, the damage type, and the damage degree. The predetermined algorithm may be, for example, a pre-trained damage detection model that outputs the corresponding vehicle damage information for the input images. The damage detection model may be obtained by training with a large number of annotated images including damage. In the embodiments of the present specification, the acquisition of the vehicle damage information in this step is not limited to the above manner; for example, the vehicle damage information may also be acquired from a text statement submitted by the accident vehicle owner.
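The output of such a detection model can be represented as a small record per predicted damage. The field names and the confidence filter below are illustrative assumptions; the specification only states that part, location, type, and degree may be predicted.

```python
from dataclasses import dataclass

@dataclass
class DamageDetection:
    part: str          # e.g. "left rear fender"
    location: tuple    # bounding box (x, y, w, h) in the source image
    damage_type: str   # e.g. "scratch", "dent"
    degree: str        # e.g. "light", "moderate", "severe"
    confidence: float  # model confidence in [0, 1]

def filter_detections(detections, threshold=0.6):
    """Discard low-confidence predictions before enhancing the model."""
    return [d for d in detections if d.confidence >= threshold]
```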
At step S612, information is added to the mapped three-dimensional model based on the vehicle damage information to obtain an enhanced three-dimensional model. Adding information to the mapped three-dimensional model may include adding information for highlighting the damage location, for example by highlighting the damage location on the three-dimensional model or indicating it with an arrow. It may further comprise adding information related to at least one of the damaged part, the damage type, and the damage degree, for example by displaying different damaged parts, different damage types, or different damage degrees in different colors on the mapped three-dimensional model, or by adding textual information describing the damage on the mapped model. Alternatively, the damage information may be displayed by overlaying both colors and text.
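A minimal sketch of the color-coding idea; the particular damage types and RGB values are assumptions chosen for illustration, since the specification only requires that different types be distinguished by different colors.

```python
# Illustrative color scheme for distinguishing damage types on the
# enhanced model (RGB tuples); the mapping itself is an assumption.
DAMAGE_TYPE_COLORS = {
    "scratch": (255, 215, 0),   # yellow
    "dent":    (255, 140, 0),   # orange
    "crack":   (255, 0, 0),     # red
}

def highlight_color(damage_type, default=(0, 255, 0)):
    """Color used to highlight a damage region of the given type."""
    return DAMAGE_TYPE_COLORS.get(damage_type, default)
```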
At step S614, the enhanced three-dimensional model is displayed to show the vehicle damage information. In one embodiment, the presentation may be designed according to the distribution of the damage locations: for example, when a plurality of damages on the accident vehicle are located at a plurality of damage locations, a model presentation script may be generated that automatically shows each damage location of the mapped three-dimensional model in turn based on the distribution of those locations. In one embodiment, the model can be automatically rotated, translated, and scaled to the optimal viewing angle for each damage in turn, facilitating review without excessive manual interaction by the damage verification personnel. The optimal viewing angle includes an angle and a distance relative to the damage: the angle is the angle directly facing the damage, and the distance is determined based on the location and the degree of the damage. For example, the larger the area of the damage location, the farther the distance, so that the whole region fits in view; and the lighter the damage, the closer the distance, so that it appears larger.
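The distance component of the optimal viewing angle can be sketched as follows. The field of view, margin, and degree scaling are assumptions introduced for illustration; the specification only says the distance depends on the location (area) and degree of the damage.

```python
import math

def optimal_view_distance(damage_area, degree_scale=1.0,
                          fov_deg=45.0, margin=1.2):
    """Camera distance for the 'optimal viewing angle' of one damage.

    The camera faces the damage directly; the distance is chosen so a
    circle with the damage region's area fills the field of view with
    some margin, and is scaled down for lighter damage
    (degree_scale < 1) so that light damage is viewed from closer up.
    """
    radius = math.sqrt(damage_area / math.pi)       # approximate region size
    base = radius / math.tan(math.radians(fov_deg) / 2.0)
    return margin * degree_scale * base
```

A presentation script would then visit each damage location in turn, orienting the camera along the surface normal at the damage and placing it at this distance.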
Additionally, in the method, the enhanced three-dimensional model may also be presented interactively. For the display devices usable in the method, reference may be made to the description of the method shown in fig. 2, which is not repeated here.
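A minimal sketch of such interactive viewing controls, assuming a generic orbit-style camera; the class name, handler names, and step sizes are illustrative, and a real display device would map its own input events onto them:

```python
class OrbitCamera:
    """Free rotate/translate/scale state for interactively viewing the model."""

    def __init__(self):
        self.yaw, self.pitch = 0.0, 0.0   # rotation around the model (radians)
        self.pan = [0.0, 0.0]             # translation in the view plane
        self.zoom = 1.0                   # scale factor, kept positive

    def on_drag(self, dx, dy):
        # Primary drag rotates the model.
        self.yaw += dx * 0.01
        self.pitch += dy * 0.01

    def on_pan(self, dx, dy):
        # Secondary drag translates the view.
        self.pan[0] += dx
        self.pan[1] += dy

    def on_scroll(self, delta):
        # Scroll or pinch scales the view, clamped so it never collapses to zero.
        self.zoom = max(0.1, self.zoom * (1.1 ** delta))
```

The renderer reads this state each frame to position the virtual camera around the mapped model.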
Fig. 7 shows an apparatus 700 for displaying vehicle damage information according to an embodiment of the present disclosure, including:
a first acquisition unit 71 configured to acquire at least one image of the vehicle including the vehicle damage information;
A first generating unit 72 configured to generate a three-dimensional model of the vehicle based on the at least one image;
A second generating unit 73 configured to generate, based on the at least one image, respective texture images at a plurality of surface positions of the three-dimensional model, wherein the plurality of surface positions are determined based on the at least one image;
a second obtaining unit 74 configured to perform texture mapping at the plurality of surface positions with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
a presentation unit 75 configured to present the mapped three-dimensional model to present the vehicle damage information.
In one embodiment, the apparatus for displaying vehicle damage information further comprises: a third acquisition unit 76 configured to acquire vehicle model information of the vehicle after acquiring the at least one image of the vehicle including the vehicle damage information; and a fourth acquisition unit 77 configured to acquire a previously modeled three-dimensional model based on the vehicle model information, wherein the first generating unit is further configured to modify the previously modeled three-dimensional model based on the at least one image to generate the three-dimensional model of the vehicle.
In one embodiment, the apparatus for displaying vehicle damage information further comprises a fifth obtaining unit 78 configured to obtain vehicle damage information based on the at least one image after texture mapping is performed at the plurality of surface locations with their respective texture images, and an adding unit 79 configured to add information to the mapped three-dimensional model based on the vehicle damage information.
In one embodiment, in the apparatus for displaying vehicle damage information, the fifth obtaining unit 78 is further configured to perform vehicle damage detection and identification based on the at least one image using a predetermined algorithm to predict at least one of the following information: the damaged part, the damage location, the damage type, and the damage degree.
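The four kinds of information predicted by such an algorithm can be carried in a simple record for downstream display. A minimal Python sketch, in which the field names and summary format are illustrative assumptions rather than structures defined by the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DamagePrediction:
    """One detected damage; field names are illustrative assumptions."""
    part: str                             # damaged part, e.g. "front bumper"
    location: Tuple[float, float, float]  # damage position on the model surface
    damage_type: str                      # e.g. "scratch", "dent", "crack"
    degree: float                         # severity; heavier damage -> larger value

def summarize(predictions: List[DamagePrediction]) -> List[str]:
    """One text line per detected damage, e.g. for review by assessment personnel."""
    return [f"{p.part}: {p.damage_type} (degree {p.degree:.2f})" for p in predictions]
```

The adding unit could consume such records to place highlights, colors, and text at each predicted location.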
In one embodiment, in the apparatus for displaying vehicle damage information, the adding unit 79 includes at least one of the following sub-units:
a first adding subunit 791 configured to add, on the mapped three-dimensional model, information for highlighting the damage location; and
a second adding subunit 792 configured to add, on the mapped three-dimensional model, information related to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying vehicle damage information, the first adding subunit 791 is further configured to highlight at the damage location.
In one embodiment, in the apparatus for displaying vehicle damage information, the second adding subunit 792 is further configured to display, on the mapped three-dimensional model, any one of the following distinguished by different colors: different damaged parts, different damage types, and different damage degrees.
In one embodiment, in the apparatus for displaying vehicle damage information, the second adding subunit 792 is further configured to add text information related to at least one of: the damaged part, the damage type, and the damage degree.
In one embodiment, in the apparatus for displaying vehicle damage information, the vehicle damage information includes a plurality of damage locations, wherein the display unit 75 is further configured to automatically display each damage location of the mapped three-dimensional model in turn based on a distribution of the plurality of damage locations.
In one embodiment, in the apparatus for displaying vehicle damage information, the plurality of damage locations includes a first damage location corresponding to a first damage, wherein the display unit 75 is further configured to display the first damage at the first damage location with an optimal viewing angle, the optimal viewing angle including a first angle and a first distance, wherein the first angle is the angle directly facing the first damage, and the first distance is determined based on the location and degree of the first damage.
In one embodiment, in the apparatus for displaying vehicle damage information, the display unit 75 is further configured to interactively display the mapped three-dimensional model.
In one embodiment, in the apparatus for displaying vehicle damage information, the display unit 75 is further configured to display the mapped three-dimensional model through any one of the following display devices: a planar display, an immersive panoramic display, a VR/AR head-mounted display, and a holographic projection device.
Another aspect of the present specification provides a computing device, including a memory and a processor, wherein the memory stores executable code, and the processor executes the executable code to implement any one of the above methods for displaying vehicle damage information.
According to the above vehicle damage information display scheme, a three-dimensional model of the vehicle is reconstructed based on images uploaded by the user, and the images including the damage are transformed and mapped onto the model, making the damage easier for damage assessment and verification personnel to observe. When observing the vehicle model, the personnel can freely translate, rotate, and scale it to change the viewing angle, so the location and the component of each damage can be known without relying on imagination and inference, and only the type and degree of the damage need to be judged. Furthermore, if an algorithm is used for automatic damage assessment, the assessment results can be displayed on the model in an enhanced manner, for example by superimposing visual guide marks, special colors, or descriptive text, which further reduces the difficulty of the assessment work. The model display manner can also be designed according to the distribution of the damage locations: for example, where a vehicle has a plurality of damages located at a plurality of component positions, a model display script can be generated that automatically rotates, translates, and scales the model in turn to the optimal viewing angle for each damage, allowing the personnel to observe without excessive manual interaction. This reduces the skill requirements on damage assessment and verification personnel, saves labor cost, reduces the error rate, and improves work efficiency.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
It will be further appreciated by those of ordinary skill in the art that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments are intended to further illustrate the objects, technical solutions, and advantages of the present invention in detail. It should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (25)

1. A method of displaying vehicle damage information, comprising:
Acquiring at least one image of a vehicle including vehicle damage information;
Generating a three-dimensional model of the vehicle based on the at least one image;
Generating, based on the at least one image, respective texture images at a plurality of surface locations of the three-dimensional model, wherein the plurality of surface locations are determined based on the at least one image;
Performing texture mapping at the plurality of surface locations with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
displaying the mapped three-dimensional model to display the vehicle damage information.
2. The method of displaying vehicle damage information of claim 1, further comprising, after acquiring the at least one image of the vehicle including the vehicle damage information, acquiring vehicle model information of the vehicle and acquiring a previously modeled three-dimensional model based on the vehicle model information, wherein generating the three-dimensional model of the vehicle based on the at least one image comprises modifying the previously modeled three-dimensional model based on the at least one image to generate the three-dimensional model of the vehicle.
3. The method of displaying vehicle damage information of claim 1, further comprising, after performing texture mapping at the plurality of surface locations with their respective texture images, obtaining vehicle damage information based on the at least one image, and adding information to the mapped three-dimensional model based on the vehicle damage information.
4. The method of displaying vehicle damage information of claim 3, wherein obtaining vehicle damage information based on the at least one image comprises performing vehicle damage detection and identification based on the at least one image using a predetermined algorithm to predict at least one of the following information: the damaged part, the damage location, the damage type, and the damage degree.
5. The method of displaying vehicle damage information of claim 4, wherein adding information to the mapped three-dimensional model based on the vehicle damage information comprises at least one of:
adding, on the mapped three-dimensional model, information for highlighting the damage location; and
adding information related to at least one of: the damaged part, the damage type, and the damage degree.
6. The method of displaying vehicle damage information of claim 5, wherein adding, on the mapped three-dimensional model, information for highlighting the damage location comprises highlighting at the damage location.
7. The method of displaying vehicle damage information of claim 5, wherein adding, to the mapped three-dimensional model, information related to at least one of the damaged part, the damage type, and the damage degree comprises displaying, on the mapped three-dimensional model, any one of the following distinguished by different colors: different damaged parts, different damage types, and different damage degrees.
8. The method of displaying vehicle damage information of claim 5, wherein adding, to the mapped three-dimensional model, information related to at least one of the damaged part, the damage type, and the damage degree comprises adding text information related to at least one of: the damaged part, the damage type, and the damage degree.
9. The method of displaying vehicle damage information of claim 3, wherein the vehicle damage information comprises a plurality of damage locations, and wherein displaying the mapped three-dimensional model comprises automatically displaying each damage location of the mapped three-dimensional model in turn based on a distribution of the plurality of damage locations.
10. The method of displaying vehicle damage information of claim 9, wherein the plurality of damage locations comprises a first damage location corresponding to a first damage, and wherein automatically displaying each damage location of the mapped three-dimensional model in turn comprises displaying the first damage at the first damage location with an optimal viewing angle, the optimal viewing angle comprising a first angle and a first distance, wherein the first angle is the angle directly facing the first damage, and the first distance is determined based on the location and degree of the first damage.
11. The method of displaying vehicle damage information of any of claims 1-3, wherein displaying the mapped three-dimensional model comprises interactively displaying the mapped three-dimensional model.
12. The method of displaying vehicle damage information of any of claims 1-3, wherein displaying the mapped three-dimensional model comprises displaying the mapped three-dimensional model via any one of the following display devices: a planar display, an immersive panoramic display, a VR/AR head-mounted display, and a holographic projection device.
13. An apparatus for displaying vehicle damage information, comprising:
a first acquisition unit configured to acquire at least one image of a vehicle including vehicle damage information;
A first generating unit configured to generate a three-dimensional model of the vehicle based on the at least one image;
A second generating unit configured to generate, based on the at least one image, respective texture images at a plurality of surface positions of the three-dimensional model, wherein the plurality of surface positions are determined based on the at least one image;
a second obtaining unit configured to perform texture mapping at the plurality of surface positions with their respective texture images, respectively, to obtain a mapped three-dimensional model; and
a presentation unit configured to present the mapped three-dimensional model to present the vehicle damage information.
14. The apparatus for displaying vehicle damage information of claim 13, further comprising: a third acquisition unit configured to acquire vehicle model information of the vehicle after acquiring the at least one image of the vehicle including the vehicle damage information; and a fourth acquisition unit configured to acquire a previously modeled three-dimensional model based on the vehicle model information, wherein the first generation unit is further configured to modify the previously modeled three-dimensional model based on the at least one image to generate the three-dimensional model of the vehicle.
15. The apparatus for displaying vehicle damage information of claim 13, further comprising a fifth obtaining unit configured to obtain vehicle damage information based on the at least one image after texture mapping at the plurality of surface locations with their respective texture images, respectively, and an adding unit configured to add information to the mapped three-dimensional model based on the vehicle damage information.
16. The apparatus for displaying vehicle damage information of claim 15, wherein the fifth obtaining unit is further configured to perform vehicle damage detection and identification based on the at least one image using a predetermined algorithm to predict at least one of the following information: the damaged part, the damage location, the damage type, and the damage degree.
17. The apparatus for displaying vehicle damage information of claim 16, wherein said adding unit comprises at least one of the following sub-units:
a first adding subunit configured to add, on the mapped three-dimensional model, information for highlighting the damage location; and
a second adding subunit configured to add information related to at least one of: damaged parts, type of damage and degree of damage.
18. The apparatus for displaying vehicle damage information of claim 17, wherein the first adding subunit is further configured to highlight at the damage location.
19. The apparatus for displaying vehicle damage information of claim 17, wherein the second adding subunit is further configured to display, on the mapped three-dimensional model, any one of the following distinguished by different colors: different damaged parts, different damage types, and different damage degrees.
20. The apparatus for displaying vehicle damage information of claim 17, wherein said second adding subunit is further configured to add text information on said mapped three-dimensional model relating to at least one of: damaged parts, type of damage and degree of damage.
21. The apparatus for displaying vehicle damage information of claim 15, wherein the vehicle damage information comprises a plurality of damage locations, wherein the display unit is further configured to automatically display each damage location of the mapped three-dimensional model in turn based on a distribution of the plurality of damage locations.
22. The apparatus for displaying vehicle damage information of claim 21, wherein the plurality of damage locations comprises a first damage location corresponding to a first damage, wherein the display unit is further configured to display the first damage at the first damage location with an optimal viewing angle, the optimal viewing angle comprising a first angle and a first distance, wherein the first angle is an angle directly facing the first damage, and the first distance is determined based on the location and extent of the first damage.
23. The apparatus for displaying vehicle damage information of any of claims 13-15, wherein the display unit is further configured to interactively display the mapped three-dimensional model.
24. The apparatus for displaying vehicle damage information of any of claims 13-15, wherein the display unit is further configured to display the mapped three-dimensional model through any one of the following display devices: a planar display, an immersive panoramic display, a VR/AR head-mounted display, and a holographic projection device.
25. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that, when executed by the processor, performs the method of any of claims 1-12.
CN201810942694.9A 2018-08-17 2018-08-17 Method and device for displaying vehicle loss information Active CN110570513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810942694.9A CN110570513B (en) 2018-08-17 2018-08-17 Method and device for displaying vehicle loss information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810942694.9A CN110570513B (en) 2018-08-17 2018-08-17 Method and device for displaying vehicle loss information

Publications (2)

Publication Number Publication Date
CN110570513A true CN110570513A (en) 2019-12-13
CN110570513B CN110570513B (en) 2023-06-20

Family

ID=68772480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810942694.9A Active CN110570513B (en) 2018-08-17 2018-08-17 Method and device for displaying vehicle loss information

Country Status (1)

Country Link
CN (1) CN110570513B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0321888D0 (en) * 2003-09-18 2003-10-22 Canon Europa Nv Generation of texture maps for use in 3d computer graphics
EP1462762A1 (en) * 2003-03-25 2004-09-29 Aisin Seiki Kabushiki Kaisha Circumstance monitoring device of a vehicle
CN104021588A (en) * 2014-06-18 2014-09-03 公安部第三研究所 System and method for recovering three-dimensional true vehicle model in real time
US9053562B1 (en) * 2010-06-24 2015-06-09 Gregory S. Rabin Two dimensional to three dimensional moving image converter
CN104834784A (en) * 2015-05-13 2015-08-12 西南交通大学 Railway emergency auxiliary rescue three-dimensional virtual electronic sand table system
CN106813670A (en) * 2015-11-27 2017-06-09 华创车电技术中心股份有限公司 Three-dimensional vehicle auxiliary imaging device
CN107315470A (en) * 2017-05-25 2017-11-03 腾讯科技(深圳)有限公司 Graphic processing method, processor and virtual reality system
WO2017195228A1 (en) * 2016-05-09 2017-11-16 Uesse S.R.L. Process and system to analyze deformations in motor vehicles
CN107392218A (en) * 2017-04-11 2017-11-24 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device and electronic equipment
CN107403424A (en) * 2017-04-11 2017-11-28 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device and electronic equipment
CN108364253A (en) * 2018-03-15 2018-08-03 北京威远图易数字科技有限公司 Car damage identification method, system and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
He Yuanrong et al.: "Three-dimensional building damage information acquisition and analysis based on laser point clouds: the super typhoon Meranti as an example", Journal of Disaster Prevention and Mitigation Engineering *
Li Rui, Zhang Xien, Li Ping, Liu Yaozhou: "Geometric models in a virtual environment built with MultiGen Creator Pro", Computer Engineering

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2591445A (en) * 2019-12-19 2021-08-04 Airbus Operations Ltd Image mapping to vehicle surfaces
CN112307020A (en) * 2020-10-20 2021-02-02 北京精友世纪软件技术有限公司 Vehicle collision position positioning and damage assessment system based on vehicle 360-degree panoramic model
CN112348799A (en) * 2020-11-11 2021-02-09 德联易控科技(北京)有限公司 Vehicle damage assessment method and device, terminal equipment and storage medium
CN112070250A (en) * 2020-11-13 2020-12-11 深圳壹账通智能科技有限公司 Vehicle damage assessment method and device, terminal equipment and storage medium
CN112070250B (en) * 2020-11-13 2021-05-04 深圳壹账通智能科技有限公司 Vehicle damage assessment method and device, terminal equipment and storage medium
WO2022100454A1 (en) * 2020-11-13 2022-05-19 深圳壹账通智能科技有限公司 Vehicle damage assessment method, apparatus, terminal device and storage medium

Also Published As

Publication number Publication date
CN110570513B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
CN110570513A (en) method and device for displaying vehicle damage information
US10559039B1 (en) Augmented reality insurance applications
CN106062805B (en) Mobile system for generating an estimate of compromised vehicle insurance
US10373387B1 (en) Systems and methods for enhancing and developing accident scene visualizations
US20180082414A1 (en) Methods Circuits Assemblies Devices Systems Platforms and Functionally Associated Machine Executable Code for Computer Vision Assisted Construction Site Inspection
JP7054424B2 (en) Second-hand goods sales system, and second-hand goods sales program.
US9723251B2 (en) Technique for image acquisition and management
JP6057298B2 (en) Rapid 3D modeling
CN111462249B (en) Traffic camera calibration method and device
EP2410290A1 (en) Image capturing device and method for three-dimensional measurement
US20100241946A1 (en) Annotating images with instructions
US11605182B2 (en) Method for generating reproducible perspectives of photographs of an object, and mobile device with an integrated camera
CN107683498A (en) The automatic connection of image is carried out using visual signature
Stanimirovic et al. [Poster] A Mobile Augmented reality system to assist auto mechanics
Fisseler et al. Extending Philological Research with Methods of 3D Computer Graphics Applied to Analysis of Cultural Heritage.
US11544839B2 (en) System, apparatus and method for facilitating inspection of a target object
Jiao et al. A virtual reality method for digitally reconstructing traffic accidents from videos or still images
EP2779102A1 (en) Method of generating an animated video sequence
KR20090003787A (en) Method for measuring 3d information of object in single image using collinearity condition equation, recorded medium for performing the same and system for measuring 3d information of object in single image using collinearity condition equation
US10832420B2 (en) Dynamic local registration system and method
KR102420856B1 (en) Method and Device for Examining the Existence of 3D Objects Using Images
CN105976429B (en) Criminal investigation portrait computer auxiliary system and method
Adamczyk et al. Three-dimensional measurement system for crime scene documentation
CN115527008A (en) Safety simulation experience training system based on mixed reality technology
JP2021060838A (en) Information processing apparatus, card, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40018753

Country of ref document: HK

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20200925

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Fourth floor, P.O. Box 847, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

Effective date of registration: 20200925

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant