WO2019174306A1 - Article damage assessment method and apparatus - Google Patents

Article damage assessment method and apparatus

Info

Publication number
WO2019174306A1
WO2019174306A1 (PCT/CN2018/117861)
Authority
WO
WIPO (PCT)
Prior art keywords
item
determined
photographing
surface structure
structure information
Prior art date
Application number
PCT/CN2018/117861
Other languages
English (en)
French (fr)
Inventor
周凡
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Priority to EP18909705.8A priority Critical patent/EP3716204A4/en
Priority to SG11202005713XA priority patent/SG11202005713XA/en
Publication of WO2019174306A1 publication Critical patent/WO2019174306A1/zh
Priority to US16/888,598 priority patent/US11300522B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G01N21/35Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
    • G01N21/3563Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light for analysing solids; Preparation of samples therefor
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8806Specially adapted optical and illumination features
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/17Image acquisition using hand-held instruments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00Features of devices classified in G01N21/00
    • G01N2201/02Mechanical
    • G01N2201/022Casings
    • G01N2201/0221Portable; cableless; compact; hand-held
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00Features of devices classified in G01N21/00
    • G01N2201/12Circuits of general importance; Signal processing
    • G01N2201/129Using chemometrical methods
    • G01N2201/1296Using chemometrical methods using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12Acquisition of 3D measurements of objects
    • G06V2201/121Acquisition of 3D measurements of objects using special illumination

Definitions

  • One or more embodiments of the present specification relate to the field of computer technology, and in particular to an article damage assessment method and apparatus.
  • In the insurance industry, when a user files a claim for a damaged item, the insurance company needs to assess the degree of damage to the item in order to determine the list of repairs and the claim amount. In one current approach, a surveyor from the insurance company or a third-party appraisal agency assesses the damaged item on site; in the other, the user photographs the damaged item under the guidance of insurance company personnel and transmits the photos over the network, after which a damage assessor determines the damage remotely from the photos.
  • One or more embodiments of the present specification describe an article damage assessment method and apparatus that can automatically assess the degree of damage of an item to be assessed.
  • In a first aspect, an article damage assessment method is provided, comprising:
  • the mobile terminal detecting whether the photographing device is in a shooting state;
  • when it is detected that the photographing device is in the shooting state, determining the photographing area of the photographing device for the item to be assessed, the photographing area covering the damaged portion of the item, and acquiring a captured image corresponding to the photographing area; emitting multiple infrared rays toward the photographing area to determine surface structure information and surface material information of the damaged portion; and inputting the captured image, the surface structure information, and the surface material information into a pre-built damage recognition model;
  • outputting the degree of damage of the damaged portion of the item to be assessed.
  • In a second aspect, an article damage assessment device is provided, including:
  • a detecting unit configured to detect whether the photographing device is in a shooting state;
  • a determining unit configured to, when the detecting unit detects that the photographing device is in the shooting state, determine the photographing area of the photographing device for the item to be assessed, the photographing area covering the damaged portion of the item to be assessed, and acquire a captured image corresponding to the photographing area;
  • a communication unit configured to emit multiple infrared rays toward the photographing area, determined by the determining unit, of the item to be assessed, so as to determine surface structure information and surface material information of the damaged portion of the item to be assessed;
  • an input unit configured to input the captured image, the surface structure information, and the surface material information into a pre-built damage recognition model;
  • an output unit configured to output the degree of damage of the damaged portion of the item to be assessed.
  • With the article damage assessment method and device provided by one or more embodiments of the present specification, when the mobile terminal detects that the photographing device is in a shooting state, it determines the photographing area of the photographing device for the item to be assessed and acquires a captured image corresponding to the photographing area.
  • Multiple infrared rays are emitted toward the photographing area of the item to be assessed to determine surface structure information and surface material information of the damaged portion of the item.
  • The captured image, surface structure information, and surface material information are input into a pre-built damage recognition model.
  • The degree of damage of the damaged portion of the item to be assessed is output. Automatic assessment of the degree of damage of the item is thereby achieved.
  • FIG. 1 is a schematic diagram of an application scenario of an article damage assessment method according to an embodiment of the present specification;
  • FIG. 2 is a flowchart of an article damage assessment method according to an embodiment of the present specification;
  • FIG. 3 is a schematic diagram of the construction process of a surface relief map provided by the present specification;
  • FIG. 4 is a schematic diagram of the construction process of a surface material distribution map provided by the present specification;
  • FIG. 5 is a schematic diagram of an article damage assessment process provided by the present specification;
  • FIG. 6 is a schematic diagram of an article damage assessment device according to an embodiment of the present specification.
  • The article damage assessment method provided by an embodiment of the present specification may be applied in the scenario shown in FIG. 1.
  • The mobile terminal in FIG. 1 may be, for example, a mobile phone or a tablet computer, and may include a photographing device.
  • It may further include multiple sets of infrared emitters and multiple sets of receiving sensors.
  • An infrared emitter can emit infrared rays;
  • a receiving sensor can receive the infrared rays originating from the infrared emitters and sense the temperature of the received rays.
  • The item to be assessed in FIG. 1 may be, for example, a bicycle or a car.
  • The photographing device of the mobile terminal can operate simultaneously with the infrared emitters and/or the receiving sensors.
  • While photographing the damaged portion of the item to be assessed, the mobile terminal may enable the multiple sets of infrared emitters to emit multiple infrared rays toward the photographing area of the item, so as to determine the surface structure information (also called depth information) and the surface material information of the damaged portion.
  • The captured image (visual information), the surface structure information, and the surface material information can then be input into the pre-built damage recognition model.
  • This improves both the efficiency of detecting the degree of damage of the item to be assessed and the accuracy of the detected degree of damage.
  • FIG. 2 is a flowchart of an article damage assessment method according to an embodiment of the present specification.
  • The method may be executed by the mobile terminal in FIG. 1. As shown in FIG. 2, the method may specifically include:
  • Step 210: the mobile terminal detects whether the photographing device is in a shooting state.
  • Taking a mobile phone as the mobile terminal, the photographing device here may be the camera built into the phone.
  • Step 220: when it is detected that the photographing device is in the shooting state, determine the photographing area of the photographing device for the item to be assessed, and acquire a captured image corresponding to the photographing area.
  • The photographing area may be determined from the framing range displayed in the viewfinder of the photographing device after it has successfully focused on the item to be assessed. It can be understood that when the user photographs the damaged portion of the item through the mobile terminal, the framing range can cover the damaged portion, so the determined photographing area can cover the damaged portion of the item to be assessed.
  • The captured image corresponding to the photographing area is then a photograph of the damaged portion of the item to be assessed.
  • Step 230: emit multiple infrared rays toward the photographing area of the item to be assessed, so as to determine surface structure information and surface material information of the damaged portion of the item.
  • The surface structure information may be determined as follows: the mobile terminal emits multiple infrared rays toward the photographing area of the item to be assessed through the multiple sets of infrared emitters, and records the emission time and emission speed of each ray. The multiple sets of receiving sensors then receive the rays returned by the item after it receives the emitted rays, and the reception time of each returned ray is recorded. Finally, from the recorded emission times, emission speeds, and reception times, multiple distinct distances between the mobile terminal and the damaged portion can be determined. From these distances, the three-dimensional structure of the damaged portion can be obtained; that is, the surface structure information of the damaged portion of the item to be assessed can be determined.
  • After determining the surface structure information, the mobile terminal can construct a surface relief map of the damaged portion. It can be understood that, from the constructed surface relief map, it can be judged whether the surface of the damaged portion has dent- or deformation-type damage.
  • The construction process of the surface relief map may be as shown in FIG. 3.
  • The mobile terminal may further include a conversion unit,
  • which is configured to convert the received infrared signals into digital signals that a computer can process.
  • The finally constructed surface relief map can be three-dimensional, which makes it easier to judge the degree of damage of the damaged portion.
  • When the mobile terminal further includes a depth camera or a binocular camera, the images collected by that camera may also be combined to further correct the surface structure information, thereby improving the accuracy of judging the degree of damage of the damaged portion.
  • The surface material information may be determined as follows: the mobile terminal emits multiple infrared rays toward the photographing area of the item to be assessed through the multiple sets of infrared emitters. The multiple sets of receiving sensors then receive the rays returned by the item after it receives the emitted rays, and the reception temperature of each returned ray is recorded. Finally, from the recorded reception temperatures, the surface material information of the damaged portion can be determined.
  • The principle of this determination is that different materials return infrared rays at different temperatures.
  • After determining the surface material information, the mobile terminal can construct a surface material distribution map of the damaged portion. It can be understood that, from the constructed surface material distribution map, it can be judged whether paint has peeled off at the damaged portion, whether it is covered with dust, whether a metal layer is exposed, and so on.
  • The construction process of the surface material distribution map may be as shown in FIG. 4.
  • The mobile terminal may likewise include the conversion unit described above.
  • Step 240: input the captured image, surface structure information, and surface material information into the pre-built damage recognition model.
  • Step 250: output the degree of damage of the damaged portion of the item to be assessed.
  • The damage recognition model here may be obtained from captured images of the damaged portions of multiple sample items, their surface structure information and surface material information, and a deep neural network algorithm. Specifically, multiple sample items may be collected in advance and given sample labels (for example, slight damage, moderate damage, or severe damage). The damaged portion of each sample item is then photographed, and the surface structure information and surface material information of the damaged portion are acquired (the acquisition process is as described above). Finally, a deep neural network is trained on the sample labels, the captured images of the damaged portions, the surface structure information, and the surface material information, yielding the damage recognition model. Training a model from sample data is conventional technology and is not repeated here.
  • When the deep neural network algorithm is a convolutional neural network algorithm, the trained damage recognition model may include convolutional layers, pooling layers, and fully connected layers.
  • The process of inputting the captured image, surface structure information, and surface material information into a damage recognition model trained with a convolutional neural network algorithm, to recognize the degree of damage of the damaged portion, can be illustrated as in FIG. 5.
  • The degree of damage finally output in this embodiment may be, for example, slight damage (including slight dents, paint peeling, etc.), moderate damage, or severe damage.
  • Of course, in practical applications, other information indicating the degree of damage may also be used, which is not limited by the present specification.
  • Using the surface material information and surface structure information, visual interference factors such as glare, reflections, and dust stains can be corrected for, preventing the deep neural network from misrecognizing such factors as damage.
  • By assessing the damaged portion automatically through the mobile terminal, the solution can greatly reduce the dependence of damage assessment on personnel experience, provide fair and objective assessment results, make fraud more difficult, enable users to provide accurate and reliable assessment results by themselves, and reduce insurers' cost of on-site manual surveys.
  • An embodiment of the present specification further provides an article damage assessment device; as shown in FIG. 6, the device includes:
  • a detecting unit 601 configured to detect whether the photographing device is in a shooting state;
  • a determining unit 602 configured to, when the detecting unit 601 detects that the photographing device is in the shooting state, determine the photographing area of the photographing device for the item to be assessed, the photographing area covering the damaged portion of the item, and acquire a captured image corresponding to the photographing area;
  • a communication unit 603 configured to emit multiple infrared rays toward the photographing area, determined by the determining unit 602, of the item to be assessed, so as to determine surface structure information and surface material information of the damaged portion of the item.
  • The communication unit 603 can include multiple sets of infrared emitters and multiple sets of receiving sensors corresponding to the multiple sets of infrared emitters.
  • An input unit 604 is configured to input the captured image, the surface structure information, and the surface material information into the pre-built damage recognition model.
  • An output unit 605 is configured to output the degree of damage of the damaged portion of the item to be assessed.
  • The communication unit 603 may be specifically configured to:
  • emit multiple infrared rays toward the photographing area of the item to be assessed, and record the emission time and emission speed; receive the multiple infrared rays returned by the item, and record the reception time and reception temperature;
  • determine the surface structure information of the damaged portion of the item to be assessed according to the emission time, the emission speed, and the reception time;
  • determine the surface material information of the damaged portion of the item to be assessed according to the reception temperature.
  • The device may further include:
  • a collecting unit 606 configured to collect a target image of the item to be assessed through a depth camera or a binocular camera;
  • a correcting unit 607 configured to correct the surface structure information according to the target image collected by the collecting unit 606, to obtain corrected surface structure information.
  • The input unit 604 can then be specifically used to:
  • input the captured image, the corrected surface structure information, and the surface material information into the pre-built damage recognition model.
  • The damage recognition model may be obtained from captured images of the damaged portions of multiple sample items, surface structure information, surface material information, and a deep neural network algorithm.
  • In the device, the detecting unit 601 detects whether the photographing device is in a shooting state.
  • When the photographing device is detected to be in the shooting state, the determining unit 602 determines the photographing area of the photographing device for the item to be assessed and acquires a captured image corresponding to the photographing area.
  • The communication unit 603 emits multiple infrared rays toward the photographing area of the item to determine surface structure information and surface material information of the damaged portion of the item to be assessed.
  • The input unit 604 inputs the captured image, the surface structure information, and the surface material information into the pre-built damage recognition model.
  • The output unit 605 outputs the degree of damage of the damaged portion of the item to be assessed. Automatic assessment of the degree of damage of the item is thereby achieved.
  • The article damage assessment device provided by the embodiments of the present specification may be a module or unit of the mobile terminal in FIG. 1.
  • the functions described in this specification can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Human Resources & Organizations (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Tourism & Hospitality (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)

Abstract

An article damage assessment method and apparatus. In the article damage assessment method, a mobile terminal detects whether a photographing device is in a shooting state (S210); when it is detected that the photographing device is in the shooting state, the mobile terminal determines the photographing area of the photographing device for an item to be assessed and acquires a captured image corresponding to the photographing area (S220); emits multiple infrared rays toward the photographing area of the item to be assessed so as to determine surface structure information and surface material information of the damaged portion of the item (S230); inputs the captured image, the surface structure information, and the surface material information into a pre-built damage recognition model (S240); and outputs the degree of damage of the damaged portion of the item to be assessed (S250).

Description

Article damage assessment method and apparatus
Technical field
One or more embodiments of the present specification relate to the field of computer technology, and in particular to an article damage assessment method and apparatus.
Background
In the insurance industry, when a user files a claim for item damage, the insurance company needs to assess the degree of damage to the item in order to determine the list of items to be repaired and the claim amount. There are currently two main assessment approaches. In the first, a surveyor from the insurance company or a third-party appraisal agency assesses the damaged item on site. In the second, the user photographs the damaged item under the guidance of insurance company personnel and sends the photos to the insurance company over the network, and a damage assessor then performs remote assessment from the photos.
A more efficient article damage assessment solution is therefore needed.
Summary
One or more embodiments of the present specification describe an article damage assessment method and apparatus that can automatically assess the degree of damage of an item to be assessed.
In a first aspect, an article damage assessment method is provided, comprising:
a mobile terminal detecting whether a photographing device is in a shooting state;
when it is detected that the photographing device is in the shooting state, determining the photographing area of the photographing device for an item to be assessed, the photographing area covering the damaged portion of the item to be assessed, and acquiring a captured image corresponding to the photographing area;
emitting multiple infrared rays toward the photographing area of the item to be assessed, so as to determine surface structure information and surface material information of the damaged portion of the item to be assessed;
inputting the captured image, the surface structure information, and the surface material information into a pre-built damage recognition model;
outputting the degree of damage of the damaged portion of the item to be assessed.
In a second aspect, an article damage assessment apparatus is provided, comprising:
a detecting unit configured to detect whether a photographing device is in a shooting state;
a determining unit configured to, when the detecting unit detects that the photographing device is in the shooting state, determine the photographing area of the photographing device for an item to be assessed, the photographing area covering the damaged portion of the item to be assessed, and acquire a captured image corresponding to the photographing area;
a communication unit configured to emit multiple infrared rays toward the photographing area, determined by the determining unit, of the item to be assessed, so as to determine surface structure information and surface material information of the damaged portion of the item to be assessed;
an input unit configured to input the captured image, the surface structure information, and the surface material information into a pre-built damage recognition model;
an output unit configured to output the degree of damage of the damaged portion of the item to be assessed.
With the article damage assessment method and apparatus provided by one or more embodiments of the present specification, when the mobile terminal detects that the photographing device is in a shooting state, it determines the photographing area of the photographing device for the item to be assessed and acquires a captured image corresponding to that area. Multiple infrared rays are emitted toward the photographing area of the item to be assessed to determine surface structure information and surface material information of the damaged portion. The captured image, the surface structure information, and the surface material information are input into a pre-built damage recognition model, and the degree of damage of the damaged portion of the item to be assessed is output. Automatic assessment of the degree of damage of the item is thereby achieved.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present specification more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Evidently, the drawings described below are merely some embodiments of the present specification, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario of an article damage assessment method according to an embodiment of the present specification;
FIG. 2 is a flowchart of an article damage assessment method according to an embodiment of the present specification;
FIG. 3 is a schematic diagram of the construction process of a surface relief map provided by the present specification;
FIG. 4 is a schematic diagram of the construction process of a surface material distribution map provided by the present specification;
FIG. 5 is a schematic diagram of an article damage assessment process provided by the present specification;
FIG. 6 is a schematic diagram of an article damage assessment apparatus according to an embodiment of the present specification.
Detailed description
The solutions provided by the present specification are described below with reference to the accompanying drawings.
The article damage assessment method provided by an embodiment of the present specification can be applied in the scenario shown in FIG. 1. The mobile terminal in FIG. 1 may be, for example, a mobile phone or a tablet computer, and may include a photographing device. It may further include multiple sets of infrared emitters and multiple sets of receiving sensors, where an infrared emitter can emit infrared rays, and a receiving sensor can receive the infrared rays originating from the infrared emitters and sense the temperature of the received rays. The item to be assessed in FIG. 1 may be, for example, a bicycle or a car.
It should be noted that in FIG. 1 the photographing device of the mobile terminal can operate simultaneously with the infrared emitters and/or the receiving sensors. Specifically, while photographing the damaged portion of the item to be assessed, the mobile terminal can enable the multiple sets of infrared emitters to emit multiple infrared rays toward the photographing area of the item, so as to determine the surface structure information (also called depth information) and the surface material information of the damaged portion. The captured image (visual information), the surface structure information, and the surface material information can then be input into a pre-built damage recognition model to output the degree of damage of the damaged portion of the item to be assessed. This improves both the efficiency of detecting the degree of damage and the accuracy of the detected result.
FIG. 2 is a flowchart of an article damage assessment method according to an embodiment of the present specification. The method may be executed by the mobile terminal in FIG. 1. As shown in FIG. 2, the method may specifically include:
Step 210: the mobile terminal detects whether the photographing device is in a shooting state.
Taking a mobile phone as the mobile terminal, the photographing device here may be the camera built into the phone.
Step 220: when it is detected that the photographing device is in the shooting state, determine the photographing area of the photographing device for the item to be assessed, and acquire a captured image corresponding to the photographing area.
In one implementation, the photographing area may be determined from the framing range displayed in the viewfinder of the photographing device after it has successfully focused on the item to be assessed. It can be understood that when the user photographs the damaged portion of the item through the mobile terminal, the framing range can cover the damaged portion, so the determined photographing area can cover the damaged portion of the item to be assessed.
It can be understood that when the photographing area covers the damaged portion of the item to be assessed, the captured image corresponding to the photographing area is a photograph of that damaged portion.
Step 230: emit multiple infrared rays toward the photographing area of the item to be assessed, so as to determine surface structure information and surface material information of the damaged portion of the item.
In one implementation, the surface structure information may be determined as follows: the mobile terminal emits multiple infrared rays toward the photographing area of the item to be assessed through the multiple sets of infrared emitters, and records the emission time and emission speed of each ray. The multiple sets of receiving sensors then receive the rays returned by the item after it receives the emitted rays, and the reception time of each returned ray is recorded. Finally, from the recorded emission times, emission speeds, and reception times, multiple distinct distances between the mobile terminal and the damaged portion can be determined. From these distances, the three-dimensional structure of the damaged portion can be obtained; that is, the surface structure information of the damaged portion of the item to be assessed can be determined.
After determining the surface structure information, the mobile terminal can construct a surface relief map of the damaged portion. It can be understood that, from the constructed surface relief map, it can be judged whether the surface of the damaged portion has dent- or deformation-type damage.
In one example, the construction process of the surface relief map can be as shown in FIG. 3. In FIG. 3, the mobile terminal may further include a conversion unit, which converts the received infrared signals into digital signals that a computer can process. As can be seen from FIG. 3, the finally constructed surface relief map can be three-dimensional, which makes it easier to judge the degree of damage of the damaged portion.
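To make the distance computation concrete: each emission/reception pair yields a round-trip time of flight, from which a distance follows. The following Python sketch is a minimal illustration under stated assumptions; the emitter grid layout, array names, and numbers are invented for the example and are not part of the specification:

```python
import numpy as np

def surface_relief_map(t_emit, t_receive, speed, grid_shape):
    """Estimate per-ray distances from round-trip time of flight and
    arrange them into a 2D relief (depth) map."""
    # Each ray travels to the surface and back, so halve the round trip.
    distances = speed * (np.asarray(t_receive) - np.asarray(t_emit)) / 2.0
    depth = distances.reshape(grid_shape)
    # Express heights relative to the farthest point so that dents
    # appear as positive deviations in the relief map.
    return depth.max() - depth

# Hypothetical usage: a 4x4 grid of rays, timestamps in seconds.
t0 = np.zeros(16)
t1 = np.full(16, 2.0e-9)      # ~0.3 m one-way at light speed
t1[5] = 2.2e-9                # one ray returns later: a small dent
print(surface_relief_map(t0, t1, speed=3.0e8, grid_shape=(4, 4)))
```

A real implementation would read the timestamps from the receiving sensors via the conversion unit; the relief map returned here corresponds to the three-dimensional map of FIG. 3.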
Of course, in practical applications, when the mobile terminal further includes a depth camera or a binocular (stereo) camera, the images collected by that camera can also be combined to further correct the surface structure information, thereby improving the accuracy of judging the degree of damage of the damaged portion.
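The specification does not prescribe a particular correction scheme. One simple possibility, shown purely as a hypothetical sketch, is a confidence-weighted blend of the infrared-derived depth map with the depth map obtained from the depth or binocular camera:

```python
import numpy as np

def fuse_depth(ir_depth, cam_depth, cam_confidence):
    """Blend an infrared-derived depth map with one from a depth or
    binocular camera; where the camera is confident (weight near 1),
    its measurement dominates the corrected surface structure."""
    w = np.clip(cam_confidence, 0.0, 1.0)
    return w * np.asarray(cam_depth) + (1.0 - w) * np.asarray(ir_depth)
```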
The above is the determination process of the surface structure information. The surface material information may be determined as follows: the mobile terminal emits multiple infrared rays toward the photographing area of the item to be assessed through the multiple sets of infrared emitters. The multiple sets of receiving sensors then receive the rays returned by the item after it receives the emitted rays, and the reception temperature of each returned ray is recorded. Finally, from the recorded reception temperatures, the surface material information of the damaged portion can be determined. The principle of this determination is that different materials return infrared rays at different temperatures.
After determining the surface material information, the mobile terminal can construct a surface material distribution map of the damaged portion. It can be understood that, from the constructed surface material distribution map, it can be judged whether paint has peeled off at the damaged portion, whether it is covered with dust, whether a metal layer is exposed, and so on.
In one example, the construction process of the surface material distribution map can be as shown in FIG. 4. In FIG. 4, the mobile terminal may likewise include the conversion unit described above.
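As an illustration of the stated principle, the mapping from reception temperatures to materials could be a nearest-reference lookup. The reference temperatures below are invented for the example and would in practice come from calibration against known materials:

```python
import numpy as np

# Invented calibration table: typical returned-ray temperature
# (arbitrary calibrated units) for each candidate surface material.
REFERENCE_TEMPS = {"paint": 28.0, "bare_metal": 31.5,
                   "plastic": 26.0, "dust": 24.5}

def classify_materials(temps):
    """Assign each recorded reception temperature to the material
    whose reference temperature is closest, yielding a material map."""
    names = list(REFERENCE_TEMPS)
    refs = np.array([REFERENCE_TEMPS[n] for n in names])
    idx = np.abs(np.asarray(temps)[..., None] - refs).argmin(axis=-1)
    return np.vectorize(names.__getitem__)(idx)

print(classify_materials([[28.1, 31.4], [24.6, 26.2]]))
# -> [['paint' 'bare_metal'] ['dust' 'plastic']]
```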
It can be understood that the determination of the surface structure information and that of the surface material information can be performed separately or simultaneously, which is not limited by the present specification.
Step 240: input the captured image, the surface structure information, and the surface material information into the pre-built damage recognition model.
Step 250: output the degree of damage of the damaged portion of the item to be assessed.
The damage recognition model here may be obtained from captured images of the damaged portions of multiple sample items, their surface structure information and surface material information, and a deep neural network algorithm. Specifically, multiple sample items can be collected in advance and given sample labels (e.g., slight damage, moderate damage, or severe damage). The damaged portion of each sample item is then photographed, and the surface structure information and surface material information of the damaged portion are acquired (the acquisition process is as described above). Finally, a deep neural network is trained on the sample labels, the captured images of the damaged portions, the surface structure information, and the surface material information, yielding the damage recognition model. Training a model from sample data is conventional technology and is not repeated here.
It should be noted that when the deep neural network algorithm is a convolutional neural network algorithm, the trained damage recognition model may include convolutional layers, pooling layers, and fully connected layers.
In one example, the process of inputting the captured image, the surface structure information, and the surface material information into a damage recognition model trained with a convolutional neural network algorithm, so as to recognize the degree of damage of the damaged portion, can be illustrated as in FIG. 5.
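Beyond naming convolutional, pooling, and fully connected layers, the specification leaves the architecture open. The PyTorch sketch below shows one plausible arrangement; stacking the RGB image with one surface-structure channel and one surface-material channel, the 64x64 input size, and the three damage classes are assumptions made only for illustration:

```python
import torch
import torch.nn as nn

class DamageRecognitionModel(nn.Module):
    """Toy CNN with the three named layer types, fed a 5-channel input:
    RGB image + surface-structure channel + surface-material channel."""
    def __init__(self, num_classes=3):    # slight / moderate / severe
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(5, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),               # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(   # fully connected layer
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),  # for 64x64 inputs
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = DamageRecognitionModel()
logits = model(torch.randn(1, 5, 64, 64))  # one 64x64 sample
print(logits.shape)                        # torch.Size([1, 3])
```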
The degree of damage finally output in this embodiment may be, for example, slight damage (including slight dents, paint peeling, etc.), moderate damage, or severe damage. Of course, in practical applications, other information indicating the degree of damage may also be used, which is not limited by the present specification.
In summary, the solution provided by the embodiments of the present specification has the following advantages:
1) Combining the surface structure information of the damaged portion of the item to be assessed with the captured image can improve the accuracy of recognizing dent- and deformation-type damage.
2) Using the surface material information and surface structure information of the damaged portion, visual interference factors such as glare, reflections, and dust stains can be corrected for, preventing the deep neural network from misrecognizing them as damage.
3) The degree of damage can be judged more precisely; especially for slight dents, paint peeling, and similar damage, the accuracy can be raised to a level that visual information alone cannot reach.
In short, when assessing the damaged portion of an item, the present solution takes the image of the damaged portion as an input feature while also considering the surface structure information and surface material information of the damaged portion, and can therefore provide a more complete damage recognition capability. Moreover, by assessing the damage automatically through the mobile terminal, the solution can greatly reduce the dependence of damage assessment on personnel experience, provide fair and objective assessment results, make fraud more difficult, enable users to provide accurate and reliable assessment results by themselves, and reduce insurers' cost of on-site manual surveys.
Corresponding to the above article damage assessment method, an embodiment of the present specification further provides an article damage assessment apparatus. As shown in FIG. 6, the apparatus includes:
a detecting unit 601 configured to detect whether the photographing device is in a shooting state;
a determining unit 602 configured to, when the detecting unit 601 detects that the photographing device is in the shooting state, determine the photographing area of the photographing device for the item to be assessed, the photographing area covering the damaged portion of the item, and acquire a captured image corresponding to the photographing area;
a communication unit 603 configured to emit multiple infrared rays toward the photographing area, determined by the determining unit 602, of the item to be assessed, so as to determine surface structure information and surface material information of the damaged portion of the item.
The communication unit 603 may include multiple sets of infrared emitters and multiple sets of receiving sensors corresponding to the multiple sets of infrared emitters.
An input unit 604 is configured to input the captured image, the surface structure information, and the surface material information into the pre-built damage recognition model.
An output unit 605 is configured to output the degree of damage of the damaged portion of the item to be assessed.
Optionally, the communication unit 603 may be specifically configured to:
emit multiple infrared rays toward the photographing area of the item to be assessed, and record the emission time and emission speed;
receive the multiple infrared rays returned by the item to be assessed after it receives the emitted rays, and record the reception time and reception temperature;
determine the surface structure information of the damaged portion of the item to be assessed according to the emission time, the emission speed, and the reception time;
determine the surface material information of the damaged portion of the item to be assessed according to the reception temperature.
Optionally, the apparatus may further include:
a collecting unit 606 configured to collect a target image of the item to be assessed through a depth camera or a binocular camera;
a correcting unit 607 configured to correct the surface structure information according to the target image collected by the collecting unit 606, to obtain corrected surface structure information.
The input unit 604 may be specifically configured to:
input the captured image, the corrected surface structure information, and the surface material information into the pre-built damage recognition model.
The damage recognition model may be obtained from captured images of the damaged portions of multiple sample items, surface structure information, surface material information, and a deep neural network algorithm.
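For orientation, the following skeletal Python sketch shows how the units of FIG. 6 might be composed; the specification defines the units only functionally, so every interface name here is hypothetical:

```python
class DamageAssessmentDevice:
    """Skeleton mirroring FIG. 6; each unit is any object exposing the
    single method used below (all method names are illustrative)."""
    def __init__(self, detector, determiner, communicator, recognizer):
        self.detector = detector          # detecting unit 601
        self.determiner = determiner      # determining unit 602
        self.communicator = communicator  # communication unit 603
        self.recognizer = recognizer      # input/output units 604, 605

    def assess(self, item):
        if not self.detector.is_shooting():
            return None
        area, image = self.determiner.locate_and_capture(item)
        structure, material = self.communicator.probe(area)
        # The recognizer wraps the pre-built damage recognition model.
        return self.recognizer.predict(image, structure, material)
```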
The functions of the functional modules of the apparatus in the above embodiment of the present specification can be realized through the steps of the above method embodiment; the specific working process of the apparatus provided by an embodiment of the present specification is therefore not repeated here.
In the article damage assessment apparatus provided by an embodiment of the present specification, the detecting unit 601 detects whether the photographing device is in a shooting state. When it is detected that the photographing device is in the shooting state, the determining unit 602 determines the photographing area of the photographing device for the item to be assessed and acquires a captured image corresponding to the photographing area. The communication unit 603 emits multiple infrared rays toward the photographing area of the item to determine surface structure information and surface material information of the damaged portion. The input unit 604 inputs the captured image, the surface structure information, and the surface material information into the pre-built damage recognition model. The output unit 605 outputs the degree of damage of the damaged portion of the item to be assessed. Automatic assessment of the degree of damage of the item is thereby achieved.
The article damage assessment apparatus provided by the embodiments of the present specification may be a module or unit of the mobile terminal in FIG. 1.
Those skilled in the art should be aware that, in one or more of the above examples, the functions described in the present specification can be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions can be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium.
The specific embodiments above further describe in detail the objectives, technical solutions, and beneficial effects of the present specification. It should be understood that the above are merely specific embodiments of the present specification and are not intended to limit the scope of protection of the present specification; any modification, equivalent replacement, improvement, or the like made on the basis of the technical solutions of the present specification shall fall within its scope of protection.

Claims (10)

  1. An article damage assessment method, comprising:
    a mobile terminal detecting whether a photographing device is in a shooting state;
    when it is detected that the photographing device is in the shooting state, determining the photographing area of the photographing device for an item to be assessed, the photographing area covering the damaged portion of the item to be assessed, and acquiring a captured image corresponding to the photographing area;
    emitting multiple infrared rays toward the photographing area of the item to be assessed, so as to determine surface structure information and surface material information of the damaged portion of the item to be assessed;
    inputting the captured image, the surface structure information, and the surface material information into a pre-built damage recognition model;
    outputting the degree of damage of the damaged portion of the item to be assessed.
  2. The method according to claim 1, wherein emitting multiple infrared rays toward the photographing area of the item to be assessed so as to determine surface structure information and surface material information of the damaged portion of the item to be assessed comprises:
    emitting multiple infrared rays toward the photographing area of the item to be assessed, and recording the emission time and emission speed;
    receiving the multiple infrared rays returned by the item to be assessed after it receives the multiple infrared rays, and recording the reception time and reception temperature;
    determining the surface structure information of the damaged portion of the item to be assessed according to the emission time, the emission speed, and the reception time;
    determining the surface material information of the damaged portion of the item to be assessed according to the reception temperature.
  3. The method according to claim 2, further comprising:
    collecting a target image of the item to be assessed through a depth camera or a binocular camera;
    correcting the surface structure information according to the target image to obtain corrected surface structure information;
    wherein inputting the captured image, the surface structure information, and the surface material information into the pre-built damage recognition model comprises:
    inputting the captured image, the corrected surface structure information, and the surface material information into the pre-built damage recognition model.
  4. The method according to claim 2, wherein the mobile terminal emits the multiple infrared rays through multiple sets of infrared emitters, and receives the returned multiple infrared rays through multiple sets of receiving sensors corresponding to the multiple sets of infrared emitters.
  5. The method according to any one of claims 1 to 4, wherein the damage recognition model is obtained from captured images of the damaged portions of multiple sample items, surface structure information, surface material information, and a deep neural network algorithm.
  6. An article damage assessment apparatus, comprising:
    a detecting unit configured to detect whether a photographing device is in a shooting state;
    a determining unit configured to, when the detecting unit detects that the photographing device is in the shooting state, determine the photographing area of the photographing device for an item to be assessed, the photographing area covering the damaged portion of the item to be assessed, and acquire a captured image corresponding to the photographing area;
    a communication unit configured to emit multiple infrared rays toward the photographing area, determined by the determining unit, of the item to be assessed, so as to determine surface structure information and surface material information of the damaged portion of the item to be assessed;
    an input unit configured to input the captured image, the surface structure information, and the surface material information into a pre-built damage recognition model;
    an output unit configured to output the degree of damage of the damaged portion of the item to be assessed.
  7. The apparatus according to claim 6, wherein the communication unit is specifically configured to:
    emit multiple infrared rays toward the photographing area of the item to be assessed, and record the emission time and emission speed;
    receive the multiple infrared rays returned by the item to be assessed after it receives the multiple infrared rays, and record the reception time and reception temperature;
    determine the surface structure information of the damaged portion of the item to be assessed according to the emission time, the emission speed, and the reception time;
    determine the surface material information of the damaged portion of the item to be assessed according to the reception temperature.
  8. The apparatus according to claim 7, further comprising:
    a collecting unit configured to collect a target image of the item to be assessed through a depth camera or a binocular camera;
    a correcting unit configured to correct the surface structure information according to the target image collected by the collecting unit, to obtain corrected surface structure information;
    wherein the input unit is specifically configured to:
    input the captured image, the corrected surface structure information, and the surface material information into the pre-built damage recognition model.
  9. The apparatus according to claim 7, wherein the communication unit comprises multiple sets of infrared emitters and multiple sets of receiving sensors corresponding to the multiple sets of infrared emitters.
  10. The apparatus according to any one of claims 6 to 9, wherein the damage recognition model is obtained from captured images of the damaged portions of multiple sample items, surface structure information, surface material information, and a deep neural network algorithm.
PCT/CN2018/117861 2018-03-16 2018-11-28 Article damage assessment method and apparatus WO2019174306A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP18909705.8A EP3716204A4 (en) 2018-03-16 2018-11-28 ARTICLE DAMAGE ASSESSMENT PROCESS AND DEVICE
SG11202005713XA SG11202005713XA (en) 2018-03-16 2018-11-28 Article damage evaluation method and apparatus
US16/888,598 US11300522B2 (en) 2018-03-16 2020-05-29 Article damage evaluation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810220310.2A 2018-03-16 2018-03-16 Article damage assessment method and apparatus
CN201810220310.2 2018-03-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/888,598 Continuation US11300522B2 (en) 2018-03-16 2020-05-29 Article damage evaluation

Publications (1)

Publication Number Publication Date
WO2019174306A1 true WO2019174306A1 (zh) 2019-09-19

Family

ID=63516626

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117861 WO2019174306A1 (zh) Article damage assessment method and apparatus

Country Status (6)

Country Link
US (1) US11300522B2 (zh)
EP (1) EP3716204A4 (zh)
CN (1) CN108550080A (zh)
SG (1) SG11202005713XA (zh)
TW (1) TWI683260B (zh)
WO (1) WO2019174306A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115184378A (zh) * 2022-09-15 2022-10-14 北京思莫特科技有限公司 Mobile-device-based concrete structure defect detection system and method

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803205A (zh) * 2016-12-27 2017-06-06 北京量子保科技有限公司 System and method for automatic insurance claim verification
CN108550080A (zh) * 2018-03-16 2018-09-18 阿里巴巴集团控股有限公司 Article damage assessment method and apparatus
CN109615649A (zh) * 2018-10-31 2019-04-12 阿里巴巴集团控股有限公司 Image annotation method, apparatus, and system
CN110045382A (zh) * 2018-12-03 2019-07-23 阿里巴巴集团控股有限公司 Processing method, apparatus, device, server, and system for vehicle damage detection
CN109900702A (zh) * 2018-12-03 2019-06-18 阿里巴巴集团控股有限公司 Processing method, apparatus, device, server, and system for vehicle damage detection
CN111586225B (zh) * 2020-05-19 2021-08-06 杨泽鹏 Image acquisition assistance system
CN113379745B (zh) * 2021-08-13 2021-11-30 深圳市信润富联数字科技有限公司 Product defect recognition method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310223A (zh) * 2013-03-13 2013-09-18 四川天翼网络服务有限公司 Vehicle damage assessment system and method based on image recognition
CN105915853A (zh) * 2016-05-27 2016-08-31 大连楼兰科技股份有限公司 Remote unmanned damage assessment method and system based on infrared sensing
US20170094141A1 (en) * 2015-09-24 2017-03-30 Intel Corporation Infrared and visible light dual sensor imaging system
CN107358596A (zh) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 Image-based vehicle damage assessment method, apparatus, electronic device, and system
CN108550080A (zh) * 2018-03-16 2018-09-18 阿里巴巴集团控股有限公司 Article damage assessment method and apparatus

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6495833B1 (en) * 2000-01-20 2002-12-17 Research Foundation Of Cuny Sub-surface imaging under paints and coatings using early light spectroscopy
DE10305861A1 * 2003-02-13 2004-08-26 Adam Opel Ag Motor vehicle device for the spatial detection of a scene inside and/or outside the motor vehicle
US7451657B2 (en) * 2004-01-16 2008-11-18 Jentek Sensors, Inc. Material condition monitoring with multiple sensing modes
CN102077230A * 2008-04-17 2011-05-25 旅行者保险公司 Method and system for determining and processing object structure condition information
CN101995395B * 2009-08-14 2013-07-31 上海镭立激光科技有限公司 Method for online material detection using a laser-induced multi-spectrum combined fingerprint network
FI124299B * 2009-10-08 2014-06-13 Focalspec Oy Measuring device and method for measuring the properties of an object and of its surface
US9424606B2 (en) * 2011-04-28 2016-08-23 Allstate Insurance Company Enhanced claims settlement
US20140201022A1 (en) * 2013-01-16 2014-07-17 Andre Balzer Vehicle damage processing and information system
KR102136401B1 * 2013-10-21 2020-07-21 한국전자통신연구원 Multi-wavelength image lidar sensor apparatus and signal processing method thereof
CN203643383U * 2013-12-17 2014-06-11 上海神洲阳光特种钢管有限公司 Infrared flaw detection device
DE102014212682A1 * 2014-07-01 2016-01-07 Trumpf Werkzeugmaschinen Gmbh + Co. Kg Method and device for determining a material type and/or a surface finish of a workpiece
US10163164B1 (en) * 2014-09-22 2018-12-25 State Farm Mutual Automobile Insurance Company Unmanned aerial vehicle (UAV) data collection and claim pre-generation for insured approval
US9129355B1 (en) * 2014-10-09 2015-09-08 State Farm Mutual Automobile Insurance Company Method and system for assessing damage to infrastructure
US20170148101A1 (en) * 2015-11-23 2017-05-25 CSI Holdings I LLC Damage assessment and repair based on objective surface data
US10354386B1 (en) * 2016-01-27 2019-07-16 United Services Automobile Association (Usaa) Remote sensing of structure damage
US11144889B2 (en) * 2016-04-06 2021-10-12 American International Group, Inc. Automatic assessment of damage and repair costs in vehicles
US10692050B2 (en) * 2016-04-06 2020-06-23 American International Group, Inc. Automatic assessment of damage and repair costs in vehicles
CN107392218B * 2017-04-11 2020-08-04 创新先进技术有限公司 Image-based vehicle damage assessment method, apparatus, and electronic device
CN107622448A * 2017-08-28 2018-01-23 武汉六点整北斗科技有限公司 Insurance verification method for traffic accidents and related products
US11087292B2 (en) * 2017-09-01 2021-08-10 Allstate Insurance Company Analyzing images and videos of damaged vehicles to determine damaged vehicle parts and vehicle asymmetries
KR101916347B1 * 2017-10-13 2018-11-08 주식회사 수아랩 Deep-learning-based image comparison apparatus and method, and computer program stored on a computer-readable medium
JP7073785B2 * 2018-03-05 2022-05-24 オムロン株式会社 Image inspection apparatus, image inspection method, and image inspection program
US10592934B2 (en) * 2018-03-30 2020-03-17 The Travelers Indemnity Company Systems and methods for automated multi-object damage analysis
CN110570316A * 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 Method and apparatus for training a damage recognition model

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310223A (zh) * 2013-03-13 2013-09-18 四川天翼网络服务有限公司 Vehicle damage assessment system and method based on image recognition
US20170094141A1 (en) * 2015-09-24 2017-03-30 Intel Corporation Infrared and visible light dual sensor imaging system
CN105915853A (zh) * 2016-05-27 2016-08-31 大连楼兰科技股份有限公司 Remote unmanned damage assessment method and system based on infrared sensing
CN107358596A (zh) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 Image-based vehicle damage assessment method, apparatus, electronic device, and system
CN108550080A (zh) * 2018-03-16 2018-09-18 阿里巴巴集团控股有限公司 Article damage assessment method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3716204A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115184378A (zh) * 2022-09-15 2022-10-14 北京思莫特科技有限公司 Mobile-device-based concrete structure defect detection system and method
CN115184378B (zh) * 2022-09-15 2024-03-29 北京思莫特科技有限公司 Mobile-device-based concrete structure defect detection system and method

Also Published As

Publication number Publication date
US20200292464A1 (en) 2020-09-17
SG11202005713XA (en) 2020-07-29
EP3716204A4 (en) 2020-12-30
TWI683260B (zh) 2020-01-21
EP3716204A1 (en) 2020-09-30
TW201941100A (zh) 2019-10-16
CN108550080A (zh) 2018-09-18
US11300522B2 (en) 2022-04-12

Similar Documents

Publication Publication Date Title
WO2019174306A1 (zh) Article damage assessment method and apparatus
US11270504B2 (en) Estimating a condition of a physical structure
US20190095877A1 (en) Image recognition system for rental vehicle damage detection and management
EP1732314B1 (en) Infrared camera with humidity sensor
US20180067209A1 (en) Method and apparatus for processing spectral images
US20060029272A1 (en) Stereo image processing device
CN105844240A (zh) Face detection method and apparatus in an infrared temperature measurement system
JP5403147B2 (ja) Biometric authentication apparatus and biometric authentication method
JP2007271563A5 (zh)
JP6807268B2 (ja) Image recognition engine cooperation apparatus and program
US10491880B2 (en) Method for identifying objects, in particular three-dimensional objects
EP3194883B1 (en) Method and relevant device for measuring distance with auto-calibration and temperature compensation
CN103363927A (zh) Apparatus and method for detecting multi-optical-axis consistency at arbitrary axis spacing for platform optoelectronic equipment
CN105717513A (zh) Low-cost laser ranging apparatus and method based on an ordinary camera chip
CN111462254A (zh) Multispectral monitoring method and system
KR20150142475A (ko) Obstacle identification apparatus and method
Su et al. Dual-light inspection method for automatic pavement surveys
Uchiyama et al. Photogrammetric system using visible light communication
CN111382639A (zh) Live face detection method and apparatus
KR101719127B1 (ko) Obstacle identification apparatus and method
CN106605235A (zh) Method and system for determining whether a target object is present during a scanning operation
CN113124326A (zh) Oil and gas pipeline leak inspection method and system
WO2022213288A1 (zh) Depth image processing method and apparatus, and storage medium
KR102711209B1 (ko) Pixel-level shooting distance correction method and apparatus for photogrammetry-based thermal image registration
KR20240131600A (ko) Object size estimation system and method using sensors

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18909705

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018909705

Country of ref document: EP

Effective date: 20200624

NENP Non-entry into the national phase

Ref country code: DE