TW201947451A - Interactive processing method, apparatus and processing device for vehicle loss assessment and client terminal - Google Patents
- Publication number
- TW201947451A (application TW108105279A)
- Authority
- TW
- Taiwan
- Prior art keywords
- shooting
- damage
- vehicle
- information
- window
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/08—Insurance
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Finance (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Multimedia (AREA)
- Economics (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- Technology Law (AREA)
- General Business, Economics & Management (AREA)
- User Interface Of Digital Computer (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
The embodiments of this specification relate to the technical field of computer-terminal insurance business data processing, and in particular to an interactive processing method, apparatus, processing device, and client terminal for vehicle loss assessment.
Motor vehicle insurance, or auto insurance for short, is a type of commercial insurance that covers liability for personal injury or property loss caused to a motor vehicle by natural disasters or accidents. With economic development, the number of motor vehicles keeps growing, and auto insurance has become one of the largest lines of China's property insurance business.
When an insured vehicle is involved in a traffic accident, the insurance company usually first conducts an on-site survey and takes photographs to obtain loss-assessment images before assessing the loss. Assessing a vehicle's damage affects subsequent repair, evaluation, and other technical and financial matters, and is a critical step in the entire auto insurance service. Driven by technical progress and the business demand for rapid loss assessment and claims settlement, when a vehicle is involved in an accident, its owner can photograph the damage on site with a mobile phone or another terminal device, and the captured images are uploaded to the insurance company to determine the vehicle damage, decide on a repair plan, and evaluate the loss. Because loss-assessment images must meet certain shooting requirements, and most owners lack sufficient auto insurance knowledge or photography skills, the images captured on site often include large numbers of non-compliant photos and video files. When invalid loss-assessment images are collected, the user has to reshoot, and may even have missed the opportunity to shoot at all, which hurts both the efficiency of loss assessment and the user's loss-assessment service experience.
Therefore, the industry urgently needs a more convenient and faster vehicle loss-assessment solution.
The embodiments of this specification aim to provide an interactive processing method, apparatus, processing device, and client terminal for vehicle loss assessment that use AR (Augmented Reality) to guide the user interactively during shooting, so that users can assess vehicle damage on their own quickly and conveniently, improving both the processing efficiency of vehicle loss assessment and the user's interactive loss-assessment experience.
The interactive processing method, apparatus, processing device, and client terminal for vehicle loss assessment provided by the embodiments of this specification are implemented as follows:
An interactive processing method for vehicle loss assessment, the method comprising:
obtaining feature data of a vehicle through a shooting window;
constructing an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real-world spatial position of the vehicle in the shooting window;
performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined from image information obtained through the shooting window; and
displaying result information of the damage recognition in the shooting window.
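The four method steps above can be read as a client-side loop over camera frames. The following is a minimal, self-contained sketch of that flow; every function name and data shape here is illustrative only, since the claims do not prescribe an API:

```python
# Illustrative sketch of the four-step flow: obtain feature data, build the
# AR space model, run guidance over incoming frames, collect result info.

def extract_vehicle_features(frame):
    # Step 1: pull feature data (e.g. recognized parts) out of a frame.
    return {"parts": frame.get("parts", []), "model": frame.get("model")}

def build_ar_space_model(features):
    # Step 2: the AR "space model" is represented here as a part -> 3D point map.
    return {part: (i, 0.0, 0.0) for i, part in enumerate(features["parts"])}

def damage_recognition_guidance(ar_model, frame):
    # Step 3: compare the frame against the model; emit a result or a prompt.
    damage = frame.get("suspected_damage")
    if damage and damage in ar_model:
        return {"damage": damage}
    return {"prompt": "move closer to the vehicle"}

def assess_damage(frames):
    # Step 4: collect result information to show in the shooting window.
    features = extract_vehicle_features(frames[0])
    ar_model = build_ar_space_model(features)
    results = []
    for frame in frames:
        out = damage_recognition_guidance(ar_model, frame)
        if "damage" in out:
            results.append(out["damage"])
    return results
```

A real implementation would replace the dictionary frames with camera buffers and the guidance stub with the recognition pipeline described later in the specification.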
An interactive processing apparatus for vehicle loss assessment, the apparatus comprising:
a feature acquisition module, configured to obtain feature data of a vehicle through a shooting window;
an AR processing module, configured to construct an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real-world spatial position of the vehicle in the shooting window;
a shooting guidance module, configured to perform damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined from image information obtained through the shooting window; and
a result display module, configured to display result information of the damage recognition in the shooting window.
An interactive processing device for vehicle loss assessment includes a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
obtaining feature data of a vehicle through a shooting window;
constructing an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real-world spatial position of the vehicle in the shooting window;
performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined from image information obtained through the shooting window; and
displaying result information of the damage recognition in the shooting window.
A client terminal includes a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
obtaining feature data of a vehicle through a shooting window;
constructing an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real-world spatial position of the vehicle in the shooting window;
performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined from image information obtained through the shooting window; and
displaying result information of the damage recognition in the shooting window.
An electronic device includes a display screen, a processor, and a memory storing processor-executable instructions, wherein the processor, when executing the instructions, implements the method steps described in any one of the embodiments of this specification.
The interactive processing method, apparatus, processing device, and client terminal for vehicle loss assessment provided by the embodiments of this specification can use AR to interact with the user in real time for damage recognition in the terminal's video shooting window, guide the user to capture images that meet the specifications, and immediately feed the damage recognition results back into the shooting window. With the solution of these embodiments, the user opens the loss-assessment application on the terminal and starts the AR-enabled shooting window to frame the vehicle; the user is then guided and given feedback based on information such as the actual position and angle of the vehicle, and can complete damage recognition simply by shooting as guided, without other complicated photo or video operations, so that loss assessment and claims settlement can be completed quickly. With the embodiments provided in this specification, the user needs neither professional loss-assessment photography skills nor complicated shooting steps, the cost of loss assessment is lower, and AR-guided shooting can further improve the service experience of the loss-assessment service.
In order to enable a person of ordinary skill in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. Based on one or more embodiments in this specification, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the embodiments of this specification.
An implementation provided by this specification can be applied in a client/server system architecture. The client may be a terminal device with a shooting function used by personnel at the vehicle damage scene (the owner of the accident vehicle, insurance company staff, or anyone else handling the loss assessment), such as a smartphone, a tablet, a smart wearable device, or a dedicated loss-assessment terminal. The client may have a communication module for connecting to a remote server and exchanging data with it. The server may be a server on the insurance company side or on the loss-assessment service provider side; other implementation scenarios may also involve servers of other parties, for example terminals of parts suppliers or vehicle repair shops that have communication links to the loss-assessment service provider's server. The server may be a single computer device, a server cluster composed of multiple servers, or a server of a distributed system. The client side can send images captured on site to the server in real time, while the server side performs damage recognition, repair plan formulation, repair cost calculation, and so on; for example, after the loss-assessment server has identified the damaged parts and the degree of damage, it can confirm the repair cost with the repair shop's server and the claim amount with the insurance company's server, and then feed both back to the client. In implementations where processing is done on the server side, damage recognition and similar tasks are executed by the server, whose processing speed is usually higher than the client's, which reduces the processing load on the client and speeds up damage recognition. Of course, this specification does not exclude other embodiments in which all or part of the above processing is implemented on the client side, such as real-time detection and recognition of damage on the client.
The implementation of this specification is described below using a specific mobile phone client application scenario as an example. Specifically, FIG. 1 is a schematic flowchart of an embodiment of the interactive processing method for vehicle loss assessment provided in this specification. Although this specification provides method operation steps or apparatus structures as shown in the following embodiments or drawings, the method or apparatus may, through routine work or without creative effort, include more operation steps or module units, or fewer after partial merging. For steps or structures without a logically necessary causal relationship, the execution order of the steps or the module structure of the apparatus is not limited to the execution order or module structure shown in the embodiments or drawings of this specification. When the method or module structure is applied in an actual apparatus, server, or terminal product, it can be executed sequentially or in parallel according to the method or module structure shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded environment, or even in a distributed processing or server cluster environment). Of course, the description of the following embodiments does not limit other technical solutions that can be extended from this specification, for example to other implementation scenarios. In a specific embodiment, as shown in FIG. 1, the interactive processing method for vehicle loss assessment provided in this specification may include:
S0: obtain feature data of the vehicle through a shooting window.
In this embodiment, the client on the user side may be a smartphone with a shooting function. At the scene of a vehicle accident, the user can open a mobile phone application implementing the embodiments of this specification and frame the accident scene. After the application is opened, a shooting window can be displayed on the client's screen, the vehicle is captured through the shooting window, and the vehicle's feature data is obtained. The shooting window may be a video shooting window, and image information obtained by the client's integrated camera can be displayed in it. The specific interface layout of the shooting window and the related information displayed can be customized.
The feature data can be set specifically according to data processing needs such as vehicle recognition, environment recognition, and image recognition. In general, the feature data may include data on the recognized components of the vehicle, from which 3D coordinate information can be constructed and an augmented reality space model of the vehicle established (an AR space model, a way of representing data, here the outline of the subject). Of course, the feature data may also include other information such as the vehicle's brand, model, color, outline, and unique identifier.
S2: construct an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real-world spatial position of the vehicle in the shooting window.
Augmented reality (AR) generally refers to a class of techniques that compute the position and angle of a camera image in real time and add corresponding images, videos, or 3D models, so that the virtual world can be overlaid on the real world on the screen and interacted with. In the embodiments of this specification, the augmented reality space model constructed from the feature data may be the vehicle's outline, built from multiple pieces of feature data such as the recognized vehicle model, the shooting angle, and the positions of the tires, roof, front face, headlights, tail lights, and front and rear windows. The outline may be a data model built on 3D coordinates and carries the corresponding 3D coordinate information; the constructed outline can then be displayed in the shooting window. Of course, this specification does not exclude other embodiments in which the augmented reality space model takes other forms or adds further model information on top of the outline.
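The alignment between the constructed outline and the real vehicle can be thought of as comparing projected model keypoints against keypoints detected in the frame. The sketch below uses a simple centroid-offset test as a stand-in; the function names and this particular test are assumptions for illustration, not the patent's actual matching algorithm:

```python
# Illustrative sketch: measure how far the overlaid outline is from the
# vehicle detected in the viewfinder, using 2D keypoint centroids.

def centroid(points):
    # Average of a list of (x, y) points.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def alignment_offset(model_pts, detected_pts):
    """2D offset the overlay must move by to sit on the real vehicle."""
    mx, my = centroid(model_pts)
    dx, dy = centroid(detected_pts)
    return (dx - mx, dy - my)

def is_aligned(model_pts, detected_pts, tol=5.0):
    # Aligned when the overlay sits within `tol` pixels of the detection.
    ox, oy = alignment_offset(model_pts, detected_pts)
    return abs(ox) <= tol and abs(oy) <= tol
```

The offset could drive the guidance prompts described below (e.g. "pan right" when the offset points right); a production system would instead estimate full camera pose from 2D–3D correspondences.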
The AR model can be matched to the real vehicle's position during shooting, for example by matching the constructed 3D outline to the outline of the real vehicle. In the specific matching process, the framing direction can be guided: following the guidance, the user moves the shooting direction or angle to align the constructed outline with the outline of the real vehicle being photographed. FIG. 2 is a schematic diagram of an application scenario of AR model matching in vehicle loss-assessment interaction provided in this specification.
By combining augmented reality technology, the embodiments of this specification show not only the real information of the vehicle as actually captured by the user's client, but also, at the same time, the constructed augmented reality space model of the vehicle; the two kinds of information complement and overlay each other, providing a better loss-assessment service experience.
S4: perform damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined from image information obtained through the shooting window.
A shooting window combined with the AR space model can show the vehicle scene more intuitively and can effectively guide the user in locating and photographing the damaged areas. The client can perform damage recognition guidance in the AR scene, which may specifically include displaying shooting guidance information determined from image information obtained through the shooting window. The client can acquire image information in the AR scene in the shooting window, analyze it, and decide from the results what shooting guidance information to display. For example, if the vehicle in the current shooting window is far away, the window can prompt the user to move closer; if the shooting position is too far to the left and the rear of the vehicle cannot be captured, guidance can be displayed prompting the user to pan the shooting angle to the right. Which data the damage recognition guidance processes and which guidance is shown under which conditions can be governed by preset policies or rules, which are not described one by one in this embodiment.
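The "move closer" and "pan right" examples above suggest a small rule set evaluated per frame. A minimal sketch of such preset rules follows; the thresholds and the (left, right) bounding-box shape are assumptions, since the specification leaves the concrete policies to configuration:

```python
# Rule-based sketch of choosing shooting-guidance prompts from the analyzed
# frame. The 0.4 width ratio and the edge tests are illustrative defaults.

def shooting_guidance(vehicle_box, frame_width, min_width_ratio=0.4):
    """vehicle_box = (left, right) extent of the vehicle in pixels."""
    left, right = vehicle_box
    prompts = []
    if (right - left) / frame_width < min_width_ratio:
        prompts.append("move closer to the vehicle")   # vehicle too small
    if right > frame_width:
        prompts.append("pan the camera to the right")  # cut off at right edge
    if left < 0:
        prompts.append("pan the camera to the left")   # cut off at left edge
    return prompts
```

Prompts recompute on every frame, so a satisfied rule simply stops producing its prompt, matching the real-time adjustment described in the next embodiment.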
In a specific damage-recognition-guidance embodiment of the method provided in this specification, the damage recognition guidance may include:
S40: identifying whether a suspected damage exists in the captured image;
S42: if so, performing a matching computation between the coordinate information of the vehicle in the shooting window and the image-capture requirements of the suspected damage, and determining shooting guidance information from the result; and
S44: displaying the shooting guidance information in the shooting window.
In this embodiment, if the image recognition algorithm finds a suspected damage on the vehicle at the scene, the coordinates of the suspected damage in the vehicle's actual spatial position can be computed and compared against the image-capture requirements of that suspected damage to determine what the user needs to do; the shooting guidance information to display is determined from the result of this matching computation. For example, if a scratch is detected on the rear fender, and scratches need to be photographed head-on and along the direction of the scratch, but the coordinate information shows that the user is currently shooting at a 45-degree angle and far from the scratch, the user can be prompted to move closer to the scratch and to shoot it head-on and along its direction. The shooting guidance can be adjusted in real time according to the current framing; for example, once the user is close enough to the scratch to meet the shooting requirements, the prompt to move closer need no longer be displayed. The suspected damage may be recognized on the client side or on the server side.
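The matching computation of S42 can be sketched as comparing the user's current pose, derived from the AR coordinates, with the capture requirements of the suspected damage. The angle and distance limits below mirror the scratch example above but are illustrative assumptions, not values from the specification:

```python
# Hypothetical S42 matching computation: derive prompts from the gap between
# the current viewing pose and the damage's image-capture requirements.

def guidance_for_damage(view_angle_deg, distance_m,
                        max_angle_deg=15.0, max_distance_m=1.0):
    """view_angle_deg: deviation from a head-on shot; distance_m: camera
    distance to the suspected damage. Returns the prompts to display."""
    prompts = []
    if distance_m > max_distance_m:
        prompts.append("move closer to the scratch")
    if abs(view_angle_deg) > max_angle_deg:
        prompts.append("shoot the scratch head-on")
    return prompts
```

Run each frame, this reproduces the behavior described above: shooting at 45 degrees from afar yields both prompts, and once the pose meets the requirements the prompts disappear.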
The shooting guidance information to display during shooting and the shooting conditions can be configured according to the loss-assessment interaction design or the loss-assessment processing needs. In one embodiment provided in this specification, the shooting guidance information may include at least one of the following:
adjusting the shooting direction;
adjusting the shooting angle;
adjusting the shooting distance;
adjusting the shooting lighting;
the suspected position of the suspected damage.
A suspected damage may be a possible damage identified preliminarily, or a damage not yet confirmed by computation with a designated damage recognition system or algorithm; accordingly, the area where a suspected damage is located may be called its suspected position.
An example of shooting guidance is shown in FIG. 3. Real-time shooting guidance lets the user carry out loss assessment more conveniently and efficiently: shooting as guided requires neither professional photography skills nor tedious operations, giving a better user experience. The embodiment above describes shooting guidance displayed as text; in extended embodiments, the shooting guidance information may also be presented as images, speech, animation, vibration, and so on, for example using arrows or voice prompts to direct the current shot at a particular area.
S6: display result information of the damage recognition in the shooting window.
With loss-assessment shooting carried out interactively under damage recognition guidance, the captured images can be processed further by the client or the server, for example to detect whether damage exists, recognize the damage type and the damaged parts, calculate repair costs, and verify the assessed loss. All of these can be regarded as result information of damage recognition in the AR interaction scenario; one or more such results can be displayed in the client's shooting window for the user to view immediately. In a specific embodiment, the result information of the damage recognition may include at least one of a damage position, a damaged part, a repair plan, and a repair cost determined from the image information obtained under the damage recognition guidance.
An example is shown in FIG. 4: the results of damage recognition can be shown to the user in the video interface of the loss-assessment shooting, and multiple results can be shown at once. For example, when damage is recognized on both the bumper and the left rear fender, and both are within the current shooting window, the result information for both can be displayed simultaneously at their corresponding positions.
FIG. 5 is a schematic diagram of an implementation scenario of another embodiment of the method provided in this specification. As shown in FIG. 5, if the result information of a target damage currently being recognized in the shooting window has not yet been determined, the processing progress of that target damage can be displayed; showing this progress in real time can further improve the user's interactive loss-assessment experience. Therefore, in another embodiment of the method, before displaying the damage recognition result information of a target damage, the method may further include:
S8: display the processing progress of the target damage.
FIG. 6 is a schematic flowchart of another embodiment of the method provided in this specification. In some embodiments, the interface window showing the processing progress may be the same interface window, or a window at the same position, as the one showing the result information; of course, different interface windows may also be used.
In another implementation, the interface window showing the result information or the processing progress can adapt its size to the content displayed, and can move and track accordingly with the current shooting angle or position. Therefore, as shown in FIG. 7, in another embodiment of the method provided in this specification, the method may further include:
S10: at least one of the interface windows showing the guidance prompt information, the result information, and the processing progress may track changes in the image in the shooting window accordingly.
The tracking changes may include the aforementioned position tracking and window resizing, or changes in color, outline, and so on. For example, when the user moves and changes the shooting angle, if damaged part A remains in the shooting window, the result information for part A can stay displayed in the shooting window, following the user's shot.
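The tracking behavior just described can be sketched as re-anchoring each result window to its part's position on every frame, and hiding windows whose parts have left the viewfinder. The data shapes and the fixed 20-pixel offset are illustrative assumptions:

```python
# Sketch of per-frame tracking: keep each result window anchored near its
# damaged part while that part remains in the shooting window.

def track_result_windows(part_positions, windows):
    """part_positions: {part: (x, y)} for parts visible this frame;
    windows: {part: result_text}. Returns this frame's window placements."""
    placements = {}
    for part, text in windows.items():
        if part in part_positions:      # part still in the shooting window
            x, y = part_positions[part]
            placements[part] = {"pos": (x, y - 20), "text": text}
    return placements
```

Calling this once per frame with fresh part positions yields the follow-the-part behavior: part A's window moves with A, and windows for off-screen parts are simply not placed.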
It should be noted that "real time" as described in the above embodiments can mean sending, receiving, or displaying a piece of data immediately after it is obtained or determined; a person of ordinary skill in the art will understand that sending, receiving, or displaying after caching or an expected computation or waiting time can still fall within this definition of real time. The images described in the embodiments of this specification may include video, and video can be regarded as a continuous set of images.
In addition, the images captured in the embodiments of this specification can be stored on the local client or uploaded to a remote server in real time. Applying tamper-proofing to locally stored data, or uploading it for server-side storage, can effectively prevent loss-assessment data from being tampered with, and prevent insurance fraud that misappropriates images from incidents other than the current one. The embodiments of this specification can therefore also improve the data security of loss-assessment processing and the reliability of the loss-assessment results.
In the above embodiments, the client or server side can use a pre-built or just-built damage recognition algorithm to recognize the images captured by the client. The damage recognition algorithm may be one built by training various models, such as the deep neural network Faster R-CNN: a deep neural network can be trained on a large number of images with damage regions annotated in advance, so that, for pictures of the vehicle from various orientations and under various lighting conditions, it outputs the extent of the damage regions.
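The Faster R-CNN-style detector itself is not reproduced here; the sketch below only shows the kind of post-processing that turns raw detections into damage regions to display, with detections given as plain records. The record shape and the 0.5 score threshold are assumptions for illustration:

```python
# Hypothetical post-processing of detector output (e.g. from a Faster R-CNN
# trained on annotated damage regions): keep confident detections, best first.

def damage_regions(detections, score_threshold=0.5):
    """detections: list of {"box": (x1, y1, x2, y2), "score": float,
    "label": str}. Returns the confident regions sorted by score."""
    kept = [d for d in detections if d["score"] >= score_threshold]
    return sorted(kept, key=lambda d: d["score"], reverse=True)
```

The surviving regions are what the client would overlay in the shooting window as result information, each anchored at its box's position.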
The above embodiments describe an implementation in which the user performs interactive loss assessment on a mobile phone client. It should be noted that the methods described in the embodiments of this specification can be carried out on various processing devices, and in implementation scenarios that include both a client and a server.
The embodiments of the above method in this specification are described progressively; for identical or similar parts between embodiments, cross-reference suffices, and each embodiment focuses on its differences from the others. For related details, refer to the descriptions of the method embodiments.
The method embodiments provided by the embodiments of the present invention can be executed on a mobile terminal, a PC, a dedicated loss-assessment terminal, a server, or a similar computing device. Taking execution on a mobile terminal as an example, FIG. 8 is a block diagram of the hardware structure of a client applying the interactive processing method or apparatus for vehicle loss assessment according to an embodiment of the present invention. As shown in FIG. 8, the client 10 may include one or more processors 102 (only one is shown; processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. A person of ordinary skill in the art will understand that the structure shown in FIG. 8 is only illustrative and does not limit the structure of the above electronic device. For example, the client 10 may include more or fewer components than shown in FIG. 8, for example additional processing hardware such as a GPU (Graphics Processing Unit), or may have a configuration different from that shown in FIG. 8.
The memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method in the embodiments of this specification; by running the software programs and modules stored in the memory 104, the processor 102 executes various function applications and data processing, that is, implements the interactive processing method described above. The memory 104 may include high-speed random-access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which can be connected to the client 10 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 106 is used to receive or send data via a network. Specific examples of the network may include a wireless network provided by the communication carrier of the client 10. In one example, the transmission module 106 includes a network interface controller (NIC), which can connect to other network devices through a base station so as to communicate with the Internet. In another example, the transmission module 106 may be a radio frequency (RF) module used to communicate with the Internet wirelessly.
Based on the method described above, this specification further provides an interactive processing apparatus for vehicle loss assessment. The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients, and the like that use the methods described in the embodiments of this specification, combined with the necessary implementing hardware. Based on the same innovative concept, the processing apparatus in one embodiment provided by this specification is described in the following embodiment. Since the way the apparatus solves the problem is similar to that of the method, for the specific implementation of the processing apparatus in the embodiments of this specification, reference can be made to the implementation of the foregoing method, and repeated parts are not described again. Although the apparatus described in the following embodiments is preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated. Specifically, FIG. 9 is a schematic diagram of the module structure of an embodiment of an interactive processing apparatus for vehicle loss assessment provided in this specification, which may specifically include:
a feature acquisition module 201, which can be used to obtain feature data of a vehicle through a shooting window;
an AR processing module 202, which can be used to construct an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real-world spatial position of the vehicle in the shooting window;
a shooting guidance module 203, which can be used to perform damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined from image information obtained through the shooting window; and
a result display module 204, which can be used to display result information of the damage recognition in the shooting window.
It should be noted that, in light of the descriptions of the related method embodiments, the apparatus described above may also include other implementations, such as a module that displays the processing progress. For specific implementations, refer to the descriptions of the method embodiments, which are not repeated here one by one.
The method provided by the embodiments of this specification can be implemented on a computer by a processor executing the corresponding program instructions, for example implemented on the PC or server side in C++/Java on a Windows or Linux operating system, implemented with the necessary hardware in an application design language corresponding to another system such as Android or iOS, or implemented with processing logic based on a quantum computer. Specifically, in an embodiment in which an interactive processing device for vehicle loss assessment provided by this specification implements the above method, the processing device may include a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
obtaining feature data of a vehicle through a shooting window;
constructing an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real-world spatial position of the vehicle in the shooting window;
performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined from image information obtained through the shooting window; and
displaying result information of the damage recognition in the shooting window.
Based on the foregoing method embodiments, in another embodiment of the processing device, when performing the damage recognition guidance, the processor implements:
identifying whether a suspected damage exists in the captured image;
if so, performing a matching computation between the coordinate information of the vehicle in the shooting window and the image-capture requirements of the suspected damage, and determining shooting guidance information from the result; and
displaying the shooting guidance information in the shooting window.
Based on the foregoing method embodiments, in another embodiment of the processing device, the shooting guidance information includes at least one of the following:
adjusting the shooting direction;
adjusting the shooting angle;
adjusting the shooting distance;
the suspected position of the suspected damage.
Based on the foregoing method embodiments, in another embodiment of the processing device, the result information of the damage recognition includes at least one of a damage position, a damaged part, a repair plan, and a repair cost determined from the image information obtained under the damage recognition guidance.
Based on the foregoing method embodiments, in another embodiment of the processing device, before displaying the damage recognition result information of a target damage, the processor further executes:
displaying the processing progress of the target damage.
Based on the foregoing method embodiments, in another embodiment of the processing device, the processor further executes:
causing at least one of the interface windows showing the guidance prompt information, the result information, and the processing progress to track changes in the image in the shooting window accordingly.
It should be noted that, in light of the descriptions of the related method embodiments, the processing device described above may also include other extensible implementations. For specific implementations, refer to the descriptions of the method embodiments, which are not repeated here one by one.
The above instructions can be stored in a variety of computer-readable storage media. A computer-readable storage medium may include a physical device for storing information, in which the information is digitized and then stored by electrical, magnetic, optical, or similar means. The computer-readable storage media described in this embodiment may include: devices that store information electrically, such as various kinds of memory, e.g. RAM and ROM; devices that store information magnetically, such as hard disks, floppy disks, magnetic tape, magnetic-core memory, magnetic-bubble memory, and USB flash drives; and devices that store information optically, such as CDs or DVDs. Of course, there are other kinds of readable storage media, such as quantum memory and graphene memory. The instructions in the apparatus, server, client, or system described in the embodiments of this specification are as described above.
The above method or apparatus embodiments can be used on a client on the user side, such as a smartphone. Therefore, this specification provides a client terminal including a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements the method steps described in the foregoing embodiments.
Based on the foregoing, the embodiments of this specification further provide an electronic device including a display screen, a processor, and a memory storing processor-executable instructions. FIG. 10 is a schematic structural diagram of an embodiment of an electronic device provided in this specification; when the processor executes the instructions, the method steps described in any one of the embodiments of this specification can be implemented.
The embodiments of the apparatus, client terminal, electronic device, and so on in this specification are described progressively; for identical or similar parts between embodiments, cross-reference suffices, and each embodiment focuses on its differences from the others. In particular, the hardware-plus-program embodiments are described relatively simply because they are basically similar to the method embodiments; for related details, refer to the descriptions of the method embodiments.
Specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the specific order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible, or may be advantageous.
Although the present invention provides method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included through routine or non-creative work. The order of steps listed in the embodiments is only one of many possible execution orders and does not represent the only one. When executed by an actual apparatus or client product, the steps can be executed sequentially or in parallel according to the methods shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded environment).
Although the content of the embodiments of this specification mentions operations and data descriptions such as AR technology, display of shooting guidance information, shooting guidance interacting with the user, preliminary recognition of damage positions using deep neural networks, and related data acquisition, positioning, interaction, computation, and decision-making, the embodiments of this specification are not limited to situations that conform to industry communication standards, standard image data processing protocols, communication protocols, and standard data models/templates, or to the situations described in the embodiments of this specification. Implementations slightly modified on the basis of certain industry standards, or of implementations described in a self-defined manner or in the embodiments, can also achieve implementation effects identical, equivalent, or similar to those of the above embodiments, or predictable after modification. Embodiments obtained by applying such modified or varied means of data acquisition, storage, decision, and processing can still fall within the scope of the optional implementations of this specification.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (for example, to circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with a hardware entity module. For example, a programmable logic device (PLD), such as a field-programmable gate array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. Designers program a digital system onto a single PLD by themselves, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must be written in a particular programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. A person of ordinary skill in the art will also understand that a hardware circuit implementing a logical method flow can easily be obtained merely by lightly programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller can be implemented in any suitable manner. For example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller can also be implemented as part of the memory's control logic. A person of ordinary skill in the art also knows that, besides implementing the controller purely as computer-readable program code, the method steps can perfectly well be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices it contains for implementing various functions can also be regarded as structures within the hardware component; or, indeed, the devices for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, apparatuses, modules, or units set forth in the above embodiments can be specifically implemented by a computer chip or entity, or by a product with certain functions. A typical implementation device is a computer. Specifically, the computer can be, for example, a personal computer, a laptop, an in-vehicle human-machine interaction device, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet, a wearable device, or a combination of any of these devices.
Although the embodiments of this specification provide method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included through routine or non-creative means. The order of steps listed in the embodiments is only one of many possible execution orders and does not represent the only one. When executed by an actual apparatus or terminal product, the steps can be executed sequentially or in parallel according to the methods shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded environment, or even in a distributed data processing environment). The terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, product, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, product, or device. Without further limitation, the presence of additional identical or equivalent elements in a process, method, product, or device that includes the stated elements is not excluded.
For convenience of description, the above apparatus is described with its functions divided into various modules. Of course, when implementing the embodiments of this specification, the functions of the modules can be implemented in one or more pieces of software and/or hardware, modules implementing the same function can be implemented by a combination of multiple sub-modules or sub-units, and so on. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components can be combined or integrated into another system, or some features can be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed can be indirect couplings or communication connections through some interfaces, apparatuses, or units, and can be electrical, mechanical, or in other forms.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that every flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operation steps is executed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-permanent storage in computer-readable media, random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
A person of ordinary skill in the art should understand that the embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, the embodiments of this specification can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of this specification can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The embodiments of this specification can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. The embodiments of this specification can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media, including storage devices.
Each embodiment in this specification is described progressively; for identical or similar parts between embodiments, cross-reference suffices, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are basically similar to the method embodiments; for related details, refer to the descriptions of the method embodiments. In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of this specification. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described can be combined in a suitable manner in any one or more embodiments or examples. In addition, where there is no mutual contradiction, a person of ordinary skill in the art can combine different embodiments or examples described in this specification, and the features of different embodiments or examples.
The above are merely embodiments of this specification and are not intended to limit the embodiments of this specification. For a person of ordinary skill in the art, various modifications and variations of the embodiments of this specification are possible. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the embodiments of this specification shall be included within the scope of the claims of the embodiments of this specification.
An embodiment provided in this specification can be applied to a client / server system architecture. The client may include a terminal device with a photographing function, such as a smart phone, a tablet computer, Smart wearable devices, dedicated fixed loss terminals, etc. The client may have a communication module that can communicate with a remote server to achieve data transmission with the server. The server may include a server on the insurance company side or a server on the fixed loss service side. In other implementation scenarios, the server on the other service side may also be included, for example, there is communication with the server on the fixed loss service side. Terminals of linked accessory suppliers, terminals of vehicle repair shops, etc. The server may include a single computer device, or a server cluster composed of multiple servers, or a server of a distributed system. The client side can send the image data collected on the scene to the server in real time, and the server side will identify the damage, formulate the maintenance plan, and calculate the maintenance cost. For example, the fixed damage server identifies the damaged part and the After the degree of damage, you can confirm the maintenance costs with the server of the repair shop and the claim amount with the server of the insurance company. The fixed loss server feeds back the compensation amount given by the insurance company and the repair cost information of the repair shop to the client. In the implementation of the processing on the server side, processing such as damage identification is performed by the server side, and the processing speed is usually higher than that on the client side, which can reduce the processing pressure on the client side and improve the speed of damage recognition. 
Of course, this description does not exclude that all or part of the processing described above is implemented by the client side in other embodiments, such as the client side performing instant detection and identification of damage.
The following describes a specific implementation scenario of a mobile phone client as an example. Specifically, FIG. 1 is a schematic flowchart of an embodiment of an interactive processing method for vehicle fixed loss provided in this specification. Although the present specification provides method operation steps or device structures as shown in the following embodiments or drawings, based on conventional or no creative labor, the method or device may include more or partially merged fewer operation steps. Or module unit. Among the steps or structures that do not logically have the necessary causal relationship, the execution order of these steps or the module structure of the device is not limited to the execution order or module structure shown in the embodiments or the drawings of this specification. When the described method or module structure is applied to an actual device, server, or end product, the method or module structure shown in the embodiment or the diagram may be executed sequentially or in parallel (for example, a parallel processor or Multi-threaded processing environment, even decentralized processing, server cluster implementation environment). Of course, the description of the following embodiments does not limit other technical solutions that can be extended based on this specification. For example in other implementation scenarios. A specific embodiment is shown in FIG. 1. In an embodiment of a method for interactive processing of vehicle fixed damage provided in this specification, the method may include:
S0: Obtain the characteristic data of the vehicle through the shooting window.
The client on the user side in this embodiment may be a smart phone, and the smart phone may have a shooting function. The user can open a mobile phone application that implements the embodiment of the present specification to take a framing shot of the vehicle accident scene at the vehicle accident scene. After the client opens the application, the shooting window can be displayed on the client display screen, and the vehicle can be captured through the shooting window to obtain the vehicle's characteristic data. The shooting window may be a video shooting window, and image information obtained through a client-integrated shooting device may be displayed in the shooting window. The specific interface structure of the shooting window and related information displayed can be customized.
The characteristic data may be specifically set according to data processing requirements such as vehicle identification, environment identification, and image identification. Generally, the characteristic data may include data information of each component of the identified vehicle, 3D coordinate information may be constructed, and an augmented reality space model of the vehicle may be established (AR space model, a data characterization method, and a contour image of the subject). ). Of course, the characteristic data may also include other information such as the brand, model, color, outline, and unique identification code of the vehicle.
S2: Construct an augmented reality space model of the vehicle according to the characteristic data, the augmented reality space model is displayed in the shooting window, and is matched with the real space position of the vehicle in the shooting window.
The augmented reality AR generally refers to a technical implementation solution that calculates the position and angle of the camera image in real time and adds corresponding images, videos, and 3D models. This solution can put the virtual world on the screen (Overlay to) the real world and can interact. The enhanced information space model constructed by using the feature data in the embodiments of the present specification may be vehicle outline information, which may be specifically based on the obtained vehicle model, shooting angle, and vehicle tire position, ceiling position, front face position, front large A plurality of characteristic data such as a lamp position, a tail lamp position, and a front and rear window position construct a contour of the vehicle. The contour may include a data model based on 3D coordinates, and the contour carries corresponding 3D coordinate information. The constructed outline can then be displayed in the shooting window. Of course, this specification does not exclude that the augmented reality space model described in other embodiments may also include other model forms or other model information added on the contour.
The AR model may be matched with the real vehicle position during shooting, for example by matching the constructed 3D contour with the contour position of the real vehicle. During the matching process, the framing direction can be guided: the user is guided to move the shooting direction or angle so as to align the constructed contour with the contour of the real vehicle being photographed. As shown in FIG. 2, FIG. 2 is a schematic diagram of an application scenario of AR model matching in a vehicle loss assessment interaction provided in this specification.
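One way to drive such alignment guidance, sketched here under simplifying assumptions, is to project both the constructed contour and the detected real vehicle to screen-space bounding boxes and compare their overlap; the 0.8 threshold and the prompt strings are illustrative choices, not values from this specification.

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) screen boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def alignment_guidance(model_box, vehicle_box, threshold=0.8):
    """Return a framing prompt until the projected AR contour box
    sufficiently overlaps the real vehicle's box (assumed threshold)."""
    if box_iou(model_box, vehicle_box) >= threshold:
        return "aligned"
    # Guide toward the horizontal offset between the two box centers.
    dx = (vehicle_box[0] + vehicle_box[2]) / 2 - (model_box[0] + model_box[2]) / 2
    return "pan right" if dx > 0 else "pan left"
```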
By combining augmented reality technology, the embodiments of this specification can display not only the real information of the vehicle actually photographed by the user's client, but also the constructed augmented reality space model information of the vehicle at the same time, thereby providing a better loss assessment service experience.
S4: Perform damage recognition guidance in the shooting window based on the augmented reality space model, where the damage recognition guidance includes displaying shooting guidance information determined based on image information obtained from the shooting window.
The shooting window combined with the AR space model can show the vehicle scene more intuitively, which makes it possible to effectively determine the damage location on the vehicle and guide the shooting. The client may perform damage recognition guidance in the AR scenario, and the guidance may specifically include shooting guidance information determined based on image information obtained from the shooting window. The client can obtain image information of the AR scene in the shooting window, analyze the obtained image information, and determine from the analysis result what shooting guidance information needs to be displayed in the shooting window. For example, if the vehicle in the current shooting window is far away, the user can be prompted in the shooting window to move closer; if the shooting position is too far to the left and the rear of the vehicle cannot be captured, shooting guidance information can be displayed to prompt the user to pan the shooting angle to the right. The specific content of the damage recognition guidance, and the conditions under which particular shooting guidance information is displayed, can be set in advance by corresponding policies or rules, which will not be described one by one in this embodiment.
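The "policies or rules" mentioned above can be sketched as a simple mapping from framing measurements to prompts. This is an illustrative assumption about one possible rule set; the function name, threshold, and prompt texts are hypothetical.

```python
def shooting_guidance(vehicle_area_ratio, offscreen_right, offscreen_left):
    """Map simple framing measurements to shooting guidance prompts.

    vehicle_area_ratio: fraction of the frame occupied by the vehicle;
    offscreen_right / offscreen_left: whether part of the vehicle is
    cut off at that edge. The 0.2 threshold is an assumed value.
    """
    prompts = []
    if vehicle_area_ratio < 0.2:
        prompts.append("move closer to the vehicle")
    if offscreen_right:
        prompts.append("pan the shooting angle to the right")
    if offscreen_left:
        prompts.append("pan the shooting angle to the left")
    return prompts or ["framing OK"]
```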
In a specific damage recognition and guidance embodiment of the method provided in this specification, the damage recognition and guidance may include:
S40: identify whether there is suspected damage in the captured image;
S42: If yes, perform matching calculation according to the vehicle coordinate information in the shooting window and the image capturing requirements of the suspected damage, and determine shooting guidance information according to the calculation result;
S44: Display the shooting guide information in the shooting window.
In this embodiment, if the image recognition algorithm finds that the vehicle has suspected damage, the coordinate information of the suspected damage at the actual space position of the vehicle may be calculated and then compared with the image capture requirements of the suspected damage, so as to determine what operation the user needs to perform. The shooting guidance information to be displayed is determined according to the result of this matching calculation. For example, suppose a scratch is detected on the rear fender of the vehicle, and the scratch needs to be shot head-on and along the direction of the scratch, while according to the coordinate information it is calculated that the user is shooting at an oblique 45 degrees and is far away from the scratch. At this time, the user may be prompted to move closer to the location of the scratch and to shoot head-on along its direction. The shooting guidance information can be adjusted in real time according to the current framing. For example, once the user is already close to the scratch position and meets the shooting requirements, the guidance prompting the user to approach the scratch position may no longer be displayed. The suspected damage can be identified by the client or by the server.
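The matching calculation in the example above can be sketched as comparing the current camera pose against the damage's capture requirements (distance and viewing angle). This is a hedged simplification: the surface normal of the damage is assumed to point along +x, and the distance and angle limits are illustrative values.

```python
import math

def capture_prompts(camera_pos, damage_pos, max_distance=0.6, max_angle_deg=15):
    """Compare the current shooting pose against a suspected damage's
    capture requirements (shoot head-on, within a given distance).

    camera_pos / damage_pos: (x, y, z) in the reconstructed vehicle
    space. For simplicity the required viewing direction is the
    damage's surface normal, assumed here to point along +x.
    """
    dx, dy, dz = (d - c for c, d in zip(camera_pos, damage_pos))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Angle between the viewing ray and the assumed +x surface normal.
    angle = math.degrees(math.acos(abs(dx) / dist)) if dist else 0.0
    prompts = []
    if dist > max_distance:
        prompts.append("move closer to the suspected damage")
    if angle > max_angle_deg:
        prompts.append("shoot the damage head-on")
    return prompts
```

Re-running this per frame naturally yields the real-time behavior described above: once the user is close enough and head-on, the returned prompt list becomes empty and the guidance disappears.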
The shooting guidance information to be displayed during shooting, and the conditions for displaying it, can be configured according to the loss assessment interaction design or processing requirements. In an embodiment provided in this specification, the shooting guidance information may include at least one of the following:
Adjust the shooting direction;
Adjust the shooting angle;
Adjust the shooting distance;
Adjust shooting light;
The suspected location of the suspected damage.
The suspected damage may include a preliminarily identified possible damage, or a damage that has not yet been confirmed by a designated damage recognition system or algorithm. Correspondingly, the location area of the suspected damage may be called a suspected location.
An example of shooting guidance is shown in FIG. 3. With the real-time shooting guidance information, users can complete loss assessment more conveniently and efficiently: they can shoot according to the guidance without needing professional shooting skills or tedious shooting operations, so the user experience is better. The above embodiment describes shooting guidance information displayed as text. In expanded embodiments, the shooting guidance information may also be presented as images, voice, animation, vibration, and the like, for example using arrows or voice prompts to guide the user to aim the current shooting screen at a certain area.
S6: Display the result information of the damage recognition in the shooting window.
Loss assessment shooting is carried out in an interactive way guided by damage recognition. The image data obtained by shooting can be further processed by the client or server, such as detecting whether there is damage, identifying the damage type and the damaged component, calculating maintenance costs, and performing loss assessment and loss verification. The results of the above processing can be attributed to the damage recognition result information in the AR-based interactive scenario. One or more items of the damage recognition result information can be displayed in the shooting window of the client for the user to view immediately. In a specific embodiment, the result information of the damage recognition may include at least one of a damage position, a damaged component, a maintenance scheme, and a maintenance cost determined based on the image information obtained through the damage recognition guidance.
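The result items listed above can be rendered as a small overlay next to the damage in the shooting window. The record layout below is a hypothetical sketch (the field names `component`, `damage_type`, and `repair_cost` are assumptions for illustration), showing only how per-damage results become displayable text lines.

```python
def format_result_overlay(result):
    """Render one damage recognition result as overlay text lines
    shown next to the damage position in the shooting window."""
    lines = [f"Component: {result['component']}",
             f"Damage: {result['damage_type']}"]
    # Cost may not be available until loss verification finishes.
    if "repair_cost" in result:
        lines.append(f"Estimated cost: {result['repair_cost']:.2f}")
    return lines
```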
An example is shown in FIG. 4. The damage recognition result information can be displayed to the user in the video interface of the loss assessment shooting, and multiple damage recognition results can be displayed at the same time. For example, when the bumper and the left rear fender are both damaged and both are in the current shooting window, the damage recognition result information of both can be displayed at their corresponding positions at the same time.
FIG. 5 is a schematic diagram of an implementation scenario of another embodiment of the method provided in this specification. As shown in FIG. 5, if the result information of the target damage being identified and processed in the current shooting window has not yet been determined, the processing progress of the target damage can be displayed. Displaying the processing progress of the target damage in real time can further improve the user's loss assessment interaction experience. Therefore, in another embodiment of the method, before displaying the damage recognition result information of the target damage, the method may further include:
S8: Show the processing progress of the target damage.
FIG. 6 is a schematic flowchart of another embodiment of the method provided in the present specification. In some embodiments, the interface window displaying the processing progress may be the same interface window, or an interface window at the same position, as the interface window displaying the result information. Of course, different interface windows may also be used.
In another embodiment, the interface window displaying the result information or processing progress may be adaptively adjusted in size according to the displayed information content, and the window position may be moved and tracked accordingly according to the current shooting angle or shooting position. Therefore, as shown in FIG. 7, in another embodiment of the method provided in the present specification, the method may further include:
S10: At least one interface window displaying the guidance prompt information, the result information, or the processing progress performs corresponding tracking changes based on image changes in the shooting window.
The tracking changes may include the aforementioned position tracking, window size adjustment, or changes in color and outline. For example, when the user moves and changes the shooting angle, if the damaged part A is always present in the shooting window, the result information of the damaged part A may always be displayed in the shooting window as the user shoots.
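The position-tracking behavior can be sketched as re-anchoring the result window each frame to the damage's projected screen position, clamped so it stays inside the frame. Pixel sizes and the clamping policy are illustrative assumptions.

```python
def track_window(anchor_screen_pos, frame_size, base_size=(160, 90)):
    """Re-anchor a result window each frame to the damage's projected
    screen position, clamping so the window stays fully in the frame.

    All positions/sizes are in pixels; base_size is an assumed default.
    Returns (x, y, w, h) of the tracked window.
    """
    w, h = base_size
    fw, fh = frame_size
    x = min(max(anchor_screen_pos[0], 0), fw - w)
    y = min(max(anchor_screen_pos[1], 0), fh - h)
    return (x, y, w, h)
```

Calling this per frame with the updated projection of part A's 3D anchor makes the window follow the damage as the shooting angle changes, which is the tracking described above.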
It should be noted that the term "real-time" in the above embodiments may include sending, receiving, or displaying certain data information immediately after it is acquired or determined. Those skilled in the art can understand that sending, receiving, or displaying after caching, or after an expected calculation or waiting time, may still fall within the definition of real-time. The images described in the embodiments of the present specification may include video, and a video may be regarded as a continuous set of images.
In addition, the images obtained by shooting in the embodiments of the present specification may be stored on the local client or uploaded to a remote server in real time. Once the data is stored on the local client in a tamper-resistant manner, or uploaded to the server for storage, it can effectively prevent the image data from being tampered with, or prevent insurance fraud that substitutes images other than those of the accident vehicle. Therefore, the embodiments of the present specification can also improve the data security of loss assessment processing and the reliability of loss assessment results.
In the foregoing embodiments, the client or the server may use a damage recognition algorithm constructed in advance or in real time to identify the images captured by the client. The damage recognition algorithm may include an algorithm constructed by training models, such as a deep neural network based on Faster R-CNN, which can be trained in advance on a large number of annotated pictures of damage areas so that it can give the extent of a damage area in pictures taken under various orientations and lighting conditions.
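A detector such as Faster R-CNN emits candidate boxes with confidence scores, which are then filtered before being used as suspected damage regions. As a framework-free sketch of that post-processing step (score thresholding plus greedy non-maximum suppression), with assumed threshold values:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def select_damage_regions(detections, score_thresh=0.5, iou_thresh=0.5):
    """Greedy non-maximum suppression over (box, score) detections,
    as typically applied to the raw outputs of a detector like
    Faster R-CNN before they become suspected damage regions."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if score < score_thresh:
            continue  # drop low-confidence candidates
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))  # keep non-overlapping detections
    return kept
```

The network itself and its training are out of scope here; the sketch only illustrates how overlapping and low-confidence candidates are reduced to the distinct damage regions that drive the guidance.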
The above embodiments describe the implementation in which the user performs loss assessment interaction processing on a mobile phone client. It should be noted that the method described in the embodiments of this specification can be used on a variety of processing devices and in implementation scenarios including a client and a server.
Each embodiment of the above method in this specification is described in a progressive manner, and the same or similar parts between the various embodiments may refer to each other. Each embodiment focuses on the differences from other embodiments. For related points, refer to the description of the method embodiments.
The method embodiments provided in the embodiments of the present invention may be executed in a mobile terminal, a PC terminal, a dedicated loss assessment terminal, a server, or a similar computing device. Taking running on a mobile terminal as an example, FIG. 8 is a block diagram of the hardware structure of a client to which an interactive process for vehicle loss assessment according to the method or device embodiments of the present invention is applied. As shown in FIG. 8, the client 10 may include one or more processors 102 (only one is shown in the figure; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. Those skilled in the art can understand that the structure shown in FIG. 8 is only for illustration and does not limit the structure of the electronic device. For example, the client 10 may include more or fewer components than those shown in FIG. 8, may further include other processing hardware such as a GPU (Graphics Processing Unit), or may have a configuration different from that shown in FIG. 8.
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method in the embodiments of the present specification. By running the software programs and modules stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the interactive processing method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory disposed remotely relative to the processor 102, and such remote memory may be connected to the client 10 through a network. Examples of the above network include, but are not limited to, the Internet, an enterprise intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission module 106 is used to receive or send data through a network. Specific examples of the above network may include a wireless network provided by a communication provider of the client 10. In one example, the transmission module 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission module 106 may be a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
Based on the interactive processing method for vehicle loss assessment described above, this specification also provides an interactive processing device for vehicle loss assessment. The device may include a system (including a distributed system), software (an application), a module, a component, a server, a client, or the like that uses the method described in the embodiments of the present specification, combined with the necessary implementation hardware. Based on the same innovative concept, the processing device in an embodiment provided in this specification is as described in the following embodiments. Since the solution by which the device solves the problem is similar to that of the method, for the implementation of the specific processing device in the embodiments of this specification, reference may be made to the implementation of the foregoing method, and duplicated details are not described again. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and conceived. Specifically, as shown in FIG. 9, FIG. 9 is a schematic diagram of the module structure of an embodiment of an interactive processing device for vehicle loss assessment provided in this specification, which may specifically include:
Feature acquisition module 201, which can be used to acquire the characteristic data of the vehicle through the shooting window;
The AR processing module 202, which may be configured to construct an augmented reality space model of the vehicle according to the characteristic data, where the augmented reality space model is displayed in the shooting window and matched with the real space position of the vehicle in the shooting window;
The shooting guidance module 203, which may be used to perform damage recognition guidance in the shooting window based on the augmented reality space model, where the damage recognition guidance includes displaying shooting guidance information determined based on image information obtained from the shooting window;
The result display module 204 may be used to display the result information of the damage recognition in the shooting window.
It should be noted that, according to the description of the related method embodiments, the device described in the foregoing embodiment may further include other implementations, such as a module for displaying the processing progress. For specific implementation manners, reference may be made to the description of the method embodiments, which is not repeated here.
The method provided in the embodiments of this specification can be implemented by a processor executing corresponding program instructions in a computer, for example implemented on a PC or server in C++/Java on a Windows/Linux operating system, implemented with the application design languages of other systems such as Android or iOS together with the necessary hardware, or implemented with processing logic based on a quantum computer. Specifically, in an embodiment of an interactive processing device for vehicle loss assessment provided in this specification that implements the foregoing method, the processing device may include a processor and a memory for storing processor-executable instructions, and when the processor executes the instructions, the following is implemented:
Obtain the characteristic data of the vehicle through the shooting window;
Constructing an augmented reality space model of the vehicle according to the characteristic data, the augmented reality space model being displayed in the shooting window, and achieving matching with the real space position of the vehicle in the shooting window;
Performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined based on image information obtained from the shooting window;
The result information of the damage recognition is displayed in the shooting window.
Based on the foregoing method embodiment description, in another embodiment of the processing device, when the processor executes the damage recognition guidance, the following is implemented:
Identify if there is suspected damage in the captured image;
If there is, matching calculation is performed according to the vehicle coordinate information in the shooting window and the image capturing requirements of the suspected damage, and shooting guidance information is determined according to the calculation result;
Displaying the shooting guide information in the shooting window.
Based on the foregoing method embodiment description, in another embodiment of the processing device, the shooting guide information includes at least one of the following:
Adjust the shooting direction;
Adjust the shooting angle;
Adjust the shooting distance;
The suspected location of the suspected damage.
Based on the description of the foregoing method embodiment, in another embodiment of the processing device, the damage recognition result information includes at least one of a damage position, a damaged component, a maintenance scheme, and a maintenance cost determined based on the image information obtained through the damage recognition guidance.
Based on the description of the foregoing method embodiment, in another embodiment of the processing device, before displaying the damage identification result information of the target damage, the processor further executes:
The processing progress of the target damage is displayed.
Based on the foregoing method embodiment description, in another embodiment of the processing device, the processor further executes:
At least one interface window displaying the guidance prompt information, the result information, or the processing progress performs corresponding tracking changes based on image changes in the shooting window.
It should be noted that, according to the description of the related method embodiments, the processing device described in the foregoing embodiment may further include other expandable implementations. For specific implementation manners, reference may be made to the description of the method embodiments, which is not repeated here.
The above instructions can be stored in a variety of computer-readable storage media. The computer-readable storage medium may include a physical device for storing information, in which the information is digitized and then stored by electric, magnetic, or optical means. The computer-readable storage medium described in this embodiment may include: devices that store information using electric energy, such as various types of memory, e.g., RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, bubble memories, and flash drives; and devices that store information by optical means, such as CDs or DVDs. Of course, there are other types of readable storage media, such as quantum memory and graphene memory. The instructions in the device, server, client, or system described in the embodiments of this specification are as described above.
The above method or device embodiments can be used for a client on the user side, such as a smart phone. Therefore, this specification provides a client, which includes a processor and a memory for storing processor-executable instructions. When the processor executes the instructions, it implements: obtaining the characteristic data of the vehicle through the shooting window; constructing an augmented reality space model of the vehicle according to the characteristic data, the augmented reality space model being displayed in the shooting window and matched with the real space position of the vehicle in the shooting window; performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined based on image information obtained from the shooting window; and displaying the result information of the damage recognition in the shooting window.
Based on the foregoing, an embodiment of the present specification further provides an electronic device including a display screen, a processor, and a memory storing instructions executable by the processor. FIG. 10 is a schematic structural diagram of an embodiment of the electronic device provided in this specification. When the processor executes the instructions, the method steps described in any one of the embodiments of the specification can be implemented.
The embodiments of the device, client, and electronic device in this specification are described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the hardware-plus-program embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for relevant parts reference may be made to the description of the method embodiments.
The specific embodiments of the present specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve the desired result. In addition, the processes depicted in the figures do not necessarily require the particular order shown, or a sequential order, to achieve the desired result. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Although the present invention provides the method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-creative labor. The sequence of steps listed in the embodiments is only one of many execution orders and does not represent the only order of execution. When executed by an actual device or client product, the steps may be executed sequentially or in parallel according to the method shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded processing environment).
Although the content of the embodiments of this specification mentions operations and data descriptions such as AR technology, displaying shooting guidance information, shooting guidance that interacts with the user, using a deep neural network to initially identify the location of damage, data acquisition, position arrangement, interaction, calculation, and judgment, the embodiments of the present specification are not limited to situations that must conform to industry communication standards, standard image data processing protocols, communication protocols, standard data models/templates, or the situations described in the embodiments of the present specification. Implementations slightly modified on the basis of certain industry standards, or of the implementations described in custom methods or embodiments, can also achieve the same, equivalent, similar, or predictable effects of the above embodiments. Embodiments obtained by applying such modified or varied data acquisition, storage, judgment, and processing methods may still fall within the scope of optional implementations of this specification.
In the 1990s, an improvement of a technology could be clearly distinguished as either an improvement in hardware (for example, an improvement of a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement of a method flow). However, with the development of technology, the improvement of many method flows today can be regarded as a direct improvement of a hardware circuit structure. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. Designers program by themselves to "integrate" a digital system on a PLD, without needing to ask a chip manufacturer to design and produce a dedicated integrated circuit chip. Moreover, today, instead of making integrated circuit chips by hand, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compilation must also be written in a specific programming language, called a hardware description language (HDL). There is not just one kind of HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog.
Those of ordinary skill in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained simply by logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the control logic of a memory. Those of ordinary skill in the art also know that, in addition to implementing the controller purely as computer-readable code, the method steps can be logically programmed so that the controller achieves the same function in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be considered a hardware component, and the devices included in it for implementing various functions can also be considered structures within the hardware component. Or even, the devices for implementing various functions can be regarded as both software modules implementing the method and structures within the hardware component.
The system, device, module, or unit described in the foregoing embodiments may be specifically implemented by a computer chip or entity, or by a product having a certain function. A typical implementation is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although the embodiments of the present specification provide the method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-creative means. The sequence of steps listed in the embodiments is only one of many execution orders and does not represent the only order of execution. When executed by an actual device or terminal product, the steps may be executed sequentially or in parallel according to the method shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded environment, or even a distributed data processing environment). The terms "comprising," "including," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, product, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, product, or device. Without further limitation, the presence of other identical or equivalent elements in the process, method, product, or device including the listed elements is not excluded.
For the convenience of description, the above device is described with the functions divided into various modules. Of course, when implementing the embodiments of this specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or a module implementing a given function may be implemented by a combination of multiple sub-modules or sub-units. The device embodiments described above are only schematic. For example, the division into units is only a division by logical function, and in actual implementation there may be other ways of division; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection displayed or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
Memory may include non-persistent storage in computer-readable media, random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Those with ordinary knowledge in the technical field should understand that the embodiments of the present specification may be provided as a method, a system, or a computer program product. Therefore, the embodiments of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The embodiments of this specification can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types. The embodiments of the present specification can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in both local and remote computer storage media, including storage devices.
Each embodiment in this specification is described in a progressive manner; the same or similar parts of the various embodiments can be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is basically similar to the method embodiment, its description is relatively simple, and for relevant parts reference may be made to the description of the method embodiment. In the description of this specification, descriptions referring to the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples" mean that the specific features, structures, materials, or characteristics described in conjunction with the embodiment or example are included in at least one embodiment or example of the present specification. In this specification, the schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, where there is no mutual contradiction, those having ordinary knowledge in the technical field may combine different embodiments or examples described in this specification and the features of different embodiments or examples.
The above descriptions are merely examples of the embodiments of the present specification, and are not intended to limit the embodiments of the present specification. For those having ordinary knowledge in the technical field, various modifications and changes can be made to the embodiments of the present specification. Any modification, equivalent replacement, and improvement made within the spirit and principle of the embodiments of the present specification shall be included in the scope of the claims of the embodiments of the present specification.
10‧‧‧Client
102‧‧‧Processor
104‧‧‧Memory
106‧‧‧Transmission module
201‧‧‧Feature acquisition module
202‧‧‧AR processing module
203‧‧‧Shooting guide module
204‧‧‧Result display module
In order to explain the embodiments of the present specification or the technical solutions in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments recorded in this specification, and those with ordinary knowledge in the technical field can obtain other drawings from these drawings without creative labor.
FIG. 1 is a schematic flowchart of a process according to an embodiment of the method described in this specification;
FIG. 2 is a schematic diagram of an application scenario of AR model matching in a vehicle damage assessment interaction provided in this specification;
FIG. 3 is a schematic diagram of an implementation scenario of another embodiment of the method provided in this specification;
FIG. 4 is a schematic diagram of an implementation scenario of another embodiment of the method provided in this specification;
FIG. 5 is a schematic diagram of an implementation scenario of another embodiment of the method provided in this specification;
FIG. 6 is a schematic flowchart of another embodiment of the method provided in this specification;
FIG. 7 is a schematic flowchart of another embodiment of the method provided in this specification;
FIG. 8 is a block diagram of the hardware structure of a client to which a method or device embodiment of the present invention for interactive processing of vehicle damage assessment is applied;
FIG. 9 is a schematic diagram of the module structure of an embodiment of an interactive processing device for vehicle damage assessment provided in this specification;
FIG. 10 is a schematic structural diagram of an embodiment of an electronic device provided in this specification.
Claims (15)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810434232.6 | 2018-05-08 | ||
CN201810434232.6A CN108665373B (en) | 2018-05-08 | 2018-05-08 | Interactive processing method and device for vehicle loss assessment, processing equipment and client |
Publications (2)
Publication Number | Publication Date |
---|---|
TW201947451A true TW201947451A (en) | 2019-12-16 |
TWI713995B TWI713995B (en) | 2020-12-21 |
Family
ID=63778161
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW108105279A TWI713995B (en) | 2018-05-08 | 2019-02-18 | Interactive processing method, device, equipment, client device and electronic equipment for vehicle damage assessment |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN108665373B (en) |
TW (1) | TWI713995B (en) |
WO (1) | WO2019214313A1 (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108665373B (en) * | 2018-05-08 | 2020-09-18 | 阿里巴巴集团控股有限公司 | Interactive processing method and device for vehicle loss assessment, processing equipment and client |
CN108632530B (en) * | 2018-05-08 | 2021-02-23 | 创新先进技术有限公司 | Data processing method, device and equipment for vehicle damage assessment, client and electronic equipment |
US11080327B2 (en) * | 2019-04-18 | 2021-08-03 | Markus Garcia | Method for the physical, in particular optical, detection of at least one usage object |
CN110245552B (en) * | 2019-04-29 | 2023-07-18 | 创新先进技术有限公司 | Interactive processing method, device, equipment and client for vehicle damage image shooting |
CN110263615A (en) * | 2019-04-29 | 2019-09-20 | 阿里巴巴集团控股有限公司 | Interaction processing method, device, equipment and client in vehicle shooting |
CN112435209A (en) * | 2019-08-08 | 2021-03-02 | 武汉东湖大数据交易中心股份有限公司 | Image big data acquisition and processing system |
CN110598562B (en) * | 2019-08-15 | 2023-03-07 | 创新先进技术有限公司 | Vehicle image acquisition guiding method and device |
CN110659568B (en) * | 2019-08-15 | 2023-06-23 | 创新先进技术有限公司 | Vehicle inspection method and device |
CN111368752B (en) * | 2020-03-06 | 2023-06-02 | 德联易控科技(北京)有限公司 | Vehicle damage analysis method and device |
CN111368777B (en) * | 2020-03-13 | 2023-10-13 | 深圳市元征科技股份有限公司 | Vehicle characteristic acquisition method, server and client |
CN113543016B (en) * | 2020-04-22 | 2024-03-05 | 斑马智行网络(香港)有限公司 | Article returning method and device |
CN111553268A (en) * | 2020-04-27 | 2020-08-18 | 深圳壹账通智能科技有限公司 | Vehicle part identification method and device, computer equipment and storage medium |
TWI818181B (en) * | 2020-06-23 | 2023-10-11 | 新局數位科技有限公司 | Car damage assessment system and implementation method thereof |
CN112085223A (en) * | 2020-08-04 | 2020-12-15 | 深圳市新辉煌智能科技有限责任公司 | Guidance system and method for mechanical maintenance |
CN112434368A (en) * | 2020-10-20 | 2021-03-02 | 联保(北京)科技有限公司 | Image acquisition method, device and storage medium |
DE102020127797B4 (en) | 2020-10-22 | 2024-03-14 | Markus Garcia | Sensor method for optically detecting objects of use to detect a safety distance between objects |
CN113890990A (en) * | 2021-09-02 | 2022-01-04 | 北京城市网邻信息技术有限公司 | Prompting method and device in information acquisition process, electronic equipment and readable medium |
CN113873145A (en) * | 2021-09-02 | 2021-12-31 | 北京城市网邻信息技术有限公司 | Vehicle source information acquisition method and device, electronic equipment and readable medium |
EP4343714A1 (en) * | 2022-09-20 | 2024-03-27 | MotionsCloud GmbH | System and method for automated image analysis for damage analysis |
CN115631002B (en) * | 2022-12-08 | 2023-11-17 | 邦邦汽车销售服务(北京)有限公司 | Computer vision-based intelligent damage assessment method and system for vehicle insurance |
CN117455466B (en) * | 2023-12-22 | 2024-03-08 | 南京三百云信息科技有限公司 | Method and system for remote evaluation of automobile |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BRPI0817039A2 (en) * | 2007-08-24 | 2015-07-21 | Stratech Systems Ltd | Runway surveillance system and method |
EP3207542A1 (en) * | 2014-10-15 | 2017-08-23 | Seiko Epson Corporation | Head-mounted display device, method of controlling head-mounted display device, and computer program |
DE102015003341A1 (en) * | 2015-03-14 | 2016-09-15 | Hella Kgaa Hueck & Co. | Method and device for determining the spatial position of damage to a glass body |
CN105182535B (en) * | 2015-09-28 | 2018-04-20 | 大连楼兰科技股份有限公司 | The method that automobile maintenance is carried out using intelligent glasses |
US10222301B2 (en) * | 2016-05-04 | 2019-03-05 | Embraer S.A. | Structural health monitoring system with the identification of the damage through a device based in augmented reality technology |
US9886771B1 (en) * | 2016-05-20 | 2018-02-06 | Ccc Information Services Inc. | Heat map of vehicle damage |
CN106231551A (en) * | 2016-07-29 | 2016-12-14 | 深圳市永兴元科技有限公司 | Vehicle insurance based on mobile communications network Claims Resolution method and device |
CN106296118A (en) * | 2016-08-03 | 2017-01-04 | 深圳市永兴元科技有限公司 | Car damage identification method based on image recognition and device |
CN106600421A (en) * | 2016-11-21 | 2017-04-26 | 中国平安财产保险股份有限公司 | Intelligent car insurance loss assessment method and system based on image recognition |
CN106504248B (en) * | 2016-12-06 | 2021-02-26 | 成都通甲优博科技有限责任公司 | Vehicle damage judging method based on computer vision |
CN111914692B (en) * | 2017-04-28 | 2023-07-14 | 创新先进技术有限公司 | Method and device for acquiring damage assessment image of vehicle |
CN111797689B (en) * | 2017-04-28 | 2024-04-16 | 创新先进技术有限公司 | Vehicle loss assessment image acquisition method and device, server and client |
CN108665373B (en) * | 2018-05-08 | 2020-09-18 | 阿里巴巴集团控股有限公司 | Interactive processing method and device for vehicle loss assessment, processing equipment and client |
2018
- 2018-05-08: CN application CN201810434232.6A, patent CN108665373B, active
2019
- 2019-02-18: TW application TW108105279A, patent TWI713995B, active
- 2019-02-19: WO application PCT/CN2019/075471, publication WO2019214313A1, active (application filing)
Also Published As
Publication number | Publication date |
---|---|
WO2019214313A1 (en) | 2019-11-14 |
CN108665373A (en) | 2018-10-16 |
TWI713995B (en) | 2020-12-21 |
CN108665373B (en) | 2020-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TW201947451A (en) | Interactive processing method, apparatus and processing device for vehicle loss assessment and client terminal | |
TW201947452A (en) | Data processing method, device and processing equipment for vehicle loss assessment and client | |
KR102677044B1 (en) | Image processing methods, apparatus and devices, and storage media | |
TWI704527B (en) | Processing method, processing device, processing equipment, client device and processing system for vehicle damage recognition | |
US10282913B2 (en) | Markerless augmented reality (AR) system | |
US10535160B2 (en) | Markerless augmented reality (AR) system | |
KR102051889B1 (en) | Method and system for implementing 3d augmented reality based on 2d data in smart glass | |
TWI715932B (en) | Vehicle damage identification processing method and its processing device, data processing equipment for vehicle damage assessment, damage assessment processing system, client and server | |
US11276238B2 (en) | Method, apparatus and electronic device for generating a three-dimensional effect based on a face | |
JP6323202B2 (en) | System, method and program for acquiring video | |
US9361731B2 (en) | Method and apparatus for displaying video on 3D map | |
TW202004638A (en) | Bill photographing interaction method and apparatus, processing device, and client | |
US10325414B2 (en) | Application of edge effects to 3D virtual objects | |
US20160091976A1 (en) | Dynamic hand-gesture-based region of interest localization | |
US20170168709A1 (en) | Object selection based on region of interest fusion | |
CN109584377B (en) | Method and device for presenting augmented reality content | |
WO2020211573A1 (en) | Method and device for processing image | |
CN114600162A (en) | Scene lock mode for capturing camera images | |
CA2634933C (en) | Group tracking in motion capture | |
US20190096073A1 (en) | Histogram and entropy-based texture detection | |
US11410398B2 (en) | Augmenting live images of a scene for occlusion | |
Jain et al. | [POSTER] AirGestAR: Leveraging Deep Learning for Complex Hand Gestural Interaction with Frugal AR Devices | |
CN113780045A (en) | Method and apparatus for training distance prediction model | |
Álvarez et al. | Towards a Diminished Reality System that Preserves Structures and Works in Real-time. | |
US20240153291A1 (en) | Method, apparatus and system for auto-labeling |