WO2019214313A1 - Interactive processing method, apparatus and processing device for vehicle damage assessment, and client terminal - Google Patents


Info

Publication number
WO2019214313A1
WO2019214313A1 (PCT/CN2019/075471; CN2019075471W)
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
damage
vehicle
window
information
Prior art date
Application number
PCT/CN2019/075471
Other languages
English (en)
Chinese (zh)
Inventor
郭之友
周凡
张侃
Original Assignee
阿里巴巴集团控股有限公司
Priority date
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Publication of WO2019214313A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08 Insurance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Definitions

  • The embodiments of the present specification relate to the technical field of data processing for computer-terminal insurance services, and in particular to an interactive processing method, apparatus, processing device and client for vehicle damage assessment.
  • Motor vehicle insurance, that is, automobile insurance (or car insurance), refers to a type of commercial insurance that covers liability for personal injury or property damage caused by natural disasters or accidents involving motor vehicles. With the development of the economy, the number of motor vehicles keeps increasing; at present, auto insurance has become one of the largest lines in China's property insurance business.
  • When a traffic accident involving an insured vehicle occurs, the insurance company usually first conducts an on-site inspection and takes photographs to obtain loss-assessment images before the loss is determined.
  • Vehicle damage assessment involves many technical and financial aspects, such as follow-up maintenance and valuation, and is an important step in the entire auto insurance service.
  • The owner of the accident vehicle can use a mobile phone or other terminal device to photograph the vehicle damage on the spot, and the images taken by the user are uploaded to the insurance company to determine the vehicle damage and then to decide on a maintenance plan, evaluate the loss, and so on.
  • The embodiments of the present specification aim to provide an interactive processing method, apparatus, processing device and client for vehicle damage assessment that use AR (Augmented Reality) to guide shooting interactively, so that the user can assess vehicle damage by himself quickly and conveniently, improving the processing efficiency of damage assessment as well as the user's assessment and interaction experience.
  • The method, apparatus, processing device and client for interactive vehicle damage assessment are provided by the following embodiments:
  • An interactive processing method for vehicle damage assessment, comprising: acquiring feature data of the vehicle through a shooting window; constructing an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real spatial position of the vehicle in the shooting window; performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined based on image information acquired from the shooting window; and displaying the result information of the damage recognition in the shooting window.
  • An interactive processing apparatus for vehicle damage assessment, comprising:
  • a feature acquisition module configured to acquire feature data of the vehicle through a shooting window;
  • an AR processing module configured to construct an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real spatial position of the vehicle in the shooting window;
  • a shooting guide module configured to perform damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined based on image information acquired from the shooting window;
  • a result display module configured to display the result information of the damage recognition in the shooting window.
  • An interactive processing device for vehicle damage assessment, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements: acquiring feature data of the vehicle through a shooting window; constructing an augmented reality space model of the vehicle according to the feature data, the model being displayed in the shooting window and matched to the real spatial position of the vehicle; performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined based on image information acquired from the shooting window; and displaying the result information of the damage recognition in the shooting window.
  • A client, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements: acquiring feature data of the vehicle through a shooting window; constructing an augmented reality space model of the vehicle according to the feature data, the model being displayed in the shooting window and matched to the real spatial position of the vehicle; performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined based on image information acquired from the shooting window; and displaying the result information of the damage recognition in the shooting window.
  • An electronic device includes a display screen, a processor, and a memory storing processor-executable instructions that, when executed by the processor, implement the method steps of any one of the embodiments.
  • The interactive processing method, apparatus, processing device and client for vehicle damage assessment provided by the embodiments of the present specification can use AR to interact with the user in real time in the video shooting window of the terminal and guide the user to shoot according to the specification.
  • The captured image and the damage recognition result can be fed back in the shooting window of the terminal in real time.
  • The user opens the damage assessment application on the terminal and activates the AR-combined shooting window to frame the vehicle; guidance and feedback are given according to the actual position and angle of the vehicle, so that the user can complete damage recognition simply by shooting as instructed by the shooting guidance information, without complicated operations such as video recording, and then quickly complete the assessment and the claim.
  • The user needs neither loss-image processing skills nor complicated shooting steps, and the cost of damage processing is lower.
  • Guided shooting with AR can further improve the user's experience of the damage assessment service.
  • FIG. 2 is a schematic diagram of an application scenario of AR model matching in a vehicle damage assessment interaction provided by the present specification;
  • FIG. 3 is a schematic diagram of an implementation scenario of another embodiment of the method provided by the present specification.
  • FIG. 4 is a schematic diagram of an implementation scenario of another embodiment of the method provided by the present specification.
  • FIG. 5 is a schematic diagram of an implementation scenario of another embodiment of the method provided by the present specification.
  • FIG. 6 is a schematic flow chart of a method of another embodiment of the method provided by the present specification.
  • FIG. 7 is a schematic flow chart of a method of another embodiment of the method provided by the present specification.
  • FIG. 8 is a block diagram of the hardware structure of a client applying the interactive vehicle damage processing of the method or apparatus embodiments of the present specification;
  • FIG. 9 is a schematic structural diagram of the modules of an embodiment of an apparatus for interactive vehicle damage assessment according to the present specification;
  • FIG. 10 is a schematic structural diagram of an embodiment of an electronic device provided by the present specification.
  • The client may include a terminal device with a shooting function used by personnel at the vehicle damage scene (who may be the owner of the accident vehicle, insurance company personnel, or other personnel carrying out the damage assessment process), such as a smart phone, a tablet computer, a smart wearable device, or a dedicated damage assessment terminal.
  • the client may have a communication module, and may communicate with a remote server to implement data transmission with the server.
  • The server may include a server on the insurance company side or a server of the damage assessment service provider.
  • Other implementation scenarios may also include servers or terminals of other service parties that have communication links with the server of the damage assessment service provider, such as a terminal of a component supplier, a terminal of a vehicle repair shop, and so on.
  • the server may include a single computer device, or may include a server cluster composed of a plurality of servers, or a server of a distributed system.
  • The client side can send the image data collected by live shooting to the server in real time, and the server side performs damage identification, formulation of the maintenance plan, calculation of the maintenance cost, and so on. For example, after the damage assessment server identifies the damaged component and the degree of damage, it can confirm the maintenance cost with the repair shop server and the claim amount with the insurance company server, and then feed back to the client the claim amount given by the insurance company and the maintenance cost information of the repair shop.
  • When damage recognition and related processing are performed on the server side, the processing speed is usually higher than on the client side, which reduces the processing load of the client and improves the speed of damage recognition.
  • However, this specification does not exclude embodiments in which all or part of the above processing is implemented on the client side, such as real-time damage detection and identification on the client.
  • FIG. 1 is a schematic flowchart of an embodiment of an interactive processing method for vehicle damage assessment according to the present specification.
  • Although the present specification provides method operation steps or device structures as shown in the following embodiments or figures, the method or device may, based on conventional practice and without inventive labor, include more operation steps or module units, or fewer after partial merging.
  • the execution order of the steps or the module structure of the device is not limited to the execution order or the module structure shown in the embodiment or the drawings.
  • FIG. 1 is an embodiment of an interaction processing method for vehicle damage provided by the present specification, where the method may include:
  • the client on the user side may be a smart phone, and the smart phone may have a shooting function.
  • The user can open the mobile phone application implementing the embodiments of the present specification at the scene of the vehicle accident to frame and shoot the accident vehicle.
  • the shooting window can be displayed on the client display screen, and the vehicle is captured through the shooting window to obtain the vehicle characteristic data.
  • the shooting window may be a video shooting window, and image information acquired by the client-integrated camera device may be displayed in the shooting window.
  • the specific interface structure of the shooting window and the related information displayed can be customized.
  • the feature data can be specifically set according to data processing requirements such as vehicle identification, environment recognition, and image recognition.
  • The feature data may include data information of each recognized component of the vehicle, from which 3D coordinate information can be constructed to establish an augmented reality space model of the vehicle (the AR space model is a data representation, such as a contour figure of the vehicle body).
  • the feature data may also include other data information such as the brand, model, color, outline, unique identification code of the vehicle.
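  • As an illustration only, the feature data described above could be organized as in the following sketch; the field names are assumptions made for this example, not terms taken from the specification.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VehicleFeatures:
    """Hypothetical container for the vehicle feature data."""
    brand: str = ""
    model: str = ""
    color: str = ""
    vin: str = ""  # unique identification code, if recognized
    # 3D coordinates of recognized key points (tires, headlights,
    # taillights, window corners, ...) used later to build the
    # augmented reality contour model of the vehicle body.
    keypoints_3d: List[Tuple[float, float, float]] = field(default_factory=list)

features = VehicleFeatures(brand="ExampleBrand", color="white")
features.keypoints_3d.append((1.2, 0.4, 0.3))
print(features.brand, len(features.keypoints_3d))  # ExampleBrand 1
```

Such a container would be filled incrementally as recognition results arrive from the shooting window.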
  • S2: Constructing an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real spatial position of the vehicle in the shooting window.
  • Augmented reality (AR) generally refers to a technology that calculates the position and angle of the camera image in real time and adds corresponding images, videos, and 3D models, so that the virtual world can be overlaid on the real world on the screen and the two can interact.
  • The augmented reality space model constructed from the feature data in the embodiments of the present specification may be the contour information of the vehicle; specifically, an outline of the vehicle may be constructed based on the acquired vehicle model, the shooting angle, and a plurality of feature data such as the tire positions, the roof position, the front-face position, the headlight positions, the taillight positions, and the front and rear window positions.
  • the contour may include a data model established based on 3D coordinates with corresponding 3D coordinate information.
  • The constructed contour can then be displayed in the shooting window.
  • the present specification does not exclude that the augmented reality space model described in the other embodiments may also include other model forms or other model information added above the contours.
  • During shooting, the AR model can be matched to the actual vehicle position, for example by aligning the constructed 3D contour with the contour position of the real vehicle.
  • The user can be guided in the framing direction: by prompting the user to move the shooting direction or angle, the constructed contour is aligned with the contour of the captured real vehicle.
  • FIG. 2 is a schematic diagram of an application scenario of AR model matching in a vehicle damage assessment interaction provided by the present specification.
  • By combining augmented reality technology, the embodiments of the present specification display not only the real information of the vehicle actually photographed by the user's client, but also the augmented reality space model information of the vehicle constructed at the same time; the two kinds of information complement and superimpose each other and can provide a better damage assessment experience.
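  • One way the matching step could work, sketched here under the assumption of a pinhole camera model (the function names and the sample camera matrix are illustrative): project the constructed 3D contour points into the shooting window and measure how far they land from the contour observed in the live frame; guiding the user to move reduces this error until model and vehicle align.

```python
import numpy as np

def project_points(points_3d, intrinsics, rotation, translation):
    """Project 3D contour points into image coordinates (pinhole model)."""
    cam = (rotation @ points_3d.T).T + translation  # world -> camera frame
    uvw = (intrinsics @ cam.T).T                    # camera -> image plane
    return uvw[:, :2] / uvw[:, 2:3]                 # perspective divide

def alignment_error(projected, observed):
    """Mean pixel distance between the projected AR contour and the
    contour observed in the shooting window; a small value means the
    model matches the vehicle's real spatial position."""
    return float(np.mean(np.linalg.norm(projected - observed, axis=1)))

# Example camera: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 4.0], [1.0, 0.5, 4.0]])  # two contour points
uv = project_points(pts, K, np.eye(3), np.zeros(3))
print(alignment_error(uv, uv))  # perfectly aligned -> 0.0
```

In a real client the observed contour would come from per-frame vehicle detection, and the error (or its direction) would drive the framing guidance.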
  • S4: Performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined based on image information acquired from the shooting window.
  • The shooting window combined with the AR space model can display the vehicle scene more intuitively and can effectively guide shooting of the vehicle damage position.
  • The client may perform damage identification guidance in the AR scenario, and the damage identification guidance may include displaying shooting guidance information determined based on image information acquired from the shooting window.
  • The client can obtain image information from the AR scene in the shooting window, analyze the acquired image information, and determine from the analysis result what shooting guidance information needs to be displayed in the shooting window. For example, if the vehicle in the current shooting window is far away, the user can be prompted in the shooting window to move closer; if the shooting position is too far to the left and the tail of the vehicle cannot be captured, shooting guidance information can be displayed prompting the user to pan the shooting angle to the right.
  • Which data the damage identification guidance processes, and under what conditions the shooting guidance information is displayed, can be preset as corresponding policies or rules, which are not described one by one in this embodiment.
  • the damage identification guide may include:
  • The coordinate information of the suspected damage in the actual spatial position of the vehicle may be calculated and then compared against the image-capture requirements for the suspected damage, to determine what operation the user needs to perform.
  • The shooting guidance information to display is determined according to the result of this matching calculation. For example, if there is a scratch on the rear fender of the vehicle that needs to be photographed head-on and along the direction of the scratch, but according to the coordinate information the user is shooting at a 45-degree angle and is far from the scratch position, the user can be prompted to approach the scratch position and to shoot head-on and along the direction of the scratch.
  • The shooting guidance information can be adjusted in real time according to the current view. For example, once the user has approached the scratch position and meets the shooting requirements, the prompt to approach the scratch position may no longer be displayed.
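  • The guidance rules above can be sketched as a simple policy function; the thresholds and prompt strings below are assumptions for illustration, not values from the specification.

```python
def shooting_guidance(distance_m, yaw_deg, target_in_frame):
    """Return the prompts to display in the shooting window, or None
    once the current view already meets the shooting requirements
    (so an earlier prompt is withdrawn in real time)."""
    prompts = []
    if distance_m > 2.0:                 # suspected damage too far away
        prompts.append("Please move closer to the suspected damage")
    if abs(yaw_deg) > 30.0:              # e.g. user shooting at ~45 degrees
        side = "right" if yaw_deg > 0 else "left"
        prompts.append("Please pan the shooting angle to the " + side)
    if not target_in_frame:              # e.g. vehicle tail cut off
        prompts.append("Please aim the camera at the marked area")
    return prompts or None

print(shooting_guidance(3.5, 45.0, True))   # two prompts: approach + pan
print(shooting_guidance(1.0, 0.0, True))    # None: requirements satisfied
```

A production client would derive `distance_m` and `yaw_deg` from the AR model's pose relative to the camera rather than receive them directly.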
  • the suspected damage can be identified by the client or server side.
  • The shooting guidance information to display and the corresponding shooting conditions can be set according to the damage assessment interaction design or the damage processing requirements.
  • the shooting guidance information may include at least one of the following:
  • the suspected location of the suspected damage.
  • The suspected damage may include damage that has been preliminarily identified, or damage that has not yet been confirmed by a specified damage identification system or algorithm. Accordingly, the location of the suspected damage may be referred to as a suspected location.
  • An example of shooting guidance is shown in FIG. 3.
  • Through the real-time shooting guidance information, the user can complete the damage assessment more conveniently and efficiently.
  • The user can simply shoot according to the shooting guidance information, so no professional shooting skills or cumbersome shooting operations are required and the user experience is better.
  • the above embodiment describes the shooting guidance information displayed by the text.
  • The shooting guidance information may further include images, voice, animation, vibration, and the like, for example using an arrow or a voice prompt to direct the current shot toward a certain area.
  • After shooting is guided by the damage recognition guidance, the captured image data can be further processed by the client or the server, for example to detect whether damage exists, identify the damage type and the damaged component, calculate the maintenance cost, and handle the assessment and claim verification.
  • The foregoing processing results can be attributed to the result information of the damage identification based on the AR interaction scenario; the result information of one or more damage identifications can be displayed in the shooting window of the client for the user to view in real time.
  • the result information of the damage identification may include at least one of a damage location, a damage component, a maintenance plan, and a maintenance cost determined based on the image information acquired by the damage recognition guide.
  • The result information of the damage recognition can be displayed in the video interface of the damage assessment shooting, and multiple damage recognition results can be displayed at the same time. For example, when both the bumper and the left rear fender are damaged and both are in the current shooting window, the result information of the two damage identifications can be displayed simultaneously at the corresponding positions.
  • FIG. 5 is a schematic diagram of an implementation scenario of another embodiment of the method provided by the present specification. As shown in FIG. 5, if the result information of a target damage being identified in the current shooting window has not yet been determined, the processing progress of the target damage can be displayed. Displaying the progress of the target damage in real time can further improve the user's interaction experience during damage assessment. Therefore, in another embodiment of the method, before the damage identification result information of the target damage is displayed, the method may further include:
  • The interface window showing the processing progress may be the same interface window as the one displaying the result information, or an interface window at the same location; of course, different interface windows may also be used.
  • The interface window displaying the result information or the processing progress may be adaptively adjusted according to the displayed information content, or moved and tracked according to the current shooting angle or position. Therefore, as shown in FIG. 7, in another embodiment of the method provided by the present specification, the method may further include:
  • The tracking changes may include the aforementioned position tracking, window size adjustment, or changes of color, contour, and the like. For example, when the user moves and changes the shooting angle, if damaged component A remains present in the shooting window, the result information of damaged component A can always be displayed in the shooting window, following the user's shot.
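  • A minimal sketch of such a tracking change, assuming the client re-detects the component's bounding box in each frame (the helper below and its pixel offsets are hypothetical):

```python
def update_overlay(component_bbox, overlay):
    """Anchor the result-information window to the damaged component's
    bounding box (x, y, w, h) in the current frame; hide the window
    when the component has moved out of the shooting window."""
    if component_bbox is None:
        overlay["visible"] = False
        return overlay
    x, y, w, h = component_bbox
    overlay["visible"] = True
    overlay["x"], overlay["y"] = x, max(y - 30, 0)  # sit just above the box
    overlay["width"] = max(w, 120)                  # never narrower than 120 px
    return overlay

overlay = {"visible": False, "x": 0, "y": 0, "width": 120}
print(update_overlay((200, 150, 180, 90), overlay))  # follows component A
print(update_overlay(None, overlay))                 # component left the frame
```

Resize, color, or contour changes mentioned above would be further fields updated in the same per-frame loop.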
  • The term real-time in the foregoing embodiments may mean sending, receiving, or displaying immediately after certain data information is acquired or determined; those skilled in the art will understand that sending, receiving, or presenting after buffering or an expected calculation or waiting time can still belong to the defined scope of real-time.
  • the image described in the embodiments of the present specification may include a video, and the video may be regarded as a continuous image collection.
  • the captured image obtained in the solution of the embodiment of the present specification may be stored in a local client or uploaded to a remote server in real time.
  • Compared with storing the image only at the local client, where the data might be tampered with, uploading it to server storage in real time can effectively prevent the damage assessment data from being falsified, or images that do not belong to this accident from being passed off. Therefore, the embodiments of the present specification can also improve the data security of damage processing and the reliability of the damage assessment result.
  • the client or server side may use a damage recognition algorithm constructed in advance or in real time to identify the image captured by the client.
  • The damage identification algorithm may include damage recognition algorithms constructed with a variety of training models, such as the deep neural network Faster R-CNN; a deep neural network can be trained in advance by annotating a large number of pictures of damaged areas, so that it gives the range of the damage area for a picture taken at any vehicle orientation and under any lighting condition.
  • The above embodiment describes an example in which a user performs the damage assessment interaction on a mobile phone client. It should be noted that the foregoing methods in the embodiments of the present specification may be implemented in various processing devices, and in implementation scenarios including a client and a server.
  • FIG. 8 is a hardware structural block diagram of a client applying the interactive processing of vehicle damage assessment in the method or apparatus embodiments of the present specification.
  • The client 10 may include one or more (only one is shown) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microcontroller MCU or a programmable logic device FPGA),
  • a memory 104 for storing data
  • a transmission module 106 for communication functions. It will be understood by those skilled in the art that the structure shown in FIG. 8 is merely illustrative and does not limit the structure of the above electronic device.
  • For example, the client 10 may also include more or fewer components than those shown in FIG. 8, for example other processing hardware such as a GPU (Graphics Processing Unit), or may have a configuration different from that shown in FIG. 8.
  • The memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method in the embodiments of the present specification; the processor 102 executes various functional applications and data processing, that is, implements the processing method described above, by running the software programs and modules stored in the memory 104.
  • Memory 104 may include high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 104 may further include memory remotely located relative to processor 102, which may be connected to client 10 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission module 106 is configured to receive or transmit data via a network.
  • Specific examples of the network described above may include a wireless network provided by a communication provider of the client 10.
  • the transport module 106 includes a Network Interface Controller (NIC) that can be connected to other network devices through a base station to communicate with the Internet.
  • the transmission module 106 can be a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • FIG. 9 is a schematic structural diagram of the modules of an embodiment of an interactive processing apparatus for vehicle damage assessment provided by the present specification.
  • the specific structure may include:
  • the feature acquisition module 201 may be configured to acquire feature data of the vehicle through a shooting window;
  • the AR processing module 202 may be configured to construct an augmented reality space model of the vehicle according to the feature data, the augmented reality space model being displayed in the shooting window and matched to the real spatial position of the vehicle in the shooting window;
  • the shooting guide module 203 may be configured to perform damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined based on image information acquired from the shooting window;
  • the result display module 204 may be configured to display the result information of the damage recognition in the shooting window.
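  • The four modules (201-204) can be sketched as one processing loop; the callables and the `process_frame` flow below are illustrative assumptions, not the specification's implementation.

```python
class DamageAssessmentDevice:
    """Wires together the four modules (201-204) described above."""

    def __init__(self, feature_module, ar_module, guide_module, display_module):
        self.feature_module = feature_module  # feature acquisition module 201
        self.ar_module = ar_module            # AR processing module 202
        self.guide_module = guide_module      # shooting guide module 203
        self.display_module = display_module  # result display module 204

    def process_frame(self, frame):
        """One pass of the interaction loop for a captured frame."""
        features = self.feature_module(frame)
        ar_model = self.ar_module(features)
        guidance = self.guide_module(frame, ar_model)
        return self.display_module(frame, guidance)

# Trivial stand-ins, just to show the data flow through the modules.
device = DamageAssessmentDevice(
    feature_module=lambda frame: {"keypoints": frame},
    ar_module=lambda feats: {"contour": feats["keypoints"]},
    guide_module=lambda frame, model: "Please move closer",
    display_module=lambda frame, guidance: (frame, guidance),
)
print(device.process_frame("frame-1"))  # ('frame-1', 'Please move closer')
```

Each stand-in would be replaced by the corresponding real module, and `process_frame` would run once per video frame of the shooting window.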
  • According to the related method embodiments, the apparatus may further include other implementations, such as a module for displaying the processing progress.
  • The method or apparatus provided by the embodiments of the present specification may be implemented by a processor executing corresponding program instructions in a computer, for example in C++/Java under a Windows/Linux operating system on a PC/server, or with the application design languages and necessary hardware of other systems such as Android or iOS, or with processing logic based on a quantum computer.
  • The present specification also provides a processing device for implementing the above method, wherein the processing device may include a processor and a memory for storing processor-executable instructions, the processor, when executing the instructions, implementing: acquiring feature data of the vehicle through a shooting window; constructing an augmented reality space model of the vehicle according to the feature data, the model being displayed in the shooting window and matched to the real spatial position of the vehicle; performing damage recognition guidance in the shooting window based on the augmented reality space model, the damage recognition guidance including displaying shooting guidance information determined based on image information acquired from the shooting window; and displaying the result information of the damage recognition in the shooting window.
  • When performing the damage identification guidance, the processor implements:
  • the shooting guide information is displayed in the shooting window.
  • the shooting guidance information includes at least one of the following:
  • the suspected location of suspected damage.
  • The result information of the damage identification includes at least one of a damage location, a damaged component, a maintenance plan, and a maintenance cost determined based on the image information acquired by the damage identification guidance.
  • the processor further performs: before displaying the damage identification result information of the target damage:
  • the processor further performs:
  • At least one of the guidance prompt information, the result information, and the processing progress is displayed so as to follow, with a corresponding tracking change, the image changes in the shooting window.
  • The processing device described in the above embodiments may further include other extensible implementations according to the description of the related method embodiments.
  • the above instructions may be stored in a variety of computer readable storage media.
  • the computer readable storage medium may include physical means for storing information, which may be digitized and stored in a medium utilizing electrical, magnetic or optical means.
  • The computer readable storage medium of this embodiment may include: means for storing information by electrical energy, such as various types of memory, e.g. RAM and ROM; means for storing information by magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memory, bubble memory, and USB flash drives; and means for storing information optically, such as CDs or DVDs.
  • Quantum memories, graphene memories, and the like may also serve as the storage media described above.
  • the above method or apparatus embodiments can be used for a client on the user side, such as a smart phone. Accordingly, the present specification provides a client comprising a processor and a memory for storing processor-executable instructions that, when executed by the processor, implement the method steps described in the above embodiments.
  • an embodiment of the present specification further provides an electronic device including a display screen, a processor, and a memory storing processor executable instructions.
  • FIG. 10 is a schematic structural diagram of an embodiment of an electronic device according to the present disclosure. When the processor executes the instruction, the method steps described in any one of the embodiments may be implemented.
  • although the embodiments of the present specification refer to AR technology, the display of shooting guidance information, shooting guidance with user interaction, the use of a deep neural network to initially identify the damage location, and similar descriptions of data acquisition, position alignment, interaction, calculation, and judgment,
  • the embodiments of the present description are not limited to situations that must conform to industry communication standards, standard image data processing protocols, communication protocols, or standard data models/templates.
  • Certain industry standards, or implementations slightly modified from the embodiments described above in a custom manner, may also achieve the same, equivalent, or similar implementation effects, or predictable effects after such modification.
  • Embodiments obtained by applying such modified or varied data acquisition, storage, judgment, and processing methods may still fall within the scope of alternative embodiments of the present specification.
  • PLD Programmable Logic Device
  • FPGA Field Programmable Gate Array
  • HDL Hardware Description Language
  • the controller can be implemented in any suitable manner; for example, the controller can take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor.
  • examples of controllers include, but are not limited to, the following microcontrollers: the ARC 625D, the Atmel AT91SAM, the Microchip PIC18F26K20, and the Silicon Labs C8051F320; a memory controller can also be implemented as part of the memory's control logic.
  • the controller can also be logically programmed by means of logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, and embedded microcontrollers.
  • Such a controller can therefore be considered a hardware component, and the means for implementing various functions included therein can also be considered as a structure within the hardware component.
  • a device for implementing various functions can thus be considered both as a software module implementing the method and as a structure within a hardware component.
  • the system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • a typical implementation device is a computer.
  • the computer can be, for example, a personal computer, a laptop computer, an in-vehicle human-machine interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, or a tablet computer.
  • the above devices are described as being divided into various modules by function for convenience of description.
  • when implementing the present specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or the modules implementing the same function may be implemented by a combination of multiple sub-modules or sub-units.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical functional division.
  • in actual implementation, there may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features can be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.
  • the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing,
  • so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media include both persistent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cartridges, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • As defined herein, computer readable media do not include transitory computer readable media, such as modulated data signals and carrier waves.
  • embodiments of the present specification can be provided as a method, system, or computer program product.
  • embodiments of the present specification can take the form of an entirely hardware embodiment, an entirely software embodiment or a combination of software and hardware.
  • embodiments of the present specification can take the form of a computer program product embodied on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • Embodiments of the present description can be described in the general context of computer-executable instructions executed by a computer, such as program modules.
  • program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • Embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to an interactive processing method, apparatus, and processing device for vehicle loss assessment, and a client terminal. The method comprises the following steps: a user opens a loss-assessment application on a terminal and enables a shooting window, in conjunction with an augmented reality (AR) device, to photograph a vehicle; shooting guidance and feedback are provided to the user according to the vehicle's actual position, angle, and other information; the user can complete damage identification simply by shooting according to the shooting guidance information, without other complex operations such as shooting videos, after which loss assessment and claim settlement are carried out quickly. With this method, the user needs neither professional loss-assessment photography skills nor complex shooting operation steps; the cost of loss-assessment processing is lower; and the user's experience of the loss-assessment service can be further improved by guided shooting combined with an AR device.
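The interaction flow summarized in the abstract — open the shooting window, receive guidance based on the vehicle's current position and angle, and capture once aligned — can be sketched as a simple loop. The frame representation, the alignment test, and the prompt wording below are illustrative assumptions, not the disclosed implementation:

```python
def guide_and_capture(frames, is_aligned):
    """Simulated interactive capture: for each camera frame, either accept it
    for damage identification or emit a guidance prompt for the user."""
    prompts = []
    for frame in frames:
        if is_aligned(frame):
            return frame, prompts          # frame accepted for damage identification
        prompts.append(f"adjust shooting angle (currently {frame['angle']} deg off)")
    return None, prompts                   # no usable frame was captured

# Frames drift toward alignment as the user follows the prompts.
frames = [{"angle": 30}, {"angle": 12}, {"angle": 3}]
shot, prompts = guide_and_capture(frames, lambda f: abs(f["angle"]) <= 5)
# → shot == {"angle": 3}, with two guidance prompts issued along the way
```

In the described system the alignment check would be driven by the AR device's estimate of the vehicle's actual position and angle; here a plain angle threshold stands in for it.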
PCT/CN2019/075471 2018-05-08 2019-02-19 Procédé de traitement interactif, appareil et dispositif de traitement pour évaluer une perte de véhicule et terminal client WO2019214313A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810434232.6 2018-05-08
CN201810434232.6A CN108665373B (zh) 2018-05-08 2018-05-08 一种车辆定损的交互处理方法、装置、处理设备及客户端

Publications (1)

Publication Number Publication Date
WO2019214313A1 true WO2019214313A1 (fr) 2019-11-14

Family

ID=63778161

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075471 WO2019214313A1 (fr) 2018-05-08 2019-02-19 Procédé de traitement interactif, appareil et dispositif de traitement pour évaluer une perte de véhicule et terminal client

Country Status (3)

Country Link
CN (1) CN108665373B (fr)
TW (1) TWI713995B (fr)
WO (1) WO2019214313A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368777A (zh) * 2020-03-13 2020-07-03 深圳市元征科技股份有限公司 一种车辆特征获取方法,一种服务器和客户端
CN111368752A (zh) * 2020-03-06 2020-07-03 德联易控科技(北京)有限公司 车辆损伤的分析方法和装置
CN112085223A (zh) * 2020-08-04 2020-12-15 深圳市新辉煌智能科技有限责任公司 一种用于机械维修的诱导系统及方法
CN113543016A (zh) * 2020-04-22 2021-10-22 斑马智行网络(香港)有限公司 物品归还方法及装置
CN117455466A (zh) * 2023-12-22 2024-01-26 南京三百云信息科技有限公司 一种汽车远程评估的方法及系统
EP4343714A1 (fr) * 2022-09-20 2024-03-27 MotionsCloud GmbH Système et procédé d'analyse d'image automatisée pour analyse de dommages

Families Citing this family (15)

Publication number Priority date Publication date Assignee Title
CN108665373B (zh) * 2018-05-08 2020-09-18 阿里巴巴集团控股有限公司 一种车辆定损的交互处理方法、装置、处理设备及客户端
CN108632530B (zh) * 2018-05-08 2021-02-23 创新先进技术有限公司 一种车辆定损的数据处理方法、装置、设备及客户端、电子设备
US11080327B2 (en) * 2019-04-18 2021-08-03 Markus Garcia Method for the physical, in particular optical, detection of at least one usage object
CN110245552B (zh) * 2019-04-29 2023-07-18 创新先进技术有限公司 车损图像拍摄的交互处理方法、装置、设备及客户端
CN110263615A (zh) * 2019-04-29 2019-09-20 阿里巴巴集团控股有限公司 车辆拍摄中的交互处理方法、装置、设备及客户端
CN112435209A (zh) * 2019-08-08 2021-03-02 武汉东湖大数据交易中心股份有限公司 一种图像大数据采集和处理系统
CN110598562B (zh) * 2019-08-15 2023-03-07 创新先进技术有限公司 车辆图像采集引导方法以及装置
CN110659568B (zh) * 2019-08-15 2023-06-23 创新先进技术有限公司 验车方法及装置
CN111553268A (zh) * 2020-04-27 2020-08-18 深圳壹账通智能科技有限公司 车辆部件识别方法、装置、计算机设备和存储介质
TWI818181B (zh) * 2020-06-23 2023-10-11 新局數位科技有限公司 車體定損輔助系統及其實施方法
CN112434368A (zh) * 2020-10-20 2021-03-02 联保(北京)科技有限公司 一种图像采集方法、装置及存储介质
DE102020127797B4 (de) 2020-10-22 2024-03-14 Markus Garcia Sensorverfahren zum optischen Erfassen von Nutzungsobjekten zur Detektion eines Sicherheitsabstandes zwischen Objekten
CN113890990A (zh) * 2021-09-02 2022-01-04 北京城市网邻信息技术有限公司 信息采集过程中的提示方法、装置、电子设备及可读介质
CN113873145A (zh) * 2021-09-02 2021-12-31 北京城市网邻信息技术有限公司 车源信息的采集方法、装置、电子设备及可读介质
CN115631002B (zh) * 2022-12-08 2023-11-17 邦邦汽车销售服务(北京)有限公司 基于计算机视觉的车险智能定损方法及系统

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101918980A (zh) * 2007-08-24 2010-12-15 策技系统有限公司 跑道监视系统和方法
CN105182535A (zh) * 2015-09-28 2015-12-23 大连楼兰科技股份有限公司 使用智能眼镜进行汽车维保的方法
CN107194323A (zh) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备
CN108665373A (zh) * 2018-05-08 2018-10-16 阿里巴巴集团控股有限公司 一种车辆定损的交互处理方法、装置、处理设备及客户端

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
EP3207542A1 (fr) * 2014-10-15 2017-08-23 Seiko Epson Corporation Visiocasque, procédé de commande de visiocasque, et programme informatique
DE102015003341A1 (de) * 2015-03-14 2016-09-15 Hella Kgaa Hueck & Co. Verfahren und Vorrichtung zur Bestimmung der räumlichen Lage einer Beschädigung an einem Glaskörper
US10222301B2 (en) * 2016-05-04 2019-03-05 Embraer S.A. Structural health monitoring system with the identification of the damage through a device based in augmented reality technology
US9886771B1 (en) * 2016-05-20 2018-02-06 Ccc Information Services Inc. Heat map of vehicle damage
CN106231551A (zh) * 2016-07-29 2016-12-14 深圳市永兴元科技有限公司 基于移动通信网络的车险理赔方法及装置
CN106296118A (zh) * 2016-08-03 2017-01-04 深圳市永兴元科技有限公司 基于图像识别的车辆定损方法及装置
CN106600421A (zh) * 2016-11-21 2017-04-26 中国平安财产保险股份有限公司 一种基于图片识别的车险智能定损方法及系统
CN106504248B (zh) * 2016-12-06 2021-02-26 成都通甲优博科技有限责任公司 基于计算机视觉的车辆损伤判别方法
CN111797689B (zh) * 2017-04-28 2024-04-16 创新先进技术有限公司 车辆定损图像获取方法、装置、服务器和客户端

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN101918980A (zh) * 2007-08-24 2010-12-15 策技系统有限公司 跑道监视系统和方法
CN105182535A (zh) * 2015-09-28 2015-12-23 大连楼兰科技股份有限公司 使用智能眼镜进行汽车维保的方法
CN107194323A (zh) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备
CN108665373A (zh) * 2018-05-08 2018-10-16 阿里巴巴集团控股有限公司 一种车辆定损的交互处理方法、装置、处理设备及客户端

Cited By (10)

Publication number Priority date Publication date Assignee Title
CN111368752A (zh) * 2020-03-06 2020-07-03 德联易控科技(北京)有限公司 车辆损伤的分析方法和装置
CN111368752B (zh) * 2020-03-06 2023-06-02 德联易控科技(北京)有限公司 车辆损伤的分析方法和装置
CN111368777A (zh) * 2020-03-13 2020-07-03 深圳市元征科技股份有限公司 一种车辆特征获取方法,一种服务器和客户端
CN111368777B (zh) * 2020-03-13 2023-10-13 深圳市元征科技股份有限公司 一种车辆特征获取方法,一种服务器和客户端
CN113543016A (zh) * 2020-04-22 2021-10-22 斑马智行网络(香港)有限公司 物品归还方法及装置
CN113543016B (zh) * 2020-04-22 2024-03-05 斑马智行网络(香港)有限公司 物品归还方法及装置
CN112085223A (zh) * 2020-08-04 2020-12-15 深圳市新辉煌智能科技有限责任公司 一种用于机械维修的诱导系统及方法
EP4343714A1 (fr) * 2022-09-20 2024-03-27 MotionsCloud GmbH Système et procédé d'analyse d'image automatisée pour analyse de dommages
CN117455466A (zh) * 2023-12-22 2024-01-26 南京三百云信息科技有限公司 一种汽车远程评估的方法及系统
CN117455466B (zh) * 2023-12-22 2024-03-08 南京三百云信息科技有限公司 一种汽车远程评估的方法及系统

Also Published As

Publication number Publication date
TW201947451A (zh) 2019-12-16
CN108665373A (zh) 2018-10-16
TWI713995B (zh) 2020-12-21
CN108665373B (zh) 2020-09-18

Similar Documents

Publication Publication Date Title
WO2019214313A1 (fr) Procédé de traitement interactif, appareil et dispositif de traitement pour évaluer une perte de véhicule et terminal client
WO2019214319A1 (fr) Procédé de traitement de données d'évaluation de perte de véhicule, appareil, dispositif de traitement et client
WO2019214320A1 (fr) Procédé de traitement d'identification de dommage de véhicule, dispositif de traitement, client et serveur
JP6859505B2 (ja) 画像に基づく車両損傷判定方法、装置および電子デバイス
Sahu et al. Artificial intelligence (AI) in augmented reality (AR)-assisted manufacturing applications: a review
CN109410218B (zh) 用于生成车辆损伤信息的方法和装置
EP3467707B1 (fr) Système et procédé pour la reconnaissance des gestes de la main à base d'apprentissage profond en mode immersion
WO2019100839A1 (fr) Procédé et appareil pour identifier des pièces de véhicule endommagées, serveur, terminal client et système
CN109242903B (zh) 三维数据的生成方法、装置、设备及存储介质
US10282913B2 (en) Markerless augmented reality (AR) system
US10535160B2 (en) Markerless augmented reality (AR) system
CN110245552B (zh) 车损图像拍摄的交互处理方法、装置、设备及客户端
WO2019214321A1 (fr) Procédé de traitement d'identification de dommage à véhicule, dispositif de traitement, client et serveur
WO2019024771A1 (fr) Procédé, appareil, serveur et système de traitement d'image d'assurance automobile
KR20190069457A (ko) 이미지 기반 차량 손실 평가 방법, 장치 및 시스템, 및 전자 디바이스
US10068146B2 (en) Method and system for detection-based segmentation-free license plate recognition
CN110910628B (zh) 车损图像拍摄的交互处理方法、装置、电子设备
US20150268728A1 (en) Systems and methods for notifying users of mismatches between intended and actual captured content during heads-up recording of video
CN108122245B (zh) 一种目标行为描述方法、装置和监控设备
Pillai Traffic Surveillance Systems through Advanced Detection, Tracking, and Classification Technique
WO2021214540A1 (fr) Localisation fiable de dispositif de prise de vues en fonction d'une image à composante chromatique unique et d'un apprentissage multimodal
Gruosso et al. A preliminary investigation into a deep learning implementation for hand tracking on mobile devices
CN110609877B (zh) 一种图片采集的方法、装置、设备和计算机存储介质
Zhou et al. A Visible and Infrared Fusion Based Visual Odometry for Autonomous Vehicles
Yang et al. Sparse Color-Code Net: Real-Time RGB-Based 6D Object Pose Estimation on Edge Devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19800301

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19800301

Country of ref document: EP

Kind code of ref document: A1