WO2019214319A1 - Vehicle damage assessment data processing method, apparatus, processing device and client - Google Patents

Vehicle damage assessment data processing method, apparatus, processing device and client

Info

Publication number
WO2019214319A1
WO2019214319A1 (PCT/CN2019/076028)
Authority
WO
WIPO (PCT)
Prior art keywords
damage
shooting
area
vehicle
photographing
Prior art date
Application number
PCT/CN2019/076028
Other languages
English (en)
Chinese (zh)
Inventor
周凡
Original Assignee
阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Publication of WO2019214319A1

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/617Upgrading or updating of programs or applications for camera control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635Region indicators; Field of view indicators

Definitions

  • The embodiments of the present specification relate to the technical field of computer-terminal insurance service data processing, and in particular to a data processing method, apparatus, processing device and client for vehicle damage assessment.
  • Motor vehicle insurance, i.e., automobile insurance (or car insurance), is a type of commercial insurance covering liability for personal injury or property damage caused by natural disasters or accidents involving motor vehicles. With economic development, the number of motor vehicles keeps increasing; at present, auto insurance has become one of the largest lines in China's property insurance business.
  • Current assessment methods mainly include: conducting an on-site assessment of the accident vehicle through an insurance company or a third-party public assessment agency, or having the vehicle owner photograph the accident vehicle under the guidance of insurance company personnel and transmit the photos to the insurance company over the network, after which a loss adjuster determines the damage remotely from the photos.
  • Dispatching vehicles and personnel to survey the accident scene is relatively costly for the insurance company, and the owner has to spend extra time waiting for the surveyors to arrive, which makes for a poor experience.
  • When the owner takes photos on his own, lack of experience often means that surveyors must provide guidance by telephone or video call, which is time-consuming and laborious.
  • Even with such remote guidance, a large proportion of the photos taken this way turn out to be invalid.
  • If an invalid damage-assessment image is collected, the owner needs to re-shoot, or the shooting opportunity may even be lost altogether, which seriously affects claim-processing efficiency and the user's damage-assessment service experience.
  • The embodiments of the present specification aim to provide a data processing method, apparatus, processing device and client for vehicle damage assessment, which can automatically identify a damaged part of the vehicle on a mobile device and mark the area to be photographed in an easily recognizable manner in the shooting screen,
  • continuously guiding the user to take photos or videos of that area, so that the user can complete the shooting required for damage-assessment processing without professional knowledge, improving the processing efficiency of vehicle damage assessment and the user's interactive damage-assessment experience.
  • The data processing method, apparatus, processing device and client for vehicle damage assessment provided by the embodiments of the present specification are implemented as follows:
  • A data processing method for vehicle damage assessment, comprising:
  • superimposing and displaying the rendered first damage area in the current shooting window by means of augmented reality;
  • A data processing apparatus for vehicle damage assessment, comprising:
  • a first prompting module configured to display shooting guidance information for photographing a first damaged area of the vehicle;
  • a damage identification result module configured to determine a first damage area of a first damage if the first damage is detected in the current shooting window;
  • a display module configured to superimpose and display the rendered first damage area in the current shooting window by means of augmented reality, after the first damage area has been rendered in a salient manner;
  • a second prompting module configured to display shooting guidance information for the first damaged area.
  • A data processing device for vehicle damage assessment, comprising a processor and a memory storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
  • superimposing and displaying the rendered first damage area in the current shooting window by means of augmented reality;
  • A client, comprising a processor and a memory storing processor-executable instructions, wherein the processor, when executing the instructions, implements:
  • superimposing and displaying the rendered first damage area in the current shooting window by means of augmented reality;
  • An electronic device includes a display screen, a processor, and a memory storing processor-executable instructions that, when executed by the processor, implement the method steps of any one of the embodiments.
  • The data processing method, apparatus, processing device and client for vehicle damage assessment provided by the embodiments of the present specification can automatically identify the damaged part of the vehicle on a mobile device and mark the area to be photographed in an easily recognizable manner in the shooting screen,
  • continuously guiding the user to take photos or videos of that area, so that the user can complete the shooting required for damage-assessment processing without professional knowledge, improving the processing efficiency of vehicle damage assessment and the user's interactive damage-assessment experience.
  • FIG. 1 is a schematic flowchart of an embodiment of a data processing method for vehicle damage assessment according to the present specification;
  • FIG. 2 is a schematic diagram of a deep neural network model used in an embodiment of the method described in the present specification;
  • FIG. 3 is a schematic diagram of marking a damaged area using small-dot-symbol rendering according to the present specification;
  • FIG. 4 is a schematic diagram of an implementation scenario of a shooting-guidance embodiment of the method provided by the present specification;
  • FIG. 5 is a schematic diagram of an implementation scenario of another embodiment of the method provided by the present specification;
  • FIG. 6 is a block diagram of the hardware structure of a client applying the method or apparatus embodiments of the present specification to the interactive processing of vehicle damage assessment;
  • FIG. 7 is a block diagram of the module structure of an embodiment of a data processing apparatus for vehicle damage assessment provided by the present specification;
  • FIG. 8 is a schematic structural diagram of an embodiment of an electronic device provided by the present specification.
  • The client may include a terminal device with a shooting function used by personnel at the vehicle damage scene (who may be the owner of the accident vehicle, insurance company personnel, or other persons carrying out the damage-assessment process), such as a smart phone, a tablet computer, a smart wearable device, or a dedicated damage-assessment terminal.
  • The client may have a communication module and may communicate with a remote server to implement data transmission with the server.
  • The server may include a server on the insurance company side or a server of the damage-assessment service provider.
  • Other implementation scenarios may also include servers of other service parties, such as a terminal of a component supplier or a terminal of a vehicle repair shop that has a communication link with the server of the damage-assessment service provider.
  • The server may include a single computer device, a server cluster composed of a plurality of servers, or a server of a distributed system.
  • The client side can send the image data collected by live shooting to the server in real time; the server side performs the damage identification, and the recognition result can be fed back to the client.
  • With server-side processing, damage recognition and the like are performed by the server, whose processing speed is usually higher than the client's; this can reduce the processing load on the client and speed up damage recognition.
  • However, this specification does not exclude embodiments in which all or part of the above processing is implemented on the client side, such as real-time detection and identification of damage on the client.
  • Accordingly, the present specification provides a data processing method for vehicle damage assessment applied to a mobile device, which can mark the area to be photographed in an easily recognizable manner in the shooting screen and continuously guide the user to take photos or videos of that area, so that the user can complete the shooting required for damage assessment without professional knowledge.
  • FIG. 1 is a schematic flowchart of an embodiment of a data processing method for vehicle damage assessment according to the present specification.
  • Although the present specification provides method operation steps or device structures as shown in the following embodiments or figures, the method or device may, based on conventional practice or without inventive effort, include more steps or module units, or fewer after partial merging.
  • The execution order of the steps or the module structure of the device is not limited to the execution order or module structure shown in the embodiments or drawings.
  • S6 Display shooting guidance information for the first damage area.
  • the client on the user side may be a smart phone, and the smart phone may have a shooting function.
  • The user can open a mobile phone application implementing the embodiments of the present specification at the scene of the vehicle accident to frame and shoot the accident scene.
  • the shooting window can be displayed on the client display, and the vehicle can be photographed through the shooting window.
  • the shooting window may be a video shooting window, which may be used for framing (image capturing) of the vehicle damage scene by the terminal, and image information acquired by the client-integrated camera device may be displayed in the shooting window.
  • the specific interface structure of the shooting window and the related information displayed can be customized.
  • the vehicle's feature data can be acquired during vehicle shooting.
  • the feature data can be specifically set according to data processing requirements such as vehicle identification, environment recognition, and image recognition.
  • The feature data may include data information of each identified component of the vehicle, and may be used to construct 3D coordinate information and establish an augmented reality space model of the vehicle (AR space model, a data representation that may take the form of a contour figure of the vehicle body).
  • the feature data may also include other data information such as the brand, model, color, outline, unique identification code of the vehicle.
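  • As an illustrative sketch only (the patent does not specify a data format; the body-frame coordinates, pinhole projection, and class name below are assumptions), such an AR space model could keep the vehicle contour as 3D points and project them into the shooting window:

```python
# Illustrative sketch: a minimal AR-space contour model built from
# recognized feature points (3D body-frame coordinates are assumed inputs).
from dataclasses import dataclass
import numpy as np

@dataclass
class VehicleContourModel:
    name: str
    points_3d: np.ndarray  # (N, 3) contour points in the vehicle body frame

    def project(self, K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
        """Project the 3D contour into the image with intrinsics K and pose (R, t)."""
        cam = R @ self.points_3d.T + t.reshape(3, 1)  # camera-frame coordinates
        uv = K @ cam                                  # homogeneous pixel coords
        return (uv[:2] / uv[2]).T                     # (N, 2) pixel coordinates
```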
  • When the client enables the damage-assessment service, it can display guidance information for shooting the damaged area.
  • The damaged area currently or initially to be photographed is referred to as the first damaged area.
  • The application can prompt the user to shoot, from a distance at which the whole vehicle can be seen, the position that is likely to be damaged; if necessary, the user may be prompted to move around the vehicle body. Conversely, if no damage is found during the initial shooting, the user is prompted to take a full shot of the vehicle.
  • the damage area corresponding to the damage may be further calculated.
  • the process of damage identification may be performed by the client side or by the server side, and the server at this time may be referred to as a damage identification server.
  • The images collected by the client can be identified directly on the client for damage detection or other damage-assessment data processing, which can reduce network transmission overhead.
  • the process of damage identification can be processed by the server side.
  • Identifying that the first damage exists in the current shooting window may include:
  • S22: receiving a damage recognition result returned by the server, where the damage recognition result includes a processing result obtained by the damage identification server performing damage identification on the acquired image using a pre-trained deep neural network.
  • The word "first" in "first damage" refers to the current damage recognition process and does not limit the damage recognition performed on images collected for other damages.
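  • A minimal client-side sketch of this round trip, assuming an HTTP interface (the endpoint URL and response fields are hypothetical; the patent only states that the server applies a pre-trained deep neural network):

```python
# Illustrative sketch of the client-to-server recognition round trip.
import requests

RECOGNIZE_URL = "https://example-loss-server/api/v1/recognize"  # hypothetical endpoint

def request_damage_recognition(jpeg_bytes: bytes) -> dict:
    resp = requests.post(
        RECOGNIZE_URL,
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=5,
    )
    resp.raise_for_status()
    # Assumed response shape: {"damages": [{"box": [x1, y1, x2, y2], "score": ...}]}
    return resp.json()
```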
  • the client or server side may use a deep neural network constructed in advance or in real time to identify damage in the image, such as damage location, damaged component, damage type, and the like.
  • Deep neural networks can be used for target detection and semantic segmentation: given an input image, they locate the target within the image.
  • FIG. 2 is a schematic diagram of a deep neural network model used in an embodiment of the method described in the specification.
  • FIG. 2 depicts a typical deep neural network, Faster R-CNN.
  • A deep neural network can be trained on a large number of pre-labeled pictures of damaged areas, covering various vehicle orientations and illumination conditions, so that given a picture of the vehicle it outputs the extent of the damaged area.
  • A network structure customized for mobile devices may be used, for example one based on a typical MobileNet or SqueezeNet structure or an improved variant thereof, so that the model can run with lower power consumption and less memory in a slower processor environment, such as the client's mobile terminal operating environment.
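  • A minimal sketch of such a detector; the patent names Faster R-CNN, but the torchvision usage, the single "damage" class, and the score threshold below are illustrative assumptions:

```python
# Illustrative sketch only: the patent does not specify a framework.
# Adapts torchvision's pretrained Faster R-CNN to two classes
# (background, damage) for fine-tuning on labeled damage pictures.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_damage_detector(num_classes: int = 2):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def detect_damage(model, frame_tensor, score_threshold=0.5):
    """frame_tensor: float image tensor of shape (3, H, W) scaled to [0, 1]."""
    model.eval()
    with torch.no_grad():
        output = model([frame_tensor])[0]
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep], output["scores"][keep]
```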
  • The area can be rendered in a salient manner, and the rendered overlay covering the damage is superimposed on the captured image by the AR technique.
  • Salient rendering mainly refers to using certain features of the rendering mode to mark the damage area so that it is easy to identify, or more prominent.
  • The specific rendering manner is not limited, and specific constraints or conditions for achieving salient rendering may be set.
  • The salient rendering may include:
  • S40: marking the first damage area with a preset characterization symbol, where the preset characterization symbol includes one of the following:
  • FIG. 3 is a schematic diagram of marking a damaged area using small-dot-symbol rendering according to the present specification.
  • The preset characterization symbols may also take other forms, such as a guide line, a regular graphic frame, an irregular graphic frame, or a customized graphic; other embodiments may also use text, characters, data, and the like to mark the damaged area and direct the user to photograph it.
  • One or more preset characterization symbols can be used for rendering.
  • Using a preset characterization symbol to mark the damaged area displays the location of the damage more clearly in the shooting window, helping the user to position quickly and guiding the shooting.
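  • A minimal sketch of small-dot-symbol rendering over a detected damage box, assuming OpenCV and a rectangular damage region (dot spacing, color, and blend weight are illustrative choices):

```python
# Illustrative sketch: fill a detected damage box with small dots and
# alpha-blend the overlay onto the current camera frame.
import cv2
import numpy as np

def render_damage_dots(frame, box, spacing=12, radius=3,
                       color=(0, 0, 255), alpha=0.6):
    """frame: BGR image; box: (x1, y1, x2, y2) damage region."""
    overlay = frame.copy()
    x1, y1, x2, y2 = [int(v) for v in box]
    for y in range(y1, y2, spacing):
        for x in range(x1, x2, spacing):
            cv2.circle(overlay, (x, y), radius, color, thickness=-1)
    # Blend so the underlying damage stays visible through the dots.
    return cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)
```

  • The blended frame can then be handed to the display layer that draws the shooting window, so the dots appear superimposed on the live view.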
  • A dynamic rendering effect may also be employed to mark the damaged area, directing the user to photograph it in an even more conspicuous manner.
  • The salient rendering includes:
  • S400: applying to the preset characterization symbol at least one animated display among color change, size change, rotation, and jitter.
  • The boundary of the actual damage may be superimposed via AR, prompting the user to align the framing frame with the damaged portion for shooting.
  • Augmented reality (AR) generally refers to a technique that calculates the position and angle of the camera image in real time and adds corresponding images, videos, and 3D models, so that the virtual world can be placed on the screen within the real world and interact with it.
  • The augmented reality space model constructed from the feature data in the embodiments of the present specification may be the contour information of the vehicle; specifically, an outline of the vehicle may be constructed
  • from the acquired vehicle model, the shooting angle, and a plurality of feature data such as the tire position, roof position, front face position, headlight position, taillight position, and front and rear window positions.
  • The contour may be a data model established on the basis of 3D coordinates, carrying corresponding 3D coordinate information.
  • The constructed contour can then be displayed in the shooting window.
  • The present specification does not exclude that the augmented reality space model described in other embodiments may take other model forms or add other model information on top of the contour.
  • The AR model can be matched against the real vehicle position for the duration of the shooting, for example by superimposing the constructed 3D contour onto the contour position of the real vehicle; the two can be considered matched when they coincide or when the matching degree reaches a threshold.
  • The framing can then be guided: by directing the user to move the shooting direction or angle, the constructed contour is aligned with the contour of the real vehicle being captured.
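  • One possible matching criterion, sketched under the assumption that both the projected AR contour and the detected vehicle outline are available as binary masks (the IoU measure and the 0.8 threshold are illustrative, not mandated by the patent):

```python
import numpy as np

def contour_match_score(ar_mask: np.ndarray, vehicle_mask: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks of equal shape."""
    inter = np.logical_and(ar_mask, vehicle_mask).sum()
    union = np.logical_or(ar_mask, vehicle_mask).sum()
    return float(inter) / float(union) if union else 0.0

def is_aligned(ar_mask, vehicle_mask, threshold=0.8):
    # Consider the AR contour matched to the real vehicle once the
    # overlap reaches the configured threshold.
    return contour_match_score(ar_mask, vehicle_mask) >= threshold
```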
  • By combining augmented reality technology, the embodiments of the present specification display not only the real information of the vehicle actually photographed by the user's client, but also the augmented reality space model information of the vehicle constructed at the same time; the two kinds of information complement and superimpose each other and can provide a better damage-assessment service experience.
  • The shooting window combined with the AR space model can display the vehicle scene more intuitively and can effectively guide the marking and shooting of the vehicle damage position.
  • The client may perform damage-recognition guidance in the AR scenario, and this guidance may specifically include presenting guidance information determined from the image information acquired in the shooting window.
  • The client can obtain image information of the AR scene in the shooting window, analyze the acquired image information, and determine from the analysis result what shooting guidance information needs to be displayed in the shooting window. For example, if the vehicle is far away in the current shooting window, the user can be prompted to move closer; if the shooting position is too far to the left and the tail of the vehicle cannot be captured, shooting guidance information can be displayed prompting the user to pan the shooting angle to the right.
  • Which data the damage-recognition guidance processes, and under what conditions shooting guidance information is displayed, may be governed by preset policies or rules, which are not described one by one in this embodiment.
  • Shooting guidance information for the first damage area may be displayed.
  • The shooting guidance information to be displayed may be determined from the current shooting information and the position information of the first damage area. For example, suppose there is a scratch on the rear fender of the vehicle that needs to be photographed head-on, along the direction of the scratch, but according to the current shooting position and angle information the user is shooting at a 45-degree angle and is relatively far from the scratch. The user can then be prompted to approach the scratch position and to shoot head-on along the direction of the scratch.
  • The shooting guidance information can be adjusted in real time according to the current view. For example, once the user has approached the scratch position and meets the shooting requirements, the prompt to approach the scratch position need no longer be displayed.
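  • A minimal rules-based sketch of such guidance, assuming only the damage bounding box and frame size are known (the thresholds and prompt texts are illustrative assumptions):

```python
def shooting_guidance(frame_w, frame_h, box):
    """Return user prompts for a damage box (x1, y1, x2, y2) in the frame."""
    x1, y1, x2, y2 = box
    prompts = []
    # If the damage occupies too little of the frame, the user is too far away.
    area_frac = ((x2 - x1) * (y2 - y1)) / float(frame_w * frame_h)
    if area_frac < 0.05:
        prompts.append("Move closer to the damaged area")
    # Steer the user toward the damage when it sits near a frame edge.
    cx = (x1 + x2) / 2.0
    if cx < 0.25 * frame_w:
        prompts.append("Pan the camera to the left to center the damage")
    elif cx > 0.75 * frame_w:
        prompts.append("Pan the camera to the right to center the damage")
    if not prompts:
        prompts.append("Hold steady and shoot")
    return prompts
```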
  • the suspected damage can be identified by the client or server side.
  • The shooting guidance information to be displayed during shooting, and the conditions for displaying it, can be set according to the damage-assessment interaction design or the damage-processing requirements.
  • The shooting guidance information may include at least one of the following:
  • An example of shooting guidance is shown in FIG. 4.
  • Through real-time shooting guidance information, the user can carry out damage-assessment processing more conveniently and efficiently.
  • The user can simply shoot according to the guidance information; no professional shooting skills or cumbersome shooting operations are required, so the user experience is better.
  • The above embodiment describes shooting guidance information displayed as text.
  • The shooting guidance information may also take the form of an image, voice, animation, vibration, and the like; for example, an arrow or a voice prompt may direct the user to align the current shot.
  • the form of the shooting guidance information displayed in the current shooting window includes at least one of a symbol, a text, a voice, an animation, a video, and a vibration.
  • When the user aims the camera of the mobile device at the vehicle, shooting may proceed at a certain frame rate (e.g., 15 frames/s), and each image can be identified using the deep neural network trained as described above. If damage is detected, a new shooting strategy can be initiated for the damaged area, such as raising the frame rate (e.g., to 30 frames/s) and adjusting other parameters, so that the position of the area in the current shooting window is acquired continuously, quickly, and at low power. In this way, the shooting parameters can be adjusted for different shooting areas: different shooting strategies flexibly adapt to different shooting scenes, acquisition of key areas can be enhanced, and power consumption can be reduced by down-sampling non-key areas. Therefore, in another embodiment of the method provided by the present specification, when damage is recognized in the current shooting window, a shooting strategy that adjusts at least a parameter including the shooting frame rate is used to photograph the damaged area.
  • The specific shooting strategy can be customized according to the shooting scene; a minimal sketch follows.
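  • This sketch keeps only the frame-rate aspect of the strategy (the 15/30 frames/s values follow the example above; the detector hook is an assumption):

```python
class ShootingStrategy:
    """Raise the capture frame rate while damage is visible."""
    BASE_FPS = 15   # normal framing, lower power consumption
    FOCUS_FPS = 30  # damage detected, faster continuous acquisition

    def __init__(self, detector):
        self.detector = detector  # callable: frame -> list of damage boxes
        self.fps = self.BASE_FPS

    def on_frame(self, frame):
        boxes = self.detector(frame)
        # Switch to the focused strategy when damage is present,
        # drop back to the base rate otherwise.
        self.fps = self.FOCUS_FPS if boxes else self.BASE_FPS
        return boxes, self.fps
```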
  • the method may further include:
  • The client application can return the captured damage images to the insurance company for subsequent manual or automatic damage-assessment processing, which also avoids or reduces the risk of users falsifying damage images for fraud. Therefore, in another embodiment of the method provided by the present specification, the method further includes:
  • The damage-assessment server may include a server on the insurance company side, and may also include a server of the damage-assessment service party.
  • Transmission to the damage-assessment server may be performed directly by the client, or indirectly.
  • The qualified damage-assessment images so determined can also be sent to the servers of both the insurance company and the damage-assessment service party, such as the server side of a damage-assessment service provided through a payment application.
  • The term "real time" in the foregoing embodiments may include sending, receiving, or displaying certain data information immediately after it is acquired or determined; those skilled in the art will understand that sending, receiving, or presenting after buffering, pre-computation, or a waiting time may still fall within the scope of "real time" as defined here.
  • The image described in the embodiments of the present specification may include video, since a video can be regarded as a continuous collection of images.
  • The images captured in the solutions of the embodiments of the present specification, or the qualified damage-assessment images meeting the requirements, may be stored on the local client or uploaded to a remote server in real time.
  • Compared with local client storage, where data may be tampered with, uploading to server storage can effectively prevent the damage-assessment data from being altered, or other insurance data that is not an image of this accident from being substituted. Therefore, the embodiments of the present specification can also improve the data security of damage-assessment processing and the reliability of the damage-assessment result.
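  • A minimal client-side upload sketch, assuming an HTTPS endpoint on the damage-assessment server (the URL, field names, and the SHA-256 digest used for tamper evidence are illustrative assumptions):

```python
import hashlib
import requests

# Hypothetical endpoint; the patent does not specify the server API.
UPLOAD_URL = "https://example-loss-server/api/v1/damage-images"

def upload_damage_image(path: str, claim_id: str) -> requests.Response:
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()  # lets the server detect tampering
    return requests.post(
        UPLOAD_URL,
        files={"image": (path, data, "image/jpeg")},
        data={"claim_id": claim_id, "sha256": digest},
        timeout=10,
    )
```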
  • The above embodiment describes a data processing method in which a user performs vehicle damage assessment on a mobile phone client. It should be noted that the foregoing methods of the embodiments of the present specification may be implemented in a variety of processing devices, such as dedicated damage-assessment terminals, and in implementation scenarios including client-server architectures.
  • FIG. 6 is a hardware structural block diagram of a client applying the method or apparatus embodiments of the present specification to the interactive processing of vehicle damage assessment.
  • The client 10 may include one or more processors 102 (only one is shown; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)),
  • a memory 104 for storing data,
  • and a transmission module 106 for communication functions. It will be understood by those skilled in the art that the structure shown in FIG. 6 is merely schematic and does not limit the structure of the above electronic device.
  • For example, the client 10 may include more or fewer components than shown in FIG. 6, may further include other processing hardware such as a GPU (Graphics Processing Unit), or may have a configuration different from that shown in FIG. 6.
  • The memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method in the embodiments of the present specification; by running the software programs and modules stored in the memory 104, the processor 102 executes various functional applications and data processing,
  • i.e., implements the data processing method of the embodiments described above.
  • Memory 104 may include high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 104 may further include memory remotely located relative to processor 102, which may be connected to client 10 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission module 106 is configured to receive or transmit data via a network.
  • Specific examples of the above network may include a wireless network provided by a communication provider of the client 10.
  • In one example, the transmission module 106 includes a Network Interface Controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • In one example, the transmission module 106 can be a Radio Frequency (RF) module, which is used to communicate with the Internet wirelessly.
  • FIG. 7 is a schematic diagram of the module structure of an embodiment of a data processing apparatus for vehicle damage assessment provided by the present specification.
  • the specific structure may include:
  • the first prompting module 201, which can be used to display shooting guidance information for photographing a first damaged area of the vehicle;
  • the damage identification result module 202, which can be configured to determine a first damage area of a first damage if the first damage is recognized in the current shooting window;
  • the display module 203, which is configured to superimpose and display the rendered first damage area in the current shooting window by means of augmented reality, after the first damage area has been rendered in a salient manner;
  • the second prompting module 204, which can be configured to display shooting guidance information for the first damaged area.
  • According to the description of the related method embodiments, the foregoing apparatus may further include other implementations, such as a rendering processing module that performs the rendering, or an AR display module that performs the AR processing.
  • The methods provided by the embodiments of the present specification may be implemented by a processor executing corresponding program instructions in a computer, for example implemented on the PC/server side in C++/Java under a Windows/Linux operating system, implemented with the application design languages and necessary hardware of other systems such as Android or
  • iOS, or implemented on the basis of the processing logic of a quantum computer.
  • The data processing device for vehicle damage assessment provided by the present specification may include a processor and a memory for storing processor-executable instructions, and the processor, when executing the instructions, implements:
  • superimposing and displaying the rendered first damage area in the current shooting window by means of augmented reality;
  • the processor further implements:
  • displaying shooting guidance information of a second damaged area of the vehicle until shooting of all recognized damage is completed.
  • the salient rendering includes:
  • the first damage area is marked with a preset characterization symbol, where the preset characterization symbol includes one of the following:
  • the salient rendering includes:
  • the shooting guidance information includes at least one of the following:
  • the form of the shooting guidance information displayed in the current shooting window includes at least one of a symbol, text, voice, animation, video, and vibration.
  • the processor recognizing that the first damage exists in the current shooting window comprises:
  • receiving a damage recognition result returned by the server, where the damage recognition result includes a processing result obtained by the damage identification server performing damage identification on the acquired image using a pre-trained deep neural network.
  • when the processor recognizes that there is damage in the current shooting window, a shooting strategy that adjusts at least a parameter including the shooting frame rate is executed to photograph the damaged area.
  • the processor further implements:
  • transmitting the captured images that meet the damage-assessment image-acquisition requirements to the damage-assessment server.
  • The processing device described in the above embodiments may further include other scalable implementations according to the description of the related method embodiments.
  • the above instructions may be stored in a variety of computer readable storage media.
  • the computer readable storage medium may include physical means for storing information, which may be digitized and stored in a medium utilizing electrical, magnetic or optical means.
  • The computer-readable storage medium of this embodiment may include: means for storing information using electrical energy, such as various types of memory, e.g., RAM or ROM; means for storing information using magnetic energy, such as a hard disk, floppy disk, magnetic tape, magnetic core memory, bubble memory, or USB flash drive; and means for storing information optically, such as a CD or DVD.
  • There may also be other forms of readable storage media, such as quantum memory or graphene memory.
  • The above method or apparatus embodiments can be applied to a client on the user side, such as a smart phone. Accordingly, the present specification provides a client comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement:
  • superimposing and displaying the rendered first damage area in the current shooting window by means of augmented reality;
  • an embodiment of the present specification further provides an electronic device including a display screen, a processor, and a memory storing processor executable instructions.
  • FIG. 8 is a schematic structural diagram of an embodiment of an electronic device according to the present disclosure.
  • When the processor executes the instructions, the method steps described in any one of the foregoing embodiments can be implemented.
  • Although the content of the embodiments of the present specification mentions data acquisition, position alignment, interaction, calculation, judgment and the like in connection with AR technology, the display of shooting guidance information, user-interactive shooting guidance, and preliminary identification of the damage location using a deep neural network,
  • the embodiments of the present specification are not limited to cases that must comply with industry communication standards, standard image data processing protocols, standard communication protocols, standard data models/templates, or the embodiments described in the present specification.
  • Implementations slightly modified from certain industry standards, or using custom approaches, or modified on the basis of the embodiments described above, may also achieve the same, equivalent, or similar effects as the above embodiments, or predictable effects after such modification.
  • Embodiments obtained by applying such modified or altered data acquisition, storage, judgment, and processing may still fall within the scope of optional implementations of the present specification.
  • The controller can be implemented in any suitable manner; for example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor.
  • Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the control logic of a memory.
  • The same functions can also be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, ASICs, programmable logic controllers, embedded microcontrollers, and the like.
  • Such a controller can therefore be considered a hardware component, and the means included within it for implementing various functions can also be considered structures within the hardware component.
  • A device for implementing various functions can even be considered both a software module implementing a method and a structure within a hardware component.
  • the system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • a typical implementation device is a computer.
  • The computer can be, for example, a personal computer, a laptop computer, a vehicle-mounted human-machine interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, or a tablet computer.
  • For convenience of description, the above devices are described as being divided into various modules by function.
  • Of course, the functions of the modules may be implemented in one or more pieces of software and/or hardware, and modules implementing the same function may be implemented by a combination of multiple sub-modules or sub-units.
  • the device embodiments described above are merely illustrative.
  • The division of units is only a logical functional division;
  • in actual implementation there may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device,
  • and the instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing,
  • and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer-readable media include both persistent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, and magnetic tape cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
  • embodiments of the present specification can be provided as a method, system, or computer program product.
  • Embodiments of the present specification can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • embodiments of the present specification can take the form of a computer program product embodied on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • Embodiments of the present specification can be described in the general context of computer-executable instructions executed by a computer, such as program modules.
  • program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • Embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the present invention relates to a vehicle damage assessment data processing method, apparatus, processing device and client. A vehicle damage location can be identified automatically on a mobile device, and the areas requiring further photography can be marked on the camera screen in an easily recognizable way. The user is then guided to photograph or film the areas in question. In this way, a user can complete all the required photographs, without needing technical knowledge, in accordance with the damage-assessment processing requirements, increasing the processing efficiency of vehicle damage assessment and improving the user experience.
PCT/CN2019/076028 2018-05-08 2019-02-25 Vehicle damage assessment data processing method, apparatus, processing device and client WO2019214319A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810432696.3 2018-05-08
CN201810432696.3A CN108632530B (zh) 2018-05-08 2018-05-08 Data processing method, apparatus, device, client and electronic device for vehicle damage assessment

Publications (1)

Publication Number Publication Date
WO2019214319A1 (fr)

Family

ID=63695894

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076028 WO2019214319A1 (fr) 2018-05-08 2019-02-25 Vehicle damage assessment data processing method, apparatus, processing device and client

Country Status (3)

Country Link
CN (2) CN113179368B (fr)
TW (1) TW201947452A (fr)
WO (1) WO2019214319A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3869404A3 * 2020-12-25 2022-01-26 Beijing Baidu Netcom Science And Technology Co. Ltd. Vehicle loss assessment method performed by a mobile terminal, device, mobile terminal and medium
CN115174885A (zh) * 2022-06-28 2022-10-11 深圳数位大数据科技有限公司 Offline scene information collection method, platform, system and medium based on an AR terminal
EP4070251A4 * 2019-12-02 2023-08-30 Click-Ins, Ltd. Systems, methods and programs for generating a damage print in a vehicle
CN117455466A (zh) * 2023-12-22 2024-01-26 南京三百云信息科技有限公司 Method and system for remote vehicle assessment

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10471611B2 (en) * 2016-01-15 2019-11-12 Irobot Corporation Autonomous monitoring robot systems
CN113179368B (zh) * 2018-05-08 2023-10-27 创新先进技术有限公司 Data processing method, apparatus, processing device and client for vehicle damage assessment
CN109447171A (zh) * 2018-11-05 2019-03-08 电子科技大学 Vehicle attitude classification method based on deep learning
CN110245552B (zh) * 2019-04-29 2023-07-18 创新先进技术有限公司 Interactive processing method, apparatus, device and client for vehicle damage image shooting
CN110427810B (zh) * 2019-06-21 2023-05-30 北京百度网讯科技有限公司 Video-based damage assessment method, apparatus, shooting terminal and machine-readable storage medium
CN110659567B (zh) * 2019-08-15 2023-01-10 创新先进技术有限公司 Method and apparatus for identifying damaged parts of a vehicle
CN113038018B (zh) * 2019-10-30 2022-06-28 支付宝(杭州)信息技术有限公司 Method and apparatus for assisting a user in shooting vehicle video
CN111489433B (zh) * 2020-02-13 2023-04-25 北京百度网讯科技有限公司 Vehicle damage localization method, apparatus, electronic device and readable storage medium
CN111368752B (zh) * 2020-03-06 2023-06-02 德联易控科技(北京)有限公司 Vehicle damage analysis method and apparatus
CN111475157B (zh) * 2020-03-16 2024-04-19 中保车服科技服务股份有限公司 Image acquisition template management method, apparatus, storage medium and platform
CN111340974A (zh) * 2020-04-03 2020-06-26 北京首汽智行科技有限公司 Method for recording damaged parts of a shared car
CN112492105B (zh) * 2020-11-26 2022-04-15 深源恒际科技有限公司 Video-based self-service damage-assessment collection method and system for vehicle exterior parts
CN113033372B (zh) * 2021-03-19 2023-08-18 北京百度网讯科技有限公司 Vehicle damage assessment method, apparatus, electronic device and computer-readable storage medium
CN113486725A (zh) * 2021-06-11 2021-10-08 爱保科技有限公司 Intelligent vehicle damage assessment method and apparatus, storage medium and electronic device
CN113256778B (zh) * 2021-07-05 2021-10-12 爱保科技有限公司 Method, apparatus, medium and server for generating vehicle exterior part recognition samples
KR102366017B1 (ko) * 2021-07-07 2022-02-23 쿠팡 주식회사 Method and apparatus for providing information for an installation service
CN113840085A (zh) * 2021-09-02 2021-12-24 北京城市网邻信息技术有限公司 Vehicle source information collection method, apparatus, electronic device and readable medium
CN113866167A (zh) * 2021-09-13 2021-12-31 北京逸驰科技有限公司 Method for generating tire inspection results, computer device and storage medium
CN114245055B (zh) * 2021-12-08 2024-04-26 深圳位置网科技有限公司 Method and system for video calls in emergency call situations
CN114637438B (zh) * 2022-03-23 2024-05-07 支付宝(杭州)信息技术有限公司 AR-based vehicle accident handling method and apparatus

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160050364A1 (en) * 2014-08-18 2016-02-18 Audatex North America, Inc. System for capturing an image of a damaged vehicle
US9723251B2 (en) * 2013-04-23 2017-08-01 Jaacob I. SLOTKY Technique for image acquisition and management
CN107194323A (zh) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 Vehicle damage-assessment image acquisition method, apparatus, server and terminal device
CN107358596A (zh) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 Image-based vehicle damage assessment method, apparatus, electronic device and system
CN107368776A (zh) * 2017-04-28 2017-11-21 阿里巴巴集团控股有限公司 Vehicle damage-assessment image acquisition method, apparatus, server and terminal device
US20180082378A1 (en) * 2016-09-21 2018-03-22 Allstate Insurance Company Enhanced Image Capture and Analysis of Damaged Tangible Objects
CN108632530A (zh) * 2018-05-08 2018-10-09 阿里巴巴集团控股有限公司 Data processing method, apparatus, processing device and client for vehicle damage assessment
CN108665373A (zh) * 2018-05-08 2018-10-16 阿里巴巴集团控股有限公司 Interactive processing method, apparatus, processing device and client for vehicle damage assessment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10748216B2 (en) * 2013-10-15 2020-08-18 Audatex North America, Inc. Mobile system for generating a damaged vehicle insurance estimate
CN107360365A (zh) * 2017-06-30 2017-11-17 盯盯拍(深圳)技术股份有限公司 Shooting method, shooting apparatus, terminal and computer-readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9723251B2 (en) * 2013-04-23 2017-08-01 Jaacob I. SLOTKY Technique for image acquisition and management
US20160050364A1 (en) * 2014-08-18 2016-02-18 Audatex North America, Inc. System for capturing an image of a damaged vehicle
US20180082378A1 (en) * 2016-09-21 2018-03-22 Allstate Insurance Company Enhanced Image Capture and Analysis of Damaged Tangible Objects
CN107358596A (zh) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 Image-based vehicle damage assessment method, apparatus, electronic device and system
CN107194323A (zh) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 Vehicle damage-assessment image acquisition method, apparatus, server and terminal device
CN107368776A (zh) * 2017-04-28 2017-11-21 阿里巴巴集团控股有限公司 Vehicle damage-assessment image acquisition method, apparatus, server and terminal device
CN108632530A (zh) * 2018-05-08 2018-10-09 阿里巴巴集团控股有限公司 Data processing method, apparatus, processing device and client for vehicle damage assessment
CN108665373A (zh) * 2018-05-08 2018-10-16 阿里巴巴集团控股有限公司 Interactive processing method, apparatus, processing device and client for vehicle damage assessment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4070251A4 * 2019-12-02 2023-08-30 Click-Ins, Ltd. Systems, methods and programs for generating a damage print in a vehicle
EP3869404A3 * 2020-12-25 2022-01-26 Beijing Baidu Netcom Science And Technology Co. Ltd. Vehicle loss assessment method performed by a mobile terminal, device, mobile terminal and medium
CN115174885A (zh) * 2022-06-28 2022-10-11 深圳数位大数据科技有限公司 Offline scene information collection method, platform, system and medium based on an AR terminal
CN117455466A (zh) * 2023-12-22 2024-01-26 南京三百云信息科技有限公司 Method and system for remote vehicle assessment
CN117455466B (zh) * 2023-12-22 2024-03-08 南京三百云信息科技有限公司 Method and system for remote vehicle assessment

Also Published As

Publication number Publication date
CN108632530B (zh) 2021-02-23
CN108632530A (zh) 2018-10-09
CN113179368B (zh) 2023-10-27
TW201947452A (zh) 2019-12-16
CN113179368A (zh) 2021-07-27

Similar Documents

Publication Publication Date Title
WO2019214319A1 (fr) Procédé de traitement de données d'évaluation de perte de véhicule, appareil, dispositif de traitement et client
WO2019214313A1 (fr) Procédé de traitement interactif, appareil et dispositif de traitement pour évaluer une perte de véhicule et terminal client
WO2019214320A1 (fr) Procédé de traitement d'identification de dommage de véhicule, dispositif de traitement, client et serveur
US20210158533A1 (en) Image processing method and apparatus, and storage medium
WO2019214321A1 (fr) Procédé de traitement d'identification de dommage à véhicule, dispositif de traitement, client et serveur
WO2019109730A1 (fr) Procédé et appareil de traitement pour identifier l'endommagement d'un objet, serveur et client
TWI759647B (zh) 影像處理方法、電子設備,和電腦可讀儲存介質
CN110245552B (zh) 车损图像拍摄的交互处理方法、装置、设备及客户端
CN110059623B (zh) 用于生成信息的方法和装置
CN110910628B (zh) 车损图像拍摄的交互处理方法、装置、电子设备
CN110349161B (zh) 图像分割方法、装置、电子设备、及存储介质
CN114267041B (zh) 场景中对象的识别方法及装置
WO2019062631A1 (fr) Procédé et dispositif de génération d'image dynamique locale
CN111382647B (zh) 一种图片处理方法、装置、设备及存储介质
CN111310815A (zh) 图像识别方法、装置、电子设备及存储介质
CN111160312A (zh) 目标识别方法、装置和电子设备
CN111325107A (zh) 检测模型训练方法、装置、电子设备和可读存储介质
KR20220004606A (ko) 신호등 식별 방법, 장치, 기기, 저장 매체 및 컴퓨터 프로그램
EP4303815A1 (fr) Procédé de traitement d'image, dispositif électronique, support de stockage et produit-programme
WO2023155350A1 (fr) Procédé et appareil de positionnement de foule, dispositif électronique et support de stockage
CN110177216A (zh) 图像处理方法、装置、移动终端以及存储介质
CN110807728B (zh) 对象的显示方法、装置、电子设备及计算机可读存储介质
CN110263721B (zh) 车灯设置方法及设备
WO2021214540A1 (fr) Localisation fiable de dispositif de prise de vues en fonction d'une image à composante chromatique unique et d'un apprentissage multimodal
KR102677044B1 (ko) 이미지 처리 방법, 장치 및 디바이스, 및 저장 매체

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19800030

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19800030

Country of ref document: EP

Kind code of ref document: A1