WO2020177480A1 - Vehicle accident identification method and apparatus, and electronic device - Google Patents

Vehicle accident identification method and apparatus, and electronic device

Info

Publication number
WO2020177480A1
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
accident
image data
scene
vehicle accident
Prior art date
Application number
PCT/CN2020/070511
Other languages
English (en)
French (fr)
Inventor
周凡
Original Assignee
Alibaba Group Holding Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited
Publication of WO2020177480A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00204Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server
    • H04N1/00244Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server with a server, e.g. an internet server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00249Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a photographic apparatus, e.g. a photographic printer or a projector
    • H04N1/00251Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a photographic apparatus, e.g. a photographic printer or a projector with an apparatus for taking photographic images, e.g. a camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Definitions

  • One or more embodiments of this specification relate to the field of communication technology, and in particular to a vehicle accident identification method and apparatus, and an electronic device.
  • After a vehicle accident occurs, an insurance company's damage assessor and the traffic police usually need to manually survey the scene and verify the account of the accident given by the parties involved, so as to identify the vehicle accident.
  • In the related art, the identification of vehicle accidents relies mainly on instrument measurement, video playback, and manual judgment.
  • In view of this, one or more embodiments of this specification provide a vehicle accident identification method and apparatus, and an electronic device.
  • According to a first aspect, a vehicle accident identification method is provided, including:
  • acquiring image data of a vehicle accident scene; and
  • determining an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
  • According to a second aspect, a vehicle accident identification apparatus is provided, including:
  • an image acquisition unit, which acquires image data of a vehicle accident scene; and
  • a result determination unit, which determines an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
  • According to a third aspect, an electronic device is provided, including: a processor; and
  • a memory for storing processor-executable instructions;
  • where the processor implements the vehicle accident identification method described in any of the foregoing embodiments by running the executable instructions.
  • Fig. 1 is a schematic structural diagram of a vehicle accident identification system provided by an exemplary embodiment.
  • Fig. 2 is a flowchart of a method for identifying a vehicle accident according to an exemplary embodiment.
  • Fig. 3 is an interaction diagram of a method for identifying a vehicle accident according to an exemplary embodiment.
  • Fig. 4A is a schematic diagram showing guidance information provided by an exemplary embodiment.
  • Fig. 4B is another schematic diagram showing guidance information provided by an exemplary embodiment.
  • Fig. 4C is a schematic diagram of training an accident identification model provided by an exemplary embodiment.
  • Fig. 5 is a schematic structural diagram of a device provided by an exemplary embodiment.
  • Fig. 6 is a block diagram of a vehicle accident identification device provided by an exemplary embodiment.
  • It should be noted that in other embodiments, the steps of the corresponding method are not necessarily executed in the order shown and described in this specification.
  • In some other embodiments, the method may include more or fewer steps than described in this specification.
  • In addition, a single step described in this specification may be decomposed into multiple steps in other embodiments, and multiple steps described in this specification may be combined into a single step in other embodiments.
  • Fig. 1 is a schematic structural diagram of a vehicle accident identification system provided by an exemplary embodiment.
  • As shown in Fig. 1, the system may include a server 11, a network 12, and several image acquisition devices, such as mobile phones 13 and 14 and driving recorders 15 and 16.
  • The server 11 may be a physical server containing an independent host, or it may be a virtual server carried by a host cluster. During operation, the server 11 can run the server-side program of an application to implement the application's related business functions. In the technical solutions of one or more embodiments of this specification, the server 11 acts as the server side, cooperating with the clients running on the mobile phones 13-14 and the driving recorders 15-16 to implement the vehicle accident identification scheme.
  • The mobile phones 13-14 and driving recorders 15-16 are just some of the types of image acquisition device a user can use.
  • In fact, users can obviously also use image acquisition devices such as tablet devices, notebook computers, PDAs (Personal Digital Assistants), and wearable devices (such as smart glasses and smart watches); one or more embodiments of this specification do not limit this.
  • During operation, the image acquisition device can run the client-side program of an application to implement the application's related business functions.
  • For example, the image acquisition device can act as a client interacting with the server 11 to implement the vehicle accident identification scheme of this specification.
  • The network 12 used for interaction between the mobile phones 13-14, the driving recorders 15-16, and the server 11 may include multiple types of wired or wireless networks.
  • In an embodiment, the network 12 may include the Public Switched Telephone Network (PSTN) and the Internet.
  • Fig. 2 is a flowchart of a vehicle accident identification method provided by an exemplary embodiment. As shown in Fig. 2, the method is applied to a client and may include the following steps:
  • Step 202: Acquire image data of a vehicle accident scene.
  • Step 204: Determine an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
  • In an embodiment, after a vehicle accident occurs, a user (for example, the driver involved in the accident, a traffic police officer, or an insurance company's damage assessor) can use the client (an image acquisition device equipped with a camera module that can communicate with the server, such as a mobile phone or a driving recorder) to capture image data (such as photos or videos) of the vehicle accident scene, so that the captured image data can be used as the input of the accident identification model, which outputs the identification result.
  • By identifying vehicle accidents with a machine learning model in this way, users can directly use photos and videos of the accident scene for end-to-end vehicle accident identification, which effectively improves identification efficiency and shortens the identification cycle.
  • Meanwhile, the vehicle accident identification scheme of this specification supports remote and automatic identification, greatly reducing the cost of identifying vehicle accidents.
  • For example, after a vehicle accident occurs, the driver only needs to collect image data of the scene through the client, and the identification result can be obtained based on the scheme of this specification, without a damage assessor having to inspect the scene.
  • The driver and the traffic police can also deal with the vehicle accident as soon as possible.
  • In an embodiment, the accident identification model can be configured on the client side; the client then directly inputs the image data into the accident identification model and uses the model's output as the identification result.
  • In another embodiment, the accident identification model can be configured on the server side; the client then sends the image data to the server so that the server inputs the image data into the accident identification model, and
  • the output returned by the server is used as the identification result.
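  • As an illustration only (this specification does not define a wire protocol between client and server), a minimal client-side sketch of the server-side configuration might look as follows; the endpoint URL, field names, and response schema are all assumptions, not part of the specification:

```python
import requests  # widely used third-party HTTP client

# Hypothetical endpoint; the specification does not define a protocol or schema.
SERVER_URL = "https://example.com/api/accident-identification"

def identify_accident(photo_paths):
    """Upload accident-scene photos to the server-side accident
    identification model and return the identification result it reports."""
    files = [("images", open(path, "rb")) for path in photo_paths]
    try:
        response = requests.post(SERVER_URL, files=files, timeout=30)
        response.raise_for_status()
        # Assumed response shape: {"collision_speed_kmh": 110, "probability": 0.8}
        return response.json()
    finally:
        for _, handle in files:
            handle.close()

if __name__ == "__main__":
    print(identify_accident(["front.jpg", "left_side.jpg", "damage.jpg"]))
```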
  • In an embodiment, the image data of the vehicle accident scene is the basis for identifying the vehicle accident (i.e., the input of the accident identification model), and this image data needs to be captured by the user using the client. It is therefore necessary to guide the user to capture image data that can accurately reflect the vehicle accident scene. Further, guidance information can be displayed in the shooting interface of the image acquisition device (i.e., the client) to guide the user to capture correct image data.
  • In one case, the standard relative position relationship between the vehicle accident scene and the image acquisition device can be defined in advance; in other words, when the image acquisition device maintains the standard relative position relationship with the scene, it can capture image data that accurately reflects the scene (understood as containing the details of the scene). The user can therefore be guided to move the image acquisition device according to the relative position relationship between the scene and the device.
  • As an exemplary embodiment, the initial relative position relationship between the vehicle accident scene and the image acquisition device may first be determined from the image data (the image data of the scene acquired by the device; for example, the first photo the user takes of the scene).
  • The movement state of the image acquisition device is then determined, so that the real-time relative position relationship between the device and the scene after movement is determined based on the movement state and the initial relative position relationship; then, according to the real-time relative position relationship, first guidance information can be displayed in the shooting interface of the image acquisition device to guide the user to move the device to a position matching the standard relative position relationship.
  • In another case, the standard shooting orientation of the image acquisition device relative to the vehicle accident scene can be defined in advance; in other words, when the device is held at the standard shooting orientation, it can capture image data that accurately reflects the scene. The user can therefore be guided to move the image acquisition device according to the standard shooting orientation.
  • As an exemplary embodiment, the shooting orientation of the image acquisition device relative to the vehicle accident scene may first be acquired (for example, the orientation when the user initially photographs the scene with the device), and it is then determined whether this shooting orientation meets the standard shooting orientation; when it does not, second guidance information is displayed in the shooting interface of the device to guide the user to move it to the standard shooting orientation.
  • In an embodiment, the operation of acquiring the shooting orientation of the image acquisition device relative to the vehicle accident scene can be completed using a machine learning model.
  • For example, real-time image data obtained by the image acquisition device shooting the vehicle accident scene can be acquired and input into a shooting orientation determination model (trained on the correspondence between image data obtained by shooting sample accident vehicles at preset shooting orientations and those preset orientations), so that the model's output is used as the shooting orientation of the image acquisition device relative to the scene.
  • Similarly, the above determination of the initial relative position relationship can also be completed by a machine learning model.
  • In an embodiment, the second guidance information guiding the user to move the image acquisition device to each standard shooting orientation may be displayed in the shooting interface in sequence, according to a predefined shooting process.
  • The shooting process includes the standard shooting orientation for each shooting object at the vehicle accident scene, and the order in which the shooting objects are photographed.
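  • The following is a minimal sketch of such a predefined shooting process; the shooting objects, distances, and message wording are illustrative assumptions rather than values fixed by the specification:

```python
from collections import deque

# Hypothetical shooting process: each entry pairs a shooting object with a
# standard shooting orientation (here reduced to a distance in meters).
SHOOTING_FLOW = deque([
    ("front of the vehicle", 3.0),
    ("left side of the vehicle", 4.0),
    ("right side of the vehicle", 4.0),
    ("rear of the vehicle", 3.0),
    ("damaged part", 0.5),
])

def next_second_guidance(flow):
    """Return the next second-guidance message in the predefined order,
    or None once every shooting object has been photographed."""
    if not flow:
        return None
    shooting_object, distance_m = flow.popleft()
    return f"Please shoot the {shooting_object} from {distance_m} meters away"

while (message := next_second_guidance(SHOOTING_FLOW)) is not None:
    print(message)  # in practice, rendered in the shooting interface
```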
  • In an embodiment, the parameters of the identification result may include at least one of the following: collision angle, driving speed before the collision, damage location, damage degree.
  • Fig. 3 is an interaction diagram of a vehicle accident identification method provided by an exemplary embodiment.
  • As shown in Fig. 3, the interaction process may include the following steps:
  • Step 302: The mobile phone captures image data of the vehicle accident scene.
  • In an embodiment, after a vehicle accident occurs, a user (for example, the driver involved, a traffic police officer, or an insurance company's damage assessor) can use a mobile phone to capture image data of the accident scene: for example, photographing the vehicles involved in the collision, the specific damaged parts of a vehicle, the license plate number, and so on.
  • Step 304: The mobile phone displays guidance information in the shooting interface.
  • Step 306: The user moves the mobile phone to the standard position to capture image data.
  • In an embodiment, the image data obtained by the mobile phone shooting the vehicle accident scene will be used as the basis for identifying the accident (i.e., as the input of the accident identification model), so the user needs to be guided to capture image data that accurately reflects the scene, to improve the accuracy of identifying the vehicle accident.
  • Further, guidance information (the first guidance information or the second guidance information) can be displayed in the phone's shooting interface to guide the user to capture correct image data.
  • In an embodiment, the standard relative position relationship between the vehicle accident scene and the image acquisition device (a mobile phone in this embodiment) can be defined in advance; in other words, when the mobile phone maintains the standard relative position relationship with the scene, it can capture image data that accurately reflects the scene (understood as containing each detail of the scene).
  • For example, the following standard relative position relationships can be defined: 3 meters from the front of the vehicle, 4 meters from the left side of the vehicle, 4 meters from the right side of the vehicle, 3 meters from the rear of the vehicle, 50 centimeters from the damaged part, and so on.
  • Based on the definition of the standard relative position relationship, the first guidance information can be displayed in the shooting interface to guide the user to move the phone so that the relative position relationship between the phone and the accident vehicle conforms to the standard relative position relationship (i.e., to move the phone to the standard position).
  • As an exemplary embodiment, the mobile phone may determine the initial relative position relationship between the phone and the vehicle accident scene based on the image data captured in step 302 (for example, the first photo the user takes of the scene).
  • For example, the initial relative position relationship can be determined by a relative position relationship determination model, which can be trained on sample image data together with the distance and angle to the subject at the time each sample was captured (the relative position relationship being described in terms of distance and angle).
  • As another example, the distance and angle between the phone and the subject can be obtained through geometric calculation, by identifying the subject in the image data and extracting its feature points.
  • After the initial relative position relationship is determined, the movement state of the mobile phone is determined, so that the real-time relative position relationship between the phone and the scene after movement is determined based on the movement state and the initial relative position relationship.
  • The phone's movement state can be calculated from data collected by the phone's gyroscope and accelerometer. Once it is known how the phone has moved, and since the vehicle accident scene is usually static, the relative position relationship between the phone and the scene after the movement (i.e., the real-time relative position relationship) can be determined from the initial relative position relationship and the phone's movement. Based on this determination of the real-time relative position relationship, and according to the difference between the real-time and standard relative position relationships, the first guidance information can be displayed in the phone's shooting interface to guide the user to move the phone to a position matching the standard relative position relationship.
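  • The following is a minimal sketch of the kind of dead-reckoning update this implies, assuming the accident scene is static, that device-frame readings have already been rotated into the world frame using the gyroscope, and that gravity has been removed from the accelerometer samples; the simple integration scheme and the tolerance are assumptions:

```python
import numpy as np

def update_relative_position(rel_pos, world_accels, dt):
    """Dead-reckon the phone's displacement from accelerometer samples and
    update the phone-to-scene vector. `world_accels` are accelerometer
    readings already rotated into the world frame with gravity removed;
    `dt` is the sampling interval in seconds."""
    velocity = np.zeros(3)
    displacement = np.zeros(3)
    for a in world_accels:
        velocity += np.asarray(a, dtype=float) * dt   # integrate acceleration
        displacement += velocity * dt                 # integrate velocity
    # The accident scene is static, so moving the phone by `displacement`
    # shifts the phone-to-scene vector by the opposite amount.
    return np.asarray(rel_pos, dtype=float) - displacement

def first_guidance(rel_pos, standard_distance_m, tolerance_m=0.1):
    """Compare the real-time distance with the standard relative position
    relationship and produce the first guidance message."""
    gap = float(np.linalg.norm(rel_pos)) - standard_distance_m
    if abs(gap) <= tolerance_m:
        return "Position matches the standard relative position; please shoot"
    direction = "closer" if gap > 0 else "further away"
    return f"Please move {abs(gap):.1f} meters {direction} to shoot"
```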
  • For example, as shown in Fig. 4A, when the user photographs the left side of the accident vehicle 41 from 5 meters away while the standard relative position relationship for that shooting direction defines a distance of 4 meters, the mobile phone can display the guidance message 42 "Please move 1 meter closer to shoot" in the shooting interface 4, guiding the user to bring the phone 1 meter closer to the accident vehicle 41 along the shooting direction.
  • In an embodiment, the standard shooting orientation of the phone relative to the vehicle accident scene can be defined in advance; in other words, when the phone is held at the standard shooting orientation relative to the scene, it can capture image data that accurately reflects the scene.
  • Based on the definition of the standard shooting orientation, the second guidance information can be displayed in the shooting interface to guide the user to move the phone so that the orientation from which it shoots the accident vehicle (or damaged part) meets the standard shooting orientation.
  • As an exemplary embodiment, the orientation from which the user's phone shoots the vehicle accident scene may first be acquired (for example, the orientation when the user initially photographs the scene with the phone), and it is then determined whether this shooting orientation meets the standard shooting orientation.
  • When the shooting orientation does not meet the standard shooting orientation, the second guidance information is displayed in the shooting interface to guide the user to move the phone to the standard shooting orientation (i.e., to move the phone to the standard position).
  • In an embodiment, the phone can input the image data captured in step 302 (for example, the first photo the user takes of the scene) into the shooting orientation determination model, and use the model's output as the phone's current shooting orientation relative to the vehicle accident scene.
  • The shooting orientation determination model can be trained on the correspondence between image data obtained by shooting sample accident vehicles at preset shooting orientations (which may include multiple different orientations) and those preset orientations.
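  • A minimal sketch of such a correspondence-trained model, treating shooting orientation determination as image classification over the preset orientations, is given below; the network architecture and the class list are assumptions, since the specification only fixes the training correspondence:

```python
import torch
import torch.nn as nn

# Hypothetical preset shooting orientations used as class labels.
ORIENTATIONS = ["front_3m", "left_4m", "right_4m", "rear_3m", "damage_0.5m"]

class OrientationModel(nn.Module):
    """Maps a photo to one of the preset shooting orientations."""
    def __init__(self, num_classes=len(ORIENTATIONS)):
        super().__init__()
        self.features = nn.Sequential(          # small CNN backbone
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                       # x: (batch, 3, H, W)
        feats = self.features(x).flatten(1)
        return self.classifier(feats)           # logits over orientations

model = OrientationModel()
photo = torch.randn(1, 3, 224, 224)             # stand-in for a captured photo
predicted = ORIENTATIONS[model(photo).argmax(dim=1).item()]
```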
  • When displaying the second guidance information, the second guidance information guiding the user to move the phone to each standard shooting orientation can be displayed in the shooting interface in sequence, according to the predefined shooting process.
  • The shooting process includes the standard shooting orientation for each shooting object at the vehicle accident scene, and the order in which the shooting objects are photographed.
  • For example, as shown in Fig. 4B, suppose the shooting process includes photographing the accident vehicle from 4 meters off its left side and then from 4 meters off its right side. When the user finishes photographing the accident vehicle 41 from 4 meters off its left side, the shooting interface can display the guidance message 43 "Please shoot the right side of the accident vehicle from 4 meters" together with an arrow pointing to the right side of the accident vehicle 41, guiding the user to take the phone to a position 4 meters off the right side of the accident vehicle 41 to shoot.
  • Step 308: The mobile phone sends the image data captured at the standard position to the server.
  • Step 310: The server inputs the received image data into the accident identification model.
  • In an embodiment, image data of historical vehicle accident scenes may be collected in advance and annotated with accident identification information obtained by analyzing that image data through reliable means (for example, accident identification information produced by a damage assessor manually analyzing the image data);
  • the annotated image data is then used as sample data to train a machine learning model and obtain the accident identification model.
  • The parameters of the accident identification information may include collision angle, driving speed before the collision, damage location, damage degree, and so on; algorithms such as logistic regression, decision trees, neural networks, and support vector machines can be used to train on the sample data and obtain the accident identification model.
  • Of course, one or more embodiments of this specification place no limit on the parameters of the accident identification information or on the algorithms used to train the accident identification model.
  • The vehicle accident identification scheme of this specification supports remote and automatic identification, greatly reducing the cost of identifying vehicle accidents. For example, after a vehicle accident occurs, the driver only needs to collect image data of the scene through the client, and the identification result can be obtained based on the scheme of this specification, without a damage assessor having to inspect the scene; the driver and the traffic police can also deal with the vehicle accident as soon as possible.
  • For example, a batch of historical vehicle accident cases can be collected, and data such as the vehicle components involved in each collision, the relative speed between the vehicle and the object it collided with at the moment of collision (hereinafter referred to as the collision speed), and photos of the collision location can be obtained.
  • Based on the obtained data, a set of sample data with photos as input and the collision speed as the labeled value can be constructed for each collision component, with the collision speed rounded to an integer.
  • Optionally, the value range of the collision speed can be divided at a certain precision. For example, with a value range of 10 km/h to 200 km/h and a precision of 1 km/h, the collision speed can be divided into 191 speed sections from 10 km/h to 200 km/h.
  • Based on this division of the collision speed, the prediction of the collision speed can be defined as a classification problem.
  • In other words, by inputting a group of photos of a vehicle accident into the accident identification model, the model can predict the speed section to which the collision speed of that accident belongs.
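  • A minimal sketch of this discretization, using the 10 km/h-200 km/h range and 1 km/h precision given above:

```python
MIN_SPEED, MAX_SPEED, PRECISION = 10, 200, 1  # km/h, as in the example above

NUM_SECTIONS = (MAX_SPEED - MIN_SPEED) // PRECISION + 1  # 191 speed sections

def speed_to_section(speed_kmh: float) -> int:
    """Map a (rounded) collision speed to its class index 0..190."""
    speed = round(speed_kmh)
    if not MIN_SPEED <= speed <= MAX_SPEED:
        raise ValueError(f"speed {speed} km/h outside {MIN_SPEED}-{MAX_SPEED}")
    return (speed - MIN_SPEED) // PRECISION

def section_to_speed(section: int) -> int:
    """Inverse mapping: class index back to the section's speed in km/h."""
    return MIN_SPEED + section * PRECISION

assert NUM_SECTIONS == 191
assert speed_to_section(110) == 100 and section_to_speed(100) == 110
```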
  • For the training process, a CNN (Convolutional Neural Network) can be used to train on the sample data and obtain the accident identification model.
  • As shown in Fig. 4C, a CNN may include convolutional layers, pooling layers, and fully connected layers.
  • The convolutional layers compute over the input photos to extract feature vectors;
  • the pooling layers, usually placed after the convolutional layers, on the one hand reduce the dimensionality of the feature vectors to simplify the network's computational complexity, and on the other hand condense the convolutional layers' output to avoid overfitting of the convolutional neural network;
  • the fully connected layers map the feature vectors learned by the network into the sample label space, for example converting the two-dimensional feature vector output by the pooling layers into a one-dimensional vector. Since the number of photos of a vehicle accident is variable, and the visual features contained in each photo are related along a temporal dimension, the sample data described above (a group of photos of the same vehicle accident, labeled with the collision speed) can be used as input to train the neural network.
  • For example, a CNN can be used to extract the visual feature vector of each photo, and these vectors can then be fed into an LSTM (Long Short-Term Memory) network, which processes the visual feature vectors of all the photos to generate a final classification vector representing the predicted probability of each possible collision speed.
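  • The following is a minimal sketch of this CNN-plus-LSTM arrangement (a per-photo CNN feature extractor feeding an LSTM, with a classification head over the 191 speed sections); the layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

NUM_SECTIONS = 191  # speed sections from 10 km/h to 200 km/h at 1 km/h

class AccidentSpeedModel(nn.Module):
    """A CNN extracts a visual feature vector per photo; an LSTM consumes the
    sequence of vectors; a linear head scores each speed section."""
    def __init__(self, feat_dim=64, hidden_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, feat_dim)
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, NUM_SECTIONS)

    def forward(self, photos):                  # photos: (batch, T, 3, H, W)
        b, t = photos.shape[:2]
        feats = self.cnn(photos.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)          # final hidden state
        return self.head(h_n[-1])               # logits over speed sections

model = AccidentSpeedModel()
photos = torch.randn(1, 4, 3, 224, 224)         # e.g. 4 photos of one accident
probs = model(photos).softmax(dim=1)            # probability per speed section
```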
  • Step 312: The server returns the output of the accident identification model to the mobile phone.
  • In an embodiment, the accident identification model can also be configured on the phone side; in other words, after the phone captures image data at the standard position, it directly inputs that image data into the accident identification model to obtain the identification result (i.e., the accident identification information output by the model), without sending the captured image data to the server.
  • Further, the server can periodically update the sample data and retrain the accident identification model, thereby improving identification accuracy.
  • When the accident identification model is configured on the phone side, the server can periodically send the updated accident identification model to the phone.
  • Step 314: The mobile phone displays the received output as the identification result for the current vehicle accident scene.
  • Following the example above, the output of the accident identification model is a probability for each possible collision speed of the current vehicle accident.
  • For example, the collision speed with the highest probability in the output may be used as the identification result, or the collision speed that both has the highest probability and exceeds a preset probability threshold may be used as the identification result.
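  • A minimal sketch of both selection rules follows; the threshold value used in the usage example is an illustrative assumption:

```python
def pick_identification_result(section_probs, threshold=None):
    """section_probs: mapping of collision speed (km/h) -> probability.
    Returns the chosen speed, or None if no speed clears the threshold."""
    best_speed = max(section_probs, key=section_probs.get)
    if threshold is not None and section_probs[best_speed] <= threshold:
        return None  # highest probability did not exceed the threshold
    return best_speed

# Example mirroring the kind of output described above.
probs = {110: 0.80, 111: 0.05, 112: 0.04, 113: 0.03}
print(pick_identification_result(probs))            # -> 110
print(pick_identification_result(probs, 0.75))      # -> 110 (0.80 > 0.75)
```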
  • Fig. 5 is a schematic structural diagram of a device provided by an exemplary embodiment. Referring to Fig. 5, at the hardware level the device includes a processor 502, an internal bus 504, a network interface 506, a memory 508, and a non-volatile memory 510, and of course may also include hardware required by other services.
  • The processor 502 reads the corresponding computer program from the non-volatile memory 510 into the memory 508 and runs it, forming a vehicle accident identification apparatus at the logical level.
  • Of course, in addition to software implementations, one or more embodiments of this specification do not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution body of the following processing flow is not limited to logical units, and may also be hardware or logic devices.
  • Referring to Fig. 6, in a software implementation, the vehicle accident identification apparatus may include:
  • an image acquisition unit 61, which acquires image data of a vehicle accident scene; and
  • a result determination unit 62, which determines an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
  • Optionally, the result determination unit 62 is specifically configured to: input the image data into the accident identification model and use the model's output as the identification result;
  • or send the image data to the server so that the server inputs the image data into the accident identification model, and use the output returned by the server as the identification result.
  • Optionally, the apparatus further includes:
  • an initial position determination unit 63, which determines the initial relative position relationship between the vehicle accident scene and the image acquisition device according to the image data;
  • a movement state determination unit 64, which determines the movement state of the image acquisition device;
  • a real-time position determination unit 65, which, based on the movement state and the initial relative position relationship, determines the real-time relative position relationship between the image acquisition device and the vehicle accident scene after movement; and
  • a first display unit 66, which displays first guidance information in the shooting interface of the image acquisition device according to the real-time relative position relationship, to guide the user to move the image acquisition device to a position matching the standard relative position relationship.
  • Optionally, the apparatus further includes:
  • an orientation acquisition unit 67, which acquires the shooting orientation of the image acquisition device relative to the vehicle accident scene;
  • an orientation determination unit 68, which determines whether the shooting orientation meets the standard shooting orientation; and
  • a second display unit 69, which, when the shooting orientation does not meet the standard shooting orientation, displays second guidance information in the shooting interface of the image acquisition device to guide the user to move the image acquisition device to the standard shooting orientation.
  • Optionally, the orientation acquisition unit 67 is specifically configured to: acquire real-time image data obtained by the image acquisition device shooting the vehicle accident scene;
  • input the real-time image data into a shooting orientation determination model, the shooting orientation determination model being trained on the correspondence between image data obtained by shooting sample accident vehicles at preset shooting orientations and the preset shooting orientations; and
  • use the output of the shooting orientation determination model as the shooting orientation of the image acquisition device relative to the vehicle accident scene.
  • Optionally, the second display unit 69 is specifically configured to:
  • sequentially display, in the shooting interface and according to a predefined shooting process, the second guidance information guiding the user to move the image acquisition device to each standard shooting orientation; the shooting process includes the standard shooting orientation for each shooting object at the vehicle accident scene, and the order in which the shooting objects are photographed.
  • Optionally, the parameters of the identification result include at least one of the following: collision angle, driving speed before the collision, damage location, damage degree.
  • The systems, apparatuses, modules, or units set forth in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function; a typical implementation device is a computer.
  • The specific form of the computer can be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • In a typical configuration, the computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology.
  • The information may be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassette tapes, magnetic disk storage, quantum memory, graphene-based storage media, or other magnetic storage devices or any other non-transmission media, which can be used to store information accessible by computing devices. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
  • It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
  • For example, without departing from the scope of one or more embodiments of this specification, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".

Abstract

A vehicle accident identification method and apparatus, and an electronic device. The method may include: acquiring image data of a vehicle accident scene (202); and determining an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes (204).

Description

Vehicle accident identification method and apparatus, and electronic device

Technical Field
One or more embodiments of this specification relate to the field of communication technology, and in particular to a vehicle accident identification method and apparatus, and an electronic device.
Background
After a vehicle accident occurs, an insurance company's damage assessor and the traffic police usually need to manually survey the scene and verify the account of the accident given by the parties involved, so as to identify the vehicle accident. In the related art, the identification of vehicle accidents relies mainly on instrument measurement, video playback, and manual judgment.
Summary
In view of this, one or more embodiments of this specification provide a vehicle accident identification method and apparatus, and an electronic device.
To achieve the above objective, one or more embodiments of this specification provide the following technical solutions:
According to a first aspect of one or more embodiments of this specification, a vehicle accident identification method is provided, including:
acquiring image data of a vehicle accident scene;
determining an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
According to a second aspect of one or more embodiments of this specification, a vehicle accident identification apparatus is provided, including:
an image acquisition unit, which acquires image data of a vehicle accident scene;
a result determination unit, which determines an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
According to a third aspect of one or more embodiments of this specification, an electronic device is provided, including:
a processor;
a memory for storing processor-executable instructions;
where the processor implements the vehicle accident identification method described in any of the above embodiments by running the executable instructions.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of a vehicle accident identification system provided by an exemplary embodiment.
Fig. 2 is a flowchart of a vehicle accident identification method provided by an exemplary embodiment.
Fig. 3 is an interaction diagram of a vehicle accident identification method provided by an exemplary embodiment.
Fig. 4A is a schematic diagram of displaying guidance information provided by an exemplary embodiment.
Fig. 4B is another schematic diagram of displaying guidance information provided by an exemplary embodiment.
Fig. 4C is a schematic diagram of training an accident identification model provided by an exemplary embodiment.
Fig. 5 is a schematic structural diagram of a device provided by an exemplary embodiment.
Fig. 6 is a block diagram of a vehicle accident identification apparatus provided by an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of this specification; rather, they are merely examples of apparatuses and methods consistent with some aspects of one or more embodiments of this specification, as detailed in the appended claims.
It should be noted that in other embodiments, the steps of the corresponding method are not necessarily executed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. In addition, a single step described in this specification may be decomposed into multiple steps in other embodiments, and multiple steps described in this specification may be combined into a single step in other embodiments.
Fig. 1 is a schematic structural diagram of a vehicle accident identification system provided by an exemplary embodiment. As shown in Fig. 1, the system may include a server 11, a network 12, and several image acquisition devices, such as mobile phones 13 and 14 and driving recorders 15 and 16.
The server 11 may be a physical server containing an independent host, or it may be a virtual server carried by a host cluster. During operation, the server 11 can run the server-side program of an application to implement the application's related business functions. In the technical solutions of one or more embodiments of this specification, the server 11 acts as the server side, cooperating with the clients running on the mobile phones 13-14 and the driving recorders 15-16 to implement the vehicle accident identification scheme.
The mobile phones 13-14 and driving recorders 15-16 are just some of the types of image acquisition device a user can use. In fact, users can obviously also use image acquisition devices such as tablet devices, notebook computers, PDAs (Personal Digital Assistants), and wearable devices (such as smart glasses and smart watches); one or more embodiments of this specification do not limit this. During operation, the image acquisition device can run the client-side program of an application to implement the application's related business functions; for example, the image acquisition device can act as a client interacting with the server 11 to implement the vehicle accident identification scheme of this specification.
The network 12 used for interaction between the mobile phones 13-14, the driving recorders 15-16, and the server 11 may include multiple types of wired or wireless networks. In an embodiment, the network 12 may include the Public Switched Telephone Network (PSTN) and the Internet.
The vehicle accident identification scheme of this specification is described below with respect to the different roles of the client and the server.
Referring to Fig. 2, Fig. 2 is a flowchart of a vehicle accident identification method provided by an exemplary embodiment. As shown in Fig. 2, the method is applied to a client and may include the following steps:
Step 202: Acquire image data of a vehicle accident scene.
Step 204: Determine an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
In an embodiment, after a vehicle accident occurs, a user (for example, the driver involved in the accident, a traffic police officer, or an insurance company's damage assessor) can use a client (an image acquisition device equipped with a camera module that can communicate with the server, such as a mobile phone or a driving recorder) to capture image data (such as photos or videos) of the vehicle accident scene, so that the captured image data can be used as the input of the accident identification model, which outputs the identification result. By identifying vehicle accidents with a machine learning model in this way, users can directly use photos and videos of the accident scene for end-to-end vehicle accident identification, which effectively improves identification efficiency and shortens the identification cycle. Meanwhile, the vehicle accident identification scheme of this specification supports remote and automatic identification, greatly reducing the cost of identifying vehicle accidents. For example, after a vehicle accident occurs, the driver only needs to collect image data of the scene through the client, and the identification result can be obtained based on the scheme of this specification, without a damage assessor having to inspect the scene; the driver and the traffic police can also deal with the vehicle accident as soon as possible.
In an embodiment, the accident identification model can be configured on the client side, in which case the client can directly input the image data into the accident identification model and use the output of the accident identification model as the identification result.
In an embodiment, the accident identification model can be configured on the server side, in which case the client can send the image data to the server so that the server inputs the image data into the accident identification model, and use the output returned by the server as the identification result.
In an embodiment, the image data of the vehicle accident scene is the basis for identifying the vehicle accident (i.e., the input of the accident identification model), and this image data needs to be captured by the user using the client. It is therefore necessary to guide the user to capture image data that can accurately reflect the vehicle accident scene. Further, guidance information can be displayed in the shooting interface of the image acquisition device (i.e., the client) to guide the user to capture correct image data.
In one case, the standard relative position relationship between the vehicle accident scene and the image acquisition device can be defined in advance; in other words, when the image acquisition device maintains the standard relative position relationship with the scene, it can capture image data that correctly reflects the scene (understood as containing each detail of the scene). The user can therefore be guided to move the image acquisition device according to the relative position relationship between the scene and the device. As an exemplary embodiment, the initial relative position relationship between the vehicle accident scene and the image acquisition device may first be determined from the image data (the image data of the scene acquired by the device; for example, the first photo the user takes of the scene), and the movement state of the device is then determined, so that the real-time relative position relationship between the device and the scene after movement is determined based on the movement state and the initial relative position relationship. Then, according to the real-time relative position relationship, first guidance information can be displayed in the shooting interface of the device to guide the user to move it to a position matching the standard relative position relationship. It can be seen that once the initial relative position relationship has been determined, the user no longer needs to be guided based on image data captured by the device (the device's movement state suffices); that is, during movement, the guidance can be completed based on the device's movement state, without relying on image data captured while the device is moving.
In another case, the standard shooting orientation of the image acquisition device relative to the vehicle accident scene can be defined in advance; in other words, when the device is held at the standard shooting orientation relative to the scene, it can capture image data that correctly reflects the scene. The user can therefore be guided to move the image acquisition device according to the standard shooting orientation. As an exemplary embodiment, the shooting orientation of the device relative to the scene may first be acquired (for example, the orientation when the user initially photographs the scene with the device), and it is then determined whether this shooting orientation meets the standard shooting orientation; when it does not, second guidance information is displayed in the shooting interface of the device to guide the user to move it to the standard shooting orientation.
In an embodiment, the operation of acquiring the shooting orientation of the image acquisition device relative to the vehicle accident scene (for example, including parameters such as the distance and angle between the device and the scene) can be completed using a machine learning model. For example, real-time image data obtained by the device shooting the scene can be acquired and input into a shooting orientation determination model (trained on the correspondence between image data obtained by shooting sample accident vehicles at preset shooting orientations and those preset orientations), so that the model's output is used as the shooting orientation of the device relative to the scene. Similarly, the above determination of the initial relative position relationship can also be completed by a machine learning model.
In an embodiment, when displaying the second guidance information, the second guidance information guiding the user to move the image acquisition device to each standard shooting orientation may be displayed in the shooting interface in sequence, according to a predefined shooting process. The shooting process includes the standard shooting orientation for each shooting object at the vehicle accident scene, and the order in which the shooting objects are photographed.
In an embodiment, the parameters of the identification result may include at least one of the following: collision angle, driving speed before the collision, damage location, damage degree.
For ease of understanding, the vehicle accident identification scheme of this specification is described in detail below with reference to the accompanying drawings, taking the interaction between a mobile phone and a server as an example.
Referring to Fig. 3, Fig. 3 is an interaction diagram of a vehicle accident identification method provided by an exemplary embodiment. As shown in Fig. 3, the interaction process may include the following steps:
Step 302: The mobile phone captures image data of the vehicle accident scene.
In an embodiment, after a vehicle accident occurs, a user (for example, the driver involved in the accident, a traffic police officer, or an insurance company's damage assessor) can use a mobile phone to capture image data of the accident scene: for example, photographing the vehicles involved in the collision, the specific damaged parts of a vehicle, the license plate number, and so on.
Step 304: The mobile phone displays guidance information in the shooting interface.
Step 306: The user moves the mobile phone to the standard position to capture image data.
In an embodiment, the image data obtained by the mobile phone shooting the vehicle accident scene will be used as the basis for identifying the accident (i.e., as the input of the accident identification model), so the user needs to be guided to capture image data that accurately reflects the scene, to improve the accuracy of identifying the vehicle accident. Further, guidance information (the first guidance information or the second guidance information) can be displayed in the phone's shooting interface to guide the user to capture correct image data.
In an embodiment, the standard relative position relationship between the vehicle accident scene and the image acquisition device (a mobile phone in this embodiment) can be defined in advance; in other words, when the phone maintains the standard relative position relationship with the scene, it can capture image data that correctly reflects the scene (understood as containing each detail of the scene). For example, the following standard relative position relationships can be defined: 3 meters from the front of the vehicle, 4 meters from the left side of the vehicle, 4 meters from the right side of the vehicle, 3 meters from the rear of the vehicle, 50 centimeters from the damaged part, and so on.
Based on the definition of the standard relative position relationship, the first guidance information can be displayed in the shooting interface to guide the user to move the phone so that the relative position relationship between the phone and the accident vehicle conforms to the standard relative position relationship (i.e., to move the phone to the standard position). As an exemplary embodiment, the phone can determine the initial relative position relationship between the phone and the vehicle accident scene based on the image data captured in step 302 (for example, the first photo the user takes of the scene). For example, the initial relative position relationship can be determined by a relative position relationship determination model, which can be trained on sample image data together with the distance and angle to the subject at the time each sample was captured (the relative position relationship being described in terms of distance and angle). As another example, the distance and angle between the phone and the subject can be obtained through geometric calculation, by identifying the subject in the image data and extracting its feature points. After the initial relative position relationship is determined, the phone's movement state is determined, so that the real-time relative position relationship between the phone and the scene after movement is determined based on the movement state and the initial relative position relationship. The phone's movement state can be calculated from data collected by sensors such as the phone's gyroscope and accelerometer; once it is known how the phone has moved, and since the vehicle accident scene is usually static, the relative position relationship between the phone and the scene after the movement (i.e., the real-time relative position relationship) can be determined from the initial relative position relationship and the phone's movement. Based on this determination of the real-time relative position relationship, and according to the difference between the real-time relative position relationship and the standard relative position relationship, the first guidance information can be displayed in the phone's shooting interface to guide the user to move the phone to a position matching the standard relative position relationship. It can be seen that in the above guidance process, once the initial relative position relationship has been determined, the user no longer needs to be guided based on image data captured by the phone (the phone's movement state suffices); that is, while the phone is moving, the guidance can be completed based on its movement state, without relying on image data captured during the movement.
For example, as shown in Fig. 4A, when the user uses the phone to photograph the left side of the accident vehicle 41 (the vehicle involved in the collision at the scene), suppose the distance between the phone and the accident vehicle 41 is 5 meters, while the distance defined by the standard relative position relationship for this shooting direction (i.e., the angle between the phone and the accident vehicle 41) is 4 meters; the phone can then display the guidance message 42 "Please move 1 meter closer to shoot" in the shooting interface 4, guiding the user to bring the phone 1 meter closer to the accident vehicle 41 along the shooting direction.
In an embodiment, the standard shooting orientation of the phone relative to the vehicle accident scene can be defined in advance; in other words, when the phone is held at the standard shooting orientation relative to the scene, it can capture image data that correctly reflects the scene. For example, the following standard shooting orientations can be defined (again taking distance and angle as examples): shooting from 3 meters in front of the vehicle, from 4 meters off the left side of the vehicle, from 4 meters off the right side of the vehicle, from 3 meters behind the vehicle, from 50 centimeters away from the damaged part, and so on.
Based on the definition of the standard shooting orientation, the second guidance information can be displayed in the shooting interface to guide the user to move the phone so that the orientation from which it shoots the accident vehicle (or damaged part) meets the standard shooting orientation. As an exemplary embodiment, the orientation from which the user's phone shoots the vehicle accident scene may first be acquired (for example, the orientation when the user initially photographs the scene with the phone), and it is then determined whether this shooting orientation meets the standard shooting orientation. When it does not, the second guidance information is displayed in the shooting interface to guide the user to move the phone to the standard shooting orientation (i.e., to move the phone to the standard position).
In an embodiment, the phone can input the image data captured in step 302 (for example, the first photo the user takes of the scene) into the shooting orientation determination model, and use the model's output as the phone's current shooting orientation relative to the scene. The shooting orientation determination model can be trained on the correspondence between image data obtained by shooting sample accident vehicles at preset shooting orientations (which may include multiple different orientations) and those preset orientations. When displaying the second guidance information, the second guidance information guiding the user to move the phone to each standard shooting orientation can be displayed in the shooting interface in sequence, according to the predefined shooting process. The shooting process includes the standard shooting orientation for each shooting object at the scene, and the order in which the shooting objects are photographed.
For example, as shown in Fig. 4B, suppose the shooting process includes photographing the accident vehicle from 4 meters off its left side and then from 4 meters off its right side. When the user finishes photographing the accident vehicle 41 from 4 meters off its left side, the shooting interface can display the guidance message 43 "Please shoot the right side of the accident vehicle from 4 meters" together with an arrow pointing to the right side of the accident vehicle 41, guiding the user to take the phone to a position 4 meters off the right side of the accident vehicle 41 to shoot.
Step 308: The mobile phone sends the image data captured at the standard position to the server.
Step 310: The server inputs the received image data into the accident identification model.
In an embodiment, image data of historical vehicle accident scenes can be collected in advance and annotated with accident identification information obtained by analyzing that image data through reliable means (for example, accident identification information produced by a damage assessor manually analyzing the image data); the annotated image data is then used as sample data to train a machine learning model and obtain the accident identification model. The parameters of the accident identification information may include collision angle, driving speed before the collision, damage location, damage degree, and so on; algorithms such as logistic regression, decision trees, neural networks, and support vector machines can be used to train on the sample data and obtain the accident identification model. Of course, one or more embodiments of this specification place no limit on the parameters of the accident identification information or on the algorithms used to train the accident identification model. By identifying vehicle accidents with a machine learning model in this way, users can directly use photos and videos of the accident scene for end-to-end vehicle accident identification, which effectively improves identification efficiency and shortens the identification cycle. Meanwhile, the vehicle accident identification scheme of this specification supports remote and automatic identification, greatly reducing the cost of identifying vehicle accidents. For example, after a vehicle accident occurs, the driver only needs to collect image data of the scene through the client, and the identification result can be obtained based on the scheme of this specification, without a damage assessor having to inspect the scene; the driver and the traffic police can also deal with the vehicle accident as soon as possible.
For example, a batch of historical vehicle accident cases can be collected, and data such as the vehicle components involved in each collision, the relative speed between the vehicle and the object it collided with at the moment of collision (hereinafter referred to as the collision speed), and photos of the collision location can be obtained. Based on the obtained data, a set of sample data with photos as input and the collision speed as the labeled value can be constructed for each collision component, with the collision speed rounded to an integer. Optionally, the value range of the collision speed can be divided at a certain precision. For example, with a value range of 10 km/h to 200 km/h and a precision of 1 km/h, the collision speed can be divided into 191 speed sections from 10 km/h to 200 km/h. Based on this division of the collision speed, the prediction of the collision speed can be defined as a classification problem; in other words, by inputting a group of photos of a vehicle accident into the accident identification model, the model can predict the speed section to which the collision speed of that accident belongs.
For the training process, a CNN (Convolutional Neural Network) can be used to train on the sample data and obtain the accident identification model. As shown in Fig. 4C, a CNN may include convolutional layers, pooling layers, and fully connected layers. The convolutional layers compute over the input photos to extract feature vectors; the pooling layers, usually placed after the convolutional layers, on the one hand reduce the dimensionality of the feature vectors to simplify the network's computational complexity, and on the other hand condense the convolutional layers' output through pooling to avoid overfitting of the convolutional neural network; the fully connected layers map the feature vectors learned by the network into the sample label space, for example converting the two-dimensional feature vector output by the pooling layers into a one-dimensional vector. Since the number of photos of a vehicle accident is variable, and the visual features contained in each photo are related along a temporal dimension, the sample data described above (a group of photos of the same vehicle accident, labeled with the collision speed) can be used as input to train the neural network. For example, a CNN is used to extract the visual feature vector of each photo, and these are then fed into an LSTM (Long Short-Term Memory) network, which processes the visual feature vectors of all the photos (the figure shows 4 photos each fed into the CNN) to generate a final classification vector representing the predicted probability of each possible collision speed.
Step 312: The server returns the output of the accident identification model to the mobile phone.
In an embodiment, the accident identification model can also be configured on the phone side; in other words, after the phone captures image data at the standard position, it directly inputs that image data into the accident identification model to obtain the identification result (i.e., the accident identification information output by the model), without sending the captured image data to the server. Further, the server can periodically update the sample data and retrain the accident identification model, thereby improving identification accuracy; when the accident identification model is configured on the phone side, the server can periodically send the updated accident identification model to the phone.
Step 314: The mobile phone displays the received output as the identification result for the current vehicle accident scene.
In an embodiment, following the above example, the output of the accident identification model is a probability for each possible collision speed of the current vehicle accident. For example, the collision speed with the highest probability in the output can be used as the identification result, or the collision speed that both has the highest probability and exceeds a preset probability threshold can be used as the identification result.
For example, suppose the output is as shown in Table 1:
Collision speed  Probability
110 km/h  80%
111 km/h  5%
112 km/h  4%
113 km/h  3%
...  ...
Table 1
In one case, the collision speed with the highest probability in the output, 110 km/h, can be used as the identification result. In another case, suppose the preset probability threshold is 75%; since the probability of the highest-probability collision speed, 110 km/h, exceeds the 75% threshold, 110 km/h can be used as the identification result.
Fig. 5 is a schematic structural diagram of a device provided by an exemplary embodiment. Referring to Fig. 5, at the hardware level the device includes a processor 502, an internal bus 504, a network interface 506, a memory 508, and a non-volatile memory 510, and of course may also include hardware required by other services. The processor 502 reads the corresponding computer program from the non-volatile memory 510 into the memory 508 and runs it, forming the vehicle accident identification apparatus at the logical level. Of course, in addition to software implementations, one or more embodiments of this specification do not exclude other implementations, such as logic devices or a combination of software and hardware; that is to say, the execution body of the following processing flow is not limited to logical units, and may also be hardware or logic devices.
Referring to Fig. 6, in a software implementation, the vehicle accident identification apparatus may include:
an image acquisition unit 61, which acquires image data of a vehicle accident scene;
a result determination unit 62, which determines an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
Optionally, the result determination unit 62 is specifically configured to:
input the image data into the accident identification model, and use the output of the accident identification model as the identification result;
or, send the image data to the server so that the server inputs the image data into the accident identification model, and use the output returned by the server as the identification result.
Optionally, the apparatus further includes:
an initial position determination unit 63, which determines the initial relative position relationship between the vehicle accident scene and the image acquisition device according to the image data;
a movement state determination unit 64, which determines the movement state of the image acquisition device;
a real-time position determination unit 65, which, based on the movement state and the initial relative position relationship, determines the real-time relative position relationship between the image acquisition device and the vehicle accident scene after movement;
a first display unit 66, which displays, according to the real-time relative position relationship, first guidance information in the shooting interface of the image acquisition device, to guide the user to move the image acquisition device to a position matching the standard relative position relationship.
Optionally, the apparatus further includes:
an orientation acquisition unit 67, which acquires the shooting orientation of the image acquisition device relative to the vehicle accident scene;
an orientation determination unit 68, which determines whether the shooting orientation meets the standard shooting orientation;
a second display unit 69, which, when the shooting orientation does not meet the standard shooting orientation, displays second guidance information in the shooting interface of the image acquisition device, to guide the user to move the image acquisition device to the standard shooting orientation.
Optionally, the orientation acquisition unit 67 is specifically configured to:
acquire real-time image data obtained by the image acquisition device shooting the vehicle accident scene;
input the real-time image data into a shooting orientation determination model, the shooting orientation determination model being trained from the correspondence between image data obtained by shooting sample accident vehicles at preset shooting orientations and the preset shooting orientations;
use the output of the shooting orientation determination model as the shooting orientation of the image acquisition device relative to the vehicle accident scene.
Optionally, the second display unit 69 is specifically configured to:
sequentially display, in the shooting interface and according to a predefined shooting process, second guidance information guiding the user to move the image acquisition device to each standard shooting orientation; the shooting process includes the standard shooting orientation for each shooting object at the vehicle accident scene, and the order in which the shooting objects are photographed.
Optionally, the parameters of the identification result include at least one of the following: collision angle, driving speed before the collision, damage location, damage degree.
The systems, apparatuses, modules, or units set forth in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may take the specific form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassette tapes, magnetic disk storage, quantum memory, graphene-based storage media, or other magnetic storage devices or any other non-transmission media, which can be used to store information accessible by computing devices. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The foregoing describes specific embodiments of this specification. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
The terms used in one or more embodiments of this specification are for the purpose of describing particular embodiments only and are not intended to limit one or more embodiments of this specification. The singular forms "a", "said", and "the" used in one or more embodiments of this specification and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of one or more embodiments of this specification, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The above descriptions are merely preferred embodiments of one or more embodiments of this specification and are not intended to limit one or more embodiments of this specification. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of this specification shall fall within the protection scope of one or more embodiments of this specification.

Claims (15)

  1. A vehicle accident identification method, comprising:
    acquiring image data of a vehicle accident scene;
    determining an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
  2. The method according to claim 1, wherein determining the identification result comprises:
    inputting the image data into the accident identification model, and using the output of the accident identification model as the identification result;
    or, sending the image data to a server so that the server inputs the image data into the accident identification model, and using the output returned by the server as the identification result.
  3. The method according to claim 1, further comprising:
    determining an initial relative position relationship between the vehicle accident scene and an image acquisition device according to the image data;
    determining a movement state of the image acquisition device;
    determining, based on the movement state and the initial relative position relationship, a real-time relative position relationship between the image acquisition device and the vehicle accident scene after movement;
    displaying, according to the real-time relative position relationship, first guidance information in a shooting interface of the image acquisition device, to guide a user to move the image acquisition device to a position matching a standard relative position relationship.
  4. The method according to claim 1, further comprising:
    acquiring a shooting orientation of an image acquisition device relative to the vehicle accident scene;
    determining whether the shooting orientation meets a standard shooting orientation;
    when the shooting orientation does not meet the standard shooting orientation, displaying second guidance information in a shooting interface of the image acquisition device, to guide a user to move the image acquisition device to the standard shooting orientation.
  5. The method according to claim 4, wherein acquiring the shooting orientation of the image acquisition device relative to the vehicle accident scene comprises:
    acquiring real-time image data obtained by the image acquisition device shooting the vehicle accident scene;
    inputting the real-time image data into a shooting orientation determination model, the shooting orientation determination model being trained from the correspondence between image data obtained by shooting sample accident vehicles at preset shooting orientations and the preset shooting orientations;
    using the output of the shooting orientation determination model as the shooting orientation of the image acquisition device relative to the vehicle accident scene.
  6. The method according to claim 4, wherein displaying the second guidance information in the shooting interface of the image acquisition device comprises:
    sequentially displaying, in the shooting interface and according to a predefined shooting process, second guidance information guiding the user to move the image acquisition device to each standard shooting orientation; the shooting process comprises the standard shooting orientation for each shooting object at the vehicle accident scene, and the order in which the shooting objects are photographed.
  7. The method according to claim 1, wherein parameters of the identification result comprise at least one of the following: collision angle, driving speed before the collision, damage location, damage degree.
  8. A vehicle accident identification apparatus, comprising:
    an image acquisition unit, which acquires image data of a vehicle accident scene;
    a result determination unit, which determines an identification result, the identification result being the output obtained by inputting the image data into an accident identification model; the accident identification model is trained from image data of historical vehicle accident scenes and accident identification information of the historical vehicle accident scenes.
  9. The apparatus according to claim 8, wherein the result determination unit is specifically configured to:
    input the image data into the accident identification model, and use the output of the accident identification model as the identification result;
    or, send the image data to a server so that the server inputs the image data into the accident identification model, and use the output returned by the server as the identification result.
  10. The apparatus according to claim 8, further comprising:
    an initial position determination unit, which determines an initial relative position relationship between the vehicle accident scene and an image acquisition device according to the image data;
    a movement state determination unit, which determines a movement state of the image acquisition device;
    a real-time position determination unit, which, based on the movement state and the initial relative position relationship, determines a real-time relative position relationship between the image acquisition device and the vehicle accident scene after movement;
    a first display unit, which displays, according to the real-time relative position relationship, first guidance information in a shooting interface of the image acquisition device, to guide a user to move the image acquisition device to a position matching a standard relative position relationship.
  11. The apparatus according to claim 8, further comprising:
    an orientation acquisition unit, which acquires a shooting orientation of an image acquisition device relative to the vehicle accident scene;
    an orientation determination unit, which determines whether the shooting orientation meets a standard shooting orientation;
    a second display unit, which, when the shooting orientation does not meet the standard shooting orientation, displays second guidance information in a shooting interface of the image acquisition device, to guide a user to move the image acquisition device to the standard shooting orientation.
  12. The apparatus according to claim 11, wherein the orientation acquisition unit is specifically configured to:
    acquire real-time image data obtained by the image acquisition device shooting the vehicle accident scene;
    input the real-time image data into a shooting orientation determination model, the shooting orientation determination model being trained from the correspondence between image data obtained by shooting sample accident vehicles at preset shooting orientations and the preset shooting orientations;
    use the output of the shooting orientation determination model as the shooting orientation of the image acquisition device relative to the vehicle accident scene.
  13. The apparatus according to claim 11, wherein the second display unit is specifically configured to:
    sequentially display, in the shooting interface and according to a predefined shooting process, second guidance information guiding the user to move the image acquisition device to each standard shooting orientation; the shooting process comprises the standard shooting orientation for each shooting object at the vehicle accident scene, and the order in which the shooting objects are photographed.
  14. The apparatus according to claim 8, wherein parameters of the identification result comprise at least one of the following: collision angle, driving speed before the collision, damage location, damage degree.
  15. An electronic device, comprising:
    a processor; and
    a memory for storing processor-executable instructions;
    wherein the processor implements the method according to any one of claims 1-7 by running the executable instructions.
PCT/CN2020/070511 2019-03-07 2020-01-06 Vehicle accident identification method and apparatus, and electronic device WO2020177480A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910171587.5 2019-03-07
CN201910171587.5A CN110033386B (zh) 2019-03-07 Vehicle accident identification method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2020177480A1 (zh) 2020-09-10

Family

ID=67235093

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/070511 WO2020177480A1 (zh) 2019-03-07 2020-01-06 车辆事故的鉴定方法及装置、电子设备

Country Status (3)

Country Link
CN (1) CN110033386B (zh)
TW (1) TWI770420B (zh)
WO (1) WO2020177480A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033386B (zh) * 2019-03-07 2020-10-02 阿里巴巴集团控股有限公司 车辆事故的鉴定方法及装置、电子设备
CN111079506A (zh) * 2019-10-11 2020-04-28 深圳壹账通智能科技有限公司 基于增强现实的信息采集方法、装置和计算机设备
CN110809088A (zh) * 2019-10-25 2020-02-18 广东以诺通讯有限公司 一种基于手机app的交通事故拍照方法及系统
CN113038018B (zh) * 2019-10-30 2022-06-28 支付宝(杭州)信息技术有限公司 辅助用户拍摄车辆视频的方法及装置


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8510196B1 (en) * 2012-08-16 2013-08-13 Allstate Insurance Company Feedback loop in mobile damage assessment and claims processing
CN103702029B (zh) * 2013-12-20 2017-06-06 百度在线网络技术(北京)有限公司 Method and apparatus for prompting focusing during shooting
US10089396B2 (en) * 2014-07-30 2018-10-02 NthGen Software Inc. System and method of a dynamic interface for capturing vehicle data
CN105719188B (zh) * 2016-01-22 2017-12-26 平安科技(深圳)有限公司 Method and server for anti-fraud in insurance claim settlement based on consistency of multiple pictures
CN106373395A (zh) * 2016-09-20 2017-02-01 三星电子(中国)研发中心 Driving accident monitoring method and apparatus
CN108629963A (zh) * 2017-03-24 2018-10-09 纵目科技(上海)股份有限公司 Traffic accident reporting method and system based on a convolutional neural network, and vehicle-mounted terminal
CN107392218B (zh) * 2017-04-11 2020-08-04 创新先进技术有限公司 Image-based vehicle damage assessment method and apparatus, and electronic device
CN109325488A (zh) * 2018-08-31 2019-02-12 Alibaba Group Holding Limited Method, apparatus, and device for assisting in capturing vehicle damage assessment images
CN109359542A (zh) * 2018-09-18 2019-02-19 平安科技(深圳)有限公司 Neural-network-based method for determining vehicle damage level, and terminal device
CN109344819A (zh) * 2018-12-13 2019-02-15 深源恒际科技有限公司 Vehicle damage recognition method based on deep learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100002911A1 (en) * 2008-07-06 2010-01-07 Jui-Hung Wu Method for detecting lane departure and apparatus thereof
CN103646534A (zh) * 2013-11-22 2014-03-19 江苏大学 Real-time road traffic accident risk control method
CN107194323A (zh) * 2017-04-28 2017-09-22 Alibaba Group Holding Limited Vehicle damage assessment image acquisition method and apparatus, server, and terminal device
CN107368776A (zh) * 2017-04-28 2017-11-21 Alibaba Group Holding Limited Vehicle damage assessment image acquisition method and apparatus, server, and terminal device
CN110033386A (zh) * 2019-03-07 2019-07-19 Alibaba Group Holding Limited Vehicle accident identification method and apparatus, and electronic device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434368A (zh) * 2020-10-20 2021-03-02 联保(北京)科技有限公司 Image acquisition method, apparatus, and storage medium
CN112465018A (zh) * 2020-11-26 2021-03-09 深源恒际科技有限公司 Intelligent screenshot method and system for a deep-learning-based vehicle video damage assessment system
CN112492105A (zh) * 2020-11-26 2021-03-12 深源恒际科技有限公司 Video-based self-service damage assessment and collection method and system for vehicle exterior parts
CN112465018B (zh) * 2020-11-26 2024-02-02 深源恒际科技有限公司 Intelligent screenshot method and system for a deep-learning-based vehicle video damage assessment system
CN114764979A (zh) * 2021-01-14 2022-07-19 大陆泰密克汽车系统(上海)有限公司 Accident information warning system and method, electronic device, and storage medium
CN113255842A (zh) * 2021-07-05 2021-08-13 平安科技(深圳)有限公司 Vehicle replacement prediction method, apparatus, device, and storage medium
CN113255842B (zh) * 2021-07-05 2021-11-02 平安科技(深圳)有限公司 Vehicle replacement prediction method, apparatus, device, and storage medium
CN114637438A (zh) * 2022-03-23 2022-06-17 支付宝(杭州)信息技术有限公司 AR-based vehicle accident handling method and apparatus
CN114637438B (zh) * 2022-03-23 2024-05-07 支付宝(杭州)信息技术有限公司 AR-based vehicle accident handling method and apparatus
CN114724373A (zh) * 2022-04-15 2022-07-08 地平线征程(杭州)人工智能科技有限公司 Traffic scene information acquisition method and apparatus, electronic device, and storage medium
CN114724373B (zh) * 2022-04-15 2023-06-27 地平线征程(杭州)人工智能科技有限公司 Traffic scene information acquisition method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN110033386B (zh) 2020-10-02
TWI770420B (zh) 2022-07-11
TW202034270A (zh) 2020-09-16
CN110033386A (zh) 2019-07-19

Similar Documents

Publication Publication Date Title
WO2020177480A1 (zh) Vehicle accident identification method and apparatus, and electronic device
WO2021135499A1 (zh) Damage detection model training and vehicle damage detection method, apparatus, device, and medium
US10817956B2 (en) Image-based vehicle damage determining method and apparatus, and electronic device
CN108629284B (zh) Method, apparatus, and system for real-time face tracking and face pose selection based on an embedded vision system
JP6893564B2 (ja) Target identification method, apparatus, storage medium, and electronic device
WO2022213879A1 (zh) Target object detection method and apparatus, computer device, and storage medium
CN109101602B (zh) Image retrieval model training method, image retrieval method, device, and storage medium
US20190340746A1 (en) Stationary object detecting method, apparatus and electronic device
CN110753953A (zh) Method and system for object-centric stereo vision in autonomous driving vehicles via cross-modality validation
WO2021114612A1 (zh) Target re-identification method and apparatus, computer device, and storage medium
TWI712980B (zh) Claim settlement information extraction method and apparatus, and electronic device
CN110660102B (zh) Artificial-intelligence-based speaker recognition method, apparatus, and system
CN112989962B (zh) Trajectory generation method and apparatus, electronic device, and storage medium
WO2021031704A1 (zh) Object tracking method and apparatus, computer device, and storage medium
US9336243B2 (en) Image information search
US10198842B2 (en) Method of generating a synthetic image
CN114550053A (zh) Traffic accident liability determination method and apparatus, computer device, and storage medium
CN112232311A (zh) Face tracking method and apparatus, and electronic device
CN111881740A (zh) Face recognition method and apparatus, electronic device, and medium
CN114663871A (zh) Image recognition method, training method, apparatus, system, and storage medium
CN110334650A (zh) Object detection method and apparatus, electronic device, and storage medium
CN111310595B (zh) Method and apparatus for generating information
CN110348369B (zh) Video scene classification method and apparatus, mobile terminal, and storage medium
CN114157829A (zh) Model training optimization method and apparatus, computer device, and storage medium
JP7416614B2 (ja) Learning model generation method, computer program, information processing apparatus, and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20766694

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20766694

Country of ref document: EP

Kind code of ref document: A1