WO2018196837A1 - Vehicle damage assessment image acquisition method, apparatus, server and terminal device - Google Patents

Vehicle damage assessment image acquisition method, apparatus, server and terminal device

Info

Publication number
WO2018196837A1
WO2018196837A1 (PCT/CN2018/084760)
Authority
WO
WIPO (PCT)
Prior art keywords
image
damaged
vehicle
video
damaged portion
Prior art date
Application number
PCT/CN2018/084760
Other languages
English (en)
French (fr)
Inventor
章海涛
侯金龙
郭昕
程远
王剑
徐娟
周凡
张侃
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司, 章海涛, 侯金龙, 郭昕, 程远, 王剑, 徐娟, 周凡, 张侃 filed Critical 阿里巴巴集团控股有限公司
Priority to JP2019558552A priority Critical patent/JP6905081B2/ja
Priority to EP18791520.2A priority patent/EP3605386A4/en
Priority to KR1020197033366A priority patent/KR20190139262A/ko
Priority to SG11201909740R priority patent/SG11201909740RA/en
Publication of WO2018196837A1 publication Critical patent/WO2018196837A1/zh
Priority to US16/655,001 priority patent/US11151384B2/en
Priority to PH12019502401A priority patent/PH12019502401A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • G06Q50/40
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • The present application belongs to the technical field of computer image data processing, and in particular relates to a method, apparatus, server, and terminal device for acquiring vehicle damage assessment images.
  • After a vehicle traffic accident occurs, an insurance company needs a number of damage assessment images to assess the loss of the insured vehicle and to archive evidence for the claim.
  • At present, vehicle damage assessment images are usually obtained by an operator photographing on site, and the vehicle loss is then assessed from the photographs taken on the spot.
  • The images required for vehicle damage assessment must clearly show the specific damaged parts of the vehicle, the damaged components, and the type and degree of damage. This usually requires the photographer to have professional vehicle damage assessment knowledge in order to take photographs that meet the requirements of assessment processing, which obviously implies considerable labor costs for training and assessment processing. Especially when a vehicle needs to be evacuated or moved as soon as possible after a traffic accident, it may take the insurance operator a long time to reach the scene of the accident,
  • while damage assessment images taken by the vehicle owner often fail to meet the requirements of assessment processing because the owner is not a professional.
  • In addition, images taken on site by an operator often need to be exported from the shooting device and screened manually to determine the required images, which again costs substantial labor and time, thereby reducing the efficiency of acquiring the images needed for final loss assessment.
  • In short, in the existing practice, insurance operators or vehicle owners take pictures on the spot to obtain the damage assessment images;
  • this requires professional vehicle damage assessment knowledge, costs much labor and time, and remains inefficient at obtaining damage assessment images that meet the requirements of assessment processing.
  • The purpose of the present application is to provide a method, apparatus, server, and terminal device for acquiring vehicle damage assessment images, with which high-quality damage assessment images
  • that satisfy the requirements of assessment processing can be generated automatically and quickly while the photographer simply films the damaged portions of the damaged vehicle, improving the acquisition efficiency of damage assessment images and easing the operator's work.
  • The method, apparatus, server, and terminal device for acquiring vehicle damage assessment images provided by the present application are implemented as follows:
  • A method for acquiring vehicle damage assessment images, comprising:
  • the client acquires captured video data and sends the captured video data to the server;
  • the server detects the video images in the captured video data and identifies the damaged portions in the video images;
  • the server classifies the video images based on the detected damaged portions and determines candidate image classification sets of the damaged portions;
  • a damage assessment image of the vehicle is selected from the candidate image classification sets according to preset screening conditions.
  • A vehicle damage assessment image acquisition apparatus, comprising:
  • a data receiving module configured to receive captured video data of a damaged vehicle uploaded by a terminal device;
  • a damaged portion identification module configured to detect the video images in the captured video data and identify the damaged portions in the video images;
  • a classification module configured to classify the video images based on the detected damaged portions and determine candidate image classification sets of the damaged portions;
  • a screening module configured to select a damage assessment image of the vehicle from the candidate image classification sets according to preset screening conditions.
  • A vehicle damage assessment image acquisition apparatus, comprising:
  • a shooting module for filming a damaged vehicle to acquire captured video data;
  • a communication module configured to send the captured video data to a processing terminal;
  • a tracking module configured to receive the location area of the damaged portion returned by the processing terminal for real-time tracking and to display the tracked location area, the damaged portion being obtained by the processing terminal through detection of the video images in the captured video data.
  • A vehicle damage assessment image acquisition apparatus, comprising a processor and a memory storing processor-executable instructions which, when executed by the processor, implement the following steps:
  • a damage assessment image of the vehicle is selected from the candidate image classification sets according to preset screening conditions.
  • A computer-readable storage medium having stored thereon computer instructions which, when executed, implement the following steps:
  • a damage assessment image of the vehicle is selected from the candidate image classification sets according to preset screening conditions.
  • A server comprising a processor and a memory storing processor-executable instructions which, when executed by the processor, implement the following steps:
  • a damage assessment image of the vehicle is selected from the candidate image classification sets according to preset screening conditions.
  • A terminal device comprising a processor and a memory storing processor-executable instructions which, when executed by the processor, implement the following steps:
  • a damage assessment image of the vehicle is selected from the candidate image classification sets according to preset screening conditions.
  • The present application provides a method, apparatus, server, and terminal device for acquiring vehicle damage assessment images, and proposes a video-based scheme for automatically generating vehicle damage assessment images.
  • The photographer can film the damaged vehicle with a terminal device, and the captured video data can be transmitted to the server of the system; the server analyzes the video data, identifies the damaged portions, and obtains, according to the damaged portions,
  • the candidate images of the required categories, from which the damage assessment images of the damaged vehicle can then be generated.
  • In this way, high-quality damage assessment images that satisfy the requirements of assessment processing can be generated automatically and quickly, improving acquisition efficiency and reducing the acquisition and processing costs borne by insurance operators.
  • FIG. 1 is a schematic flowchart of an embodiment of the vehicle damage assessment image acquisition method according to the present application;
  • FIG. 2 is a schematic structural diagram of a model, constructed by the method of the present application, for identifying damaged portions in a video image;
  • FIG. 3 is a schematic diagram of an implementation scenario of identifying a damaged portion using the damage detection model according to the method of the present application;
  • FIG. 4 is a schematic diagram of determining a close-up image based on the identified damaged portion in one embodiment of the present application;
  • FIG. 5 is a schematic diagram of a model structure, constructed by the method of the present application, for identifying damaged components in a video image;
  • FIG. 6 is a schematic diagram of a processing scenario of the vehicle damage assessment image acquisition method according to the present application;
  • FIG. 7 is a schematic flowchart of another embodiment of the method described in the present application;
  • FIG. 10 is a schematic flowchart of another embodiment of the method according to the present application;
  • FIG. 11 is a block diagram of the module structure of an embodiment of the vehicle damage assessment image acquisition apparatus provided by the present application;
  • FIG. 12 is a schematic structural diagram of another embodiment of the vehicle damage assessment image acquisition apparatus according to the present application;
  • FIG. 13 is a schematic structural diagram of an embodiment of the terminal device provided by the present application.
  • FIG. 1 is a schematic flow chart of an embodiment of a method for acquiring a vehicle damage image according to the present application.
  • Although the present application provides method operation steps or device structures as shown in the following embodiments or figures, a method or device may, based on routine effort without inventive labor, include more operational steps or module units, or fewer after partial merging.
  • The execution order of the steps, or the module structure of the device, is not limited to the execution order or module structure shown in the embodiments of the present application or in the drawings.
  • When the device, server, or terminal product of the method or module structure is applied in practice, it may be executed sequentially or in parallel according to the method or module structure shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded processing environment, or even in a distributed-processing or server-cluster implementation environment).
  • The following embodiment describes a scenario in which a photographer captures video with a mobile terminal and a server processes the captured video data to obtain the damage assessment images.
  • The photographer may be an insurance operator who holds the mobile terminal and films the damaged vehicle.
  • The mobile terminal may be a mobile phone, a tablet computer, or another general-purpose or special-purpose device with video capture and data communication functions.
  • The mobile terminal and the server may be deployed with corresponding application modules (such as a vehicle damage assessment APP installed on the mobile terminal) to implement the corresponding data processing.
  • The essence of the solution, however, also applies to other implementation scenarios for acquiring vehicle damage assessment images, for example when the photographer is the owner of the vehicle, or when the video data is processed directly on the mobile terminal side to obtain the damage assessment images.
  • FIG. 1 shows an embodiment of the method for acquiring vehicle damage assessment images provided by the present application, where the method may include:
  • S1: The client acquires the captured video data and sends the captured video data to the server.
  • The client may be a general-purpose or special-purpose device with video capture and data communication functions, such as a terminal device like a mobile phone or a tablet computer.
  • In another embodiment, the client may also be a fixed computing device with data communication capability (such as a PC) together with a movable video capture device connected to it; the combination is regarded as the client of this embodiment.
  • The server may be a processing device that analyzes the frame images in the video data and determines the damage assessment images,
  • and it may be a logical unit with image data processing and data communication functions, such as the server of the application scenario of this embodiment.
  • From the point of view of data interaction, when the client is a first terminal device, the server is a second terminal device that performs data communication with it. For ease of description,
  • the side that films the vehicle and produces the captured video data is referred to as the client, and the side that processes the captured video data and generates the damage assessment images is referred to as the server.
  • The present application does not exclude that, in some embodiments, the client and the server are physically the same terminal device.
  • The video data captured by the client may be transmitted to the server in real time so that the server can process it quickly,
  • or it may be transmitted to the server after the client finishes capturing. For example, if the photographer's mobile terminal currently has no network connection, capture can proceed first and the video can be transmitted once the terminal connects to mobile cellular data, a WLAN (Wireless Local Area Network), or a proprietary network; even when the client can communicate normally with the server, the captured video data may still be transmitted asynchronously.
  • The captured video data obtained by filming the damaged portions of the vehicle may be one video clip or multiple clips,
  • for example multiple clips of the same damaged portion taken at different angles and distances, or separate clips of different damaged portions;
  • alternatively, a single relatively long clip may be filmed all around the damaged portions of the vehicle.
  • S2: The server detects the video images in the captured video data and identifies the damaged portions in the video images.
  • After receiving the captured video data, the server may perform image detection on the video images to identify the damaged portions of the vehicle in the processed video images.
  • An identified damaged portion occupies a region of the video image and has corresponding region information, such as the location and size of the damaged area.
  • The damaged portions in the video images can be identified by a constructed damage detection model, which uses a deep neural network to detect the damaged portions of the vehicle and their regions in the image.
  • In an embodiment, the damage detection model may be constructed based on a Convolutional Neural Network (CNN) and a Region Proposal Network (RPN), combined with a pooling layer, a fully connected layer, and so on.
  • The damage detection model that identifies the damaged portions contained in a video image may be constructed in advance with a designed machine learning algorithm; after being trained on samples, the model can identify one or more damaged portions in a video image.
  • The damage detection model may be a network model of a deep neural network, or a variant of such a model, trained on samples. In an embodiment it may be based on a convolutional neural network and a region proposal network, combined with other layers such as a fully connected layer (FC), a pooling layer, a data normalization layer, and a Softmax probability output layer.
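  • As a concrete illustration of such a CNN + RPN detector, the sketch below assembles a Faster R-CNN with a ResNet backbone using PyTorch/torchvision; the library choice, the two-class label set, and the function name build_damage_detector are illustrative assumptions, not part of the application.

```python
# A minimal sketch of a CNN+RPN damage detector, assuming PyTorch/torchvision.
# The application describes a Faster R-CNN-style model (CNN backbone + region
# proposal network + pooling + fully connected head); torchvision provides one.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_damage_detector(num_classes: int = 2):
    """num_classes = 2: background + 'damaged portion' (assumed label set)."""
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # ResNet-50 backbone + RPN
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the classification head so it predicts damaged regions.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_damage_detector()
model.eval()
with torch.no_grad():
    frame = torch.rand(3, 650, 800)   # one video frame, CHW in [0, 1]
    out = model([frame])[0]           # dict with boxes, labels, scores
    # Each box is a candidate damaged-portion region with a confidence score.
    print(out["boxes"].shape, out["scores"].shape)
```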
  • FIG. 2 is a schematic structural diagram of a model for identifying a damaged part in a video image constructed by the method of the present application.
  • FIG. 3 is a schematic diagram of an implementation scenario of using the damage detection model to identify a damaged part according to the method of the present application, and the identified damaged part can be displayed on the client in real time.
  • A convolutional neural network generally refers to a neural network whose main structure is convolutional layers combined with other layers such as activation layers, and it is mainly used for image recognition.
  • The deep neural network described in this embodiment may include convolutional layers and other important layers (such as a data normalization layer and activation layers, trained with the damage sample images fed to the model), combined with a region proposal network (RPN).
  • A convolutional neural network typically combines two-dimensional discrete convolution operations from image processing with an artificial neural network; the convolution operation can be used to extract features automatically.
  • The region proposal network (RPN) takes the features extracted from an image of arbitrary size as input (two-dimensional features extracted with the convolutional neural network) and outputs a set of rectangular object proposals, each with an object score.
  • The model described above can therefore identify one or more damaged portions in a video image after training: during sample training, the input is a picture and one or more image regions may be output; if there is one damaged portion, one image region is output; if there are k damaged portions, k image regions are output; if there is no damage, no image region is output.
  • In an embodiment, the damage detection model may use various models and variants based on a convolutional neural network and a region proposal network, such as Faster R-CNN, YOLO, Mask-FCN, and the like.
  • The convolutional neural network may be any CNN model, such as ResNet, Inception, VGG, or their variants.
  • The convolutional (CNN) part of the neural network may use a mature network structure that performs well in object recognition, such as Inception or ResNet; with a ResNet network, for example, the input is a picture and the output is a set of damaged regions together with the corresponding confidences (the confidence being a parameter expressing the degree of authenticity of the identified damaged region).
  • Faster R-CNN, YOLO, Mask-FCN, and the like are all deep neural networks containing convolutional layers that can be used in this embodiment.
  • Combined with the region proposal layer and the CNN layers, the deep neural network used in this embodiment can detect the damaged portion in a video image and confirm the region of the damaged portion in the video image.
  • Specifically, the CNN part of the present application may use a mature object-recognition structure such as a ResNet network, trained by mini-batch gradient descent with labeled data.
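  • Continuing the sketch above, the following is a minimal version of the mini-batch gradient descent training the paragraph mentions, again assuming torchvision's detection API; damage_dataset is a hypothetical dataset, and the hyperparameters are placeholders.

```python
# Minimal mini-batch training loop for the detector sketched above (assumed
# setup; `damage_dataset` is a placeholder that must yield
# (image_tensor, {"boxes": Tensor[N, 4], "labels": Tensor[N]}) pairs).
import torch
from torch.utils.data import DataLoader

def collate(batch):
    # Detection models take lists of images/targets, not stacked tensors.
    return tuple(zip(*batch))

loader = DataLoader(damage_dataset, batch_size=4, shuffle=True, collate_fn=collate)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

model.train()
for images, targets in loader:
    loss_dict = model(list(images), list(targets))  # RPN + detection-head losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # one small-batch gradient descent step on labeled data
```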
  • The location area of the damaged portion recognized by the server can be displayed on the client in real time, so that the user can observe and confirm the damaged portion.
  • The server can track the damaged portion automatically; in the subsequent process, as the shooting distance and angle change, the size and position of the corresponding location area of the damaged portion in the video image change accordingly.
  • In an embodiment, the photographer can also interactively modify the position and size of the identified location area.
  • For example, the client displays the location area of the damaged portion detected by the server in real time; if the photographer finds that it does not completely cover the damage observed on site and needs adjustment, he can adjust its position and size on the client, for instance by long-pressing the area to select and move it, or stretching its border to resize it. After the adjustment, the client generates a new damaged portion and sends it to the server.
  • In this way, the photographer can conveniently and flexibly adjust the location area of the damaged portion in the video image according to the actual damage on site, locating the damage more accurately, so that the server can obtain high-quality damage assessment images more accurately and reliably.
  • In this way, the server receives the captured video data uploaded by the client, detects the video images in it, and identifies the damaged portions in those video images.
  • S3: The server classifies the video images based on the detected damaged portions and determines candidate image classification sets of the damaged portions.
  • Vehicle damage assessment often requires different kinds of image data, such as images of the vehicle at different angles, images showing the damaged component, and close-up details of the specific damage.
  • The present application may therefore recognize each video image, for example whether it is an image of the damaged vehicle, which vehicle components it contains, whether it contains one or more vehicle components, whether there is damage on a vehicle component, and so on.
  • Correspondingly, the damage assessment images required may be divided into different categories, and images that do not meet any damage assessment image requirement may be put into a separate category. Specifically, every frame of the captured video may be extracted, and each frame image identified and classified, to form the candidate image classification sets of the damaged portions.
  • In an embodiment of the method, the determined candidate image classification sets may include:
  • S301: a close-up image set displaying the damaged portion, and a component image set displaying the vehicle component to which the damaged portion belongs.
  • The close-up image set contains close-up images of the damaged portion, and the component image set contains the damaged component of the damaged vehicle, the damaged component bearing at least one damaged portion.
  • The photographer can film the damaged portion of the damaged vehicle from near to far (or from far to near), which can be done by moving or by zooming.
  • The server side can recognize the frame images in the captured video (processing every frame image, or the frame images of a video segment) to determine the classification of each video image.
  • In an embodiment, the video images of the captured video may be divided into three categories, specifically as follows:
  • The recognition algorithm and classification criteria for class-a images (close-ups of the damage) may be set according to what the damage assessment process requires of a near-field image of the damaged portion.
  • For example, the decision may rest on the size (area or coordinate span) of the region the damaged portion occupies in the current video image: if the damaged portion occupies a large region (e.g., greater than a threshold, such as a length or width exceeding one quarter of the video image size), the video image may be determined to be a class-a image.
  • Specifically, in another embodiment, a video image may be assigned to the close-up image set if it satisfies at least one of the following conditions (a minimal code sketch of these rules follows the list):
  • S3011: the ratio of the area of the damaged portion to the area of the video image is greater than a first preset ratio;
  • S3012: the ratio of the horizontal span of the damaged portion to the length of the video image is greater than a second preset ratio, and/or the ratio of the vertical span of the damaged portion to the height of the video image is greater than a third preset ratio;
  • S3013: among the video images of the same damaged portion, the first K images sorted by descending area of the damaged portion, or the images whose area falls within a fourth preset ratio of the largest, K ≥ 1.
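  • A minimal sketch of rules S3011-S3013 in code; boxes are (x0, y0, x1, y1) tuples, and the preset ratios are assumed example values rather than values fixed by the application.

```python
# Close-up (class-a) rules S3011-S3013 with illustrative thresholds.
def box_area(b) -> float:
    return (b[2] - b[0]) * (b[3] - b[1])

def is_close_up(damage, img_w: int, img_h: int,
                r1: float = 0.10, r2: float = 0.25, r3: float = 0.25) -> bool:
    # S3011: the damaged area's share of the frame exceeds the first ratio.
    if box_area(damage) / (img_w * img_h) > r1:
        return True
    # S3012: horizontal or vertical span ratio exceeds the second/third ratio.
    x_span, y_span = damage[2] - damage[0], damage[3] - damage[1]
    return x_span / img_w > r2 or y_span / img_h > r3

def top_k_close_ups(frames, k: int = 5):
    """S3013: of all frames showing the same damaged portion, keep the K
    frames in which that portion appears largest; `frames` holds
    (frame_id, damage_box) pairs."""
    return sorted(frames, key=lambda fb: box_area(fb[1]), reverse=True)[:k]

# The 800x650 scratch example from the text: area share is well under 1/10,
# but the horizontal span is 600/800 = 0.75 > 0.25, so the frame is class a.
print(is_close_up((100, 300, 700, 310), 800, 650))  # True
```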
  • In a class-a damage detail image, the damaged portion usually occupies a large area range, so the above rules give good control over the selection of detail images and yield class-a images that meet the processing requirements.
  • The area of the damaged portion in a class-a image may be obtained by counting the pixel points contained in the damaged region.
  • A damaged portion with a small area but a large span can also qualify as a close-up. For example, in a video image of 800×650 pixels, two long scratches on the damaged vehicle may span 600 pixels horizontally while each scratch is very narrow, so that the area of the damaged portion is less than one tenth of the video image; but since the horizontal span of 600 pixels is three quarters of the length of the entire video image, the video image can still be labeled a class-a image, as shown in FIG. 4, a schematic diagram of determining a close-up image based on the identified damaged portion in one embodiment of the present application.
  • In S3013, the area of the damaged portion may be the area used in S3011, or it may be the span of the damaged portion in length or height.
  • Class-a images may also be identified by combining the above methods, for example requiring both that the area of the damaged portion exceeds a certain proportion of the video image and that it falls within the fourth preset ratio of the largest area among all images of the same damaged portion.
  • The class-a images described in this scenario typically contain all or part of the detailed image information of the damaged portion.
  • The first, second, third, and fourth preset ratios described above may be set according to the image recognition accuracy, the classification accuracy, or other processing requirements; for example, the second or third preset ratio may be one quarter.
  • For class-b images, the components contained in a video image may be identified by a constructed vehicle component detection model; if the damaged portion lies on a detected component, the video image may be confirmed to belong to class b. Specifically, in a video image P1, if the component region of a damaged component detected in P1 contains the identified damaged portion (the identified component region is normally larger than the damaged portion), the component region in P1 may be taken to be a damaged component. Alternatively, in a video image P2, if the damaged region detected in P2 overlaps a component region detected in P2, the vehicle component corresponding to that component region may likewise be considered a damaged component, and the video image classified as a class-b image.
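  • A minimal sketch of this class-b test under the same box convention; containment (case P1) is treated as a special case of overlap (case P2).

```python
# Class-b test: does a detected component region contain (case P1) or at
# least overlap (case P2) the identified damaged region? Boxes are
# (x0, y0, x1, y1) tuples.
def overlap_area(a, b) -> float:
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def is_component_image(damage, components) -> bool:
    """True if some detected component region overlaps the damaged portion."""
    return any(overlap_area(damage, c) > 0.0 for c in components)

# A bumper region fully containing a scratch region -> class-b image.
print(is_component_image((100, 100, 200, 150), [(50, 80, 400, 300)]))  # True
```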
  • The component detection model described in this embodiment uses a deep neural network to detect components and their regions in the image.
  • In an embodiment, the component detection model may be constructed based on a Convolutional Neural Network (CNN) and a Region Proposal Network (RPN), combined with a pooling layer, a fully connected layer, and so on; various models and variants based on a CNN and an RPN may be used, such as Faster R-CNN, YOLO, Mask-FCN, and the like.
  • The convolutional neural network (CNN) may be any CNN model, such as ResNet, Inception, VGG, or their variants.
  • The convolutional (CNN) part of the neural network may use a mature network structure that performs well in object recognition, such as Inception or ResNet; with a ResNet network, for example, the input is a picture and the output is a set of component regions together with the corresponding component classifications and confidences (the confidence being a parameter expressing the degree of authenticity of the identified vehicle component).
  • Faster R-CNN, YOLO, Mask-FCN, and the like are all deep neural networks containing convolutional layers that can be used in this embodiment; combined with the region proposal layer and the CNN layers, the deep neural network used here can detect the vehicle components in the image to be processed and confirm their component regions.
  • FIG. 5 is a schematic diagram of a model structure, constructed by the method of the present application, for identifying damaged components in a video image.
  • If the same video image satisfies the judgment logic of both class a and class b at the same time, it may belong to both classes at the same time.
  • In this way, the server extracts the video images from the captured video data, classifies them based on the detected damaged portions, and determines the candidate image classification sets of the damaged portions.
  • S4: Select a damage assessment image of the vehicle from the candidate image classification sets according to preset screening conditions.
  • Images meeting the preset screening conditions may be selected from the candidate image classification sets as damage assessment images according to the category, the sharpness, and so on.
  • The preset screening conditions may be customized. For example, in one embodiment, several images (e.g., 5 or 10) with the highest sharpness and different shooting angles may be selected from the class-a and class-b images respectively as the damage assessment images of the identified damaged portion.
  • The sharpness of an image may be computed over the image regions of the damaged portion and the detected vehicle component, for example with a spatial-domain operator (such as a Gabor operator) or a frequency-domain operator (such as a fast Fourier transform).
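  • A sketch of such region sharpness scoring, assuming OpenCV/NumPy. The application names Gabor (spatial-domain) and fast-Fourier-transform (frequency-domain) operators; the Laplacian-variance measure below is a common spatial-domain stand-in chosen for brevity, and the FFT cutoff is an assumed parameter.

```python
import cv2
import numpy as np

def sharpness(gray_region: np.ndarray) -> float:
    """Spatial-domain score (higher = sharper), computed only over the
    image region of the damaged portion or detected component."""
    return float(cv2.Laplacian(gray_region, cv2.CV_64F).var())

def fft_sharpness(gray_region: np.ndarray, cutoff: int = 10) -> float:
    """Frequency-domain variant: mean energy remaining after zeroing the
    lowest frequencies around the spectrum center."""
    f = np.fft.fftshift(np.fft.fft2(gray_region))
    cy, cx = f.shape[0] // 2, f.shape[1] // 2
    f[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0
    return float(np.mean(np.abs(f)))
```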
  • The vehicle damage assessment image acquisition method provided here thus implements a video-based scheme for automatically generating damage assessment images.
  • The photographer films the damaged vehicle with a terminal device, and the captured video data is transmitted to the server side of the system.
  • The system analyzes the video data on the server side, identifies the damaged portions, and obtains, according to the damaged portions, the candidate images of the different categories required for damage assessment; the damage assessment images of the damaged vehicle can then be generated from those candidates.
  • In this way, high-quality damage assessment images that satisfy the requirements of assessment processing can be generated automatically and quickly, improving acquisition efficiency while reducing the acquisition and processing costs borne by insurance operators.
  • In another embodiment, while the video captured by the client is transmitted to the server, the server can track the location of the damaged portion in the video in real time according to the damaged portion.
  • Image algorithms may be used to obtain correspondences between adjacent video images of the captured video, for example an optical flow algorithm, to achieve tracking of the damaged portion.
  • If the mobile terminal has sensors such as an accelerometer and a gyroscope, the signal data of those sensors can further determine the direction and angle of the photographer's motion, achieving more accurate tracking of the damaged portion. Therefore, in another embodiment of the method of the present application, after the damaged portion is identified in the video image, the method may further include:
  • S200: the server tracks the location area of the damaged portion in the captured video data in real time;
  • and, when the server determines that the damaged portion re-enters the video image after having left it, the location area of the damaged portion is re-positioned and tracked based on the image feature data of the damaged portion.
  • Specifically, the server can extract image feature data of the damaged region, such as SIFT (scale-invariant feature transform) features. If the damaged portion leaves the video image and then re-enters it, the system can automatically locate it and continue tracking, for example after the camera restarts following a power-off, or after the shooting area drifts to an undamaged portion and then returns to the same damaged portion.
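  • A sketch of this tracking behavior, assuming OpenCV: Lucas-Kanade optical flow propagates the damaged region between adjacent frames, and SIFT matching re-localizes it after it leaves and re-enters the view. The ratio-test threshold and minimum match count are assumed values.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()

def track_step(prev_gray, cur_gray, pts):
    """Propagate damaged-region points one frame with optical flow;
    `pts` is a float32 array of shape (N, 1, 2)."""
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    return nxt[ok].reshape(-1, 1, 2) if ok.any() else None  # None: region lost

def relocate(template_gray, cur_gray, min_matches: int = 10):
    """Re-find the damaged region via SIFT once it re-enters the frame."""
    k1, d1 = sift.detectAndCompute(template_gray, None)
    k2, d2 = sift.detectAndCompute(cur_gray, None)
    if d1 is None or d2 is None:
        return None
    good = [m for m, n in cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]  # Lowe ratio test
    if len(good) < min_matches:
        return None
    return np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
```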
  • As described above, the location area of the damaged portion identified by the server can be displayed on the client in real time, so that the user can observe and confirm the damaged portion;
  • the client and the server can display the identified damaged portion simultaneously.
  • While the server automatically tracks the damaged portion, the size and position of its corresponding location area in the video image change with the shooting distance and angle, and the server side can display in real time what the client is tracking, which is convenient for the server operator to observe and use.
  • During real-time tracking, the server can send the tracked location area of the damaged portion to the client, so that the client displays the damaged portion synchronously with the server and the photographer can observe the damaged portion located by the server. Therefore, in another embodiment of the method, the method may further include:
  • S210: the server sends the tracked location area of the damaged portion to the client, so that the client displays the location area of the damaged portion in real time.
  • As before, the photographer may interactively modify the position and size of the displayed location area. For example, when the client displays the damaged portion, if the photographer finds that the identified area does not completely cover the damage and needs adjustment, he may select the area and move it to adjust its position, or stretch the border of the location area to adjust its size.
  • After the adjustment, the client generates a new damaged portion and sends the new damaged portion to the server.
  • The server can then synchronize to the new damaged portion modified by the client,
  • and identify subsequent video images based on the new damaged portion.
  • Therefore, in another embodiment, the method may further include:
  • S220: receiving a new damaged portion sent by the client, where the new damaged portion is re-determined after the client modifies the location area of the damaged portion based on a received interactive instruction;
  • correspondingly, classifying the video images based on the detected damaged portion then comprises classifying the video images based on the new damaged portion.
  • In this way, the photographer can conveniently and flexibly adjust the location area of the damaged portion in the video image according to the actual damage on site, locating the damage more accurately so that the server can obtain high-quality damage assessment images.
  • When filming a close-up of the damaged portion, the photographer can shoot continuously from different angles.
  • From its tracking of the damaged portion, the server side can obtain the shooting angle of each frame and then select a set of video images at different angles as the damage assessment images of the damaged portion, ensuring that the images accurately reflect the type and extent of the damage. Therefore, in another embodiment of the method of the present application, selecting the damage assessment image of the vehicle from the candidate image classification sets according to the preset screening conditions comprises:
  • S401: selecting, from the candidate image classification sets of a specified damaged portion, at least one video image as the damage assessment image of that damaged portion according to the sharpness of the video images and the shooting angles of the damaged portion.
  • The deformation of a component may be very obvious at some angles relative to others, and if the damaged component shows glare or reflections, these change with the shooting angle; selecting images at different angles as damage assessment images, as in this embodiment of the present application,
  • greatly reduces the interference of such factors on the assessment.
  • If the client has sensors such as an accelerometer and a gyroscope, the shooting angles may also be obtained, or their estimation assisted, by the signals of those sensors.
  • In an embodiment, multiple candidate image classification sets may be generated, while only one or more of them are used when actually selecting the damage assessment images: for example, with the classes a, b, and c described above,
  • the final damage assessment images may be selected from the class-a and class-b candidate image classification sets.
  • Within the class-a and class-b images, several images may be selected according to the sharpness of the video images (for example, 5 images of the same component and 10 images of the same damaged portion), taking the sharpest images with different shooting angles as the damage assessment images.
  • The sharpness of an image may again be computed over the image regions of the damaged portion and the detected vehicle component, for example with a spatial-domain operator (such as a Gabor operator) or a frequency-domain operator (such as a fast Fourier transform). A sketch of such a selection step follows.
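  • The sketch below picks, per damaged portion, up to n images that are sharp and mutually different in shooting angle; the Frame record and the 15-degree separation are illustrative assumptions, not values fixed by the application.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    image_id: int
    sharpness: float  # e.g., from the sharpness sketch earlier
    angle_deg: float  # shooting angle derived from tracking / device sensors

def select_assessment_images(frames: list[Frame], n: int = 5,
                             min_sep_deg: float = 15.0) -> list[Frame]:
    chosen: list[Frame] = []
    for f in sorted(frames, key=lambda fr: fr.sharpness, reverse=True):
        # Keep the sharpest image at each sufficiently new shooting angle.
        if all(abs(f.angle_deg - c.angle_deg) >= min_sep_deg for c in chosen):
            chosen.append(f)
        if len(chosen) == n:
            break
    return chosen
```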
  • FIG. 6 is a schematic diagram of a processing scenario of the vehicle damage assessment image acquisition method according to the present application. As shown in FIG. 6, when damaged portion A and damaged portion B are close to each other, they can be tracked at the same time; damaged portion C, however,
  • is located on the other side of the damaged vehicle, far from A and B in the captured video, so C may be left untracked at first and filmed separately after damaged portion A and damaged portion B have been filmed. Therefore, in another embodiment of the method of the present application, if at least two damaged portions are detected in the video image, it is determined whether the distance between them satisfies a set proximity condition;
  • if it does, the at least two damaged portions are tracked simultaneously, and their damage assessment images are generated respectively.
  • The proximity condition may be set according to the number of damaged portions in the same video image, the sizes of the damaged portions, the distance between them, and so on. A minimal sketch of this test follows.
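  • A minimal sketch of the proximity test under the same box convention; the pixel threshold is an assumed placeholder, since the application leaves the concrete condition configurable.

```python
def centers_close(a, b, max_dist_px: float = 300.0) -> bool:
    """Boxes are (x0, y0, x1, y1); compare the distance between box centers."""
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= max_dist_px

def track_together(damages) -> bool:
    """Track all damaged portions at once only if every pair is close enough."""
    return all(centers_close(a, b)
               for i, a in enumerate(damages) for b in damages[i + 1:])
```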
  • In another embodiment, if the server detects that at least one of the close-up image set and the component image set of the damaged portion is empty, or that the video images in the close-up image set do not cover all the area corresponding to the damaged portion, a video shooting prompt message may be generated and then sent to the client corresponding to the captured video data.
  • For example, if the server cannot obtain a class-b damage assessment image that determines the vehicle component where the damaged portion lies, it can prompt the photographer to film the adjacent vehicle components that include the damaged portion, ensuring that a class-b image is obtained; if the server cannot obtain a class-a image, or the class-a images do not cover the entire damaged area, it can prompt the photographer to take a close shot of the damaged portion.
  • If the quality of the captured video is insufficient, for example because the camera moves too fast and the image is blurred, the photographer may be prompted to move slowly to ensure the quality of the captured image.
  • For instance, the mobile terminal APP may display a message such as "The speed is too fast, please move slowly to ensure image quality", together with hints about focus, illumination, and other factors to heed when shooting.
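  • A small sketch of the server-side prompt logic just described; the message strings and flag names are illustrative, not taken from the application.

```python
def shooting_prompts(close_up_set: list, component_set: list,
                     close_ups_cover_damage: bool) -> list[str]:
    prompts = []
    if not component_set:  # no class-b image of the damaged component yet
        prompts.append("Please film the adjacent vehicle components that "
                       "include the damaged portion.")
    if not close_up_set or not close_ups_cover_damage:  # class a missing/partial
        prompts.append("Please take a close shot covering the whole damaged area.")
    return prompts
```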
  • In other embodiments, the server may retain the video segment from which the damage assessment images were produced, for subsequent viewing, verification, and the like.
  • The client may also upload or copy the damage assessment images to a remote server after shooting.
  • The embodiments above thus propose a video-based scheme for automatically generating vehicle damage assessment images.
  • The photographer films the damaged vehicle with a terminal device, the captured video data is transmitted to the server, and the server analyzes the video data, identifies the damaged portions, and obtains, according to the damaged portions, the candidate images of the different categories required for damage assessment;
  • the damage assessment images of the damaged vehicle can then be generated from the candidate images.
  • In this way, high-quality damage assessment images that satisfy the requirements of assessment processing can be generated automatically and quickly, improving acquisition efficiency while reducing the acquisition and processing costs borne by insurance operators.
  • FIG. 7 is a schematic flowchart of another embodiment of the method of the present application. As shown in FIG. 7, the method may include:
  • S10: receiving captured video data of the damaged vehicle uploaded by a terminal device, detecting the video images in the captured video data, and identifying the damaged portions in the video images;
  • S11: classifying the video images based on the detected damaged portions, and determining candidate image classification sets of the damaged portions;
  • S12: selecting a damage assessment image of the vehicle from the candidate image classification sets according to preset screening conditions.
  • The terminal device may be the client described in the foregoing embodiments, but the present application does not exclude other terminal devices, such as a database system, a third-party server, a flash memory, and the like.
  • On the server side, the captured video data is detected, the damaged portions are identified, and the video images are classified according to the identified damaged portions;
  • the damage assessment image of the vehicle is then generated automatically by screening.
  • The required damage assessment images may correspondingly be divided into different categories.
  • In another embodiment of the method, the determined candidate image classification sets may specifically include:
  • a close-up image set displaying the damaged portion, and a component image set displaying the vehicle component to which the damaged portion belongs.
  • The video images in the component image set each contain at least one damaged portion, as in the class-a close-ups and class-b component images above, with class c covering the images that satisfy neither the class-a nor the class-b criteria.
  • In another embodiment, the video images in the close-up image set may be determined by at least one of the following:
  • the ratio of the area occupied by the damaged portion to the area of the video image is greater than the first preset ratio;
  • the ratio of the horizontal span of the damaged portion to the length of the video image is greater than the second preset ratio, and/or the ratio of the vertical span of the damaged portion to the height of the video image is greater than the third preset ratio;
  • from the video images of the same damaged portion, the first K images sorted by descending area of the damaged portion, or the images whose area falls within the fourth preset ratio of the largest, K ≥ 1.
  • The recognition algorithm and classification criteria of the class-a images may be determined according to the near-field image of the damaged portion required for damage assessment processing.
  • For example, the size of the region occupied by the damaged portion in the current video image may be determined; if the damaged portion occupies a large region (e.g., greater than a threshold, such as a length or width exceeding one quarter of the video image size), the video image may be determined to be a class-a image.
  • Likewise, if, among the analyzed frame images of the damaged component bearing the damaged portion, the area of the damaged portion in the current frame is relatively large compared with the others (within a certain ratio or TOP range), the current frame may be determined to be a class-a image.
  • Here the terminal device may be a client that interacts with the server, such as a mobile phone.
  • In another embodiment, the method may further include:
  • when it is determined that the damaged portion re-enters the video image after having left it, re-positioning and tracking the location area of the damaged portion based on the image feature data of the damaged portion.
  • The location area of the damaged portion that is re-positioned and tracked can be displayed on the server.
  • The method may further include sending the tracked location area of the damaged portion to the client, so that the identified damaged portion is displayed on the client in real time and the user can observe and confirm it.
  • As before, the photographer may interactively modify the position and size of the location area, conveniently and flexibly adjusting it in the video image according to the actual damage on site;
  • after the adjustment, the client generates a new damaged portion and sends it to the server,
  • which synchronizes to the new damaged portion modified by the client
  • and identifies subsequent video images based on the new damaged portion. Therefore, in another embodiment of the vehicle damage assessment image acquisition method, the method may further include:
  • receiving a new damaged portion sent by the client, where the new damaged portion is re-determined after the client modifies the location area of the damaged portion based on a received interactive instruction;
  • correspondingly, classifying the video images based on the detected damaged portion then comprises classifying the video images based on the new damaged portion.
  • In this way, the photographer can locate the damage more accurately, so that the server can obtain high-quality damage assessment images.
  • In another embodiment, selecting the damage assessment image of the vehicle from the candidate image classification sets according to the preset screening conditions includes:
  • selecting, according to the sharpness of the video images and the shooting angles of the damaged portion, at least one video image as the damage assessment image of the damaged portion.
  • If multiple damaged portions are detected, the server can track them simultaneously and generate the damage assessment images of each damaged portion.
  • The server performs the above processing for all the damaged portions designated by the photographer, obtaining the damage assessment images of each, and all the generated images together can serve as the damage assessment images of the entire damaged vehicle. Therefore, in another embodiment of the vehicle damage assessment image acquisition method, if at least two damaged portions are detected in the video image, it is determined whether the distance between them satisfies the set proximity condition;
  • if it does, the at least two damaged portions are tracked simultaneously, and their damage assessment images are generated respectively.
  • The proximity condition may be set according to the number of damaged portions identified in the same video image, the sizes of the damaged portions, the distance between them, and so on.
  • Based on the embodiment, described above for the client-server interaction scenario, of automatically acquiring damage assessment images from the captured video of a damaged vehicle, the present application further provides a vehicle damage assessment image acquisition method usable on the client side.
  • FIG. 8 is a schematic flowchart of another embodiment of the method according to the present application. As shown in FIG. 8, the method may include:
  • S20: filming the damaged vehicle to obtain captured video data;
  • S21: sending the captured video data to a processing terminal;
  • S22: receiving the location area of the damaged portion tracked in real time by the processing terminal and displaying the tracked location area, the damaged portion being obtained by the processing terminal through detection and identification of the video images in the captured video data.
  • The processing terminal is a terminal device that processes the captured video data and automatically generates the damage assessment images of the damaged vehicle based on the identified damaged portions, for example a remote server for damage assessment image processing.
  • The determined candidate image classification sets here may likewise include: a close-up image set displaying the damaged portion, and a component image set displaying the vehicle component to which the damaged portion belongs,
  • such as the class-a and class-b images described above.
  • If the server cannot obtain a class-b damage assessment image that identifies the vehicle component where the damaged portion lies, it can feed a video shooting prompt message back to the photographer, asking him to film several adjacent vehicle components including the damaged location, to ensure that a class-b image is obtained.
  • Therefore, in another embodiment of the method, the method may further include:
  • S23: receiving and displaying a video shooting prompt message sent by the processing terminal, where the video shooting prompt message is generated when the processing terminal detects that at least one of the close-up image set and the component image set of the damaged portion is empty, or that the video images in the close-up image set do not cover the entire area corresponding to the damaged portion.
  • The client can display the location area of the damaged portion tracked by the server in real time, and the position and size of that location area can be modified interactively on the client side, as described in the corresponding embodiments above.
  • The photographer films the damaged vehicle with the terminal device, the captured video data is transmitted to the server of the system, and the server analyzes the video data and identifies the damaged portions;
  • the candidate images of the different categories required for damage assessment are obtained according to the damaged portions, and the damage assessment images of the damaged vehicle can then be generated from the candidate images.
  • FIG. 9 is a schematic flowchart of another embodiment of the method in the present application. As shown in FIG. 9, the method includes:
  • S30: receiving captured video data of the damaged vehicle;
  • S31: detecting video images in the captured video data to identify a damaged portion in the video images;
  • S32: classifying the video images based on the detected damaged portion, and determining a set of candidate image classifications for the damaged portion;
  • S33: selecting loss assessment images of the vehicle from the set of candidate image classifications according to a preset screening condition (one possible screening condition is sketched below).
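  • The application names spatial-domain operators (e.g., Gabor) and frequency-domain operators (e.g., the fast Fourier transform) as possible sharpness measures for this screening; the hedged sketch below shows one plausible frequency-domain variant with top-K selection across distinct shooting angles. The band ratio, the value of K and the 15-degree angle buckets are illustrative choices of this sketch, not values fixed by the application.

```python
# Hedged sketch of one possible preset screening condition for S33.
import numpy as np

def sharpness_fft(gray, low_band_ratio=0.25):
    """Share of spectral energy outside the low-frequency band of a grayscale crop."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    ry, rx = max(1, int(h * low_band_ratio)), max(1, int(w * low_band_ratio))
    low = spectrum[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    total = spectrum.sum()
    return float((total - low) / total) if total > 0 else 0.0

def select_loss_assessment_images(candidates, k=5, angle_bucket_deg=15):
    """candidates: list of (gray_crop, shooting_angle_deg) for one damaged portion."""
    ranked = sorted(candidates, key=lambda c: sharpness_fft(c[0]), reverse=True)
    selected, used_buckets = [], set()
    for crop, angle in ranked:
        bucket = int(angle // angle_bucket_deg)   # keep at most one frame per angle bucket
        if bucket not in used_buckets:
            selected.append((crop, angle))
            used_buckets.add(bucket)
        if len(selected) == k:
            break
    return selected
```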
  • A specific implementation may be composed of application modules deployed on the client. In general, the terminal device may be a general-purpose or special-purpose device with video shooting and image processing capabilities, such as a mobile phone or a tablet computer. The photographer can use the client to shoot video of the damaged vehicle and, at the same time, analyze the captured video data to identify the damaged portion and generate loss assessment images.
  • Optionally, a server side may also be included to receive the loss assessment images generated by the client. The client can transmit the generated loss assessment images to a designated server in real time or asynchronously. Therefore, another embodiment of the method may further include:
  • S3301: transmitting the loss assessment images to a designated server in real time;
  • or,
  • S3302: transmitting the loss assessment images to a designated server asynchronously.
  • FIG. 10 is a schematic flowchart of another embodiment of the method in the present application. As shown in FIG. 10, the client may upload the generated loss assessment images to the remote server immediately, or may upload or copy them to the remote server in batches afterwards.
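  • A minimal sketch of the two transfer modes (S3301 real-time, S3302 asynchronous) follows. The endpoint URL is hypothetical and the use of the `requests` library is an assumption made for illustration only.

```python
# Hedged sketch of the two transfer modes described above.
import queue
import requests

SERVER_URL = "https://example.com/loss-assessment/upload"  # hypothetical endpoint

def upload_now(image_bytes, meta):
    """S3301: push each generated loss assessment image to the server immediately."""
    requests.post(SERVER_URL, files={"image": image_bytes}, data=meta, timeout=10)

class AsyncUploader:
    """S3302: queue images (e.g. while offline) and flush them in one batch later."""

    def __init__(self):
        self.pending = queue.Queue()

    def enqueue(self, image_bytes, meta):
        self.pending.put((image_bytes, meta))

    def flush(self):
        while not self.pending.empty():
            image_bytes, meta = self.pending.get()
            requests.post(SERVER_URL, files={"image": image_bytes}, data=meta, timeout=10)
```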
  • Based on the foregoing description of the embodiments in which the server automatically generates loss assessment images, tracks the location of the damaged portion, and so on, the method for automatically generating loss assessment images on the client side may further include other implementations, such as displaying a generated video shooting prompt message directly on the shooting terminal, the specific division, identification and classification of loss assessment image categories, and the identification, locating and tracking of the damaged portion. For details, refer to the description of the related embodiments; they are not repeated here one by one.
  • The present application thus provides a method for acquiring vehicle loss assessment images in which loss assessment images can be generated automatically on the client side from a captured video of the damaged vehicle.
  • The photographer can shoot video of the damaged vehicle through the client to generate captured video data.
  • The captured video data is then analyzed to identify the damaged portion and to obtain the candidate images of the different categories required for loss assessment.
  • Further, loss assessment images of the damaged vehicle can be generated from the candidate images.
  • With this implementation, video shooting can be performed directly on the client side, and high-quality loss assessment images that meet the requirements of loss assessment processing can be generated automatically and quickly, improving the efficiency of acquiring loss assessment images and reducing the cost to insurance company operators of acquiring and processing loss assessment images.
  • Based on the vehicle loss assessment image acquisition methods described above, the present application further provides a vehicle loss assessment image acquisition apparatus.
  • The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients and the like that use the methods described herein, combined with the hardware necessary for implementation.
  • Based on the same innovative concept, the apparatus in one embodiment provided by the present application is described in the following embodiments. Since the way the apparatus solves the problem is similar to that of the method, reference may be made to the implementation of the foregoing method for the specific implementation of the apparatus; repeated details are not described again.
  • As used below, the term "unit" or "module" may refer to a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and conceivable.
  • Specifically, FIG. 11 is a schematic structural diagram of the modules of an embodiment of a vehicle loss assessment image acquisition apparatus provided by the present application. As shown in FIG. 11, the apparatus may include:
  • the data receiving module 101, which can be configured to receive captured video data of a damaged vehicle uploaded by a terminal device;
  • the damaged portion identification module 102, which can detect video images in the captured video data to identify a damaged portion in the video images;
  • the classification module 103, which can be configured to classify the video images based on the detected damaged portion and determine a set of candidate image classifications for the damaged portion;
  • the screening module 104, which can be configured to select loss assessment images of the vehicle from the set of candidate image classifications according to a preset screening condition (a sketch wiring these four modules together follows).
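  • To make the division of labor among modules 101-104 concrete, the following hedged sketch wires them into a single server-side pipeline. The three callables passed in stand for the deep-learning detectors (e.g., the CNN/RPN models) and the preset screening condition; they are placeholders, not interfaces defined by this application.

```python
# Hedged sketch of a server-side pipeline over modules 101-104.
class LossAssessmentPipeline:
    def __init__(self, detect_damage, classify_frame, passes_screening):
        self.detect_damage = detect_damage        # damaged portion identification module 102
        self.classify_frame = classify_frame      # classification module 103
        self.passes_screening = passes_screening  # screening module 104

    def receive(self, frames):
        """Data receiving module 101: consume uploaded frames, return selected images."""
        candidate_sets = {"close_up": [], "component": []}
        for frame in frames:
            damage = self.detect_damage(frame)    # None if no damaged portion is found
            if damage is None:
                continue
            category = self.classify_frame(frame, damage)  # "close_up", "component" or None
            if category in candidate_sets:
                candidate_sets[category].append((frame, damage))
        return {
            category: [frame for frame, damage in items
                       if self.passes_screening(frame, damage)]
            for category, items in candidate_sets.items()
        }
```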
  • The apparatus described above can be used on the server side to analyze captured video data uploaded by a client and obtain loss assessment images. The present application also provides a vehicle loss assessment image acquisition apparatus that can be used on the client side. FIG. 12 is a schematic structural diagram of the modules of another embodiment of the apparatus of the present application; as shown in FIG. 12, the specific structure may include:
  • the shooting module 201, which can be configured to perform video shooting on the damaged vehicle to obtain captured video data;
  • the communication module 202, which can be configured to send the captured video data to a processing terminal;
  • the tracking display module 203, which can be configured to receive a location area returned by the processing terminal for real-time tracking of the damaged portion and to display the tracked location area, where the damaged portion is obtained by the processing terminal detecting and identifying video images in the captured video data.
  • In one implementation, the tracking display module 203 can be a display unit including a display screen; the photographer can designate the damaged portion on the display screen, and the location area of the tracked damaged portion can also be displayed on the same screen.
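  • A hedged sketch of the client-side loop across modules 201-203 is shown below, using OpenCV for capture and display. The `send_frame` and `poll_tracked_region` callables stand in for the communication with the processing terminal and are assumptions of this sketch.

```python
# Hedged sketch: capture (201), send (202), and display the tracked region (203).
import cv2

def run_client(send_frame, poll_tracked_region, camera_index=0):
    cap = cv2.VideoCapture(camera_index)          # shooting module 201
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            send_frame(frame)                     # communication module 202
            region = poll_tracked_region()        # tracking display module 203
            if region is not None:
                x, y, w, h = region
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.imshow("loss assessment", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```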
  • The vehicle loss assessment image acquisition method provided by the present application can be implemented in a computer by a processor executing corresponding program instructions. Specifically, in another embodiment of the vehicle loss assessment image acquisition apparatus provided by the present application, the apparatus may include a processor and a memory for storing processor-executable instructions, where the processor, when executing the instructions, implements:
  • receiving captured video data of a damaged vehicle;
  • detecting video images in the captured video data to identify a damaged portion in the video images;
  • classifying the video images based on the detected damaged portion, and determining a set of candidate image classifications for the damaged portion;
  • selecting loss assessment images of the vehicle from the set of candidate image classifications according to a preset screening condition.
  • The apparatus may be a server: the server receives captured video data uploaded by a client and then performs analysis processing, including identifying the damaged portion, dividing categories and selecting images, to obtain loss assessment images of the vehicle.
  • The apparatus may also be a client: the client shoots video of the damaged vehicle and performs the analysis processing directly on the client side to obtain loss assessment images of the vehicle. Therefore, in another embodiment of the apparatus of the present application, the captured video data of the damaged vehicle may include:
  • data information uploaded by a terminal device after acquiring the captured video data;
  • or,
  • captured video data obtained by the vehicle loss assessment image acquisition apparatus itself shooting video of the damaged vehicle.
  • Further, in the implementation scenario in which the apparatus acquires the captured video data and directly performs the analysis processing to obtain the loss assessment images, the obtained loss assessment images may also be sent to a server for storage or for further loss assessment processing. Therefore, in another embodiment of the apparatus, if the captured video data of the damaged vehicle is obtained by the vehicle loss assessment image acquisition apparatus itself through video shooting, the processor, when executing the instructions, further implements:
  • transmitting the loss assessment images to a designated processing terminal in real time;
  • or,
  • transmitting the loss assessment images to a designated processing terminal asynchronously.
  • Based on the foregoing descriptions of the method or apparatus embodiments for automatically generating loss assessment images, locating and tracking the damaged portion, and so on, the apparatus for automatically generating loss assessment images on the client side may further include other implementations, such as displaying a generated video shooting prompt message directly on the terminal device, the specific division, identification and classification of loss assessment image categories, and the locating and tracking of the damaged portion. For details, refer to the description of the related embodiments; they are not repeated here one by one.
  • Using the vehicle loss assessment image acquisition apparatus provided by the present application, the photographer can shoot video of the damaged vehicle to generate captured video data. The captured video data is then analyzed to obtain the candidate images of the different categories required for loss assessment. Further, loss assessment images of the damaged vehicle can be generated from the candidate images.
  • With this implementation, video shooting can be performed directly on the client side, and high-quality loss assessment images that meet the requirements of loss assessment processing can be generated automatically and quickly, improving the efficiency of acquiring loss assessment images and reducing the cost to insurance company operators of acquiring and processing loss assessment images.
  • The methods or apparatuses described in the above embodiments of the present application can implement their business logic through a computer program recorded on a storage medium, and the storage medium can be read and executed by a computer to achieve the effects of the solutions described in the embodiments of the present application. Accordingly, the present application also provides a computer-readable storage medium having stored thereon computer instructions that, when executed, may implement the following steps:
  • receiving captured video data of a damaged vehicle;
  • detecting video images in the captured video data to identify a damaged portion in the video images;
  • classifying the video images based on the detected damaged portion, and determining a set of candidate image classifications for the damaged portion;
  • selecting loss assessment images of the vehicle from the set of candidate image classifications according to a preset screening condition.
  • The present application further provides another computer-readable storage medium having stored thereon computer instructions that, when executed, implement the following steps:
  • shooting video of the damaged vehicle to obtain captured video data;
  • sending the captured video data to a processing terminal;
  • receiving a location area returned by the processing terminal for real-time tracking of the damaged portion, and displaying the tracked location area, where the damaged portion is obtained by the processing terminal detecting and identifying video images in the captured video data.
  • The computer-readable storage medium may include physical means for storing information, typically by digitizing the information and then storing it in media that operate electrically, magnetically, optically or in a similar manner.
  • The computer-readable storage medium of this embodiment may include: means that store information using electrical energy, such as various types of memory, e.g., RAM and ROM; means that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memory, bubble memory and USB flash drives; and means that store information optically, such as CDs or DVDs. Of course, there are also other forms of readable storage media, such as quantum memory, graphene memory, and the like.
  • The apparatus, method or computer-readable storage medium described above may be used in a server for acquiring vehicle loss assessment images, to acquire vehicle loss assessment images automatically on the basis of video of the vehicle.
  • The server may be a standalone server, a system cluster composed of multiple application servers, or a server in a distributed system.
  • Specifically, in one embodiment, the server may include a processor and a memory for storing processor-executable instructions, where the processor, when executing the instructions, implements:
  • receiving captured video data of a damaged vehicle uploaded by a terminal device;
  • detecting video images in the captured video data to identify a damaged portion in the video images;
  • classifying the video images based on the detected damaged portion, and determining a set of candidate image classifications for the damaged portion;
  • selecting loss assessment images of the vehicle from the set of candidate image classifications according to a preset screening condition.
  • The apparatus, method or computer-readable storage medium described above may also be used in a terminal device for acquiring vehicle loss assessment images, to acquire vehicle loss assessment images automatically on the basis of video of the vehicle.
  • The terminal device may be implemented as a server, or as a client that shoots video of the damaged vehicle in the field.
  • FIG. 13 is a schematic structural diagram of an embodiment of a terminal device provided by the present application. Specifically, in one embodiment, the terminal device may include a processor and a memory for storing processor-executable instructions, where the processor, when executing the instructions, can implement:
  • acquiring captured video data obtained by shooting video of a damaged vehicle;
  • detecting video images in the captured video data to identify a damaged portion in the video images;
  • classifying the video images based on the detected damaged portion, and determining a set of candidate image classifications for the damaged portion;
  • selecting loss assessment images of the vehicle from the set of candidate image classifications according to a preset screening condition.
  • The acquired captured video data may be data information uploaded after a terminal device acquires the captured video data, or may be captured video data obtained by the terminal device directly shooting video of the damaged vehicle.
  • Further, if the terminal device is implemented as a client on the video shooting side, the processor, when executing the instructions, can also implement:
  • transmitting the loss assessment images to a designated server in real time;
  • or,
  • transmitting the loss assessment images to a designated server asynchronously.
  • Using the vehicle loss assessment terminal device provided by the present application, the photographer can shoot video of the damaged vehicle to generate captured video data. The captured video data is then analyzed to identify the damaged portion and to obtain the candidate images of the different categories required for loss assessment. Further, loss assessment images of the damaged vehicle can be generated from the candidate images.
  • With this implementation, video shooting can be performed directly on the client side, and high-quality loss assessment images that meet the requirements of loss assessment processing can be generated automatically and quickly, improving the efficiency of acquiring loss assessment images and reducing the cost to insurance company operators of acquiring and processing loss assessment images.
  • Although the present application mentions descriptions of data model construction, data acquisition, interaction, calculation and judgment, such as the damaged-area tracking method, the use of CNN and RPN networks to detect damaged portions and vehicle components, and image recognition and classification based on the damaged portion, the present application is not limited to situations that must comply with industry communication standards, standard data models, computer processing and storage rules, or the cases described in the embodiments herein.
  • Implementations slightly modified on the basis of certain industry standards, or of custom approaches, or of the embodiments described above, can also achieve effects that are the same as, equivalent or similar to, or predictable variants of, those of the above embodiments.
  • Embodiments obtained by applying these modified or varied methods of data acquisition, storage, judgment and processing may still fall within the scope of the alternative embodiments of the present application.
  • In the 1990s, it could still be clearly distinguished whether an improvement to a technology was an improvement in hardware (for example, an improvement to circuit structures such as diodes, transistors and switches) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit, so it cannot be said that an improvement to a method flow cannot be implemented with hardware entity modules. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user programming the device. Designers "integrate" a digital system onto a single PLD by programming it themselves, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL, AHDL, Confluence, CUPL, HDCal, JHDL, Lava, Lola, MyHDL, PALASM and RHDL, of which VHDL and Verilog are currently the most widely used. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained simply by lightly logic-programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
  • The controller can be implemented in any suitable manner. For example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, as well as logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller or an embedded microcontroller.
  • Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicone Labs C8051F320; a memory controller can also be implemented as part of the control logic of a memory.
  • Those skilled in the art also know that, besides implementing a controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like.
  • Such a controller can therefore be considered a hardware component, and the means included within it for implementing various functions can also be regarded as structures within the hardware component.
  • Or even, the means for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
  • the system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • a typical implementation device is a computer.
  • Specifically, the computer can be, for example, a personal computer, a laptop computer, an in-vehicle human-machine interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • For convenience of description, the above apparatus is described by dividing its functionality into various modules. Of course, when implementing the present application, the functions of the modules may be implemented in one or more pieces of software and/or hardware, or the modules that implement the same function may be implemented by a combination of multiple sub-modules or sub-units.
  • the device embodiments described above are merely illustrative.
  • For example, the division into units is only a division by logical function; in actual implementation there may be other ways of dividing, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
  • The present application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions can also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
  • The memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information can be computer-readable instructions, data structures, program modules or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information that can be accessed by a computing device.
  • As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
  • Those skilled in the art should understand that the embodiments of the present application can be provided as a method, a system or a computer program product.
  • Therefore, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • Moreover, the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) that contain computer-usable program code.
  • The present application can be described in the general context of computer-executable instructions executed by a computer, such as program modules.
  • program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • the present application can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.

Abstract

The embodiments of the present application disclose a vehicle loss assessment image acquisition method, apparatus, server and terminal device. A client acquires captured video data and sends the captured video data to a server; the server detects video images in the captured video data and identifies a damaged portion in the video images; the server classifies the video images based on the detected damaged portion and determines a set of candidate image classifications for the damaged portion; and loss assessment images of the vehicle are selected from the set of candidate image classifications according to a preset screening condition. With the various embodiments of the present application, high-quality loss assessment images that meet the requirements of loss assessment processing can be generated automatically and quickly, meeting those requirements and improving the efficiency of acquiring loss assessment images.

Description

车辆定损图像获取方法、装置、服务器和终端设备 技术领域
本申请属于计算机图像数据处理技术领域,尤其涉及一种车辆定损图像获取方法、装置、服务器和终端设备。
背景技术
发生车辆交通事故后,保险公司需要若干定损图像来对出险车辆进行定损核损,并进行出险的资料进行存档。
目前车辆定损的图像通常是由作业人员现场进行拍照获得,然后根据现场拍照的照片进行车辆定损处理。车辆定损的图像要求需要能够清楚的反应出车辆受损的具体部位、损伤部件、损伤类型、损伤程度等信息,这通常需要拍照人员具有专业车辆定损的相关知识,才能拍照获取符合定损处理要求的图像,这显然需要比较大的人力培训和定损处理的经验成本。尤其是在一些发生车辆交通事故后需要尽快撤离或移动车辆现场的情况下,保险公司作业人员赶到事故现场需要耗费较长的时间。并且,如果车主用户主动或者在保险公司作业人员要求下先行拍照,获取一些原始定损图像,由于非专业行,车主用户拍照获得的定损图像常常不符合定损图像处理要求。另外,作业人员现场拍照获得的图像往往也需要后期再次从拍摄设备导出,进行人工筛选,确定需要的定损图像,这同样需要消耗较大人力和时间,进而降低最终定损处理需要的定损图像的获取效率。
现有的保险公司作业人员或车主用户现场拍照获取定损图像的方式,需要专业的车辆定损的相关知识,人力和时间成本较大,获取符合定损处理需求的定损图像的方式效率仍然较低。
发明内容
本申请目的在于提供一种车辆定损图像获取方法、装置、服务器和终端设备,通过拍摄者对受损车辆的受损部位进行视频拍摄,可以自动、快速的生成符合定损处理需求的高质量定损图像,满足定损处理需求,提高定损图像的获取效率,便于作业人员作业。
本申请提供的一种车辆定损图像获取方法、装置、服务器和终端设备是这样实现的:
一种车辆定损图像获取方法,包括:
客户端获取拍摄视频数据,将所述拍摄视频数据发送至服务器;
所述服务器对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
所述服务器基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
一种车辆定损图像获取方法,所述方法包括:
接收终端设备上传的受损车辆的拍摄视频数据,对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
一种车辆定损图像获取方法,所述方法包括:
对受损车辆进行视频拍摄,获取拍摄视频数据;
将所述拍摄视频数据发送至处理终端;
接收所述处理终端返回的对受损部位实时跟踪的位置区域,显示所述跟踪的位置区域,所述受损部位包括所述处理终端对所述拍摄视频数据中的视频图像进行检测识别得到。
一种车辆定损图像获取方法,所述方法包括:
接收受损车辆的拍摄视频数据;
对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
一种车辆定损图像获取装置,所述装置包括:
数据接收模块,用于接收终端设备上传的受损车辆的拍摄视频数据;
受损部位识别模块,对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
分类模块,用于基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
筛选模块,用于按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
一种车辆定损图像获取装置,所述装置包括:
拍摄模块,用于对受损车辆进行视频拍摄,获取拍摄视频数据;
通信模块,用于将所述拍摄视频数据发送至处理终端;
跟踪模块,用于接收所述处理终端返回的对受损部位实时跟踪的位置区域,显示所述跟踪的位置区域,所述受损部位包括所述处理终端对所述拍摄视频数据中的视频图像进行检测识别得到。
一种车辆定损图像获取装置,包括处理器以及用于存储处理器可执行指令的存储器,所述处理器执行所述指令时实现:
接收受损车辆的拍摄视频数据;
对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候 选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
一种计算机可读存储介质,其上存储有计算机指令,所述指令被执行时实现以下步骤:
接收受损车辆的拍摄视频数据;
对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
一种计算机可读存储介质,其上存储有计算机指令,所述指令被执行时实现以下步骤:
对受损车辆进行视频拍摄,获取拍摄视频数据;
将所述拍摄视频数据发送至处理终端;
接收所述处理终端返回的对受损部位实时跟踪的位置区域,显示所述跟踪的位置区域,所述受损部位包括所述处理终端对所述拍摄视频数据中的视频图像进行检测识别得到。
一种服务器,包括处理器以及用于存储处理器可执行指令的存储器,所述处理器执行所述指令时实现:
接收终端设备上传的受损车辆的拍摄视频数据;
对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
一种终端设备,包括处理器以及用于存储处理器可执行指令的存储器,所 述处理器执行所述指令时实现:
获取对受损车辆进行视频拍摄的拍摄视频数据;
对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
本申请提供的一种车辆定损图像获取方法、装置、服务器和终端设备,提出了基于视频的车辆定损图像自动生成方案。拍摄者可以通过终端设备对受损车辆进行视频拍摄,拍摄的视频数据可以传输到系统的服务器,服务器再对视频数据进行分析,识别出受损部位,根据受损部位获取定损所需的不同类别的候选图像,然后可以从候选图像中产生受损车辆的定损图像。利用本申请实施方案,可以自动、快速的生成符合定损处理需求的高质量定损图像,满足定损处理需求,提高定损图像的获取效率,同时也减少了保险公司作业人员的定损图像获取和处理成本。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请中记载的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请所述一种车辆定损图像获取方法实施例的流程示意图;
图2是本申请所述方法构建的一种识别视频图像中受损部位的模型结构示意图;
图3是本申请所述方法一种利用损伤检测模型识别受损部位的实施场景示 意图;
图4是本申请一个实施例中基于识别出的受损部位确定为近景图像的示意图;
图5是本申请所述方法构建的一种识别视频图像中受损部件的模型结构示意图;
图6是本申请所述方一种车辆定损图像获取方法的处理场景示意图;
图7是本申请所述方法另一个实施例的流程示意图;
图8是本申请所述方法另一个实施例的流程示意图;
图9是本申请所述方法另一个实施例的流程示意图;
图10是本申请所述方法另一个实施例的流程示意图;
图11是本申请提供的一种车辆定损图像获取装置实施例的模块结构示意图;
图12是本申请提供的另一种车辆定损图像获取装置实施例的模块结构示意图;
图13是本申请提供的一种终端设备实施例的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请中的技术方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
图1是本申请所述一种车辆定损图像获取方法实施例的流程示意图。虽然本申请提供了如下述实施例或附图所示的方法操作步骤或装置结构,但基于常规或者无需创造性的劳动在所述方法或装置中可以包括更多或者部分合并后更少的操作步骤或模块单元。在逻辑性上不存在必要因果关系的步骤或结构中, 这些步骤的执行顺序或装置的模块结构不限于本申请实施例或附图所示的执行顺序或模块结构。所述的方法或模块结构的在实际中的装置、服务器或终端产品应用时,可以按照实施例或者附图所示的方法或模块结构进行顺序执行或者并行执行(例如并行处理器或者多线程处理的环境、甚至包括分布式处理、服务器集群的实施环境)。
为了清楚起见,下述实施例以具体的一个拍摄者通过移动终端进行视频拍摄、服务器对拍摄视频数据进行处理获取定损图像的实施场景进行说明。拍摄者可以为保险公司作业人员,拍摄者手持移动终端对受损车辆进行视频拍摄。所述的移动终端可以包括手机、平板电脑,或者其他有视频拍摄功能和数据通信功能的通用或专用设备。所述的移动终端和服务器可以部署有相应的应用模块(如移动终端安装的某个车辆定损APP(application,应用),以实现相应的数据处理。但是,本领域技术人员能够理解到,可以将本方案的实质精神应用到获取车辆定损图像的其他实施场景中,如拍摄者也可以为车主用户,或者移动终端拍摄后直接在移动终端一侧对视频数据进行处理并获取定损图像等。
具体的一种实施例如图1所示,本申请提供的一种车辆定损图像获取方法的一种实施例中,所述方法可以包括:
S1:客户端获取拍摄视频数据,将所述拍摄视频数据发送至服务器。
所述的客户端可以包括具有视频拍摄功能和数据通信功能的通用或专用设备,如手机、平板电脑等的终端设备。本实施例其他的实施场景中,所述的客户端也可以包括具有数据通信功能的固定计算机设备(如PC端)和与其连接的可移动的视频拍摄设备,两者组合后视为本实施例的一种客户端的终端设备。拍摄者通过客户端拍摄视频数据,所述的拍摄视频数据可以传输到服务器。所述的服务器可以包括对所述视频数据中的帧图像进行分析处理并确定出定损图像的处理设备。所述服务器可以包括具有图像数据处理和数据通信功能的逻辑单元装置,如本实施例应用场景的服务器。从数据交互的角度来看,所述服务器是相对于所述客户端作为第一终端设备时的另一个与第一终端设备进行数据 通信的第二终端设备,因此,为以便描述,在此可以将对车辆视频拍摄生成拍摄视频数据的一侧称为客户端,将对所述拍摄视频数据进行处理生成定损图像的一侧称为服务器。本申请不排除在一些实施例中所述的客户端与服务器为物理连接的同一终端设备。
本申请的一些实施方式中,客户端拍摄得到的视频数据可以实时传输到服务器,以便于服务器快速处理。其他的实施方式中,也可以在客户端视频拍摄完成后再传输至服务器。如拍摄者使用的移动终端当前没有网络连接,则可以先进行视频拍摄,等连接上移动蜂窝数据或WLAN(Wireless Local Area Networks,无线局域网)或者专有网络后再进行传输。当然,即使在客户端可以与服务器进行正常数据通信的情况下,也可以将拍摄视频数据异步传输至服务器。
需要说明的是,本实施例中拍摄者对车辆受损部位进行拍摄获取的拍摄视频数据,可以为一个视频片段,也可以为多个视频片段。如对同一个受损部位进行了多次不同角度和远近距离的拍摄生成的多段拍摄视频数据,或者对不同的受损部位分别进行拍摄得到各个受损部位的拍摄视频数据。当然,一些实施场景下,也可以围绕受损车辆的各个受损部位进行一次完整拍摄,得到一个相对时间较长的视频片段。
S2:所述服务器对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位。
在本实施例实施方式中,服务器可以对拍摄视频数据中的视频图像进行图像检测,识别处理视频图像中车辆的受损部位。一般的,识别出的受损部位占据视频图像上的一块区域,并有相应的区域信息,如受损部位所在区域的位置和大小等。
检测视频图像中受损部位具体的一种实现方式上,可以通过构建的损伤检测模型来识别视频图像中的受损部位,所述的损伤检测模型使用深度神经网络 检测车辆受损部位和其在图像中的区域。本申请的一个实施例中可以基于卷积神经网络(Convolutional Neural Network,CNN)和区域建议网络(Region Proposal Network,RPN),结合池化层、全连接层等构建所述损伤检测模型。
在本实施例中,可以预先采用设计的机器学习算法构建用于识别视频图像中包含的受损部位的损伤检测模型。该损伤检测模型经过样本训练后,可以识别出所述视频图像中的一处或多处受损部位。所述的损伤检测模型可以采用深度神经网络的网络模型或者其变种后的网络模型经过样本训练后构建生成。实施例中,可以基于卷积神经网络和区域建议网络,结合其他例如全连接层(Fully-Connected Layer,FC)、池化层、数据归一化层等。当然,其他的实施例中,如果需要对受损部位进行分类,还可以损伤检测模型中加入概率输出层(Softmax)等。具体的一个示例如图2所示,图2是本申请所述方法构建的一种识别视频图像中受损部位的模型结构示意图。图3是本申请所述方法一种利用损伤检测模型识别受损部位的实施场景示意图,识别出的受损部位可以实时显示在客户端。
卷积神经网络一般指以卷积层(CNN)为主要结构并结合其他如激活层等组成的神经网络,主要用于图像识别。本实施例中所述的深度神经网络可以包括卷积层和其他重要的层(如输入模型训练的损伤样本图像,数据归一化层,激活层等),并结合区域建议网络(RPN)共同组建生成。卷积神经网络通常是将图像处理中的二维离散卷积运算和人工神经网络相结合。这种卷积运算可以用于自动提取特征。区域建议网络(RPN)可以将一个图像(任意大小)提取的特征作为输入(可以使用卷积神经网络提取的二维特征),输出矩形目标建议框的集合,每个框有一个对象的得分。
上述的实施方式在模型训练时可以识别出视频图像上的一个或多个受损部位。具体的在样本训练时,输入为一张图片,可以输出多个图片区域。如果有一个受损部位,可以输出一个图片区域;如果有k个受损,则可以输出k个图片区域;如果没有损伤部位,则输出0个图片区域。选取的神经网络的参数通 可以过使用打标数据进行小批量梯度下降(mini-batch gradient descent)训练得到,比如mini-batch=32时,同时32张训练图片作为输入来训练。
其他的实施方式中,所述的损伤检测模型可以使用基于卷积神经网络和区域建议网络的多种模型和变种,如Faster R-CNN、YOLO、Mask-FCN等。其中的卷积神经网络(CNN)可以用任意CNN模型,如ResNet、Inception,VGG等及其变种。通常神经网络中的卷积网络(CNN)部分可以使用在物体识别取得较好效果的成熟网络结构,如Inception、ResNet等网络,如ResNet网络,输入为一张图片,输出为多个受损区域,和对应的受损区域和置信度(这里的置信度为表示识别出来的受损区域真实性程度的参量)。faster R-CNN、YOLO、Mask-FCN等都是属于本实施例可以使用的包含卷积层的深度神经网络。本实施例使用的深度神经网络结合区域建议层和CNN层能检测出视频图像中的受损部位,并确认所述受损部位在视频图像中的区域。具体的,本申请可以的卷积网络(CNN)部分可以使用在物体识别取得很好效果的成熟网络结构,ResNet网络,该模型参数可以通过使用打标数据进行小批量梯度下降(mini-batch gradient descent)训练得到。
拍摄者使用客户端进行能视频拍摄时,服务器识别出的受损部位的位置区域可以实时显示在客户端上,以便于用户观察和确认受损部位。识别出受损部位后,服务器可以自动跟踪受损部位,并且在跟随过程中,随着拍摄距离和角度变化,该受损部位在视频图像中对应的位置区域大小和位置也可以相应的变化。
另一种实施方式中,拍摄者可以交互式修改识别出的受损部位的位置和大小。例如客户端一侧实时显示服务器检测出的受损部位的位置区域。若拍摄者认为服务器识别的受损部位的位置区域未能全部覆盖现场观察的受损部位,需要进行调整,则可以在客户端上调整该受损部位的位置区域的位置和大小。如长按受损部位选择该位置区域后,进行移动,调整受损部位的位置,或者拉伸受损部位位置区域的边框调整大小等。拍摄者在客户端调整修改受损部位的位 置区域后可以生成新的受损部位,然后将新的受损部位发送给服务器。
这样,拍摄者可以方便、灵活根据实际现场受损部位情况调整受损部位在视频图像中的位置区域,更准确的定位受损部位,便于服务器更准确可靠的获取高质量的定损图像。
所述服务器接收客户端上传的拍摄视频数据,对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位。
S3:所述服务器基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合。
车辆定损常常需要不同类别的图像数据,如整车的不同角度的图像、能展示出受损部件的图像、具体受损部位的近景细节图等。本申请在获取定损图像的处理中,可以对视频图像进行识别,如是否为受损车辆的图像、识别图像中包含的车辆部件、包含一个还是多个车辆部件、车辆部件上是否有损伤等等。本申请实施例的一个场景中,可以将车辆定损需要的定损图像相应的分为不同的类别,其他不符合定损图像要求的可以单独另分为一个类别。具体的可以提取拍摄视频的每一帧图像,对每一帧图像进行识别后分类,形成受损部位的候选图像分类集合。
本申请提供所述方法的另一种实施例中,确定出的所述候选图像分类集合可以包括:
S301:显示受损部位的近景图像集合、展示受损部位所属车辆部件的部件图像集合。
近景图像集合中包括了受损部位的近景图像,部件图像集合中包括了受损车辆的受损部件,受损部件上有至少一处受损部位。具体的在本实施例应用场景中,拍摄者可以对受损车辆上的受损部位进行由近到远(或者由远到近)的拍摄,可以通过拍摄者移动或者变焦来完成。服务器端可以对拍摄视频中的帧图像(可以是对每一帧图像进行处理,也可以选取一段视频的帧图像进行处理) 进行识别处理,确定视频图像的分类。在本实施例应用场景中,可以将拍摄视频的视频图像分成包括下述的3类,具体的包括:
a:近景图,为受损部位的近景图像,能清晰显示受损部位的细节信息;
b:部件图,包含受损部位,并能展示受损部位所在的车辆部件;
c:a类和b类都不满足的图像。
具体的,可以根据定损图像中受损部位近景图像的需求来确定a类图像的识别算法/分类要求等。本申请在a类图像的识别处理过程中,一种实施方式中可以通过受损部位在当前所在的视频图像中所占区域的大小(面积或区域跨度)来识别确定。如果受损部位在视频图像中的占有较大区域(如大于一定阈值,比如长或者宽大于四分之一视频图像大小),则可以确定该视频图像为a类图像。本申请提供的另一种实施方式中,如果在属于同一个受损部件的已分析处理帧图像中,当前受损部位相对于包含所述当前受损部位的其他已分析处理帧图像的区域面积相对较大(处于一定比例或TOP范围内),则可以确定该当前帧图像为a类图像。因此,本申请所述方法的另一种实施例中,可以采用下述中的至少一种方式确定所述近景图像集合中的视频图像:
S3011:受损部位在所属视频图像中所占区域的面积比值大于第一预设比例:
S3012:受损部位的横坐标跨度与所属视频图像长度的比值大于第二预设比例,和/或,受损部位的纵坐标与所属视频图像高度的比例大于第三预设比例;
S3013:从相同受损部位的视频图像中,选择受损部位的面积降序后的前K张视频图像,或者所述面积降序后属于第四预设比例内的视频图像,K≥1。
a类型的受损细节图像中受损部位通常占据较大的区域范围,通过S3011中第一预设比例的设置,可以很好的控制受损部位细节图像的选取,得到符合处理需求的a类型图像。a类型图像中受损区域的面积可以通过所述受损区域所述包含的像素点统计得到。
另一个实施方式S3012中,也可以根据受损部位相对于视频图像的坐标跨 度来确认是否为a类型图像。例如一个示例中,视频图像为800*650像素,受损车辆的损伤的两条较长的划痕,该划痕对应的横坐标跨度长600像素,每条划痕的跨度却很窄。虽然此时受损部位的区域面积不足所属视频图像的十分之一,但因该受损部位的横向跨度600像素占整个视频图像长度800像素的四分之三,则此时可以将该视频图像标记为a类型图像,如图4所示,图4是本申请一个实施例中基于识别出的受损部位确定为近景图像的示意图。
S3013中实施方式中,所述的受损部位的面积可以为S3011中的受损部位的区域面积,也可以为受损部位长或者高的跨度数值。
当然,也可以结合上述多种方式来识别出a类图像,如受损部位的区域面积既满足占用一定比例的视频图像,又在所有的相同受损区域图像中属于区域面积最大的第四预设比例范围内。本实施例场景中所述的a类图像通常包含受损部位的全部或者部分细节图像信息。
上述中所述的第一预设比例、第二预设比例、第三预设比例、第四预设比例等具体的可以根据图像识别精度或分类精度或其他处理需求等进行相应的设置,例如所述第二预设比例或第三预设比例取值可以为四分之一。
b类图像的识别处理的一种实现方式上,可以通过构建的车辆部件检测模型来识别视频图像中所包括的部件(如前保险杠、左前叶子板、右后门等)和其所在的位置。如果受损部位处于检测出的受损部件上,则可以确认该视频图像属于b类图像。具体的,例如在一个视频图像P1中,若P1中检测出来的受损部件的部件区域包含识别的受损部位(通常识别出的部件区域的面积大于受损部位的面积),则可以认为P1中该部件区域为受损部件。或者,视频图像P2中,若P2中检测出来的受损区域与P2中检测出来的部件区域有重合区域,则也可以认为P2中所述部件区域对应的车辆部件也为受损部件,将该视频图像分类为b类图像。
本实施例中所述的部件检测模型使用深度神经网络检测出部件和部件在图像中的区域。本申请的一个实施例中可以基于卷积神经网络(Convolutional  Neural Network,CNN)和区域建议网络(Region Proposal Network,RPN),结合池化层、全连接层等构建所述部件损伤识别模型。例如部件识别模型中,可以使用基于卷积神经网络和区域建议网络的多种模型和变种,如Faster R-CNN、YOLO、Mask-FCN等。其中的卷积神经网络(CNN)可以用任意CNN模型,如ResNet、Inception,VGG等及其变种。通常神经网络中的卷积网络(CNN)部分可以使用在物体识别取得较好效果的成熟网络结构,如Inception、ResNet等网络,如ResNet网络,输入为一张图片,输出为多个部件区域,和对应的部件分类和置信度(这里的置信度为表示识别出来的车辆部件真实性程度的参量)。faster R-CNN、YOLO、Mask-FCN等都是属于本实施例可以使用的包含卷积层的深度神经网络。本实施例使用的深度神经网络结合区域建议层和CNN层能检测出所述待处理图像中的车辆部件,并确认所述车辆部件在待处理图像中的部件区域。具体的,本申请可以卷积网络(CNN)部分可以使用在物体识别取得很好效果的成熟网络结构,ResNet网络,该模型参数可以通过使用打标数据进行小批量梯度下降(mini-batch gradient descent)训练得到。图5是本申请所述方法构建的一种识别视频图像中受损部件的模型结构示意图。
一种应用场景中,如果同一个视频图像同时满足a类和b类图像的判断逻辑,则可以同时属于a类和b类图像。
所述服务器可以提取所述拍摄视频数据中的视频图像,基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合。
S4:按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
可以根据定损图像的类别、清晰度等从所述候选图像分类集合中选取符合预设筛选条件的图像作为定损图像。所述的预设筛选条件可以自定义的设置,例如一种实施方式中,可以在a类和b类图像中根据图像的清晰度,分别选取多张(比如5或10张)清晰度最高,并且拍摄角度不一样的图像作为识别出的受损部位的定损图像。图像的清晰度可以通过对受损部位和检测出来的车辆部 件所在的图像区域进行计算,例如可以使用基于空间域的算子(如Gabor算子)或者基于频域的算子(如快速傅立叶变换)等方法得到。对于a类图像中,通常需要保证一张或多个图像组合后可以显示受损部位中的全部区域,这样可以保障得到全面的受损区域信息。
本申请提供的一种车辆定损图像获取方法,提供基于视频的车辆定损图像自动生成方案。拍摄者可以通过终端设备对受损车辆进行视频拍摄,拍摄的视频数据可以传输到系统的服务器端,系统在服务器端再对视频数据进行分析,识别受损部位,根据受损部位获取定损所需的不同类别的候选图像,然后可以从候选图像中产生受损车辆的定损图像。利用本申请实施方案,可以自动、快速的生成符合定损处理需求的高质量定损图像,满足定损处理需求,提高定损图像的获取效率,同时也减少了保险公司作业人员的定损图像获取和处理成本。
本申请所述方法的一个实施例中,所述客户端拍摄的视频传输给服务器,服务器可以根据受损部位实时跟踪受损部位在视频中的位置。如在上述实施例场景中,因为车辆为静止物体,移动终端在跟随拍摄者移动,此时可以采用一些图像算法求得拍摄视频相邻帧图像之间的对应关系,比如使用基于光流(optical flow)的算法,实现完成对受损部位的跟踪。如果移动终端存在比如加速度仪和陀螺仪等传感器,则可以结合这些传感器的信号数据进一步确定拍摄者运动的方向和角度,实现更加精确的对受损部位的跟踪。因此,本申请所述方法的另一种实施例中,在识别出所述视频图像的受损部位后,还可以包括:
S200:服务器实时跟踪所述受损部位在所述拍摄视频数据中的位置区域;
以及,在所述服务器判断所述受损部位脱离视频图像后重新进入视频图像时,基于所述受损部位的图像特征数据重新对所述受损部位的位置区域进行定位和跟踪。
服务器可以提取受损区域的图像特征数据,比如SIFT特征数据(Scale-invariant feature transform,尺度不变特征变换)。如果受损部位脱离视频 图像后重新进入视频图像,系统能自动定位和继续跟踪,例如拍摄设备断电后重启或者拍摄区域位移到无损伤部位后又返回再次拍摄相同损伤部位。
服务器识别出的受损部位的位置区域可以实时显示在客户端上,以便于用户观察和确认受损部位。客户端和服务器可以同时显示识别出的受损部位。服务器可以自动跟踪识别出受损部位,并且随着拍摄距离和角度变化该受损部位在视频图像中对应的位置区域大小和位置也可以相应的变化。这样,服务器一侧可以实时展示客户端跟踪的受损部位,便于服务器的作业人员观察和使用。
另一种实施方式中,服务器在实时跟踪时可以将跟踪的所述受损部位的位置区域发送给客户端,这样客户端可以与服务器同步实时显示所述受损部位,以便于拍摄者观察服务器定位跟踪的受损部位。因此,所述方法的另一个实施例中,还可以包括:
S210:所述服务器将跟踪的所述受损部位的位置区域发送给所述客户端,以使客户端实时显示所述受损部位的位置区域。
另一种实施方式中,拍摄者可以交互式修改受损部位的位置和大小。例如客户端显示识别出受损部位时,若拍摄者认为识别出的受损部位的位置区域未能全部覆盖受损部位,需要进行调整,则可以再调整该位置区域的位置和大小,如长按受损部位选择该位置区域后,进行移动,调整受损部位的位置,或者拉伸受损部位位置区域的边框可以调整大小等。拍摄者在客户端调整修改受损部位的位置区域后可以生成新的受损部位,然后将新的受损部位发送给服务器。同时,服务器可以同步更新客户端修改后的新的受损部位。服务器可以根据新的受损部位对后续的视频图像进行识别处理。具体的,本申请提供的所述方法的另一种实施例中,所述方法还可以包括:
S220:接收所述客户端发送的新的受损部位,所述新的受损部位包括所述客户端基于接收的交互指令修改所述受损部位的位置区域后重新确定的受损部位;
相应的,所述基于检测出的受损部位对所述视频图像进行分类包括基于所 述新的受损部位对视频图像进行分类。
这样,拍摄者可以方便、灵活根据实际现场受损部位情况调整受损部位在视频图像中的位置区域,更准确的定位受损部位,便于服务器获取高质量的定损图像。
所述方法的另一个应用场景中,在拍摄受损部位的近景时,拍摄者可以对其从不同角度的连续拍摄。服务器一侧可以根据受损部位的跟踪,求得每帧图像的拍摄角度,进而选取一组不同角度的视频图像作为该受损部位的定损图像,从而确保定损图像能准确的反应出受损的类型和程度。因此,本申请所述方法的另一个实施例中,所述按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像包括:
S401:从指定的所述受损部位候选图像分类集合中,根据视频图像的清晰度和所述受损部位的拍摄角度,分别选取至少一张视频图像作为所述受损部位的定损图像。
比如在一些事故现场中,部件变形在某些角度会相对于其他角度非常明显,或者如果受损部件有反光或者倒影,反光或者倒影会随着拍摄角度变化而变化等,而利用本申请实施方案选取不同角度的图像作为定损图像,可以大幅减少这些因素对定损的干扰。可选的,如果客户端存在比如加速度仪和陀螺仪等传感器,也可以通过这些传感器的信号得到或者辅助计算得到拍摄角度。
具体的一个示例中,可以生成多个候选图像分类集合,但在具体选取定损图像时可以仅适用其中的一个或多个类型的候选图像分类集合,如上述所示的a类、b类和c类。选取最终需要的定损图像时,可以指定从a类和b类的候选图像分类集合中选取。在a类和b类图像中,可以根据视频图像的清晰度,分别选取多张(比如同一个部件的图像选取5张,同一个受损部位的图像选取10张)清晰度最高,并且拍摄角度不一样的图像作为定损图像。图像的清晰度可以通过对受损部位和检测出来的车辆部件部位所在的图像区域进行计算,例如 可以使用基于空间域的算子(如Gabor算子)或者基于频域的算子(如快速傅立叶变换)等方法得到。一般的,对于a类图像,需要保证受损部位中的任意区域至少在一张图像中存在。
在本申请所述方法的另一种实施场景中,如果服务器检测到受损车辆存在多个受损部位,并且受损部位的距离很近,则可以同时跟踪多个受损部位,对并每个受损部位进行分析处理,获取相应的定损图像。服务器对识别出的所有的受损部位都按照上述处理获取每个受损部位的定损图像,然后可以将所有产生的定损图像作为整个受损车辆的定损图像。图6是本申请所述方一种车辆定损图像获取方法的处理场景示意图,如图6所示,受损部位A和受损部位B距离较近,则可以同时进行跟踪处理,但受损部位C位于受损车辆的另一侧,在拍摄视频中距离受损部位A和受损部位B较远,则可以先不跟踪受损部位C,等受损部位A和受损部位B拍摄完后再单独拍摄受损部位C。因此本申请所述方法的另一个实施例中,若检测到视频图像中存在至少两个受损部位,则判断所述至少两个受损部位的距离是否符合设置的邻近条件;
若是,则同时跟踪所述至少两个受损部位,并分别产生相应的定损图像。
所述的邻近条件可以根据同一个视频图像中受损部位的个数、受损部位的大小、受损部位之间的距离等进行设置。
如果服务器检测到所述受损部位的近景图像集合、部件图像集合中的至少一个为空,或者所述近景图像集合中的视频图像未覆盖到对应受损部位的全部区域时,则可以生成视频拍摄提示消息,然后可以向对应于所述拍摄视频数据的客户端发送所述视频拍摄提示消息。
例如上述示例实施场景中,如果服务器无法得到能确定受损部位所在车辆部件的b类定损图像,则可以反馈给拍摄者,提示其拍摄包括受损部位在内的相邻多个车辆部件,从而确保得到(b)类定损图像。如果服务器无法得到a类定损图像,或者a类图像没能覆盖到受损部位的全部区域,则可以反馈给拍摄者, 提示其拍摄受损部位的近景。
本申请所述方法的其他实施例中,如果服务器检测出拍摄的视频图像清晰度不足(低于一个事先设定的阈值或者低于最近一段拍摄视频中的平均清晰度),则可以提示拍摄者缓慢移动,保证拍摄图像质量。例如反馈到移动终端APP上,提示用户拍摄图像时注意对焦,光照等影响清晰度的因素,如显示提示信息“速度过快,请缓慢移动,以保障图像质量”。
可选的,服务器可以保留产生定损图像的视频片段,以便于后续的查看和验证等。或者客户端可以在视频图像拍摄后将定损图像批量上传或者拷贝到远端服务器。
上述实施例所述车辆定损图像获取方法,提出了基于视频的车辆定损图像自动生成方案。拍摄者可以通过终端设备对受损车辆进行视频拍摄,拍摄的视频数据可以传输到服务器,由服务器再对视频数据进行分析,识别出受损部位,根据受损部位获取定损所需的不同类别的候选图像。然后可以从候选图像中产生受损车辆的定损图像。利用本申请实施方案,可以自动、快速的生成符合定损处理需求的高质量定损图像,满足定损处理需求,提高定损图像的获取效率,同时也减少了保险公司作业人员的定损图像获取和处理成本。
上述实施例从客户端与服务器交互的实施场景中描述了本申请通过受损车辆拍摄视频数据自动获取定损图像的实施方案。基于上述所述,本申请提供一种可以用于服务器一侧的车辆定损图像获取方法,图7是本申请所述方法另一个实施例的流程示意图,如图7所示,可以包括:
S10:接收终端设备上传的受损车辆的拍摄视频数据,对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
S11:基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
S12:按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
所述的终端设备可以为前述实施例所述的客户端,但本申请不排除可以为其他的终端设备,如数据库系统、第三方服务器、闪存等。在本实施例中,服务器接收客户端上传的或者拷贝来的对受损车辆进行拍摄获取的拍摄视频数据后,可以对拍摄视频数据进行检测,识别受损部位,然后根据识别出的受损部位对视频图像进行分类。进一步的通过筛选自动生成车辆的定损图像。利用本申请实施方案,可以自动、快速的生成符合定损处理需求的高质量定损图像,满足定损处理需求,提高定损图像的获取效率,便于作业人员作业。
车辆定损常常需要不同类别的图像数据,如整车的不同角度的图像、能展示出受损部件的图像、具体受损部位的近景细节图等。本申请的一个实施例中,可以将需要的定损图像相应的分为不同的类别,具体的所述方法另一个实施例中,确定出的所述候选图像分类集合具体的可以包括:
显示受损部位的近景图像集合、展示受损部位所属车辆部件的部件图像集合。
一般的,所述部件图像集合中的视频图像中包括至少一处受损部位,如上述所述的a类近景图、b类部件图、a类和b类都不满足的c类图像。
所述一种车辆定损图像获取方法的另一个实施例中,可以采用下述中的至少一种方式确定所述近景图像集合中的视频图像:
受损部位在所属视频图像中所占区域的面积比值大于第一预设比例:
受损部位的横坐标跨度与所属视频图像长度的比值大于第二预设比例,和/或,受损部位的纵坐标与所属视频图像高度的比例大于第三预设比例;
从相同受损部位的视频图像中,选择受损部位的面积降序后的前K张视频图像,或者所述面积降序后属于第四预设比例内的视频图像,K≥1。
具体的可以根据定损处理所需的受损部位近景图像的要求来确定a类图像的识别算法/分类要求等。本申请在a类图像的识别处理过程中,一种实施方式中可以通过受损部位在当前所在的视频图像中所占区域的大小(面积或区域跨 度)来识别确定。如果受损部位在视频图像中的占有较大区域(如大于一定阈值,比如长或者宽大于四分之一视频图像大小),则可以确定该视频图像为a类图像。本申请提供的另一种实施方式中,如果在对该受损部位所在的受损部件的其他已分析处理的当前帧图像中,该受损部位相对于其他相同受损部位的区域面积相对较大(处于一定比例或TOP范围内),则可以确定该当前帧图像为a类图像。
所述一种车辆定损图像获取方法另一个实施例中,还可以包括:
若检测到所述受损部位的近景图像集合、部件图像集合中的至少一个为空,或者所述近景图像集合中的视频图像未覆盖到对应受损部位的全部区域时,生成视频拍摄提示消息;
向对应于所述终端设备发送所述视频拍摄提示消息。
所述的终端设备可以为前述与服务器交互的客户端,如手机。
所述一种车辆定损图像获取方法另一个实施例中,所述方法还可以包括:
实时跟踪所述受损部位在所述拍摄视频数据中的位置区域;
以及,在所述受损部位脱离视频图像后重新进入视频图像时,基于所述受损部位的图像特征数据重新对所述受损部位的位置区域进行定位和跟踪。
重新定位和跟踪的受损部位的位置区域可以显示在服务器上。
所述一种车辆定损图像获取方法另一个实施例中,所述方法还可以包括:
将跟踪的所述受损部位的位置区域发送至所述终端设备,以使所述终端设备实时显示所述受损部位的位置区域。
拍摄者在客户端上可以实时显示识别出的受损部位,以便于用户观察和确认受损部位。这样,拍摄者可以方便、灵活根据实际现场受损部位情况调整受损部位在视频图像中的位置区域,更准确的定位受损部位,便于服务器获取高质量的定损图像。
另一种实施方式中,拍摄者可以交互式修改受损部位的位置和大小。拍摄 者在客户端调整修改识别出的受损部位的位置区域后可以生成新的受损部位,然后将新的受损部位发送给服务器。同时,服务器可以同步更新客户端修改后的新的受损部位。服务器可以根据新的受损部位对后续的视频图像进行识别处理。因此,所述一种车辆定损图像获取方法另一个实施例中,所述方法还可以包括:
接收所述终端设备发送的新的受损部位,所述新的受损部位包括所述终端设备基于接收的交互指令修改所述识别出的受损部位的位置区域后重新确定的受损部位;
相应的,所述基于检测出的受损部位对所述视频图像进行分类包括基于所述新的受损部位对视频图像进行分类。
这样,拍摄者可以方便、灵活根据实际现场受损部位情况调整受损部位在视频图像中的位置区域,更准确的定位受损部位,便于服务器获取高质量的定损图像。
在拍摄受损部位的近景时,拍摄者可以对其从不同角度的连续拍摄。服务器一侧可以根据受损部位的跟踪,求得每帧图像的拍摄角度,进而选取一组不同角度的视频图像作为该受损部位的定损图像,从而确保定损图像能准确的反应出受损的类型和程度。因此,所述一种车辆定损图像获取方法另一个实施例中,所述按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像包括:
从指定的所述受损部位候选图像分类集合中,根据视频图像的清晰度和所述受损部位的拍摄角度,分别选取至少一张视频图像作为所述受损部位的定损图像。
如果识别出受损车辆存在多个受损部位,并且受损部位的距离很近,则服务器可以同时跟踪这多个受损部位,并产生每个受损部位的定损图像。服务器对拍摄者指定的所有的受损部位都按照上述处理获取每个受损部位的定损图 像,然后可以将所有产生的定损图像作为整个受损车辆的定损图像。因此,所述一种车辆定损图像获取方法另一个实施例中,若检测到视频图像中存在至少两个受损部位,则判断所述至少两个受损部位的距离是否符合设置的邻近条件;
若是,则同时跟踪所述至少两个受损部位,并分别产生相应的定损图像。
所述的邻近条件可以根据同一个视频图像中识别出的受损部位的个数、受损部位的大小、受损部位之间的距离等进行设置。
基于前述客户端与服务器交互的实施场景中描述的通过受损车辆拍摄视频数据自动获取定损图像的实施方案,本申请还提供一种可以用于客户端一侧的车辆定损图像获取方法,图8是本申请所述方法另一个实施例的流程示意图,如图8所示,可以包括:
S20:对受损车辆进行视频拍摄,获取拍摄视频数据;
S21:将所述拍摄视频数据发送至处理终端;
S22:接收所述处理终端返回的对受损部位实时跟踪的位置区域,显示所述跟踪的位置区域,所述受损部位包括所述处理终端对所述拍摄视频数据中的视频图像进行检测识别得到。
所述的处理终端包括对所述拍摄视频数据进行处理,基于识别出的受损部位自动生成受损车辆的定损图像的终端设备,如可以为定损图像处理的远程服务器。
另一个实施例中,确定出的所述候选图像分类集合也可以包括:显示受损部位的近景图像集合、展示受损部位所属车辆部件的部件图像集合。如上述的a类图像、b类图像等。如果服务器无法得到能确定受损部位所在车辆部件的b类定损图像,服务器可以反馈给拍摄者发送视频拍摄提示消息,提示其拍摄包括受损部位在内的相邻多个车辆部件,从而确保得到b类定损图像。如果系统无法得到a类定损图像,或者a类图像没能覆盖到受损部位的全部区域,同样 可以发送给拍摄者,提示其拍摄受损部位的近景图。因此,另一种实施例中,所述方法还可以包括:
S23:接收并显示所述处理终端发送的视频拍摄提示消息,所述视频拍摄提示消息包括在所述处理终端检测到所述受损部位的近景图像集合、部件图像集合中的至少一个为空,或者所述近景图像集合中的视频图像未覆盖到对应受损部位的全部区域时生成。
如前述所述,另一种实施方式中,客户端可以实时显示服务器跟踪的受损部位的位置区域,并且可以在客户端一侧交互式修改该位置区域的位置和大小。因此所述方法的另一个实施例中,还可以包括:
S24:基于接收的交互指令修改所述受损部位的位置区域后,重新确定新的受损部位;
将所述新的受损部位发送给所述处理终端,以使所述处理终端基于所述新的受损并部位对视频图像进行分类。
上述实施例提供的车辆定损图像获取方法,拍摄者可以通过终端设备对受损车辆进行视频拍摄,拍摄的视频数据可以传输到系统的服务器,服务器再对视频数据进行分析,识别出受损部位,根据受损部位获取定损所需的不同类别的候选图像,然后可以从候选图像中产生受损车辆的定损图像。利用本申请实施方案,可以实现自动、快速的生成符合定损处理需求的高质量定损图像,满足定损处理需求,提高定损图像的获取效率,同时也减少了保险公司作业人员的定损图像获取和处理成本。
前述实施例分别从客户端与服务器交互、客户端、服务器的角度的实施场景中描述了本申请通过受损车辆拍摄视频数据自动获取定损图像的实施方案。本申请的另一种实施方式中,拍摄者在客户端在拍摄车辆视频的同时(或者拍摄完后),可以直接在客户端一侧对拍摄视频进行分析处理,并生成定损图像。具体的,图9本申请所述方法另一个实施例的流程示意图,如图9所示,所述 方法包括:
S30:接收受损车辆的拍摄视频数据;
S31:对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
S32:基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
S33:按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
具体的一个实现方式中可以由部署在客户端的应用模块组成。一般的,所述终端设备可以为具有视频拍摄功能和图像处理能力的通用或者专用设备,如手机、平板电脑等客户端。摄者使用可以客户端对受损车辆进行视频拍摄,同时对拍摄视频数据进行分析,识别受损部位,产生定损图像。
可选的,还可以包括一个服务器端,用来接收客户端生成的定损图像。客户端可以产生的定损图像实时或者异步传输至指定的服务器。因此,所述方法的另一个实施例中还可以包括:
S3301:将所述定损图像实时传输至指定的服务器;
或者,
S3302:将所述定损图像异步传输至指定的服务器。
图10是本申请所述方法另一个实施例的流程示意图,如图10所示,客户端可以将生成的定损图像立即上传给远端服务器,或者也可以在事后将定损图像批量上传或者拷贝到远端服务器。
基于前述服务器自动生成定损图像、受损部位定位跟踪等实施例的描述,本申请由客户端一侧自动生成定损图像的方法还可以包括其他的实施方式,如生成视频拍摄提示消息后直接显示在拍摄终端上、定损图像类别的具体划分和识别、分类方式,受损部位的识别、定位和跟踪等。具体的可以参照相关实施例的描述,在此不做一一赘述。
本申请提供的一种车辆定损图像获取方法,在客户端一侧可以基于受损车 辆的拍摄视频自动生成定损图像。拍摄者可以通过客户端对受损车辆进行视频拍摄,产生拍摄视频数据。然后再对拍摄视频数据进行分析,识别受损部位,获取定损所需的不同类别的候选图像。进一步可以从候选图像中产生受损车辆的定损图像。利用本申请实施方案,可以直接在客户端一侧进行视频拍摄,并自动、快速的生成符合定损处理需求的高质量定损图像,满足定损处理需求,提高定损图像的获取效率,同时也减少了保险公司作业人员的定损图像获取和处理成本。
基于上述所述的辆定损图像获取方法,本申请还提供一种车辆定损图像获取装置。所述的装置可以包括使用了本申请所述方法的系统(包括分布式系统)、软件(应用)、模块、组件、服务器、客户端等并结合必要的实施硬件的装置。基于同一创新构思,本申请提供的一种实施例中的装置如下面的实施例所述。由于装置解决问题的实现方案与方法相似,因此本申请具体的装置的实施可以参见前述方法的实施,重复之处不再赘述。以下所使用的,术语“单元”或者“模块”可以实现预定功能的软件和/或硬件的组合。尽管以下实施例所描述的装置较佳地以软件来实现,但是硬件,或者软件和硬件的组合的实现也是可能并被构想的。具体的,图11是本申请提供的一种车辆定损图像获取装置实施例的模块结构示意图,如图11所示,所述装置可以包括:
数据接收模块101,可以用于接收终端设备上传的受损车辆的拍摄视频数据;
受损部位识别模块102,可以对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
分类模块103,可以用于基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
筛选模块104,可以用于按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
上述所述的装置可以用于服务器一侧,实现对客户端上传的拍摄视频数据分析处理后获取定损图像。本申请还提供一种可以用于客户端一侧的车辆定损图像获取装置。如图12所示,图12为本申请所装置另一个实施例的模块结构示意图,具体的可以包括:
拍摄模块201,可以用于对受损车辆进行视频拍摄,获取拍摄视频数据;
通信模块202,可以用于将所述拍摄视频数据发送至处理终端;
跟踪显示模块203,可以用于接收所述处理终端返回的对受损部位实时跟踪的位置区域,以及显示所述跟踪的位置区域,所述受损部位包括所述处理终端对所述拍摄视频数据中的视频图像进行检测识别得到。
一种实施方式中,所述的跟踪显示模块203可以为包含显示屏的显示单元,拍摄者可以在显示屏中指定受损部位,同时也可以在显示屏中实施显示跟踪的受损部位的位置区域。
本申请提供的车辆定损图像获取方法可以在计算机中由处理器执行相应的程序指令来实现。具体的,本申请提供的一种车辆定损图像获取装置的另一种实施例中,所述装置可以包括处理器以及用于存储处理器可执行指令的存储器,所述处理器执行所述指令时实现:
接收受损车辆的拍摄视频数据;
对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
所述的装置可以为服务器,服务器接收客户端上传的拍摄视频数据,然后进行分析处理,包括识别受损部位、划分类别、选取图像等,得到车辆的定损图像。另一种实施方式中,所述装置也可以为客户端,客户端对受损车辆进行 视频拍摄后直接在客户端一侧进行分析处理,得到车辆的定损图像。因此,本申请所述装置的另一种实施例中,所述受损车辆的拍摄视频数据可以包括:
终端设备获取拍摄视频数据后上传的数据信息;
或者,
所述车辆定损图像获取装置对受损车辆进行视频拍摄获取的拍摄视频数据。
进一步的,在所述装置获取拍摄视频数据并直接进行分析处理获取定损图像的实施场景中,还可以将得到的定损图像发送给服务器,由服务器进行存储或进一步定损处理。因此,所述装置的另一种实施例中,若所述受损车辆的拍摄视频数据为所述车辆定损图像获取装置进行视频拍摄获取得到,则所述处理器执行所述指令时还包括:
将所述定损图像实时传输至指定的处理终端;
或者,
将所述定损图像异步传输至指定的处理终端。
基于前述实施例方法或装置自动生成定损图像、受损部位定位跟踪等实施例的描述,本申请由客户端一侧自动生成定损图像的装置还可以包括其他的实施方式,如生成视频拍摄提示消息后直接显示在终端设备上、定损图像类别的具体划分和识别、分类方式,受损部位定位和跟踪等。具体的可以参照相关实施例的描述,在此不做一一赘述。
拍摄者可以通过本申请提供的车辆定损图像获取装置,对受损车辆进行视频拍摄,产生拍摄视频数据。然后再对拍摄视频数据进行分析,获取定损所需的不同类别的候选图像。进一步可以从候选图像中产生受损车辆的定损图像。利用本申请实施方案,可以直接在客户端一侧进行视频拍摄,并自动、快速的生成符合定损处理需求的高质量定损图像,满足定损处理需求,提高定损图像的获取效率,同时也减少了保险公司作业人员的定损图像获取和处理成本。
本申请上述实施例所述的方法或装置可以通过计算机程序实现业务逻辑并记录在存储介质上,所述的存储介质可以计算机读取并执行,实现本申请实施例所描述方案的效果。因此,本申请还提供一种计算机可读存储介质,其上存储有计算机指令,所述指令被执行时可以实现以下步骤:
接收受损车辆的拍摄视频数据;
对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
本申请还提供的另一种计算机可读存储介质,其上存储有计算机指令,所述指令被执行时实现以下步骤:
对受损车辆进行视频拍摄,获取拍摄视频数据;
将所述拍摄视频数据发送至处理终端;
接收所述处理终端返回的对受损部位实时跟踪的位置区域,显示所述跟踪的位置区域,所述受损部位包括所述处理终端对所述拍摄视频数据中的视频图像进行检测识别得到。
所述计算机可读存储介质可以包括用于存储信息的物理装置,通常是将信息数字化后再以利用电、磁或者光学等方式的媒体加以存储。本实施例所述的计算机可读存储介质有可以包括:利用电能方式存储信息的装置如,各式存储器,如RAM、ROM等;利用磁能方式存储信息的装置如,硬盘、软盘、磁带、磁芯存储器、磁泡存储器、U盘;利用光学方式存储信息的装置如,CD或DVD。当然,还有其他方式的可读存储介质,例如量子存储器、石墨烯存储器等等。
上述所述的装置或方法或计算机可读存储介质可以用于获取车辆定损图像的服务器中,实现基于车辆图像视频自动获取车辆定损图像。所述的服务器可 以是单独的服务器,也可以是多台应用服务器组成的系统集群,也可以是分布式系统中的服务器。具体的,一种实施例中,所述服务器可以包括处理器以及用于存储处理器可执行指令的存储器,所述处理器执行所述指令时实现:
接收终端设备上传的受损车辆的拍摄视频数据;
对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
上述所述的装置或方法或计算机可读存储介质可以用于获取车辆定损图像的终端设备中,实现基于车辆图像视频自动获取车辆定损图像。所述的终端设备可以以服务器的方式实施,也可以为现场对受损车辆进行视频拍摄的客户端实施。图13是本申请提供的一种终端设备实施例的结构示意图,具体的,一种实施例中,所述终端上设备可以包括处理器以及用于存储处理器可执行指令的存储器,所述处理器执行所述指令时可以实现:
获取对受损车辆进行视频拍摄的拍摄视频数据;
对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
其中获取的拍摄视频数据包可以为终端设备获取拍摄视频数据后上传的数据信息,或者也可以为所述终端设备直接对受损车辆进行视频拍摄获取的拍摄视频数据。
进一步的,如果所述终端设备为视频拍摄的客户端一侧的实施方式,则所 述处理器执行所述指令时还可以实现:
将所述定损图像实时传输至指定的服务器;
或者,
将所述定损图像异步传输至指定的服务器。
拍摄者可以通过本申请提供的车辆定损图像的终端设备,对受损车辆进行视频拍摄,产生拍摄视频数据。然后再对拍摄视频数据进行分析,识别受损部位,获取定损所需的不同类别的候选图像。进一步可以从候选图像中产生受损车辆的定损图像。利用本申请实施方案,可以直接在客户端一侧进行视频拍摄,并自动、快速的生成符合定损处理需求的高质量定损图像,满足定损处理需求,提高定损图像的获取效率,同时也减少了保险公司作业人员的定损图像获取和处理成本。
尽管本申请内容中提到受损区域跟踪方式、采用CNN和RPN网络检测受损部位和车辆部件、基于受损部位的图像识别和分类等之类的数据模型构建、数据获取、交互、计算、判断等描述,但是,本申请并不局限于必须是符合行业通信标准、标准数据模型、计算机处理和存储规则或本申请实施例所描述的情况。某些行业标准或者使用自定义方式或实施例描述的实施基础上略加修改后的实施方案也可以实现上述实施例相同、等同或相近、或变形后可预料的实施效果。应用这些修改或变形后的数据获取、存储、判断、处理方式等获取的实施例,仍然可以属于本申请的可选实施方案范围之内。
在20世纪90年代,对于一个技术的改进可以很明显地区分是硬件上的改进(例如,对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。然而,随着技术的发展,当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此,不能说一个方法流程的改进就不能用硬件实体模块来实现。例如,可编程逻辑器件(Programmable Logic  Device,PLD)(例如现场可编程门阵列(Field Programmable Gate Array,FPGA))就是这样一种集成电路,其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上,而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且,如今,取代手工地制作集成电路芯片,这种编程也多半改用“逻辑编译器(logic compiler)”软件来实现,它与程序开发撰写时所用的软件编译器相类似,而要编译之前的原始代码也得用特定的编程语言来撰写,此称之为硬件描述语言(Hardware Description Language,HDL),而HDL也并非仅有一种,而是有许多种,如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等,目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚,只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中,就可以很容易得到实现该逻辑方法流程的硬件电路。
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC 625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以 将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机。具体的,计算机例如可以为个人计算机、膝上型计算机、车载人机交互设备、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。
虽然本申请提供了如实施例或流程图所述的方法操作步骤,但基于常规或者无创造性的手段可以包括更多或者更少的操作步骤。实施例中列举的步骤顺序仅仅为众多步骤执行顺序中的一种方式,不代表唯一的执行顺序。在实际中的装置或终端产品执行时,可以按照实施例或者附图所示的方法顺序执行或者并行执行(例如并行处理器或者多线程处理的环境,甚至为分布式数据处理环境)。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、产品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、产品或者设备所固有的要素。在没有更多限制的情况下,并不排除在包括所述要素的过程、方法、产品或者设备中还存在另外的相同或等同要素。
为了描述的方便,描述以上装置时以功能分为各种模块分别描述。当然,在实施本申请时可以把各模块的功能在同一个或多个软件和/或硬件中实现,也可以将实现同一功能的模块由多个子模块或子单元的组合实现等。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内部包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
本领域技术人员应明白,本申请的实施例可提供为方法、系统或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本申请,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描 述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (40)

  1. 一种车辆定损图像获取方法,所述方法包括:
    客户端获取拍摄视频数据,将所述拍摄视频数据发送至服务器;
    所述服务器对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
    所述服务器基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
    按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
  2. 一种车辆定损图像获取方法,所述方法包括:
    接收终端设备上传的受损车辆的拍摄视频数据,对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
    基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
    按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
  3. 如权利要求2所述的一种车辆定损图像获取方法,确定出的所述候选图像分类集合包括:
    显示受损部位的近景图像集合、展示受损部位所属车辆部件的部件图像集合。
  4. 如权利要求3所述的一种车辆定损图像获取方法,采用下述中的至少一种方式确定所述近景图像集合中的视频图像:
    受损部位在所属视频图像中所占区域的面积比值大于第一预设比例:
    受损部位的横坐标跨度与所属视频图像长度的比值大于第二预设比例,和/或,受损部位的纵坐标与所属视频图像高度的比例大于第三预设比例;
    从相同受损部位的视频图像中,选择受损部位的面积降序后的前K张视频图像,或者所述面积降序后属于第四预设比例内的视频图像,K≥1。
  5. 如权利要求3所述的一种车辆定损图像获取方法,还包括:
    若检测到所述受损部位的近景图像集合、部件图像集合中的至少一个为空,或者所述近景图像集合中的视频图像未覆盖到对应受损部位的全部区域时,生成视频拍摄提示消息;
    向所述终端设备发送所述视频拍摄提示消息。
  6. 如权利要求2所述的一种车辆定损图像获取方法,在识别出所述视频图像的受损部位后,所述方法还包括:
    实时跟踪所述受损部位在所述拍摄视频数据中的位置区域;
    以及,在所述受损部位脱离视频图像后重新进入视频图像时,基于所述受损部位的图像特征数据重新对所述受损部位的位置区域进行定位和跟踪。
  7. 如权利要求6所述的一种车辆定损图像获取方法,所述方法还包括:
    将跟踪的所述受损部位的位置区域发送至所述终端设备,以使所述终端设备实时显示所述受损部位的位置区域。
  8. 如权利要求7所述的一种车辆定损图像获取方法,所述方法还包括:
    接收所述终端设备发送的新的受损部位,所述新的受损部位包括所述终端设备基于接收的交互指令修改所述识别出的受损部位的位置区域后重新确定的受损部位;
    相应的,所述基于检测出的受损部位对所述视频图像进行分类包括基于所述新的受损部位对视频图像进行分类。
  9. 如权利要求2至5中任意一项所述的一种车辆定损图像获取方法,所述按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像包括:
    从指定的所述受损部位候选图像分类集合中,根据视频图像的清晰度和所述受损部位的拍摄角度,分别选取至少一张视频图像作为所述受损部位的定损图像。
  10. 如权利要求6至8中任意一项所述的一种车辆定损图像获取方法,若检测到视频图像中存在至少两个受损部位,则判断所述至少两个受损部位的距离是否符合设置的邻近条件;
    若是,则同时跟踪所述至少两个受损部位,并分别产生相应的定损图像。
  11. 一种车辆定损图像获取方法,所述方法包括:
    对受损车辆进行视频拍摄,获取拍摄视频数据;
    将所述拍摄视频数据发送至处理终端;
    接收所述处理终端返回的对受损部位实时跟踪的位置区域,显示所述跟踪的位置区域,所述受损部位包括所述处理终端对所述拍摄视频数据中的视频图像进行检测识别得到。
  12. 如权利要求11所述的一种车辆定损图像获取方法,还包括:
    接收并显示所述处理终端发送的视频拍摄提示消息,所述视频拍摄提示消息包括在所述处理终端检测到所述受损部位的近景图像集合、部件图像集合中的至少一个为空,或者所述近景图像集合中的视频图像未覆盖到对应受损部位的全部区域时生成。
  13. 如权利要求11或12所述的一种车辆定损图像获取方法,所述方法还包括:
    基于接收的交互指令修改所述受损部位的位置区域后,重新确定新的受损部位;
    将所述新的受损部位发送给所述处理终端,以使所述处理终端基于所述新的受损部位对视频图像进行分类。
  14. 一种车辆定损图像获取方法,所述方法包括:
    接收受损车辆的拍摄视频数据;
    对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
    基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
    按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
  15. 如权利要求14所述的一种车辆定损图像获取方法,确定出的所述候选 图像分类集合包括:
    显示受损部位的近景图像集合、展示受损部位所属车辆部件的部件图像集合。
  16. 如权利要求15所述的一种车辆定损图像获取方法,采用下述中的至少一种方式确定所述近景图像集合中的视频图像:
    受损部位在所属视频图像中所占区域的面积比值大于第一预设比例:
    受损部位的横坐标跨度与所属视频图像长度的比值大于第二预设比例,和/或,受损部位的纵坐标与所属视频图像高度的比例大于第三预设比例;
    从相同受损部位的视频图像中,选择受损部位的面积降序后的前K张视频图像,或者所述面积降序后属于第四预设比例内的视频图像,K≥1。
  17. 如权利要求15所述的一种车辆定损图像获取方法,还包括:
    若检测到所述受损部位的近景图像集合、部件图像集合中的至少一个为空,或者所述近景图像集合中的视频图像未覆盖到对应受损部位的全部区域时,生成视频拍摄提示消息;
    显示所述视频拍摄提示消息。
  18. 如权利要求14所述的一种车辆定损图像获取方法,在识别出所述视频图像的受损部位后,还包括:
    实时跟踪并显示所述受损部位在所述拍摄视频数据中的位置区域;
    以及,在所述受损部位脱离视频图像后重新进入视频图像时,基于所述受损部位的图像特征数据重新对所述受损部位的位置区域进行定位和跟踪。
  19. 如权利要求18所述的一种车辆定损图像获取方法,还包括:
    基于接收的交互指令修改所述识别出的受损部位的位置区域,重新确定新的受损部位;
    相应的,所述基于检测出的受损部位对所述视频图像进行分类包括基于所述新的受损部位对视频图像进行分类。
  20. 如权利要求14至17中任意一项所述的一种车辆定损图像获取方法, 所述按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像包括:
    从指定的所述受损部位候选图像分类集合中,根据视频图像的清晰度和所述受损部位的拍摄角度,分别选取至少一张视频图像作为所述受损部位的定损图像。
  21. 如权利要求18或19所述的一种车辆定损图像获取方法,若检测到视频图像中存在至少两个受损部位,则判断所述至少两个受损部位的距离是否符合设置的邻近条件;
    若是,则同时跟踪所述至少两个受损部位,并分别产生相应的定损图像。
  22. 如权利要求14所述的一种车辆定损图像获取方法,还包括:
    将所述定损图像实时传输至指定的服务器;
    或者,
    将所述定损图像异步传输至指定的服务器。
  23. 一种车辆定损图像获取装置,所述装置包括:
    数据接收模块,用于接收终端设备上传的受损车辆的拍摄视频数据;
    受损部位识别模块,对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
    分类模块,用于基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
    筛选模块,用于按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
  24. 一种车辆定损图像获取装置,所述装置包括:
    拍摄模块,用于对受损车辆进行视频拍摄,获取拍摄视频数据;
    通信模块,用于将所述拍摄视频数据发送至处理终端;
    跟踪模块,用于接收所述处理终端返回的对受损部位实时跟踪的位置区域,显示所述跟踪的位置区域,所述受损部位包括所述处理终端对所述拍摄视频数据中的视频图像进行检测识别得到。
  25. 一种车辆定损图像获取装置,包括处理器以及用于存储处理器可执行指令的存储器,所述处理器执行所述指令时实现:
    接收受损车辆的拍摄视频数据;
    对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
    基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
    按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
  26. 如权利要求25所述的一种车辆定损图像获取装置,所述受损车辆的拍摄视频数据包括:
    终端设备获取拍摄视频数据后上传的数据信息;
    或者,
    所述车辆定损图像获取装置对受损车辆进行视频拍摄获取的拍摄视频数据。
  27. 如权利要求26所述的一种车辆定损图像获取装置,所述处理器执行所述指令时,确定出的所述候选图像分类集合包括:
    显示受损部位的近景图像集合、展示受损部位所属车辆部件的部件图像集合。
  28. 如权利要求27所述的一种车辆定损图像获取装置,所述处理器执行所述指令时,采用下述中的至少一种方式确定所述近景图像集合中的视频图像:
    受损部位在所属视频图像中所占区域的面积比值大于第一预设比例:
    受损部位的横坐标跨度与所属视频图像长度的比值大于第二预设比例,和/或,受损部位的纵坐标与所属视频图像高度的比例大于第三预设比例;
    从相同受损部位的视频图像中,选择受损部位的面积降序后的前K张视频图像,或者所述面积降序后属于第四预设比例内的视频图像,K≥1。
  29. 如权利要求28所述的一种车辆定损图像获取装置,所述处理器执行所 述指令时还实现:
    若检测到所述受损部位的近景图像集合、部件图像集合中的至少一个为空,或者所述近景图像集合中的视频图像未覆盖到对应受损部位的全部区域时,生成视频拍摄提示消息;
    所述视频拍摄提示消息用于显示在所述终端设备。
  30. 如权利要求26所述的一种车辆定损图像获取装置,所述处理器执行所述指令时还实现:
    实时跟踪所述受损部位在所述拍摄视频数据中的位置区域;
    以及,在所述受损部位脱离视频图像后重新进入视频图像时,基于所述受损部位的图像特征数据重新对所述受损部位的位置区域进行定位和跟踪。
  31. 如权利要求30所述的一种车辆定损图像获取装置,若所拍摄视频数据为所述终端设备上传的数据信息,则所述处理器执行所述指令时还实现:
    将跟踪的所述受损部位的位置区域发送至所述终端设备,以使所述终端设备实时显示所述受损部位的位置区域。
  32. 如权利要求26所述的一种车辆定损图像获取装置,所述处理器执行所述指令时还实现:
    接收修改所述识别出的受损部位的位置区域后重新确定新的受损部位;
    相应的,所述基于检测出的受损部位对所述视频图像进行分类包括基于所述新的受损部位对视频图像进行分类。
  33. 如权利要求30所述的一种车辆定损图像获取装置,所述处理器执行所述指令时所述按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像包括:
    从指定的所述受损部位候选图像分类集合中,根据视频图像的清晰度和所述受损部位的拍摄角度,分别选取至少一张视频图像作为所述受损部位的定损图像。
  34. 如权利要求30至32中任意一项所述的一种车辆定损图像获取装置, 所述处理器执行所述指令时若检测到视频图像中存在至少两个受损部位,则判断所述至少两个受损部位的距离是否符合设置的邻近条件;
    若是,则同时跟踪所述至少两个受损部位,并分别产生相应的定损图像。
  35. 如权利要求26所述的一种车辆定损图像获取装置,若所述受损车辆的拍摄视频数据为所述车辆定损图像获取装置进行视频拍摄获取得到,则所述处理器执行所述指令时还包括:
    将所述定损图像实时传输至指定的处理终端;
    或者,
    将所述定损图像异步传输至指定的处理终端。
  36. 一种计算机可读存储介质,其上存储有计算机指令,所述指令被执行时实现以下步骤:
    接收受损车辆的拍摄视频数据;
    对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
    基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
    按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
  37. 一种计算机可读存储介质,其上存储有计算机指令,所述指令被执行时实现以下步骤:
    对受损车辆进行视频拍摄,获取拍摄视频数据;
    将所述拍摄视频数据发送至处理终端;
    接收所述处理终端返回的对受损部位实时跟踪的位置区域,显示所述跟踪的位置区域,所述受损部位包括所述处理终端对所述拍摄视频数据中的视频图像进行检测识别得到。
  38. 一种服务器,包括处理器以及用于存储处理器可执行指令的存储器,所述处理器执行所述指令时实现:
    接收终端设备上传的受损车辆的拍摄视频数据;
    对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
    基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
    按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
  39. 一种终端设备,包括处理器以及用于存储处理器可执行指令的存储器,所述处理器执行所述指令时实现:
    获取对受损车辆进行视频拍摄的拍摄视频数据;
    对所述拍摄视频数据中的视频图像进行检测,识别所述视频图像中的受损部位;
    基于检测出的受损部位对所述视频图像进行分类,确定所述受损部位的候选图像分类集合;
    按照预设筛选条件从所述候选图像分类集合中选出车辆的定损图像。
  40. 如权利要求39所述的一种终端设备,所述处理器执行所述指令时还实现:
    通过所述数据通信模块将所述定损图像实时传输至指定的服务器;
    或者,
    将所述定损图像异步传输至指定的服务器。
PCT/CN2018/084760 2017-04-28 2018-04-27 车辆定损图像获取方法、装置、服务器和终端设备 WO2018196837A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2019558552A JP6905081B2 (ja) 2017-04-28 2018-04-27 車両損失査定画像を取得するための方法および装置、サーバ、ならびに端末デバイス
EP18791520.2A EP3605386A4 (en) 2017-04-28 2018-04-27 METHOD AND APPARATUS FOR OBTAINING VEHICLE LOSS EVALUATION IMAGE, SERVER AND TERMINAL DEVICE
KR1020197033366A KR20190139262A (ko) 2017-04-28 2018-04-27 차량 손실 평가 이미지를 획득하기 위한 방법과 장치, 서버 및 단말기 디바이스
SG11201909740R SG11201909740RA (en) 2017-04-28 2018-04-27 Method and apparatus for obtaining vehicle loss assessment image, server and terminal device
US16/655,001 US11151384B2 (en) 2017-04-28 2019-10-16 Method and apparatus for obtaining vehicle loss assessment image, server and terminal device
PH12019502401A PH12019502401A1 (en) 2017-04-28 2019-10-23 Method and apparatus for obtaining vehicle loss assessment image, server and terminal device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710294010.4 2017-04-28
CN201710294010.4A CN107194323B (zh) 2017-04-28 2017-04-28 车辆定损图像获取方法、装置、服务器和终端设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/655,001 Continuation US11151384B2 (en) 2017-04-28 2019-10-16 Method and apparatus for obtaining vehicle loss assessment image, server and terminal device

Publications (1)

Publication Number Publication Date
WO2018196837A1 true WO2018196837A1 (zh) 2018-11-01

Family

ID=59872897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/084760 WO2018196837A1 (zh) 2017-04-28 2018-04-27 车辆定损图像获取方法、装置、服务器和终端设备

Country Status (9)

Country Link
US (1) US11151384B2 (zh)
EP (1) EP3605386A4 (zh)
JP (1) JP6905081B2 (zh)
KR (1) KR20190139262A (zh)
CN (2) CN107194323B (zh)
PH (1) PH12019502401A1 (zh)
SG (1) SG11201909740RA (zh)
TW (1) TW201839666A (zh)
WO (1) WO2018196837A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033386A (zh) * 2019-03-07 2019-07-19 阿里巴巴集团控股有限公司 车辆事故的鉴定方法及装置、电子设备
CN110473418A (zh) * 2019-07-25 2019-11-19 平安科技(深圳)有限公司 危险路段识别方法、装置、服务器及存储介质
CN110688513A (zh) * 2019-08-15 2020-01-14 平安科技(深圳)有限公司 基于视频的农作物查勘方法、装置及计算机设备
US20200065632A1 (en) * 2018-08-22 2020-02-27 Alibaba Group Holding Limited Image processing method and apparatus
CN112465018A (zh) * 2020-11-26 2021-03-09 深源恒际科技有限公司 一种基于深度学习的车辆视频定损系统的智能截图方法及系统
JP2022537857A (ja) * 2018-12-31 2022-08-31 アジャイルソーダ インコーポレイテッド ディープラーニングに基づいた自動車部位別の破損程度の自動判定システムおよび方法
JP7356941B2 (ja) 2020-03-26 2023-10-05 株式会社奥村組 管渠損傷特定装置、管渠損傷特定方法および管渠損傷特定プログラム

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194323B (zh) 2017-04-28 2020-07-03 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备
CN107610091A (zh) * 2017-07-31 2018-01-19 阿里巴巴集团控股有限公司 车险图像处理方法、装置、服务器及系统
CN107766805A (zh) * 2017-09-29 2018-03-06 阿里巴巴集团控股有限公司 提升车辆定损图像识别结果的方法、装置及服务器
CN109753985A (zh) * 2017-11-07 2019-05-14 北京京东尚科信息技术有限公司 视频分类方法及装置
CN108090838B (zh) * 2017-11-21 2020-09-29 阿里巴巴集团控股有限公司 识别车辆受损部件的方法、装置、服务器、客户端及系统
CN108038459A (zh) * 2017-12-20 2018-05-15 深圳先进技术研究院 一种水下生物的检测识别方法、终端设备及存储介质
CN108647563A (zh) * 2018-03-27 2018-10-12 阿里巴巴集团控股有限公司 一种车辆定损的方法、装置及设备
CN108921811B (zh) 2018-04-03 2020-06-30 阿里巴巴集团控股有限公司 检测物品损伤的方法和装置、物品损伤检测器
CN113179368B (zh) * 2018-05-08 2023-10-27 创新先进技术有限公司 一种车辆定损的数据处理方法、装置、处理设备及客户端
CN108665373B (zh) * 2018-05-08 2020-09-18 阿里巴巴集团控股有限公司 一种车辆定损的交互处理方法、装置、处理设备及客户端
CN108682010A (zh) * 2018-05-08 2018-10-19 阿里巴巴集团控股有限公司 车辆损伤识别的处理方法、处理设备、客户端及服务器
CN108647712A (zh) * 2018-05-08 2018-10-12 阿里巴巴集团控股有限公司 车辆损伤识别的处理方法、处理设备、客户端及服务器
CN110634120B (zh) * 2018-06-05 2022-06-03 杭州海康威视数字技术股份有限公司 一种车辆损伤判别方法及装置
CN110609877B (zh) * 2018-06-14 2023-04-18 百度在线网络技术(北京)有限公司 一种图片采集的方法、装置、设备和计算机存储介质
CN111666832B (zh) * 2018-07-27 2023-10-31 创新先进技术有限公司 一种检测方法及装置、一种计算设备及存储介质
CN109034264B (zh) * 2018-08-15 2021-11-19 云南大学 交通事故严重性预测csp-cnn模型及其建模方法
CN108989684A (zh) * 2018-08-23 2018-12-11 阿里巴巴集团控股有限公司 控制拍摄距离的方法和装置
CN110569695B (zh) * 2018-08-31 2021-07-09 创新先进技术有限公司 基于定损图像判定模型的图像处理方法和装置
CN110569694A (zh) * 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 车辆的部件检测方法、装置及设备
CN110570316A (zh) 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 训练损伤识别模型的方法及装置
CN110569697A (zh) * 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 车辆的部件检测方法、装置及设备
CN110567728B (zh) * 2018-09-03 2021-08-20 创新先进技术有限公司 用户拍摄意图的识别方法、装置及设备
CN110569864A (zh) * 2018-09-04 2019-12-13 阿里巴巴集团控股有限公司 基于gan网络的车损图像生成方法和装置
CN109359542A (zh) * 2018-09-18 2019-02-19 平安科技(深圳)有限公司 基于神经网络的车辆损伤级别的确定方法及终端设备
CN110569700B (zh) * 2018-09-26 2020-11-03 创新先进技术有限公司 优化损伤识别结果的方法及装置
CN109389169A (zh) * 2018-10-08 2019-02-26 百度在线网络技术(北京)有限公司 用于处理图像的方法和装置
CN109615649A (zh) * 2018-10-31 2019-04-12 阿里巴巴集团控股有限公司 一种图像标注方法、装置及系统
CN109447071A (zh) * 2018-11-01 2019-03-08 博微太赫兹信息科技有限公司 一种基于fpga和深度学习的毫米波成像危险物品检测方法
CN110033608B (zh) * 2018-12-03 2020-12-11 创新先进技术有限公司 车辆损伤检测的处理方法、装置、设备、服务器和系统
CN109657599B (zh) * 2018-12-13 2023-08-01 深源恒际科技有限公司 距离自适应的车辆外观部件的图片识别方法
CN109784171A (zh) * 2018-12-14 2019-05-21 平安科技(深圳)有限公司 车辆定损图像筛选方法、装置、可读存储介质及服务器
CN110569702B (zh) * 2019-02-14 2021-05-14 创新先进技术有限公司 视频流的处理方法和装置
CN110287768A (zh) * 2019-05-06 2019-09-27 浙江君嘉智享网络科技有限公司 图像智能识别车辆定损方法
CN110363238A (zh) * 2019-07-03 2019-10-22 中科软科技股份有限公司 智能车辆定损方法、系统、电子设备及存储介质
CN110969183B (zh) * 2019-09-20 2023-11-21 北京方位捷讯科技有限公司 一种根据图像数据确定目标对象受损程度的方法及系统
CN113038018B (zh) * 2019-10-30 2022-06-28 支付宝(杭州)信息技术有限公司 辅助用户拍摄车辆视频的方法及装置
WO2021136947A1 (en) 2020-01-03 2021-07-08 Tractable Ltd Vehicle damage state determination method
CN111612104B (zh) * 2020-06-30 2021-04-13 爱保科技有限公司 车辆定损图像获取方法、装置、介质和电子设备
CN112541096B (zh) * 2020-07-27 2023-01-24 中咨数据有限公司 一种用于智慧城市的视频监控方法
WO2022047736A1 (zh) * 2020-09-04 2022-03-10 江苏前沿交通研究院有限公司 一种基于卷积神经网络的损伤检测方法
CN112492105B (zh) * 2020-11-26 2022-04-15 深源恒际科技有限公司 一种基于视频的车辆外观部件自助定损采集方法及系统
CN112712498A (zh) * 2020-12-25 2021-04-27 北京百度网讯科技有限公司 移动终端执行的车辆定损方法、装置、移动终端、介质
CN113033372B (zh) * 2021-03-19 2023-08-18 北京百度网讯科技有限公司 车辆定损方法、装置、电子设备及计算机可读存储介质
EP4309124A1 (en) * 2021-04-21 2024-01-24 Siemens Mobility GmbH Automated selection and semantic connection of images
CN113033517B (zh) * 2021-05-25 2021-08-10 爱保科技有限公司 车辆定损图像获取方法、装置和存储介质
CN113361426A (zh) * 2021-06-11 2021-09-07 爱保科技有限公司 车辆定损图像获取方法、介质、装置和电子设备
CN113361424A (zh) * 2021-06-11 2021-09-07 爱保科技有限公司 一种车辆智能定损图像获取方法、装置、介质和电子设备
US20230125477A1 (en) * 2021-10-26 2023-04-27 Nvidia Corporation Defect detection using one or more neural networks
WO2023083182A1 (en) * 2021-11-09 2023-05-19 Alpha Ai Technology Limited A system for assessing a damage condition of a vehicle and a platform for facilitating repairing or maintenance services of a vehicle

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0981739A (ja) * 1995-09-12 1997-03-28 Toshiba Corp 損害額算出システム及び損傷位置検出装置
JP2001188906A (ja) * 1999-12-28 2001-07-10 Hitachi Ltd 画像自動分類方法及び画像自動分類装置
US7546219B2 (en) * 2005-08-31 2009-06-09 The Boeing Company Automated damage assessment, report, and disposition
US8379914B2 (en) 2008-01-18 2013-02-19 Mitek Systems, Inc. Systems and methods for mobile image capture and remittance processing
US20130297353A1 (en) 2008-01-18 2013-11-07 Mitek Systems Systems and methods for filing insurance claims using mobile imaging
CN101739611A (zh) * 2009-12-08 2010-06-16 上海华平信息技术股份有限公司 一种高清远程协同车辆定损系统及方法
US20130262156A1 (en) 2010-11-18 2013-10-03 Davidshield L.I.A. (2000) Ltd. Automated reimbursement interactions
WO2013093932A2 (en) * 2011-09-29 2013-06-27 Tata Consultancy Services Limited Damage assessment of an object
US8510196B1 (en) * 2012-08-16 2013-08-13 Allstate Insurance Company Feedback loop in mobile damage assessment and claims processing
US8712893B1 (en) * 2012-08-16 2014-04-29 Allstate Insurance Company Enhanced claims damage estimation using aggregate display
US20140114692A1 (en) * 2012-10-23 2014-04-24 InnovaPad, LP System for Integrating First Responder and Insurance Information
FR3007172B1 (fr) * 2013-06-12 2020-12-18 Renault Sas Procede et systeme d'identification d'un degat cause a un vehicule
US10748216B2 (en) 2013-10-15 2020-08-18 Audatex North America, Inc. Mobile system for generating a damaged vehicle insurance estimate
KR20150112535A (ko) 2014-03-28 2015-10-07 한국전자통신연구원 비디오 대표 이미지 관리 장치 및 방법
US10423982B2 (en) 2014-05-19 2019-09-24 Allstate Insurance Company Content output systems using vehicle-based data
CN104268783B (zh) * 2014-05-30 2018-10-26 翱特信息系统(中国)有限公司 车辆定损估价的方法、装置和终端设备
KR101713387B1 (ko) 2014-12-29 2017-03-22 주식회사 일도엔지니어링 사진 자동 분류 및 저장 시스템 및 그 방법
KR101762437B1 (ko) 2015-05-14 2017-07-28 (주)옥천당 관능성이 개선된 커피 원두의 제조 방법
KR20160134401A (ko) * 2015-05-15 2016-11-23 (주)플래닛텍 자동차의 수리견적 자동산출시스템 및 그 방법
US10529028B1 (en) 2015-06-26 2020-01-07 State Farm Mutual Automobile Insurance Company Systems and methods for enhanced situation visualization
CN106407984B (zh) * 2015-07-31 2020-09-11 腾讯科技(深圳)有限公司 目标对象识别方法及装置
GB201517462D0 (en) * 2015-10-02 2015-11-18 Tractable Ltd Semi-automatic labelling of datasets
CN105678622A (zh) * 2016-01-07 2016-06-15 平安科技(深圳)有限公司 车险理赔照片的分析方法及系统
US10692050B2 (en) * 2016-04-06 2020-06-23 American International Group, Inc. Automatic assessment of damage and repair costs in vehicles
CN105956667B (zh) * 2016-04-14 2018-09-25 平安科技(深圳)有限公司 车险定损理赔审核方法及系统
US9922471B2 (en) 2016-05-17 2018-03-20 International Business Machines Corporation Vehicle accident reporting system
CN106021548A (zh) * 2016-05-27 2016-10-12 大连楼兰科技股份有限公司 基于分布式人工智能图像识别的远程定损方法及系统
CN106127747B (zh) * 2016-06-17 2018-10-16 史方 基于深度学习的汽车表面损伤分类方法及装置
CN106251421A (zh) * 2016-07-25 2016-12-21 深圳市永兴元科技有限公司 基于移动终端的车辆定损方法、装置及系统
CN106296118A (zh) * 2016-08-03 2017-01-04 深圳市永兴元科技有限公司 基于图像识别的车辆定损方法及装置
US10902525B2 (en) 2016-09-21 2021-01-26 Allstate Insurance Company Enhanced image capture and analysis of damaged tangible objects
CN106600422A (zh) * 2016-11-24 2017-04-26 中国平安财产保险股份有限公司 一种车险智能定损方法和系统
US10424132B2 (en) * 2017-02-10 2019-09-24 Hitachi, Ltd. Vehicle component failure prevention
CN107358596B (zh) 2017-04-11 2020-09-18 阿里巴巴集团控股有限公司 一种基于图像的车辆定损方法、装置、电子设备及系统
US10470023B2 (en) 2018-01-16 2019-11-05 Florey Insurance Agency, Inc. Emergency claim response unit
US10997413B2 (en) * 2018-03-23 2021-05-04 NthGen Software Inc. Method and system for obtaining vehicle target views from a video stream

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719188A (zh) * 2016-01-22 2016-06-29 平安科技(深圳)有限公司 基于多张图片一致性实现保险理赔反欺诈的方法及服务器
CN106600421A (zh) * 2016-11-21 2017-04-26 中国平安财产保险股份有限公司 一种基于图片识别的车险智能定损方法及系统
CN107194323A (zh) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3605386A4 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200065632A1 (en) * 2018-08-22 2020-02-27 Alibaba Group Holding Limited Image processing method and apparatus
US10984293B2 (en) 2018-08-22 2021-04-20 Advanced New Technologies Co., Ltd. Image processing method and apparatus
WO2020041399A1 (en) * 2018-08-22 2020-02-27 Alibaba Group Holding Limited Image processing method and apparatus
JP7277997B2 (ja) 2018-12-31 2023-05-19 アジャイルソーダ インコーポレイテッド ディープラーニングに基づいた自動車部位別の破損程度の自動判定システムおよび方法
JP2022537857A (ja) * 2018-12-31 2022-08-31 アジャイルソーダ インコーポレイテッド ディープラーニングに基づいた自動車部位別の破損程度の自動判定システムおよび方法
CN110033386B (zh) * 2019-03-07 2020-10-02 阿里巴巴集团控股有限公司 车辆事故的鉴定方法及装置、电子设备
CN110033386A (zh) * 2019-03-07 2019-07-19 阿里巴巴集团控股有限公司 车辆事故的鉴定方法及装置、电子设备
CN110473418A (zh) * 2019-07-25 2019-11-19 平安科技(深圳)有限公司 危险路段识别方法、装置、服务器及存储介质
CN110688513A (zh) * 2019-08-15 2020-01-14 平安科技(深圳)有限公司 基于视频的农作物查勘方法、装置及计算机设备
CN110688513B (zh) * 2019-08-15 2023-08-18 平安科技(深圳)有限公司 基于视频的农作物查勘方法、装置及计算机设备
JP7356941B2 (ja) 2020-03-26 2023-10-05 株式会社奥村組 管渠損傷特定装置、管渠損傷特定方法および管渠損傷特定プログラム
CN112465018A (zh) * 2020-11-26 2021-03-09 深源恒际科技有限公司 一种基于深度学习的车辆视频定损系统的智能截图方法及系统
CN112465018B (zh) * 2020-11-26 2024-02-02 深源恒际科技有限公司 一种基于深度学习的车辆视频定损系统的智能截图方法及系统

Also Published As

Publication number Publication date
US20200050867A1 (en) 2020-02-13
CN107194323B (zh) 2020-07-03
SG11201909740RA (en) 2019-11-28
JP2020518078A (ja) 2020-06-18
CN111914692B (zh) 2023-07-14
CN107194323A (zh) 2017-09-22
JP6905081B2 (ja) 2021-07-21
US11151384B2 (en) 2021-10-19
KR20190139262A (ko) 2019-12-17
EP3605386A1 (en) 2020-02-05
CN111914692A (zh) 2020-11-10
PH12019502401A1 (en) 2020-12-07
TW201839666A (zh) 2018-11-01
EP3605386A4 (en) 2020-04-01

Similar Documents

Publication Publication Date Title
WO2018196837A1 (zh) 车辆定损图像获取方法、装置、服务器和终端设备
WO2018196815A1 (zh) 车辆定损图像获取方法、装置、服务器和终端设备
EP3457683B1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
US10440276B2 (en) Generating image previews based on capture information
WO2019214313A1 (zh) 一种车辆定损的交互处理方法、装置、处理设备及客户端
WO2020073310A1 (en) Method and apparatus for context-embedding and region-based object detection
CN108875456B (zh) 目标检测方法、目标检测装置和计算机可读存储介质
WO2020001219A1 (zh) 图像处理方法和装置、存储介质、电子设备
Wang et al. Mask-RCNN based people detection using a top-view fisheye camera
JP2013206458A (ja) 画像における外観及びコンテキストに基づく物体分類
CN114267041B (zh) 场景中对象的识别方法及装置
CN108875488B (zh) 对象跟踪方法、对象跟踪装置以及计算机可读存储介质
CN111523402B (zh) 一种视频处理方法、移动终端及可读存储介质
US20230098829A1 (en) Image Processing System for Extending a Range for Image Analytics
CN112965602A (zh) 一种基于手势的人机交互方法及设备
US20230098110A1 (en) System and method to improve object detection accuracy by focus bracketing
US20240037985A1 (en) Cascaded detection of facial attributes
CN115543161B (zh) 一种适用于白板一体机的抠图方法及装置
Kerdvibulvech Hybrid model of human hand motion for cybernetics application
TW202411949A (zh) 臉部屬性的串級偵測
KR20210067710A (ko) 실시간 객체 검출 방법 및 장치
JP2016207106A (ja) 物体検出における誤検出低減方法および装置
CN113723152A (zh) 图像处理方法、装置以及电子设备
KR20210046550A (ko) 심층 신경회로망을 이용한 이동궤적 분류 장치 및 그 방법

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18791520

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019558552

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018791520

Country of ref document: EP

Effective date: 20191025

ENP Entry into the national phase

Ref document number: 20197033366

Country of ref document: KR

Kind code of ref document: A