WO2018196837A1 - Vehicle fixed-loss image acquisition method, apparatus, server and terminal device - Google Patents
Vehicle fixed-loss image acquisition method, apparatus, server and terminal device
- Publication number: WO2018196837A1 (PCT application PCT/CN2018/084760)
- Authority: WIPO (PCT)
- Prior art keywords: image, damaged, vehicle, video, damaged portion
- Prior art date
Classifications
- G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI] (image preprocessing)
- G06F16/5838: Still-image retrieval using metadata automatically derived from the content, using colour
- G06F18/24: Pattern recognition; classification techniques
- G06Q40/08: Insurance
- G06Q50/40
- G06T7/70: Determining position or orientation of objects or cameras
- G06V10/82: Image or video recognition using pattern recognition or machine learning, using neural networks
- G06V20/10: Scenes; terrestrial scenes
- G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes
- G06T2207/10016: Image acquisition modality: video; image sequence
- G06T2207/30252: Subject of image: vehicle exterior; vicinity of vehicle
Definitions
- The present application belongs to the technical field of computer image data processing, and in particular relates to a method, apparatus, server and terminal device for acquiring vehicle fixed-loss images.
- After a vehicle traffic accident occurs, the insurance company needs a number of fixed-loss images in order to assess the damage to the insured vehicle and to archive the claim data.
- At present, vehicle fixed-loss images are usually obtained by an operator photographing the accident scene, after which the vehicle is subjected to fixed-loss processing according to the photographs taken on the spot.
- Fixed-loss images of a vehicle need to clearly show the specific damaged parts of the vehicle, the damaged components, the type of damage, the degree of damage, and so on. This usually requires the photographer to have professional knowledge of vehicle damage assessment in order to capture the images required for fixed-loss processing, which obviously entails relatively large labor costs for training and damage-assessment personnel. Especially in cases where the vehicle needs to be evacuated or moved as soon as possible after a traffic accident, it takes a long time for the insurance company operator to reach the scene of the accident.
- Moreover, fixed-loss images obtained by the owner of the vehicle often fail to meet the requirements of fixed-loss image processing because the owner lacks professional knowledge.
- In addition, images captured by an operator at the scene often need to be exported from the capturing device and then screened manually to determine the required fixed-loss images, which again consumes considerable manpower and time, thereby reducing the acquisition efficiency of the fixed-loss images required for final fixed-loss processing.
- In summary, when existing insurance company operators or vehicle owners take photographs at the scene to obtain fixed-loss images, professional vehicle damage-assessment knowledge is required, the manpower and time costs are large, and the efficiency of obtaining fixed-loss images that meet the requirements of fixed-loss processing remains low.
- The purpose of the present application is to provide a method, apparatus, server and terminal device for acquiring vehicle fixed-loss images, which can automatically and quickly generate, while the photographer captures video of the damaged portion of the damaged vehicle, high-quality fixed-loss images that satisfy the requirements of fixed-loss processing, improving the acquisition efficiency of fixed-loss images and facilitating the operator's work.
- The method, apparatus, server and terminal device for acquiring vehicle fixed-loss images provided by the present application are implemented as follows:
- a method for acquiring a vehicle damage image comprising:
- the client acquires the captured video data, and sends the captured video data to the server;
- the server detects a video image in the captured video data, and identifies a damaged portion in the video image;
- the server classifies the video image based on the detected damaged portion, and determines a candidate image classification set of the damaged portion;
- a fixed loss image of the vehicle is selected from the candidate image classification set according to a preset screening condition.
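The four steps above (capture and upload, damage detection, classification, screening) can be sketched in code. Everything in this sketch is an illustrative assumption for exposition (the function names, data structures and callbacks are hypothetical, not part of the patent):

```python
# Illustrative sketch of the S1-S4 pipeline described above.
# All function names and structures are hypothetical.

def acquire_fixed_loss_images(frames, detect_damage, classify_frame, meets_screening):
    """frames: iterable of video images uploaded by the client (S1).
    detect_damage(frame) -> list of damaged-portion regions (S2).
    classify_frame(frame, regions) -> category label, e.g. 'a' or 'b' (S3).
    meets_screening(frame, regions) -> bool, the preset screening condition (S4)."""
    candidate_sets = {}  # category -> list of (frame, regions)
    for frame in frames:
        regions = detect_damage(frame)             # S2: identify damaged portions
        if not regions:
            continue                               # no damage: not a candidate
        category = classify_frame(frame, regions)  # S3: classify by damaged portion
        candidate_sets.setdefault(category, []).append((frame, regions))
    # S4: select fixed-loss images from each candidate classification set
    return {cat: [f for f, r in items if meets_screening(f, r)]
            for cat, items in candidate_sets.items()}
```

In a real deployment the three callbacks would be backed by the detection and classification models discussed later in the description; here they are left abstract.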
- a vehicle fixed-loss image acquiring device comprising:
- a data receiving module configured to receive captured video data of a damaged vehicle uploaded by a terminal device;
- a damaged portion identification module configured to detect video images in the captured video data and identify a damaged portion in the video images;
- a classification module configured to classify the video images based on the detected damaged portion and determine a candidate image classification set of the damaged portion;
- a screening module configured to select fixed-loss images of the vehicle from the candidate image classification set according to a preset screening condition.
- a vehicle fixed-loss image acquiring device comprising:
- a shooting module configured to perform video capture of a damaged vehicle to acquire captured video data;
- a communication module configured to send the captured video data to a processing terminal;
- a tracking module configured to receive a location area of the damaged portion returned by the processing terminal for real-time tracking, and to display the tracked location area, where the damaged portion is obtained by the processing terminal through detection of the video images in the captured video data.
- A vehicle fixed-loss image acquisition device comprising a processor and a memory for storing processor-executable instructions, wherein when the processor executes the instructions, the following steps are implemented:
- a fixed loss image of the vehicle is selected from the candidate image classification set according to a preset screening condition.
- a computer readable storage medium having stored thereon computer instructions that, when executed, implement the following steps:
- a fixed loss image of the vehicle is selected from the candidate image classification set according to a preset screening condition.
- a server comprising a processor and a memory for storing processor-executable instructions, the processor implementing the instructions to:
- a fixed loss image of the vehicle is selected from the candidate image classification set according to a preset screening condition.
- A terminal device comprising a processor and a memory for storing processor-executable instructions, wherein when the processor executes the instructions, the following steps are implemented:
- a fixed loss image of the vehicle is selected from the candidate image classification set according to a preset screening condition.
- The present application provides a method, apparatus, server and terminal device for acquiring vehicle fixed-loss images, and proposes a video-based scheme for automatically generating vehicle fixed-loss images.
- The photographer can capture video of the damaged vehicle through a terminal device, and the captured video data can be transmitted to the server of the system; the server analyzes the video data, identifies the damaged portion, obtains candidate images of the required categories according to the damaged portion, and can then generate the fixed-loss images of the damaged vehicle from the candidate images.
- In this way, high-quality fixed-loss images that meet the requirements of fixed-loss processing can be generated automatically and quickly, the acquisition efficiency of fixed-loss images is improved, and the acquisition and processing costs of fixed-loss images for insurance company operators are reduced.
- FIG. 1 is a schematic flow chart of an embodiment of a method for acquiring a vehicle fixed-loss image according to the present application;
- FIG. 2 is a schematic structural diagram of a model for identifying a damaged portion in a video image constructed by the method of the present application;
- FIG. 3 is a schematic diagram of an implementation scenario of identifying a damaged portion using a damage detection model according to the method of the present application;
- FIG. 4 is a schematic diagram of determining a close-up image based on the identified damaged portion in one embodiment of the present application;
- FIG. 5 is a schematic diagram of a model structure for identifying damaged components in a video image constructed by the method of the present application;
- FIG. 6 is a schematic diagram of a processing scenario of a method for acquiring a vehicle fixed-loss image according to the method of the present application;
- FIG. 7 is a schematic flow chart of another embodiment of the method described in the present application;
- FIG. 10 is a schematic flow chart of another embodiment of the method according to the present application;
- FIG. 11 is a block diagram showing the module structure of an embodiment of a vehicle fixed-loss image acquisition device provided by the present application;
- FIG. 12 is a schematic structural diagram of another embodiment of an apparatus for acquiring a vehicle fixed-loss image according to the present application;
- FIG. 13 is a schematic structural diagram of an embodiment of a terminal device provided by the present application.
- FIG. 1 is a schematic flow chart of an embodiment of a method for acquiring a vehicle damage image according to the present application.
- Although the present application provides method operation steps or device structures as shown in the following embodiments or figures, the method or device may, based on conventional practice or without inventive labor, include more steps or module units, or fewer steps or module units after partial merging.
- The execution order of the steps or the module structure of the device is not limited to the execution order or module structure shown in the embodiments or figures of the present application.
- When the method or module structure is applied in an actual device, server or terminal product, it may be executed sequentially or in parallel according to the method or module structure shown in the embodiments or figures (for example, in a parallel-processor or multi-threaded processing environment, or even in a distributed processing or server-cluster implementation environment).
- The following embodiment describes a specific scenario in which a photographer performs video capture with a mobile terminal and a server processes the captured video data to obtain fixed-loss images.
- the photographer can be an insurance company operator, and the photographer holds a mobile terminal to perform video shooting on the damaged vehicle.
- the mobile terminal may include a mobile phone, a tablet computer, or other general purpose or special purpose device having a video capturing function and a data communication function.
- The mobile terminal and the server may be deployed with corresponding application modules (such as a vehicle fixed-loss APP installed on the mobile terminal) to implement the corresponding data processing.
- However, the essence of the solution may also be applied to other implementation scenarios for acquiring fixed-loss images of a vehicle; for example, the photographer may be the owner of the vehicle, or the video data may be processed and the fixed-loss image obtained directly on the mobile terminal side.
- FIG. 1 is an embodiment of a method for acquiring a vehicle damage image provided by the present application, where the method may include:
- S1 The client acquires the captured video data, and sends the captured video data to the server.
- The client may include a general-purpose or special-purpose device having video capture and data communication functions, such as a mobile phone, a tablet computer, or another terminal device.
- The client may also include a fixed computer device (such as a PC) having a data communication function together with a movable video capture device connected to it; such a combination is also regarded as a client in this embodiment.
- The server may include a processing device that analyzes frame images in the video data and determines the fixed-loss images.
- the server may include a logical unit device having image data processing and data communication functions, such as a server of the application scenario of the present embodiment.
- In some implementation scenarios, when the client is a first terminal device, the server may be a second terminal device that performs data communication with the first terminal device.
- For ease of description, the side that captures video of the vehicle and generates the captured video data is referred to as the client, and the side that processes the captured video data to generate the fixed-loss images is referred to as the server.
- This application does not exclude implementations in which the client and the server described in some embodiments are physically the same terminal device.
- video data captured by the client may be transmitted to the server in real time for the server to process quickly.
- Alternatively, the video may be transmitted to the server after the client finishes capturing. If the mobile terminal used by the photographer currently has no network connection, video capture can be performed first, and the data transmitted after connecting to mobile cellular data, a WLAN (Wireless Local Area Network) or a proprietary network. Of course, even when the client can perform normal data communication with the server, the captured video data may still be transmitted to the server asynchronously.
- the captured video data obtained by the photographer to capture the damaged part of the vehicle may be a video clip or multiple video clips.
- For example, multiple segments of video data may be generated by shooting the same damaged portion at different angles and distances, or different damaged portions may be photographed separately to obtain captured video data for each damaged portion.
- A complete shot can also be taken around each damaged portion of the damaged vehicle to obtain a single, relatively long video clip.
- S2 The server detects video images in the captured video data, and identifies a damaged portion in the video images.
- the server may perform image detection on the video image in the captured video data to identify the damaged portion of the vehicle in the processed video image.
- the identified damaged portion occupies an area on the video image and has corresponding area information, such as the location and size of the damaged area.
- In this embodiment, the damaged portion in the video image can be identified by a constructed damage detection model, which uses a deep neural network to detect the damaged portion of the vehicle and its area in the image.
- the damage detection model may be constructed based on a Convolutional Neural Network (CNN) and a Region Proposal Network (RPN) in combination with a pooling layer, a fully connected layer, and the like.
- The damage detection model for identifying the damaged portion contained in a video image may be constructed in advance using a designed machine learning algorithm. After the damage detection model is trained with samples, one or more damaged portions in a video image can be identified.
- Specifically, the damage detection model may be constructed using a deep neural network, or a variant of such a network, trained with samples. In one embodiment, it may be based on a convolutional neural network and a region proposal network, combined with other layers such as a fully-connected layer (FC), a pooling layer, a data normalization layer, and a Softmax probability output layer.
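The layer composition named here (convolutional layers, pooling, fully-connected scores, Softmax probabilities) can be illustrated at the level of shapes and score normalization. This is a toy sketch for intuition only, not the patent's actual network, and the layer sizes are invented:

```python
import math

def conv2d_out(h, w, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return ((h + 2 * pad - kernel) // stride + 1,
            (w + 2 * pad - kernel) // stride + 1)

def softmax(scores):
    """Softmax probability output layer over fully-connected class scores."""
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy trace of the composition named in the text (sizes are illustrative):
h, w = 224, 224
h, w = conv2d_out(h, w, kernel=7, stride=2, pad=3)  # conv layer  -> 112 x 112
h, w = conv2d_out(h, w, kernel=3, stride=2, pad=1)  # pooling     -> 56 x 56
probs = softmax([2.0, 1.0, 0.5])                    # FC scores -> class probabilities
```

In practice these layers would be provided by a deep learning framework; the sketch only shows how spatial size shrinks through the stack and how Softmax turns scores into probabilities.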
- FIG. 2 is a schematic structural diagram of a model for identifying a damaged part in a video image constructed by the method of the present application.
- FIG. 3 is a schematic diagram of an implementation scenario of using the damage detection model to identify a damaged part according to the method of the present application, and the identified damaged part can be displayed on the client in real time.
- Convolutional neural networks (CNN) generally refer to neural networks composed mainly of convolutional layers combined with other layers such as activation layers, and are mainly used for image recognition.
- The deep neural network described in this embodiment may include convolutional layers and other important layers (such as a data normalization layer and an activation layer, trained with damage sample images input to the model), combined with a region proposal network (RPN).
- Convolutional neural networks typically combine the two-dimensional discrete convolution operation of image processing with an artificial neural network; this convolution operation can be used to extract features automatically.
- The region proposal network (RPN) can take the features extracted from an image (of arbitrary size) as input (for example, a two-dimensional feature map extracted by a convolutional neural network), and output a set of rectangular object proposal boxes, each with an object score.
- The model of the above embodiment can be trained to identify one or more damaged portions in a video image. Specifically, during sample training the input is a picture and the output is a number of picture regions: if there is one damaged portion, one picture region can be output; if there are k damaged portions, k picture regions can be output; if there is no damage, zero picture regions are output.
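The behavior described here, where a set of scored rectangular proposals is reduced to zero, one, or k damaged regions, is commonly implemented by score thresholding followed by non-maximum suppression. A generic sketch under that assumption follows; the thresholds are illustrative, not values from the patent:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def select_damage_regions(proposals, score_thresh=0.5, iou_thresh=0.5):
    """proposals: list of (box, object_score) pairs, as from an RPN-style network.
    Returns 0..k boxes: drop low-score proposals, then suppress duplicates
    that heavily overlap an already-kept box (non-maximum suppression)."""
    kept = []
    for box, score in sorted(proposals, key=lambda p: p[1], reverse=True):
        if score < score_thresh:
            break                         # proposals are sorted; rest are weaker
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept
```

With no surviving proposals the function returns an empty list, matching the "zero picture regions" case in the text.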
- The damage detection model may use various models and variants based on a convolutional neural network and a region proposal network, such as Faster R-CNN, YOLO, Mask-FCN, and the like.
- the convolutional neural network can use any CNN model, such as ResNet, Inception, VGG, etc. and its variants.
- The convolutional network (CNN) part of the neural network can use mature network structures that achieve good results in object recognition, such as Inception or ResNet. For example, with a ResNet network the input is a picture and the output is multiple damaged regions together with a confidence for each damaged region (the confidence here being a parameter indicating the degree of authenticity of the identified damaged region).
- Fast R-CNN, YOLO, Mask-FCN, etc. are all deep neural networks including convolutional layers that can be used in this embodiment.
- The deep neural network used in this embodiment, combining the region proposal layer and CNN layers, can detect the damaged portion in a video image and confirm the region of the damaged portion in the video image.
- Specifically, the convolutional network (CNN) part of the present application can use a mature network structure that achieves good results in object recognition, such as a ResNet network, which can be trained with labeled data by mini-batch gradient descent.
- the location area of the damaged part recognized by the server can be displayed on the client in real time, so that the user can observe and confirm the damaged part.
- Further, the server can automatically track the damaged portion; in the subsequent process, as the shooting distance and angle change, the size and position of the corresponding location area of the damaged portion in the video image also change accordingly.
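One simple way to realize the tracking behavior described here (the location area following the damaged portion as distance and angle change) is to associate each new frame's detections with the previously tracked region by overlap. This is a minimal illustrative sketch, not the patent's tracking method:

```python
def track_region(prev_box, detections):
    """Pick the detection in the new frame that best overlaps the previously
    tracked damaged-portion region; returns None if nothing overlaps.
    Boxes are (x1, y1, x2, y2)."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1]) +
                 (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if inter else 0.0
    best = max(detections, key=lambda d: iou(prev_box, d), default=None)
    return best if best is not None and iou(prev_box, best) > 0 else None
```

Production trackers would add motion models or appearance features, but overlap association already captures how the tracked area can grow, shrink and shift frame to frame.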
- the photographer can interactively modify the position and size of the identified damaged portion.
- Specifically, the client displays in real time the location area of the damaged portion detected by the server. If the photographer believes that the location area identified by the server does not completely cover the damaged portion observed at the scene and needs adjustment, the position and size of the location area can be adjusted on the client, for example by long-pressing the location area to select it and dragging to adjust its position, or by stretching the frame of the location area to adjust its size. After the photographer adjusts the location area of the damaged portion on the client, a new damaged portion is generated and sent to the server.
- In this way, the photographer can conveniently and flexibly adjust the location area of the damaged portion in the video image according to the actual damage at the scene and locate the damaged portion more accurately, so that the server can obtain high-quality fixed-loss images more accurately and reliably.
- the server receives the captured video data uploaded by the client, detects the video image in the captured video data, and identifies the damaged portion in the video image.
- S3 The server classifies the video image based on the detected damaged part, and determines a candidate image classification set of the damaged part.
- Vehicle damage assessment often requires different types of image data, such as images of the vehicle from different angles, images showing the damaged components, and close-up details of specific damaged portions.
- Correspondingly, the present application can perform recognition on a video image, for example whether it is an image of the damaged vehicle, which vehicle components are contained in the image, whether one or more vehicle components are contained, whether there is damage on a vehicle component, and so on.
- In this embodiment, the fixed-loss images required for vehicle damage assessment may be divided into different categories, and images that do not meet the requirements of fixed-loss images may be separately classified into another category. Specifically, each frame of the captured video may be extracted, and each frame image identified and classified to form the candidate image classification set of the damaged portion.
- the determined candidate image classification set may include:
- S301 A close-up image set displaying the damaged portion, and a component image set displaying the vehicle component to which the damaged portion belongs.
- The close-up image set includes close-up images of the damaged portion, and the component image set includes images of the damaged component of the damaged vehicle, the damaged component having at least one damaged portion.
- the photographer can perform shooting from near to far (or from far to near) on the damaged part of the damaged vehicle, which can be completed by the photographer moving or zooming.
- The server side can perform recognition on the frame images in the captured video (either processing every frame image, or processing frame images sampled from video segments) to determine the classification of each video image.
- In this embodiment, the video images of the captured video may be divided into three categories, specifically including:
- The recognition algorithm or classification requirement for a type-a image may be determined according to the requirement for close-up images of the damaged portion in the fixed-loss images.
- In one embodiment, the determination may be made by the size (area or region span) of the area occupied by the damaged portion in the current video image. If the damaged portion occupies a large area in the video image (for example, greater than a certain threshold, such as a length or width greater than a quarter of the video image size), the video image may be determined to be a type-a image.
- the video image in the close-up image set may be determined by at least one of the following methods:
- S3011 The area ratio of the damaged portion in the video image is greater than a first preset ratio;
- S3012 The ratio of the abscissa span of the damaged portion to the length of the video image is greater than a second preset ratio, and/or the ratio of the ordinate span of the damaged portion to the height of the video image is greater than a third preset ratio;
- S3013 From the video images of the same damaged portion, select the first K video images after sorting by the area of the damaged portion in descending order, or the video images within a fourth preset ratio after sorting by area in descending order, K ≥ 1.
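The criteria S3011 to S3013 above can be sketched as predicates over a damaged-portion bounding box. The threshold values below are illustrative assumptions (the patent leaves the preset ratios configurable):

```python
def is_close_up(damage_box, img_w, img_h,
                area_ratio_min=0.10, span_ratio_min=0.25):
    """S3011/S3012-style check: the damaged area's ratio to the image,
    or its horizontal/vertical span ratio, exceeds a preset ratio.
    Box is (x1, y1, x2, y2); thresholds are illustrative."""
    x1, y1, x2, y2 = damage_box
    area_ratio = ((x2 - x1) * (y2 - y1)) / (img_w * img_h)
    if area_ratio > area_ratio_min:                   # S3011: area ratio
        return True
    return ((x2 - x1) / img_w > span_ratio_min or     # S3012: abscissa span
            (y2 - y1) / img_h > span_ratio_min)       # S3012: ordinate span

def top_k_by_area(images_with_boxes, k):
    """S3013: keep the first K images after sorting the images of the same
    damaged portion by damaged area in descending order, K >= 1."""
    area = lambda item: (item[1][2] - item[1][0]) * (item[1][3] - item[1][1])
    return sorted(images_with_boxes, key=area, reverse=True)[:k]
```

A frame can be tested with any one criterion or a combination, as the surrounding text notes.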
- In a damaged-detail image of type a, the damaged portion usually occupies a relatively large area range. Through the above manner, the selection of detail images of the damaged portion can be well controlled, and type-a images that meet the processing requirements are obtained.
- The area of the damaged region in a type-a image can be obtained by counting the pixel points contained in the damaged region.
- for example, suppose the video image is 800*650 pixels and the damaged vehicle has two long scratches, where the abscissa span corresponding to the scratches is 600 pixels long while each scratch is narrow. Although the area of the damaged portion is less than one tenth of the video image, the horizontal span of the damaged portion is 600 pixels, three quarters of the length of the entire video image, so the video image can be labeled as a class-a image, as shown in FIG. 4, which is a schematic diagram of determining a close-up image based on the identified damaged portion in one embodiment of the present application.
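- the close-up (class-a) checks above can be sketched as follows. This is a minimal illustration only: the function name, the box format, and the threshold values are assumptions for the example, not values fixed by the method.

```python
def is_close_up(damage_box, damage_area_px, img_w, img_h,
                r1=0.10, r2=0.25, r3=0.25):
    """Sketch of the class-a checks S3011/S3012.

    damage_box: (x_min, y_min, x_max, y_max) of the identified damaged portion.
    damage_area_px: pixel count of the damaged region (per S3011).
    r1, r2, r3: the first/second/third preset ratios (illustrative values).
    """
    # S3011: area ratio of the damaged region in the video image
    if damage_area_px / (img_w * img_h) > r1:
        return True
    # S3012: abscissa span vs. image length, ordinate span vs. image height
    x_min, y_min, x_max, y_max = damage_box
    if (x_max - x_min) / img_w > r2 or (y_max - y_min) / img_h > r3:
        return True
    return False

# Worked example from the text: an 800*650 image with narrow scratches
# spanning 600 pixels horizontally. The area is under 1/10 of the image,
# but the horizontal span is 3/4 of the image length, so class a applies.
print(is_close_up((100, 300, 700, 320), 40000, 800, 650))  # True
```

- note that either condition alone suffices here; S3013 (top-K by area among images of the same damaged portion) would be applied across frames rather than per frame.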
- in S3013, the area of the damaged portion may be the area of the damaged region as described in S3011, or may be the span value of the damaged portion in length or height.
- class-a images can also be identified by combining the above methods, for example, requiring both that the area of the damaged portion occupies a certain proportion of the video image and that it falls within the fourth preset ratio range of the largest area among all images of the same damaged portion.
- the class-a images described in the scenario of this embodiment typically contain all or part of the detailed image information of the damaged portion.
- the first preset ratio, the second preset ratio, the third preset ratio, and the fourth preset ratio described above may be set according to image recognition accuracy, classification accuracy, or other processing requirements; for example, the second preset ratio or the third preset ratio may be one quarter.
- the components included in the video image can be identified by a constructed vehicle component detection model; if the damaged portion lies on a detected damaged component, it can be confirmed that the video image belongs to the class-b images. Specifically, for example, in a video image P1, if the component region of a component detected in P1 includes the identified damaged portion (normally the identified component region is larger than the region of the damaged portion), the component in that region of P1 can be considered a damaged component. Alternatively, in a video image P2, if the damaged region detected in P2 overlaps with a detected component region in P2, the vehicle component corresponding to that component region in P2 may also be considered a damaged component, and the video image is classified into the class-b images.
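- the two class-b cases above (the component region containing the damaged portion, as with P1, or merely overlapping it, as with P2) can be sketched with simple box geometry; the box representation and function names are illustrative assumptions.

```python
def overlaps(box_a, box_b):
    """True if two (x_min, y_min, x_max, y_max) boxes intersect (case P2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def contains(outer, inner):
    """True if box `outer` fully contains box `inner` (case P1)."""
    ox1, oy1, ox2, oy2 = outer
    ix1, iy1, ix2, iy2 = inner
    return ox1 <= ix1 and oy1 <= iy1 and ox2 >= ix2 and oy2 >= iy2

def is_component_image(component_boxes, damage_box):
    """Class-b check: some detected component region contains or overlaps
    the damaged portion, so the frame shows the damaged component."""
    return any(contains(c, damage_box) or overlaps(c, damage_box)
               for c in component_boxes)

# A detected door-panel region containing a scratch box -> class b.
print(is_component_image([(50, 50, 500, 400)], (200, 200, 300, 260)))  # True
```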
- the component detection model described in this embodiment uses a deep neural network to detect the components contained in an image and their component regions.
- the component damage recognition model may be constructed based on a Convolutional Neural Network (CNN) and a Region Proposal Network (RPN) in combination with a pooling layer, a fully connected layer, and the like.
- various models and variants based on convolutional neural networks and region proposal networks, such as Faster R-CNN, YOLO, Mask-FCN, etc., can be used.
- the convolutional neural network (CNN) can use any CNN model, such as ResNet, Inception, VGG, etc. and its variants.
- the convolutional network (CNN) part of the neural network can use a mature network structure that achieves good results in object recognition, such as Inception or ResNet; for example, with a ResNet network, the input is an image and the output is a plurality of component regions together with the corresponding component classifications and confidences (the confidence here being a parameter indicating the degree of authenticity of the identified vehicle component).
- Fast R-CNN, YOLO, Mask-FCN, etc. are all deep neural networks including convolutional layers that can be used in this embodiment.
- the deep neural network used in this embodiment, combining the region proposal layer and the CNN layer, can detect the vehicle components in the image to be processed and confirm the component regions of the vehicle components in the image to be processed.
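- the detector output described above (component regions with classifications and confidences) might be consumed as in the following sketch. The dict-based output format and the 0.5 confidence threshold are assumptions for illustration; real frameworks (e.g., Faster R-CNN implementations) return boxes, labels, and scores in framework-specific structures.

```python
def keep_confident_components(detections, min_conf=0.5):
    """Filter raw detector output down to trustworthy component regions.

    detections: list of dicts like
        {"box": (x1, y1, x2, y2), "label": "front_bumper", "conf": 0.92}
    (an assumed, simplified representation of the model's output).
    """
    return [d for d in detections if d["conf"] >= min_conf]

raw = [
    {"box": (10, 10, 300, 200), "label": "front_bumper", "conf": 0.91},
    {"box": (5, 5, 60, 60), "label": "fog_lamp", "conf": 0.31},
]
print([d["label"] for d in keep_confident_components(raw)])  # ['front_bumper']
```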
- FIG. 5 is a schematic diagram of a model structure for identifying damaged components in a video image constructed by the method of the present application.
- if the same video image satisfies the judgment logic of the class-a and class-b images at the same time, it can belong to both the class-a and class-b images.
- the server may extract a video image in the captured video data, classify the video image based on the detected damaged portion, and determine a candidate image classification set of the damaged portion.
- S4 Select a fixed loss image of the vehicle from the candidate image classification set according to a preset screening condition.
- images that meet the preset screening condition may be selected from the candidate image classification sets according to the category, sharpness, and the like required of the fixed-loss images.
- the preset screening condition may be a customized setting; for example, in one embodiment, multiple (e.g., 5 or 10) images with the highest sharpness and different shooting angles may be selected from the class-a and class-b images respectively as the fixed-loss images of the identified damaged portion.
- the sharpness of an image can be calculated over the image regions where the damaged portion and the detected vehicle component are located, for example using a spatial-domain operator (such as a Gabor operator) or a frequency-domain operator (such as a fast Fourier transform).
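- a region-restricted sharpness measure might look like the following sketch. As a simple stand-in for the Gabor or Fourier operators named above, it uses a Laplacian-based focus measure (a common alternative), computed in pure Python over a nested-list grayscale image; this substitution is an assumption for illustration.

```python
def laplacian_variance(gray, region):
    """Estimate sharpness of `gray` (2-D list of 0-255 values) inside
    `region` = (x1, y1, x2, y2) as the variance of a 4-neighbour Laplacian.
    A Laplacian focus measure stands in here for the Gabor/Fourier
    operators mentioned in the text; sharper regions score higher."""
    x1, y1, x2, y2 = region
    vals = []
    for y in range(y1 + 1, y2 - 1):
        for x in range(x1 + 1, x2 - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] +
                   gray[y][x - 1] + gray[y][x + 1] - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# A sharp edge yields a much higher score than a flat patch.
sharp = [[0] * 4 + [255] * 4 for _ in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(laplacian_variance(sharp, (0, 0, 8, 8)) >
      laplacian_variance(flat, (0, 0, 8, 8)))  # True
```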
- the invention provides a vehicle fixed-loss image acquisition method, which provides a video-based scheme for automatically generating fixed-loss images.
- the photographer can capture video of the damaged vehicle through the terminal device, and the captured video data can be transmitted to the server side of the system.
- the system analyzes the video data on the server side, identifies the damaged portion, obtains the different categories of candidate images required for fixed-loss processing according to the damaged portion, and then generates the fixed-loss images of the damaged vehicle from the candidate images.
- in this way, high-quality fixed-loss images that meet the requirements of fixed-loss processing can be generated automatically and quickly, improving the acquisition efficiency of fixed-loss images and reducing the costs for insurance company operators of acquiring and processing fixed-loss images.
- the video captured by the client is transmitted to the server, and the server can track the location of the damaged part in the video in real time according to the damaged part.
- some image algorithms may be used to obtain the correspondence between adjacent video images of the captured video, such as an optical flow algorithm, to achieve tracking of the damaged portion.
- if the mobile terminal has sensors such as an accelerometer and a gyroscope, the signal data of these sensors can be combined to further determine the direction and angle of the photographer's motion, thereby achieving more accurate tracking of the damaged portion. Therefore, in another embodiment of the method of the present application, after identifying the damaged portion in the video image, the method may further include:
- S200 The server tracks the location area of the damaged portion in the captured video data in real time;
- if the server determines that the damaged portion re-enters the video image after leaving it, the location area of the damaged portion is re-positioned and tracked based on the image feature data of the damaged portion.
- the server can extract image feature data of the damaged portion, such as SIFT (Scale-Invariant Feature Transform) feature data. If the damaged portion leaves the video image and then re-enters it, the system can automatically locate it and continue tracking, for example when shooting resumes after the camera is turned off, or when the shooting area shifts to an undamaged portion and then returns to the same damaged portion.
- the location area of the damaged part identified by the server can be displayed on the client in real time, so that the user can observe and confirm the damaged part.
- the client and server can simultaneously display the identified damaged parts.
- the server can automatically track and identify the damaged portion, and the size and position of the corresponding location area of the damaged portion in the video image can change correspondingly as the shooting distance and angle change. In this way, the server side can display the damaged portion tracked in the client's video in real time, which is convenient for server-side operators to observe and use.
- the server can send the tracked location area of the damaged part to the client during real-time tracking, so that the client can display the damaged part in real time in synchronization with the server, so that the photographer can observe the server. Locate the damaged part of the track. Therefore, in another embodiment of the method, the method may further include:
- S210 The server sends the tracked location area of the damaged part to the client, so that the client displays the location area of the damaged part in real time.
- the photographer can interactively modify the position and size of the damaged portion. For example, when the client displays the damaged portion, if the photographer believes that the identified location area of the damaged portion does not completely cover the damaged portion and needs adjustment, the position and size of the location area may be adjusted, such as by selecting the location area of the damaged portion and moving it to adjust its position, or by stretching the border of the location area to adjust its size.
- the photographer can generate a new damaged part after the client adjusts the location area of the damaged part, and then sends the new damaged part to the server.
- the server can synchronize the new damaged parts modified by the client.
- the server can identify subsequent video images based on the new damaged portion.
- the method may further include:
- S220 Receive a new damaged part sent by the client, where the new damaged part includes a damaged part that is re-determined after the client modifies the location area of the damaged part based on the received interactive instruction;
- the classifying the video image based on the detected damaged portion comprises classifying the video image based on the new damaged portion.
- the photographer can conveniently and flexibly adjust the position area of the damaged part in the video image according to the actual damaged part of the scene, and more accurately locate the damaged part, so that the server can obtain a high quality fixed loss image.
- the photographer can continuously shoot from different angles when shooting a close-up of the damaged portion.
- the server side can obtain the shooting angle of each frame image by tracking the damaged portion, and then select a set of video images of different angles as the fixed-loss images of the damaged portion, thereby ensuring that the fixed-loss images can accurately reflect the type and extent of the damage. Therefore, in another embodiment of the method of the present application, the selecting a fixed-loss image of the vehicle from the candidate image classification set according to a preset screening condition comprises:
- S401 Select, from the candidate image classification set of a specified damaged portion, at least one video image as the fixed-loss image of that damaged portion according to the sharpness of the video images and the shooting angles of the damaged portion.
- for example, the deformation of a component may be very obvious at some angles relative to others, or a damaged component may show reflections or glare that change with the shooting angle; selecting images of different angles as the fixed-loss images can greatly reduce the interference of these factors on damage assessment.
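- the angle-diverse selection described above can be sketched as a greedy pick of the sharpest candidates whose shooting angles differ sufficiently. The tuple representation, the 15-degree gap, and the function name are illustrative assumptions, not values fixed by the method.

```python
def pick_fixed_loss_images(candidates, k=5, min_angle_gap=15.0):
    """Greedy sketch of the screening step: take the sharpest candidates
    whose shooting angles differ by at least `min_angle_gap` degrees.

    candidates: list of (image_id, sharpness, angle_degrees) tuples,
    an assumed representation of the per-frame tracking results.
    """
    chosen = []
    for img_id, sharpness, angle in sorted(
            candidates, key=lambda c: c[1], reverse=True):
        if all(abs(angle - a) >= min_angle_gap for _, _, a in chosen):
            chosen.append((img_id, sharpness, angle))
        if len(chosen) == k:
            break
    return [img_id for img_id, _, _ in chosen]

# f2 is sharp but shot from nearly the same angle as f1, so it is skipped.
frames = [("f1", 0.9, 0.0), ("f2", 0.8, 5.0), ("f3", 0.7, 30.0)]
print(pick_fixed_loss_images(frames, k=2))  # ['f1', 'f3']
```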
- if the client has sensors such as accelerometers and gyroscopes, the shooting angles can also be obtained, or their calculation assisted, from the signals of these sensors.
- multiple candidate image classification sets may be generated, such as the class-a, class-b, and class-c sets described above, but only one or more of these sets may be used when specifically selecting the fixed-loss images; for example, when selecting the final required fixed-loss images, they may be selected from the class-a and class-b candidate image classification sets.
- from the class-a and class-b images, multiple images with the highest sharpness and different shooting angles may be selected as the fixed-loss images according to the sharpness of the video images (for example, 5 images of the same component and 10 images of the same damaged portion).
- the sharpness of an image can be calculated over the image regions where the damaged portion and the detected vehicle component are located, for example using a spatial-domain operator (such as a Gabor operator) or a frequency-domain operator (such as a fast Fourier transform).
- FIG. 6 is a schematic diagram of a processing scenario of a method for acquiring fixed-loss images of a vehicle according to the present application. As shown in FIG. 6, when the damaged portion A and the damaged portion B are close to each other, they can be tracked at the same time, but the damaged portion C is located on the other side of the damaged vehicle. If, in the captured video, the damaged portion C is far from the damaged portions A and B, the damaged portion C may not be tracked at first; the damaged portions A and B are photographed, and then the damaged portion C is photographed separately. Therefore, in another embodiment of the method of the present application, if it is detected that at least two damaged portions are present in the video image, it is determined whether the distance between the at least two damaged portions meets a set proximity condition;
- if so, the at least two damaged portions are tracked simultaneously, and corresponding fixed-loss images are generated respectively.
- the proximity condition may be set according to the number of damaged portions in the same video image, the sizes of the damaged portions, the distances between the damaged portions, and the like.
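- one possible distance-based form of the proximity condition is the following single-link grouping sketch: damaged portions whose centers lie within a pixel threshold are tracked together, while distant portions (such as the damaged portion C on the other side of the vehicle) fall into a separate group to be shot later. The threshold and the grouping strategy are assumptions for illustration.

```python
def _dist(p, q):
    """Euclidean distance between two (x, y) centers."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def group_adjacent(centers, max_gap=200.0):
    """Cluster damaged-portion centers so that portions within `max_gap`
    pixels of an existing group member are tracked simultaneously;
    portions in different groups are photographed separately."""
    groups = []
    for c in centers:
        placed = False
        for g in groups:
            if any(_dist(c, m) <= max_gap for m in g):
                g.append(c)
                placed = True
                break
        if not placed:
            groups.append([c])
    return groups

# Damaged portions A and B are near each other; C is far away.
print(len(group_adjacent([(100, 100), (180, 120), (900, 500)])))  # 2
```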
- if the server detects that at least one of the close-up image set and the component image set of the damaged portion is empty, or that the video images in the close-up image set do not cover all the areas corresponding to the damaged portion, a video shooting prompt message may be generated and then sent to the client corresponding to the captured video data.
- if the server cannot obtain a class-b fixed-loss image from which the vehicle component containing the damaged portion can be determined, the photographer can be prompted to shoot the adjacent vehicle components including the damaged portion, ensuring that a class-b fixed-loss image is obtained. If the server cannot obtain a class-a fixed-loss image, or if the class-a images do not cover the entire area of the damaged portion, feedback can be given to the photographer prompting him to take close-up shots of the damaged portion.
- the photographer may also be prompted to move slowly to ensure the quality of the captured images.
- for example, feedback to the mobile terminal APP can prompt the user to pay attention to factors such as focus and illumination when shooting, such as displaying the prompt message "The speed is too fast, please move slowly to ensure image quality".
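- the feedback rules above might be sketched as a small server-side check. The thresholds, the `covered_fraction` input, and the wording of all prompts except the quoted speed message are assumptions for the example.

```python
def shooting_prompts(close_up_set, component_set, covered_fraction,
                     move_speed, max_speed=1.0):
    """Sketch of the server-side video shooting prompt logic.

    close_up_set / component_set: the class-a and class-b candidate sets.
    covered_fraction: assumed fraction (0..1) of the damaged portion's
    area covered by the close-up images.
    move_speed / max_speed: assumed motion measure and threshold.
    """
    prompts = []
    if not component_set:
        prompts.append("Please shoot the adjacent vehicle components "
                       "including the damaged portion.")
    if not close_up_set or covered_fraction < 1.0:
        prompts.append("Please take close-up shots of the damaged portion.")
    if move_speed > max_speed:
        prompts.append("The speed is too fast, please move slowly "
                       "to ensure image quality")
    return prompts

# Empty close-up set and fast camera motion trigger two prompts.
print(len(shooting_prompts([], ["img1"], 0.6, 1.5)))  # 2
```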
- the server may retain the video segment that produces the lossy image for subsequent viewing and verification, and the like.
- the client can upload or copy the fixed loss image to the remote server after the video image is taken.
- a video-based scheme for automatically generating vehicle fixed-loss images is proposed.
- the photographer can capture video of the damaged vehicle through the terminal device, and the captured video data can be transmitted to the server; the server analyzes the video data to identify the damaged portion and obtains the different categories of candidate images required for fixed-loss processing according to the damaged portion.
- the fixed-loss images of the damaged vehicle can then be generated from the candidate images.
- in this way, high-quality fixed-loss images that meet the requirements of fixed-loss processing can be generated automatically and quickly, improving the acquisition efficiency of fixed-loss images and reducing the costs for insurance company operators of acquiring and processing fixed-loss images.
- FIG. 7 is a schematic flowchart of another embodiment of the method of the present application. As shown in FIG. 7, the method may include:
- S10 receiving shooting video data of the damaged vehicle uploaded by the terminal device, detecting a video image in the captured video data, and identifying a damaged part in the video image;
- S11 classifying the video image based on the detected damaged portion, and determining a candidate image classification set of the damaged portion;
- S12 Select a fixed loss image of the vehicle from the candidate image classification set according to a preset screening condition.
- the terminal device may be the client described in the foregoing embodiment, but the application does not exclude other terminal devices, such as a database system, a third-party server, a flash memory, and the like.
- the server may detect the captured video data, identify the damaged portion, and then classify the video images based on the identified damaged portion.
- the fixed-loss images of the vehicle are then automatically generated by screening.
- the required fixed-loss images may be correspondingly divided into different categories.
- the determined candidate image classification set may specifically include:
- a close-up image set displaying the damaged portion, and a component image set displaying the vehicle component to which the damaged portion belongs.
- the video images in the component image set each include at least one damaged portion; examples are the class-a close-up images and class-b component images described above, as well as class-c images that satisfy neither the class-a nor the class-b conditions.
- the video image in the close-up image set may be determined by at least one of the following:
- the area ratio of the region occupied by the damaged portion in the video image is greater than the first preset ratio;
- the ratio of the abscissa span of the damaged portion to the length of the associated video image is greater than the second preset ratio, and/or the ratio of the ordinate span of the damaged portion to the height of the associated video image is greater than the third preset ratio;
- from the video images of the same damaged portion, the first K video images after sorting the area of the damaged portion in descending order, or the video images within the fourth preset ratio of the largest area, are selected, K ≥ 1.
- the recognition algorithm/classification requirement for class-a images can be determined according to the requirements for close-up images of the damaged portion needed for fixed-loss processing.
- the size of the region occupied by the damaged portion in the current video image can be identified and determined; if the damaged portion occupies a large area in the video image (e.g., greater than a certain threshold, such as a length or width greater than a quarter of the video image size), then the video image may be determined to be a class-a image.
- alternatively, if, compared with the other analyzed and processed frame images of the damaged component where the damaged portion is located, the area of the damaged portion in the current frame image is relatively large (within a certain ratio or TOP range), it can be determined that the current frame image is a class-a image.
- the method further includes:
- the terminal device may be a client that interacts with the server, such as a mobile phone.
- the method may further include:
- the positional area of the damaged part is repositioned and tracked based on the image feature data of the damaged part.
- the location area of the damaged portion that is repositioned and tracked can be displayed on the server.
- the method may further include:
- the identified damaged portion can be displayed in real time on the client, so that the user can observe and confirm the damaged portion.
- the photographer can conveniently and flexibly adjust the position area of the damaged part in the video image according to the actual damaged part of the scene, and more accurately locate the damaged part, so that the server can obtain a high quality fixed loss image.
- the photographer can interactively modify the position and size of the damaged portion.
- the photographer can generate a new damaged part after the client adjusts and modifies the identified area of the damaged part, and then sends the new damaged part to the server.
- the server can synchronize the new damaged parts modified by the client.
- the server can identify subsequent video images based on the new damaged portion. Therefore, in another embodiment of the method for acquiring a vehicle-damaged image, the method may further include:
- the classifying the video image based on the detected damaged portion comprises classifying the video image based on the new damaged portion.
- the photographer can conveniently and flexibly adjust the position area of the damaged part in the video image according to the actual damaged part of the scene, and more accurately locate the damaged part, so that the server can obtain a high quality fixed loss image.
- the determining a fixed-loss image of the vehicle from the candidate image classification set according to a preset screening condition includes:
- At least one video image is respectively selected as the fixed loss image of the damaged part according to the sharpness of the video image and the shooting angle of the damaged part.
- the server can simultaneously track the multiple damaged parts and generate a fixed-loss image of each damaged part.
- the server obtains the fixed-loss images of each damaged portion according to the above processing for all the damaged portions designated by the photographer, and can then use all the generated fixed-loss images as the fixed-loss images of the entire damaged vehicle. Therefore, in another embodiment of the method for acquiring vehicle fixed-loss images, if it is detected that at least two damaged portions are present in the video image, it is determined whether the distance between the at least two damaged portions meets the set proximity condition;
- if so, the at least two damaged portions are tracked simultaneously, and corresponding fixed-loss images are generated respectively.
- the proximity condition may be set according to the number of damaged portions identified in the same video image, the sizes of the damaged portions, the distances between the damaged portions, and the like.
- based on the embodiment, described in the foregoing client-server interaction implementation scenario, of automatically acquiring fixed-loss images from captured video data of a damaged vehicle, the present invention further provides a vehicle fixed-loss image acquisition method that can be used on the client side.
- FIG. 8 is a schematic flowchart of another embodiment of the method according to the present application. As shown in FIG. 8, the method may include:
- S20 Perform video shooting on the damaged vehicle to obtain captured video data;
- S21 Send the captured video data to a processing terminal;
- S22 Receive the location area of the damaged portion tracked in real time and returned by the processing terminal, and display the tracked location area, wherein the damaged portion is obtained by the processing terminal detecting and identifying the video images in the captured video data.
- the processing terminal includes a terminal device that processes the captured video data and automatically generates fixed-loss images of the damaged vehicle based on the identified damaged portion, such as a remote server performing fixed-loss image processing.
- the determined candidate image classification sets may also include: a close-up image set displaying the damaged portion, and a component image set displaying the vehicle component to which the damaged portion belongs, such as the class-a images and class-b images described above.
- if the server cannot obtain a class-b fixed-loss image identifying the vehicle component where the damaged portion is located, the server can feed back to the photographer, sending a video shooting prompt message prompting him to shoot a number of adjacent vehicle components including the damaged portion, to ensure that a class-b fixed-loss image is obtained.
- the method may further include:
- S23 Receive and display a video shooting prompt message sent by the processing terminal, wherein the video shooting prompt message is generated when the processing terminal detects that at least one of the close-up image set and the component image set of the damaged portion is empty, or that the video images in the close-up image set do not cover the entire area corresponding to the damaged portion.
- the client can display the location area of the damaged part tracked by the server in real time, and can interactively modify the position and size of the location area on the client side. Therefore, in another embodiment of the method, the method may further include:
- the photographer can perform video shooting on the damaged vehicle through the terminal device, and the captured video data can be transmitted to the server of the system, which analyzes the video data to identify the damaged portion.
- the candidate images of the different categories required for fixed-loss processing are acquired according to the damaged portion, and the fixed-loss images of the damaged vehicle can then be generated from the candidate images.
- FIG. 9 is a schematic flowchart of another embodiment of the method in the present application. As shown in FIG. 9, the method includes:
- S32 classify the video image based on the detected damaged portion, and determine a candidate image classification set of the damaged portion;
- S33 Select a fixed loss image of the vehicle from the candidate image classification set according to a preset screening condition.
- a specific implementation may be composed of application modules deployed on the client.
- the terminal device may be a general-purpose or special-purpose device with a video capturing function and an image processing capability, such as a mobile phone, a tablet computer, and the like.
- the photographer can use the client to video capture the damaged vehicle, and analyze the captured video data to identify the damaged part and generate a lossy image.
- a server side may also be included to receive the fixed loss image generated by the client.
- the fixed-loss images generated by the client can be transmitted to the specified server in real time or asynchronously. Therefore, another embodiment of the method may further include:
- S3302 Asynchronously transfer the fixed loss image to a designated server.
- the client may upload the generated fixed-loss images to the remote server immediately, or may upload or copy them to the remote server in batches afterwards.
- based on the foregoing description of the embodiments in which the server automatically generates fixed-loss images, the client-side method for automatically generating fixed-loss images may further include other implementations, such as directly generating a video shooting prompt message and displaying it on the shooting terminal, the specific division and recognition of fixed-loss image categories, the classification method, and the identification, location, and tracking of the damaged portion. For details, refer to the description of the related embodiments; details are not repeated herein.
- the present invention provides a method for acquiring vehicle fixed-loss images, which can automatically generate fixed-loss images on the client side based on captured video of a damaged vehicle.
- the photographer can capture video of the damaged vehicle through the client to generate captured video data.
- the captured video data are analyzed to identify the damaged portion, and the candidate images of the different categories required for fixed-loss processing are obtained.
- the fixed-loss images of the damaged vehicle can then be generated from the candidate images.
- video shooting can be performed directly on the client side, and high-quality fixed-loss images that meet the requirements of fixed-loss processing can be generated automatically and quickly, improving the acquisition efficiency of fixed-loss images and reducing the costs for insurance company operators of acquiring and processing fixed-loss images.
- the present application further provides a vehicle fixed-loss image acquisition apparatus.
- the apparatus may include a system (including a distributed system), software (applications), modules, components, servers, clients, etc., using the methods described herein in conjunction with the necessary means of implementing the hardware.
- the apparatus in one embodiment provided by the present application is described in the following embodiments. Since the principle by which the apparatus solves the problem is similar to that of the method, for the specific implementation of the apparatus of the present application, reference may be made to the implementation of the foregoing method, and repeated details are not described again.
- the term "unit” or "module” may implement a combination of software and/or hardware of a predetermined function.
- FIG. 11 is a schematic structural diagram of a module of an embodiment of a vehicle-based image loss acquiring apparatus provided by the present application. As shown in FIG. 11, the apparatus may include:
- the data receiving module 101 is configured to receive the captured video data of the damaged vehicle uploaded by the terminal device;
- the damaged portion identification module 102 is configured to detect the video images in the captured video data and identify the damaged portion in the video images;
- the classification module 103 is configured to classify the video image based on the detected damaged portion, and determine a candidate image classification set of the damaged portion;
- the screening module 104 is configured to select a fixed loss image of the vehicle from the candidate image classification set according to a preset screening condition.
- FIG. 12 is a schematic structural diagram of modules of another embodiment of the apparatus of the present application. The specific structure may include:
- a shooting module 201, configured to perform video capture of a damaged vehicle and obtain captured video data;
- a communication module 202, configured to send the captured video data to a processing terminal;
- a tracking display module 203, configured to receive a position region, returned by the processing terminal, in which a damaged portion is tracked in real time, and to display the tracked position region, the damaged portion being obtained by the processing terminal detecting and identifying the video images in the captured video data.
- The tracking display module 203 may be a display unit that includes a display screen. The photographer can specify the damaged portion on the display screen, and the tracked position region of the damaged portion can also be displayed on the display screen.
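Elsewhere the application describes re-locating a damaged portion, after it leaves and re-enters the frame, from its saved image feature data. A toy sketch of that idea, using a brute-force sum-of-absolute-differences template search over 2D lists (the frame/template data and function name are hypothetical, and a real system would use a proper tracker or feature matcher):

```python
# Hypothetical sketch: re-locate a damaged portion after it re-enters the
# frame by matching a saved grayscale feature patch (template) against the
# new frame with a sum-of-absolute-differences search.

def relocate(frame, template):
    """Return (row, col) of the best template match in `frame` (2D lists)."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best_pos, best_score = None, None
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            score = sum(
                abs(frame[r + i][c + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            if best_score is None or score < best_score:
                best_pos, best_score = (r, c), score
    return best_pos


template = [[9, 9], [9, 9]]       # feature patch saved while tracking
frame = [[0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]            # damaged portion re-entered at (1, 2)
print(relocate(frame, template))  # -> (1, 2)
```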
- The method for acquiring a vehicle loss assessment image provided by the present application can be implemented by a processor executing corresponding program instructions in a computer. In a specific implementation, the apparatus may include a processor and a memory for storing processor-executable instructions, and when the processor executes the instructions, the following is implemented:
- receiving captured video data of a damaged vehicle; detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- The apparatus may be a server: the server receives captured video data uploaded by a client and then performs analysis processing, including identifying the damaged portion, dividing categories, selecting images, and the like, to obtain loss assessment images of the vehicle.
- The apparatus may also be a client that directly analyzes and processes the video of the damaged vehicle on the client side to obtain loss assessment images of the vehicle. Therefore, in another embodiment of the apparatus of the present application, the captured video data of the damaged vehicle may include:
- data information uploaded by a terminal device after the video data is captured; or
- captured video data obtained by the vehicle loss assessment image acquisition apparatus performing video capture of the damaged vehicle.
- If the apparatus acquires the captured video data and directly performs the analysis processing to obtain the loss assessment images, the obtained loss assessment images may also be sent to a server for storage or further loss assessment processing. Therefore, in another embodiment of the apparatus, if the captured video data of the damaged vehicle is obtained through video capture by the vehicle loss assessment image acquisition apparatus, the processor, when executing the instructions, further implements:
- transmitting the loss assessment images in real time to a designated processing terminal; or, transmitting the loss assessment images asynchronously to a designated processing terminal.
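Asynchronous transmission means the client can keep capturing and analyzing while images are uploaded in the background. A minimal stdlib sketch of that pattern, assuming a hypothetical `upload` stand-in for the real network call:

```python
# Hypothetical sketch: a background worker drains a queue of loss assessment
# images so that enqueueing (the capture path) never blocks on the upload.

import queue
import threading

uploaded = []

def upload(image_id):
    uploaded.append(image_id)      # placeholder for an HTTP upload call

def uploader_worker(q):
    while True:
        image_id = q.get()
        if image_id is None:       # sentinel: no more images
            break
        upload(image_id)
        q.task_done()

q = queue.Queue()
worker = threading.Thread(target=uploader_worker, args=(q,), daemon=True)
worker.start()

for image_id in ["damage_01.jpg", "damage_02.jpg"]:
    q.put(image_id)                # enqueue and return immediately

q.put(None)                        # signal shutdown
worker.join()
print(uploaded)                    # -> ['damage_01.jpg', 'damage_02.jpg']
```

The same structure works for the real-time variant by simply uploading inline instead of enqueueing.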
- The foregoing describes embodiments of the apparatus for automatically generating loss assessment images, tracking the position of the damaged portion, and the like. The apparatus that automatically generates loss assessment images on the client side may further include other implementations, for example, directly displaying a generated video-shooting prompt message on the terminal device. For the specific division and identification of loss assessment image categories, the classification method, and the positioning and tracking of the damaged portion, reference can be made to the descriptions of the related embodiments, and details are not described here again.
- With the vehicle loss assessment image acquisition apparatus provided by the present application, the photographer can perform video capture of the damaged vehicle to generate captured video data. The captured video data is then analyzed to obtain the candidate images of different categories required for loss assessment, and loss assessment images of the damaged vehicle can further be generated from the candidate images.
- Video capture can be performed directly on the client side, and high-quality loss assessment images that meet the requirements of loss assessment processing can be generated automatically and quickly, improving the efficiency of obtaining loss assessment images while reducing the cost, for insurance company operators, of acquiring and processing such images.
- The method or apparatus described in the above embodiments of the present application can implement the business logic through a computer program recorded on a storage medium, and the storage medium can be read and executed by a computer to achieve the effects of the solutions described in the embodiments of the present application. Accordingly, the present application also provides a computer-readable storage medium storing computer instructions that, when executed, implement the following steps:
- receiving captured video data of a damaged vehicle; detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
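The final screening step selects images from a candidate set according to preset conditions such as image clarity and shooting angle. A minimal sketch of one such condition (field names, scores, and the per-angle rule are hypothetical; a real system would compute clarity from the pixels, e.g. via a sharpness metric):

```python
# Hypothetical sketch: screen a candidate image set by keeping the sharpest
# image(s) for each shooting angle of the damaged portion.

def screen_candidates(candidates, per_angle=1):
    """Pick the `per_angle` highest-clarity candidates for each shooting angle."""
    by_angle = {}
    for image in candidates:
        by_angle.setdefault(image["angle"], []).append(image)
    selected = []
    for angle in sorted(by_angle):
        ranked = sorted(by_angle[angle], key=lambda im: im["clarity"], reverse=True)
        selected.extend(ranked[:per_angle])
    return selected


candidates = [
    {"id": "a", "angle": "front", "clarity": 0.9},
    {"id": "b", "angle": "front", "clarity": 0.4},
    {"id": "c", "angle": "side",  "clarity": 0.7},
]
print([im["id"] for im in screen_candidates(candidates)])  # -> ['a', 'c']
```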
- The present application further provides another computer-readable storage medium storing computer instructions that, when executed, implement the following steps: performing video capture of a damaged vehicle to obtain captured video data; sending the captured video data to a processing terminal; and receiving a position region, returned by the processing terminal, in which a damaged portion is tracked in real time, and displaying the tracked position region, the damaged portion being obtained by the processing terminal detecting and identifying the video images in the captured video data.
- The computer-readable storage medium may include physical means for storing information, typically by digitizing the information and then storing it in a medium that uses electrical, magnetic, or optical means.
- The computer-readable storage medium of this embodiment may include: means for storing information using electrical energy, such as various types of memory, e.g., RAM and ROM; means for storing information using magnetic energy, such as a hard disk, a floppy disk, a magnetic tape, a magnetic core memory, a bubble memory, and a USB flash drive; and means for storing information optically, such as a CD or a DVD. Of course, there are also readable storage media of other forms, such as quantum memory and graphene memory.
- The apparatus or method or the computer-readable storage medium described above may be used in a server that acquires vehicle loss assessment images, to automatically acquire the loss assessment images of a vehicle based on captured vehicle video.
- The server may be a standalone server, a system cluster composed of multiple application servers, or a server in a distributed system.
- In one embodiment, the server may include a processor and a memory for storing processor-executable instructions, and when the processor executes the instructions, the following is implemented:
- receiving captured video data of a damaged vehicle uploaded by a terminal device; detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- The apparatus or method or the computer-readable storage medium described above may also be used in a terminal device that acquires vehicle loss assessment images, to automatically acquire the loss assessment images of a vehicle based on captured vehicle video.
- The terminal device may be implemented in the form of a server, or as a client that performs video capture of the damaged vehicle on site.
- FIG. 13 is a schematic structural diagram of an embodiment of a terminal device provided by the present application.
- The terminal device may include a processor and a memory for storing processor-executable instructions, and when the processor executes the instructions, the following can be implemented:
- acquiring captured video data of video capture of a damaged vehicle; detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- The captured video data acquired by the terminal device may be data information acquired by the terminal device after the video data has been captured, or may be captured video data obtained by the terminal device directly performing video capture of the damaged vehicle.
- When executing the instructions, the processor may further implement:
- transmitting the loss assessment images asynchronously to a designated server.
- With the terminal device for vehicle loss assessment images provided by the present application, the photographer can perform video capture of the damaged vehicle to generate captured video data. The captured video data is then analyzed to identify the damaged portion and obtain the candidate images of different categories required for loss assessment, and loss assessment images of the damaged vehicle can further be generated from the candidate images.
- Video capture can be performed directly on the client side, and high-quality loss assessment images that meet the requirements of loss assessment processing can be generated automatically and quickly, improving the efficiency of obtaining loss assessment images while reducing the cost, for insurance company operators, of acquiring and processing such images.
- The present application describes tracking of damaged regions, detection of damaged portions and vehicle components using CNN and RPN networks, image recognition and classification based on damaged portions, data acquisition, interaction, calculation, and the like. However, the present application is not limited to cases that necessarily conform to industry communication standards, standard data models, computer processing and storage rules, or the embodiments described herein.
- Embodiments slightly modified with respect to certain industry standards, or implemented using a customized approach, or obtained by varying the embodiments described above, may also achieve the same, equivalent, or similar effects as the above embodiments, or effects that are predictable after such variation.
- Embodiments obtained by applying these modified or varied data acquisition, storage, judgment, and processing methods may still fall within the scope of optional implementations of the present application.
- An improvement to a method flow can also be implemented directly in hardware. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user through programming of the device, typically by describing the logic in a Hardware Description Language (HDL).
- The controller can be implemented in any suitable manner. For example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor. Examples of controllers include, but are not limited to, the following microcontrollers: the ARC 625D, the Atmel AT91SAM, the Microchip PIC18F26K20, and the Silicon Labs C8051F320. A memory controller can also be implemented as part of the control logic of a memory.
- A controller can also be logically programmed, by means of logic gates, switches, ASICs, programmable logic controllers, embedded microcontrollers, and the like, to implement the same functions. Such a controller can therefore be considered a hardware component, and the means included in it for implementing various functions can also be considered structures within the hardware component. A device for implementing various functions can even be considered both a software module implementing a method and a structure within a hardware component.
- the system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
- a typical implementation device is a computer.
- The computer can be, for example, a personal computer, a laptop computer, an in-vehicle human-machine interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
- Each module may be implemented in one or more pieces of software and/or hardware, and modules implementing the same function may also be implemented by a combination of multiple sub-modules or sub-units.
- The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
- These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
- These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
- In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
- The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory.
- Memory is an example of a computer readable medium.
- Computer-readable media include persistent and non-persistent, removable and non-removable media, and can implement information storage by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data.
- Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
- As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
- Embodiments of the present application can be provided as a method, a system, or a computer program product. Accordingly, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application can take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
- The present application can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
- the present application can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
- program modules can be located in both local and remote computer storage media including storage devices.
Abstract
Description
Claims (40)
- A method for obtaining vehicle loss assessment images, the method comprising: a client acquiring captured video data and sending the captured video data to a server; the server detecting video images in the captured video data to identify a damaged portion in the video images; the server classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- A method for obtaining vehicle loss assessment images, the method comprising: receiving captured video data of a damaged vehicle uploaded by a terminal device, and detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- The method for obtaining vehicle loss assessment images according to claim 2, wherein the determined candidate image classification sets comprise: a close-up image set displaying the damaged portion, and a component image set displaying the vehicle component to which the damaged portion belongs.
- The method for obtaining vehicle loss assessment images according to claim 3, wherein the video images in the close-up image set are determined in at least one of the following ways: the ratio of the area of the region occupied by the damaged portion in its video image is greater than a first preset ratio; the ratio of the horizontal-coordinate span of the damaged portion to the length of its video image is greater than a second preset ratio, and/or the ratio of the vertical-coordinate span of the damaged portion to the height of its video image is greater than a third preset ratio; and, from video images of the same damaged portion, selecting the first K video images after sorting by the area of the damaged portion in descending order, or selecting the video images that fall within a fourth preset ratio after such sorting, where K ≥ 1.
- The method for obtaining vehicle loss assessment images according to claim 3, further comprising: when it is detected that at least one of the close-up image set and the component image set of the damaged portion is empty, or that the video images in the close-up image set do not cover the entire region of the corresponding damaged portion, generating a video-shooting prompt message; and sending the video-shooting prompt message to the terminal device.
- The method for obtaining vehicle loss assessment images according to claim 2, wherein, after the damaged portion in the video images is identified, the method further comprises: tracking, in real time, the position region of the damaged portion in the captured video data; and, when the damaged portion re-enters a video image after leaving the video image, re-locating and tracking the position region of the damaged portion based on image feature data of the damaged portion.
- The method for obtaining vehicle loss assessment images according to claim 6, further comprising: sending the tracked position region of the damaged portion to the terminal device, so that the terminal device displays the position region of the damaged portion in real time.
- The method for obtaining vehicle loss assessment images according to claim 7, further comprising: receiving a new damaged portion sent by the terminal device, the new damaged portion comprising a damaged portion re-determined after the terminal device modifies the position region of the identified damaged portion based on a received interactive instruction; and, correspondingly, classifying the video images based on the detected damaged portion comprises classifying the video images based on the new damaged portion.
- The method for obtaining vehicle loss assessment images according to any one of claims 2 to 5, wherein selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions comprises: from the specified candidate image classification sets of the damaged portion, respectively selecting at least one video image as a loss assessment image of the damaged portion according to the clarity of the video images and the shooting angle of the damaged portion.
- The method for obtaining vehicle loss assessment images according to any one of claims 6 to 8, wherein, if at least two damaged portions are detected in a video image, it is determined whether the distance between the at least two damaged portions satisfies a set proximity condition; and, if so, the at least two damaged portions are tracked simultaneously, and corresponding loss assessment images are generated respectively.
- A method for obtaining vehicle loss assessment images, the method comprising: performing video capture of a damaged vehicle to obtain captured video data; sending the captured video data to a processing terminal; and receiving a position region, returned by the processing terminal, in which a damaged portion is tracked in real time, and displaying the tracked position region, the damaged portion being obtained by the processing terminal detecting and identifying the video images in the captured video data.
- The method for obtaining vehicle loss assessment images according to claim 11, further comprising: receiving and displaying a video-shooting prompt message sent by the processing terminal, the video-shooting prompt message being generated when the processing terminal detects that at least one of the close-up image set and the component image set of the damaged portion is empty, or that the video images in the close-up image set do not cover the entire region of the corresponding damaged portion.
- The method for obtaining vehicle loss assessment images according to claim 11 or 12, further comprising: after modifying the position region of the damaged portion based on a received interactive instruction, re-determining a new damaged portion; and sending the new damaged portion to the processing terminal, so that the processing terminal classifies the video images based on the new damaged portion.
- A method for obtaining vehicle loss assessment images, the method comprising: receiving captured video data of a damaged vehicle; detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- The method for obtaining vehicle loss assessment images according to claim 14, wherein the determined candidate image classification sets comprise: a close-up image set displaying the damaged portion, and a component image set displaying the vehicle component to which the damaged portion belongs.
- The method for obtaining vehicle loss assessment images according to claim 15, wherein the video images in the close-up image set are determined in at least one of the following ways: the ratio of the area of the region occupied by the damaged portion in its video image is greater than a first preset ratio; the ratio of the horizontal-coordinate span of the damaged portion to the length of its video image is greater than a second preset ratio, and/or the ratio of the vertical-coordinate span of the damaged portion to the height of its video image is greater than a third preset ratio; and, from video images of the same damaged portion, selecting the first K video images after sorting by the area of the damaged portion in descending order, or selecting the video images that fall within a fourth preset ratio after such sorting, where K ≥ 1.
- The method for obtaining vehicle loss assessment images according to claim 15, further comprising: when it is detected that at least one of the close-up image set and the component image set of the damaged portion is empty, or that the video images in the close-up image set do not cover the entire region of the corresponding damaged portion, generating a video-shooting prompt message; and displaying the video-shooting prompt message.
- The method for obtaining vehicle loss assessment images according to claim 14, wherein, after the damaged portion in the video images is identified, the method further comprises: tracking and displaying, in real time, the position region of the damaged portion in the captured video data; and, when the damaged portion re-enters a video image after leaving the video image, re-locating and tracking the position region of the damaged portion based on image feature data of the damaged portion.
- The method for obtaining vehicle loss assessment images according to claim 18, further comprising: modifying the position region of the identified damaged portion based on a received interactive instruction and re-determining a new damaged portion; and, correspondingly, classifying the video images based on the detected damaged portion comprises classifying the video images based on the new damaged portion.
- The method for obtaining vehicle loss assessment images according to any one of claims 14 to 17, wherein selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions comprises: from the specified candidate image classification sets of the damaged portion, respectively selecting at least one video image as a loss assessment image of the damaged portion according to the clarity of the video images and the shooting angle of the damaged portion.
- The method for obtaining vehicle loss assessment images according to claim 18 or 19, wherein, if at least two damaged portions are detected in a video image, it is determined whether the distance between the at least two damaged portions satisfies a set proximity condition; and, if so, the at least two damaged portions are tracked simultaneously, and corresponding loss assessment images are generated respectively.
- The method for obtaining vehicle loss assessment images according to claim 14, further comprising: transmitting the loss assessment images in real time to a designated server; or, transmitting the loss assessment images asynchronously to a designated server.
- An apparatus for obtaining vehicle loss assessment images, the apparatus comprising: a data receiving module, configured to receive captured video data of a damaged vehicle uploaded by a terminal device; a damaged portion identification module, configured to detect video images in the captured video data and identify a damaged portion in the video images; a classification module, configured to classify the video images based on the detected damaged portion and determine candidate image classification sets of the damaged portion; and a screening module, configured to select loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- An apparatus for obtaining vehicle loss assessment images, the apparatus comprising: a shooting module, configured to perform video capture of a damaged vehicle and obtain captured video data; a communication module, configured to send the captured video data to a processing terminal; and a tracking module, configured to receive a position region, returned by the processing terminal, in which a damaged portion is tracked in real time, and to display the tracked position region, the damaged portion being obtained by the processing terminal detecting and identifying the video images in the captured video data.
- An apparatus for obtaining vehicle loss assessment images, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements: receiving captured video data of a damaged vehicle; detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- The apparatus for obtaining vehicle loss assessment images according to claim 25, wherein the captured video data of the damaged vehicle comprises: data information uploaded by a terminal device after acquiring the captured video data; or, captured video data obtained by the vehicle loss assessment image acquisition apparatus performing video capture of the damaged vehicle.
- The apparatus for obtaining vehicle loss assessment images according to claim 26, wherein, when the processor executes the instructions, the determined candidate image classification sets comprise: a close-up image set displaying the damaged portion, and a component image set displaying the vehicle component to which the damaged portion belongs.
- The apparatus for obtaining vehicle loss assessment images according to claim 27, wherein, when the processor executes the instructions, the video images in the close-up image set are determined in at least one of the following ways: the ratio of the area of the region occupied by the damaged portion in its video image is greater than a first preset ratio; the ratio of the horizontal-coordinate span of the damaged portion to the length of its video image is greater than a second preset ratio, and/or the ratio of the vertical-coordinate span of the damaged portion to the height of its video image is greater than a third preset ratio; and, from video images of the same damaged portion, selecting the first K video images after sorting by the area of the damaged portion in descending order, or selecting the video images that fall within a fourth preset ratio after such sorting, where K ≥ 1.
- The apparatus for obtaining vehicle loss assessment images according to claim 28, wherein the processor, when executing the instructions, further implements: when it is detected that at least one of the close-up image set and the component image set of the damaged portion is empty, or that the video images in the close-up image set do not cover the entire region of the corresponding damaged portion, generating a video-shooting prompt message, the video-shooting prompt message being used for display on the terminal device.
- The apparatus for obtaining vehicle loss assessment images according to claim 26, wherein the processor, when executing the instructions, further implements: tracking, in real time, the position region of the damaged portion in the captured video data; and, when the damaged portion re-enters a video image after leaving the video image, re-locating and tracking the position region of the damaged portion based on image feature data of the damaged portion.
- The apparatus for obtaining vehicle loss assessment images according to claim 30, wherein, if the captured video data is data information uploaded by the terminal device, the processor, when executing the instructions, further implements: sending the tracked position region of the damaged portion to the terminal device, so that the terminal device displays the position region of the damaged portion in real time.
- The apparatus for obtaining vehicle loss assessment images according to claim 26, wherein the processor, when executing the instructions, further implements: receiving a new damaged portion re-determined after the position region of the identified damaged portion is modified; and, correspondingly, classifying the video images based on the detected damaged portion comprises classifying the video images based on the new damaged portion.
- The apparatus for obtaining vehicle loss assessment images according to claim 30, wherein, when the processor executes the instructions, selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions comprises: from the specified candidate image classification sets of the damaged portion, respectively selecting at least one video image as a loss assessment image of the damaged portion according to the clarity of the video images and the shooting angle of the damaged portion.
- The apparatus for obtaining vehicle loss assessment images according to any one of claims 30 to 32, wherein, when the processor executes the instructions, if at least two damaged portions are detected in a video image, it is determined whether the distance between the at least two damaged portions satisfies a set proximity condition; and, if so, the at least two damaged portions are tracked simultaneously, and corresponding loss assessment images are generated respectively.
- The apparatus for obtaining vehicle loss assessment images according to claim 26, wherein, if the captured video data of the damaged vehicle is obtained through video capture by the vehicle loss assessment image acquisition apparatus, the processor, when executing the instructions, further implements: transmitting the loss assessment images in real time to a designated processing terminal; or, transmitting the loss assessment images asynchronously to a designated processing terminal.
- A computer-readable storage medium storing computer instructions that, when executed, implement the following steps: receiving captured video data of a damaged vehicle; detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- A computer-readable storage medium storing computer instructions that, when executed, implement the following steps: performing video capture of a damaged vehicle to obtain captured video data; sending the captured video data to a processing terminal; and receiving a position region, returned by the processing terminal, in which a damaged portion is tracked in real time, and displaying the tracked position region, the damaged portion being obtained by the processing terminal detecting and identifying the video images in the captured video data.
- A server, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements: receiving captured video data of a damaged vehicle uploaded by a terminal device; detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- A terminal device, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements: acquiring captured video data of video capture of a damaged vehicle; detecting video images in the captured video data to identify a damaged portion in the video images; classifying the video images based on the detected damaged portion to determine candidate image classification sets of the damaged portion; and selecting loss assessment images of the vehicle from the candidate image classification sets according to preset screening conditions.
- The terminal device according to claim 39, wherein the processor, when executing the instructions, further implements: transmitting the loss assessment images in real time to a designated server via the data communication module; or, transmitting the loss assessment images asynchronously to a designated server.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019558552A JP6905081B2 (ja) | 2017-04-28 | 2018-04-27 | 車両損失査定画像を取得するための方法および装置、サーバ、ならびに端末デバイス |
EP18791520.2A EP3605386A4 (en) | 2017-04-28 | 2018-04-27 | METHOD AND APPARATUS FOR OBTAINING VEHICLE LOSS EVALUATION IMAGE, SERVER AND TERMINAL DEVICE |
KR1020197033366A KR20190139262A (ko) | 2017-04-28 | 2018-04-27 | 차량 손실 평가 이미지를 획득하기 위한 방법과 장치, 서버 및 단말기 디바이스 |
SG11201909740R SG11201909740RA (en) | 2017-04-28 | 2018-04-27 | Method and apparatus for obtaining vehicle loss assessment image, server and terminal device |
US16/655,001 US11151384B2 (en) | 2017-04-28 | 2019-10-16 | Method and apparatus for obtaining vehicle loss assessment image, server and terminal device |
PH12019502401A PH12019502401A1 (en) | 2017-04-28 | 2019-10-23 | Method and apparatus for obtaining vehicle loss assessment image, server and terminal device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710294010.4 | 2017-04-28 | ||
CN201710294010.4A CN107194323B (zh) | 2017-04-28 | 2017-04-28 | 车辆定损图像获取方法、装置、服务器和终端设备 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/655,001 Continuation US11151384B2 (en) | 2017-04-28 | 2019-10-16 | Method and apparatus for obtaining vehicle loss assessment image, server and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018196837A1 true WO2018196837A1 (zh) | 2018-11-01 |
Family
ID=59872897
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/084760 WO2018196837A1 (zh) | 2017-04-28 | 2018-04-27 | 车辆定损图像获取方法、装置、服务器和终端设备 |
Country Status (9)
Country | Link |
---|---|
US (1) | US11151384B2 (zh) |
EP (1) | EP3605386A4 (zh) |
JP (1) | JP6905081B2 (zh) |
KR (1) | KR20190139262A (zh) |
CN (2) | CN107194323B (zh) |
PH (1) | PH12019502401A1 (zh) |
SG (1) | SG11201909740RA (zh) |
TW (1) | TW201839666A (zh) |
WO (1) | WO2018196837A1 (zh) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110033386A (zh) * | 2019-03-07 | 2019-07-19 | 阿里巴巴集团控股有限公司 | 车辆事故的鉴定方法及装置、电子设备 |
CN110473418A (zh) * | 2019-07-25 | 2019-11-19 | 平安科技(深圳)有限公司 | 危险路段识别方法、装置、服务器及存储介质 |
CN110688513A (zh) * | 2019-08-15 | 2020-01-14 | 平安科技(深圳)有限公司 | 基于视频的农作物查勘方法、装置及计算机设备 |
US20200065632A1 (en) * | 2018-08-22 | 2020-02-27 | Alibaba Group Holding Limited | Image processing method and apparatus |
CN112465018A (zh) * | 2020-11-26 | 2021-03-09 | 深源恒际科技有限公司 | 一种基于深度学习的车辆视频定损系统的智能截图方法及系统 |
JP2022537857A (ja) * | 2018-12-31 | 2022-08-31 | アジャイルソーダ インコーポレイテッド | ディープラーニングに基づいた自動車部位別の破損程度の自動判定システムおよび方法 |
JP7356941B2 (ja) | 2020-03-26 | 2023-10-05 | 株式会社奥村組 | 管渠損傷特定装置、管渠損傷特定方法および管渠損傷特定プログラム |
Families Citing this family (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107194323B (zh) | 2017-04-28 | 2020-07-03 | 阿里巴巴集团控股有限公司 | 车辆定损图像获取方法、装置、服务器和终端设备 |
CN107610091A (zh) * | 2017-07-31 | 2018-01-19 | 阿里巴巴集团控股有限公司 | 车险图像处理方法、装置、服务器及系统 |
CN107766805A (zh) * | 2017-09-29 | 2018-03-06 | 阿里巴巴集团控股有限公司 | 提升车辆定损图像识别结果的方法、装置及服务器 |
CN109753985A (zh) * | 2017-11-07 | 2019-05-14 | 北京京东尚科信息技术有限公司 | 视频分类方法及装置 |
CN108090838B (zh) * | 2017-11-21 | 2020-09-29 | 阿里巴巴集团控股有限公司 | 识别车辆受损部件的方法、装置、服务器、客户端及系统 |
CN108038459A (zh) * | 2017-12-20 | 2018-05-15 | 深圳先进技术研究院 | 一种水下生物的检测识别方法、终端设备及存储介质 |
CN108647563A (zh) * | 2018-03-27 | 2018-10-12 | 阿里巴巴集团控股有限公司 | 一种车辆定损的方法、装置及设备 |
CN108921811B (zh) | 2018-04-03 | 2020-06-30 | 阿里巴巴集团控股有限公司 | 检测物品损伤的方法和装置、物品损伤检测器 |
CN113179368B (zh) * | 2018-05-08 | 2023-10-27 | 创新先进技术有限公司 | 一种车辆定损的数据处理方法、装置、处理设备及客户端 |
CN108665373B (zh) * | 2018-05-08 | 2020-09-18 | 阿里巴巴集团控股有限公司 | 一种车辆定损的交互处理方法、装置、处理设备及客户端 |
CN108682010A (zh) * | 2018-05-08 | 2018-10-19 | 阿里巴巴集团控股有限公司 | 车辆损伤识别的处理方法、处理设备、客户端及服务器 |
CN108647712A (zh) * | 2018-05-08 | 2018-10-12 | 阿里巴巴集团控股有限公司 | 车辆损伤识别的处理方法、处理设备、客户端及服务器 |
CN110634120B (zh) * | 2018-06-05 | 2022-06-03 | 杭州海康威视数字技术股份有限公司 | 一种车辆损伤判别方法及装置 |
CN110609877B (zh) * | 2018-06-14 | 2023-04-18 | 百度在线网络技术(北京)有限公司 | 一种图片采集的方法、装置、设备和计算机存储介质 |
CN111666832B (zh) * | 2018-07-27 | 2023-10-31 | 创新先进技术有限公司 | 一种检测方法及装置、一种计算设备及存储介质 |
CN109034264B (zh) * | 2018-08-15 | 2021-11-19 | 云南大学 | 交通事故严重性预测csp-cnn模型及其建模方法 |
CN108989684A (zh) * | 2018-08-23 | 2018-12-11 | 阿里巴巴集团控股有限公司 | 控制拍摄距离的方法和装置 |
CN110569695B (zh) * | 2018-08-31 | 2021-07-09 | 创新先进技术有限公司 | 基于定损图像判定模型的图像处理方法和装置 |
CN110569694A (zh) * | 2018-08-31 | 2019-12-13 | 阿里巴巴集团控股有限公司 | 车辆的部件检测方法、装置及设备 |
CN110570316A (zh) | 2018-08-31 | 2019-12-13 | 阿里巴巴集团控股有限公司 | 训练损伤识别模型的方法及装置 |
CN110569697A (zh) * | 2018-08-31 | 2019-12-13 | 阿里巴巴集团控股有限公司 | 车辆的部件检测方法、装置及设备 |
CN110567728B (zh) * | 2018-09-03 | 2021-08-20 | 创新先进技术有限公司 | 用户拍摄意图的识别方法、装置及设备 |
CN110569864A (zh) * | 2018-09-04 | 2019-12-13 | 阿里巴巴集团控股有限公司 | 基于gan网络的车损图像生成方法和装置 |
CN109359542A (zh) * | 2018-09-18 | 2019-02-19 | 平安科技(深圳)有限公司 | 基于神经网络的车辆损伤级别的确定方法及终端设备 |
CN110569700B (zh) * | 2018-09-26 | 2020-11-03 | 创新先进技术有限公司 | 优化损伤识别结果的方法及装置 |
CN109389169A (zh) * | 2018-10-08 | 2019-02-26 | 百度在线网络技术(北京)有限公司 | 用于处理图像的方法和装置 |
CN109615649A (zh) * | 2018-10-31 | 2019-04-12 | 阿里巴巴集团控股有限公司 | 一种图像标注方法、装置及系统 |
CN109447071A (zh) * | 2018-11-01 | 2019-03-08 | 博微太赫兹信息科技有限公司 | 一种基于fpga和深度学习的毫米波成像危险物品检测方法 |
CN110033608B (zh) * | 2018-12-03 | 2020-12-11 | 创新先进技术有限公司 | 车辆损伤检测的处理方法、装置、设备、服务器和系统 |
CN109657599B (zh) * | 2018-12-13 | 2023-08-01 | 深源恒际科技有限公司 | 距离自适应的车辆外观部件的图片识别方法 |
CN109784171A (zh) * | 2018-12-14 | 2019-05-21 | 平安科技(深圳)有限公司 | 车辆定损图像筛选方法、装置、可读存储介质及服务器 |
CN110569702B (zh) * | 2019-02-14 | 2021-05-14 | 创新先进技术有限公司 | 视频流的处理方法和装置 |
CN110287768A (zh) * | 2019-05-06 | 2019-09-27 | 浙江君嘉智享网络科技有限公司 | 图像智能识别车辆定损方法 |
CN110363238A (zh) * | 2019-07-03 | 2019-10-22 | 中科软科技股份有限公司 | 智能车辆定损方法、系统、电子设备及存储介质 |
CN110969183B (zh) * | 2019-09-20 | 2023-11-21 | 北京方位捷讯科技有限公司 | 一种根据图像数据确定目标对象受损程度的方法及系统 |
CN113038018B (zh) * | 2019-10-30 | 2022-06-28 | 支付宝(杭州)信息技术有限公司 | 辅助用户拍摄车辆视频的方法及装置 |
WO2021136947A1 (en) | 2020-01-03 | 2021-07-08 | Tractable Ltd | Vehicle damage state determination method |
CN111612104B (zh) * | 2020-06-30 | 2021-04-13 | 爱保科技有限公司 | 车辆定损图像获取方法、装置、介质和电子设备 |
CN112541096B (zh) * | 2020-07-27 | 2023-01-24 | 中咨数据有限公司 | 一种用于智慧城市的视频监控方法 |
WO2022047736A1 (zh) * | 2020-09-04 | 2022-03-10 | 江苏前沿交通研究院有限公司 | 一种基于卷积神经网络的损伤检测方法 |
CN112492105B (zh) * | 2020-11-26 | 2022-04-15 | 深源恒际科技有限公司 | 一种基于视频的车辆外观部件自助定损采集方法及系统 |
CN112712498A (zh) * | 2020-12-25 | 2021-04-27 | 北京百度网讯科技有限公司 | 移动终端执行的车辆定损方法、装置、移动终端、介质 |
CN113033372B (zh) * | 2021-03-19 | 2023-08-18 | 北京百度网讯科技有限公司 | 车辆定损方法、装置、电子设备及计算机可读存储介质 |
EP4309124A1 (en) * | 2021-04-21 | 2024-01-24 | Siemens Mobility GmbH | Automated selection and semantic connection of images |
CN113033517B (zh) * | 2021-05-25 | 2021-08-10 | 爱保科技有限公司 | 车辆定损图像获取方法、装置和存储介质 |
CN113361426A (zh) * | 2021-06-11 | 2021-09-07 | 爱保科技有限公司 | 车辆定损图像获取方法、介质、装置和电子设备 |
CN113361424A (zh) * | 2021-06-11 | 2021-09-07 | 爱保科技有限公司 | 一种车辆智能定损图像获取方法、装置、介质和电子设备 |
US20230125477A1 (en) * | 2021-10-26 | 2023-04-27 | Nvidia Corporation | Defect detection using one or more neural networks |
WO2023083182A1 (en) * | 2021-11-09 | 2023-05-19 | Alpha Ai Technology Limited | A system for assessing a damage condition of a vehicle and a platform for facilitating repairing or maintenance services of a vehicle |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105719188A (zh) * | 2016-01-22 | 2016-06-29 | 平安科技(深圳)有限公司 | 基于多张图片一致性实现保险理赔反欺诈的方法及服务器 |
CN106600421A (zh) * | 2016-11-21 | 2017-04-26 | 中国平安财产保险股份有限公司 | 一种基于图片识别的车险智能定损方法及系统 |
CN107194323A (zh) * | 2017-04-28 | 2017-09-22 | 阿里巴巴集团控股有限公司 | 车辆定损图像获取方法、装置、服务器和终端设备 |
Family Cites Families (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0981739A (ja) * | 1995-09-12 | 1997-03-28 | Toshiba Corp | 損害額算出システム及び損傷位置検出装置 |
JP2001188906A (ja) * | 1999-12-28 | 2001-07-10 | Hitachi Ltd | 画像自動分類方法及び画像自動分類装置 |
US7546219B2 (en) * | 2005-08-31 | 2009-06-09 | The Boeing Company | Automated damage assessment, report, and disposition |
US8379914B2 (en) | 2008-01-18 | 2013-02-19 | Mitek Systems, Inc. | Systems and methods for mobile image capture and remittance processing |
US20130297353A1 (en) | 2008-01-18 | 2013-11-07 | Mitek Systems | Systems and methods for filing insurance claims using mobile imaging |
CN101739611A (zh) * | 2009-12-08 | 2010-06-16 | 上海华平信息技术股份有限公司 | 一种高清远程协同车辆定损系统及方法 |
US20130262156A1 (en) | 2010-11-18 | 2013-10-03 | Davidshield L.I.A. (2000) Ltd. | Automated reimbursement interactions |
WO2013093932A2 (en) * | 2011-09-29 | 2013-06-27 | Tata Consultancy Services Limited | Damage assessment of an object |
US8510196B1 (en) * | 2012-08-16 | 2013-08-13 | Allstate Insurance Company | Feedback loop in mobile damage assessment and claims processing |
US8712893B1 (en) * | 2012-08-16 | 2014-04-29 | Allstate Insurance Company | Enhanced claims damage estimation using aggregate display |
US20140114692A1 (en) * | 2012-10-23 | 2014-04-24 | InnovaPad, LP | System for Integrating First Responder and Insurance Information |
FR3007172B1 (fr) * | 2013-06-12 | 2020-12-18 | Renault Sas | Procede et systeme d'identification d'un degat cause a un vehicule |
US10748216B2 (en) | 2013-10-15 | 2020-08-18 | Audatex North America, Inc. | Mobile system for generating a damaged vehicle insurance estimate |
KR20150112535A (ko) | 2014-03-28 | 2015-10-07 | 한국전자통신연구원 | 비디오 대표 이미지 관리 장치 및 방법 |
US10423982B2 (en) | 2014-05-19 | 2019-09-24 | Allstate Insurance Company | Content output systems using vehicle-based data |
CN104268783B (zh) * | 2014-05-30 | 2018-10-26 | 翱特信息系统(中国)有限公司 | 车辆定损估价的方法、装置和终端设备 |
KR101713387B1 (ko) | 2014-12-29 | 2017-03-22 | 주식회사 일도엔지니어링 | 사진 자동 분류 및 저장 시스템 및 그 방법 |
KR101762437B1 (ko) | 2015-05-14 | 2017-07-28 | (주)옥천당 | 관능성이 개선된 커피 원두의 제조 방법 |
KR20160134401A (ko) * | 2015-05-15 | 2016-11-23 | (주)플래닛텍 | 자동차의 수리견적 자동산출시스템 및 그 방법 |
US10529028B1 (en) | 2015-06-26 | 2020-01-07 | State Farm Mutual Automobile Insurance Company | Systems and methods for enhanced situation visualization |
CN106407984B (zh) * | 2015-07-31 | 2020-09-11 | 腾讯科技(深圳)有限公司 | 目标对象识别方法及装置 |
GB201517462D0 (en) * | 2015-10-02 | 2015-11-18 | Tractable Ltd | Semi-automatic labelling of datasets |
CN105678622A (zh) * | 2016-01-07 | 2016-06-15 | 平安科技(深圳)有限公司 | 车险理赔照片的分析方法及系统 |
US10692050B2 (en) * | 2016-04-06 | 2020-06-23 | American International Group, Inc. | Automatic assessment of damage and repair costs in vehicles |
CN105956667B (zh) * | 2016-04-14 | 2018-09-25 | 平安科技(深圳)有限公司 | 车险定损理赔审核方法及系统 |
US9922471B2 (en) | 2016-05-17 | 2018-03-20 | International Business Machines Corporation | Vehicle accident reporting system |
CN106021548A (zh) * | 2016-05-27 | 2016-10-12 | 大连楼兰科技股份有限公司 | Remote damage assessment method and system based on distributed artificial-intelligence image recognition
CN106127747B (zh) * | 2016-06-17 | 2018-10-16 | 史方 | Deep-learning-based method and apparatus for classifying vehicle surface damage
CN106251421A (zh) * | 2016-07-25 | 2016-12-21 | 深圳市永兴元科技有限公司 | Mobile-terminal-based vehicle damage assessment method, apparatus and system
CN106296118A (zh) * | 2016-08-03 | 2017-01-04 | 深圳市永兴元科技有限公司 | Vehicle damage assessment method and apparatus based on image recognition
US10902525B2 (en) | 2016-09-21 | 2021-01-26 | Allstate Insurance Company | Enhanced image capture and analysis of damaged tangible objects |
CN106600422A (zh) * | 2016-11-24 | 2017-04-26 | 中国平安财产保险股份有限公司 | Intelligent vehicle insurance damage assessment method and system
US10424132B2 (en) * | 2017-02-10 | 2019-09-24 | Hitachi, Ltd. | Vehicle component failure prevention |
CN107358596B (zh) | 2017-04-11 | 2020-09-18 | 阿里巴巴集团控股有限公司 | Image-based vehicle damage assessment method, apparatus, electronic device and system
US10470023B2 (en) | 2018-01-16 | 2019-11-05 | Florey Insurance Agency, Inc. | Emergency claim response unit |
US10997413B2 (en) * | 2018-03-23 | 2021-05-04 | NthGen Software Inc. | Method and system for obtaining vehicle target views from a video stream |
2017
- 2017-04-28 CN CN201710294010.4A patent/CN107194323B/zh active Active
- 2017-04-28 CN CN202010682172.7A patent/CN111914692B/zh active Active

2018
- 2018-03-14 TW TW107108570A patent/TW201839666A/zh unknown
- 2018-04-27 KR KR1020197033366A patent/KR20190139262A/ko not_active IP Right Cessation
- 2018-04-27 JP JP2019558552A patent/JP6905081B2/ja active Active
- 2018-04-27 SG SG11201909740R patent/SG11201909740RA/en unknown
- 2018-04-27 WO PCT/CN2018/084760 patent/WO2018196837A1/zh unknown
- 2018-04-27 EP EP18791520.2A patent/EP3605386A4/en active Pending

2019
- 2019-10-16 US US16/655,001 patent/US11151384B2/en active Active
- 2019-10-23 PH PH12019502401A patent/PH12019502401A1/en unknown
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105719188A (zh) * | 2016-01-22 | 2016-06-29 | 平安科技(深圳)有限公司 | Method and server for insurance-claim anti-fraud based on consistency across multiple pictures |
CN106600421A (zh) * | 2016-11-21 | 2017-04-26 | 中国平安财产保险股份有限公司 | Intelligent vehicle insurance damage assessment method and system based on picture recognition |
CN107194323A (zh) * | 2017-04-28 | 2017-09-22 | 阿里巴巴集团控股有限公司 | Vehicle damage assessment image acquisition method, apparatus, server and terminal device |
Non-Patent Citations (1)
Title |
---|
See also references of EP3605386A4 * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200065632A1 (en) * | 2018-08-22 | 2020-02-27 | Alibaba Group Holding Limited | Image processing method and apparatus |
US10984293B2 (en) | 2018-08-22 | 2021-04-20 | Advanced New Technologies Co., Ltd. | Image processing method and apparatus |
WO2020041399A1 (en) * | 2018-08-22 | 2020-02-27 | Alibaba Group Holding Limited | Image processing method and apparatus |
JP7277997B2 (ja) | 2018-12-31 | 2023-05-19 | アジャイルソーダ インコーポレイテッド | System and method for automatically determining the degree of damage to each vehicle part based on deep learning
JP2022537857A (ja) | 2018-12-31 | 2022-08-31 | アジャイルソーダ インコーポレイテッド | System and method for automatically determining the degree of damage to each vehicle part based on deep learning
CN110033386B (zh) * | 2019-03-07 | 2020-10-02 | 阿里巴巴集团控股有限公司 | Vehicle accident identification method and apparatus, and electronic device
CN110033386A (zh) * | 2019-03-07 | 2019-07-19 | 阿里巴巴集团控股有限公司 | Vehicle accident identification method and apparatus, and electronic device
CN110473418A (zh) * | 2019-07-25 | 2019-11-19 | 平安科技(深圳)有限公司 | Dangerous road section identification method, apparatus, server and storage medium
CN110688513A (zh) * | 2019-08-15 | 2020-01-14 | 平安科技(深圳)有限公司 | Video-based crop survey method, apparatus and computer device
CN110688513B (zh) * | 2019-08-15 | 2023-08-18 | 平安科技(深圳)有限公司 | Video-based crop survey method, apparatus and computer device
JP7356941B2 (ja) | 2020-03-26 | 2023-10-05 | 株式会社奥村組 | Pipe conduit damage identification apparatus, method and program
CN112465018A (zh) * | 2020-11-26 | 2021-03-09 | 深源恒际科技有限公司 | Intelligent screenshot method and system for a deep-learning-based vehicle video damage assessment system
CN112465018B (zh) * | 2020-11-26 | 2024-02-02 | 深源恒际科技有限公司 | Intelligent screenshot method and system for a deep-learning-based vehicle video damage assessment system
Also Published As
Publication number | Publication date |
---|---|
US20200050867A1 (en) | 2020-02-13 |
CN107194323B (zh) | 2020-07-03 |
SG11201909740RA (en) | 2019-11-28 |
JP2020518078A (ja) | 2020-06-18 |
CN111914692B (zh) | 2023-07-14 |
CN107194323A (zh) | 2017-09-22 |
JP6905081B2 (ja) | 2021-07-21 |
US11151384B2 (en) | 2021-10-19 |
KR20190139262A (ko) | 2019-12-17 |
EP3605386A1 (en) | 2020-02-05 |
CN111914692A (zh) | 2020-11-10 |
PH12019502401A1 (en) | 2020-12-07 |
TW201839666A (zh) | 2018-11-01 |
EP3605386A4 (en) | 2020-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018196837A1 (zh) | Vehicle damage assessment image acquisition method, apparatus, server and terminal device | |
WO2018196815A1 (zh) | Vehicle damage assessment image acquisition method, apparatus, server and terminal device | |
EP3457683B1 (en) | Dynamic generation of image of a scene based on removal of undesired object present in the scene | |
US10440276B2 (en) | Generating image previews based on capture information | |
WO2019214313A1 (zh) | Interactive processing method, apparatus, processing device and client for vehicle damage assessment | |
WO2020073310A1 (en) | Method and apparatus for context-embedding and region-based object detection | |
CN108875456B (zh) | Object detection method, object detection apparatus and computer-readable storage medium | |
WO2020001219A1 (zh) | Image processing method and apparatus, storage medium, and electronic device | |
Wang et al. | Mask-RCNN based people detection using a top-view fisheye camera | |
JP2013206458A (ja) | Object classification based on appearance and context in images | |
CN114267041B (zh) | Method and apparatus for recognizing objects in a scene | |
CN108875488B (zh) | Object tracking method, object tracking apparatus and computer-readable storage medium | |
CN111523402B (zh) | Video processing method, mobile terminal and readable storage medium | |
US20230098829A1 (en) | Image Processing System for Extending a Range for Image Analytics | |
CN112965602A (zh) | Gesture-based human-computer interaction method and device | |
US20230098110A1 (en) | System and method to improve object detection accuracy by focus bracketing | |
US20240037985A1 (en) | Cascaded detection of facial attributes | |
CN115543161B (zh) | Image matting method and apparatus for an all-in-one interactive whiteboard | |
Kerdvibulvech | Hybrid model of human hand motion for cybernetics application | |
TW202411949A (zh) | Cascaded detection of facial attributes | |
KR20210067710A (ko) | Real-time object detection method and apparatus | |
JP2016207106A (ja) | Method and apparatus for reducing false detections in object detection | |
CN113723152A (zh) | Image processing method, apparatus and electronic device | |
KR20210046550A (ko) | Apparatus and method for classifying movement trajectories using a deep neural network | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18791520; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2019558552; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2018791520; Country of ref document: EP; Effective date: 20191025 |
| ENP | Entry into the national phase | Ref document number: 20197033366; Country of ref document: KR; Kind code of ref document: A |