WO2019100839A1 - Method, apparatus, server, client and system for identifying damaged vehicle components (识别车辆受损部件的方法、装置、服务器、客户端及系统) - Google Patents

Method, apparatus, server, client and system for identifying damaged vehicle components (识别车辆受损部件的方法、装置、服务器、客户端及系统)

Info

Publication number
WO2019100839A1
Related identifiers: PCT/CN2018/107217, CN2018107217W
Authority
WO
WIPO (PCT)
Prior art keywords
image
component
feature
vehicle
damaged
Application number
PCT/CN2018/107217
Other languages
English (en)
French (fr)
Inventor
郭之友
方勇
Original Assignee
Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Application filed by Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Priority to EP18881645.8A (EP3716195A4)
Priority to SG11202004704QA
Publication of WO2019100839A1
Priority to US16/879,367 (US11341746B2)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/235Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Definitions

  • The embodiments of the present specification belong to the technical field of computer data processing, and in particular relate to a method, apparatus, server, client, and system for identifying a damaged component of a vehicle.
  • Motor vehicle insurance, that is, automobile insurance (or car insurance), refers to a type of commercial insurance covering liability for personal injury or property damage caused by natural disasters or accidents involving motor vehicles. With the development of the economy, the number of motor vehicles keeps increasing; at present, auto insurance has become one of the largest lines in China's property insurance business.
  • When a traffic accident occurs involving an insured vehicle, the insurance company usually first conducts an on-site inspection and damage assessment.
  • Vehicle damage assessment involves many technical and financial aspects, such as follow-up maintenance and evaluation, and is an important step in the overall auto insurance service.
  • Due to owners' lack of auto insurance knowledge or the limitations of their shooting technique, insurance companies often have difficulty locating the damaged parts in vehicle damage photos taken with the owner's mobile phone, or receive a large number of redundant, invalid photos, which reduces the efficiency and accuracy of damage assessment.
  • The embodiments of the present specification aim to provide a method, apparatus, server, client, and system for identifying a damaged component of a vehicle, which assist in locating the damaged position by identifying the relative position between vehicle feature components in the image and the marking delineated by the user, improving the accuracy and processing efficiency of identifying damaged components during damage assessment and greatly improving the user experience.
  • The method, apparatus, server, client, and system for identifying a damaged component of a vehicle provided by the embodiments of the present specification are implemented as follows:
  • A method of identifying a damaged component of a vehicle, comprising:
  • the client acquires a captured image of the vehicle;
  • the client determines the damaged area based on the damage-location marking behavior in the captured image to form a marked image;
  • the client sends the marked image to the server;
  • the server identifies a feature component in the marked image, and determines the relative positional relationship between the feature component and the damaged area based on the image positions of the feature component and the damaged area;
  • the server matches the relative positional relationship in the feature correspondence library, and acquires the corresponding relationship component;
  • the server determines the damaged component in the captured image based on the relationship component.
  • A method of identifying a damaged component of a vehicle, comprising:
  • the marked image includes a damaged area determined based on the damage-location marking behavior in the captured image;
  • a damaged component in the captured image is determined based on the relationship component.
  • A method of identifying a damaged component of a vehicle, comprising:
  • the marked image is sent to a server so that the server identifies the damaged component based on the relative positional relationship between the damaged area and the feature component in the marked image.
  • A method of identifying a damaged component of a vehicle, comprising:
  • the client acquires a captured image of the vehicle and transmits the captured image to a server;
  • the server identifies a first damage location in the captured image, and marks the first damage location in the captured image to generate a marked image;
  • the server sends the marked image to the client;
  • the client displays the mark information of the first damage location in the marked image;
  • the client confirms the vehicle damage location based on the received interaction operation, the vehicle damage location including the first damage location;
  • the client sends the auxiliary damage image obtained after confirming the vehicle damage location to the server;
  • after receiving the auxiliary damage image, the server identifies at least one feature component included in the auxiliary damage image;
  • the server determines the relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image;
  • the server matches the relative positional relationship in the feature correspondence library, and acquires the corresponding relationship component;
  • the server determines the damaged component in the captured image based on the relationship component.
  • A method of identifying a damaged component of a vehicle, comprising:
  • receiving the auxiliary damage image returned by the client, and identifying at least one feature component included in the auxiliary damage image, the auxiliary damage image including the image information formed after the vehicle damage location in the marked image is confirmed based on an interaction operation;
  • matching the relative positional relationship in the feature correspondence library, acquiring the corresponding relationship component, and determining the damaged component in the captured image based on the relationship component.
  • A method of identifying a damaged component of a vehicle, comprising:
  • the marked image including the image information generated after the identified first damage location is marked in the captured image;
  • the auxiliary damage image obtained after confirming the vehicle damage location is sent to the server.
  • A method of identifying a damaged component of a vehicle, comprising:
  • a damaged component in the captured image is determined based on the relationship component.
  • A method of identifying a damaged component of a vehicle, comprising:
  • matching the relative positional relationship in the feature correspondence library, acquiring the corresponding relationship component, and determining the damaged component in the captured image based on the relationship component.
  • A device for identifying a damaged component of a vehicle, comprising:
  • a receiving module configured to receive a marked image uploaded by the client, the marked image including a damaged area determined based on the damage-location marking behavior in the captured image;
  • a positional relationship determining module configured to identify a feature component in the marked image, and determine the relative positional relationship between the feature component and the damaged area based on the image positions of the feature component and the damaged area;
  • a matching module configured to match the relative positional relationship in the feature correspondence library, and acquire the corresponding relationship component;
  • a component identification module configured to determine the damaged component in the captured image based on the relationship component.
  • A device for identifying a damaged component of a vehicle, comprising:
  • an image acquisition module configured to acquire a captured image of the vehicle;
  • a position marking module configured to determine a damaged area based on the damage-location marking behavior in the captured image to form a marked image;
  • an image sending module configured to send the marked image to a server, so that the server identifies the damaged component based on the relative positional relationship between the damaged area and the feature component in the marked image.
  • A device for identifying a damaged component of a vehicle, comprising:
  • an image marking module configured to acquire a captured image uploaded by the client, identify a first damage location in the captured image, and mark the first damage location in the captured image to generate a marked image;
  • a mark sending module configured to send the marked image to the client;
  • an auxiliary interaction module configured to receive the auxiliary damage image returned by the client, and identify at least one feature component included in the auxiliary damage image, the auxiliary damage image including the image information formed after the vehicle damage location in the marked image is confirmed based on an interaction operation;
  • a position determining module configured to determine the relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image;
  • a component identification module configured to match the relative positional relationship in the feature correspondence library, acquire the corresponding relationship component, and determine the damaged component in the captured image based on the relationship component.
  • A device for identifying a damaged component of a vehicle, comprising:
  • a first image sending module configured to acquire a captured image of the vehicle, and send the captured image to a server;
  • a mark receiving module configured to receive a marked image returned by the server, the marked image including the image information generated after the identified first damage location is marked in the captured image;
  • a mark display module configured to display the mark information of the first damage location in the marked image;
  • a damage location confirmation module configured to confirm the vehicle damage location based on the received interaction operation, the vehicle damage location including the first damage location;
  • a second image sending module configured to send the auxiliary damage image obtained after the vehicle damage location is confirmed to the server.
  • a server comprising a processor and a memory for storing processor-executable instructions, the processor implementing the instructions to:
  • the marked image includes a damaged area determined based on the damage-location marking behavior in the captured image;
  • a damaged component in the captured image is determined based on the relationship component.
  • a client comprising a processor and a memory for storing processor executable instructions, the processor implementing the instructions to:
  • the marked image is sent to a server so that the server identifies the damaged component based on the relative positional relationship between the damaged area and the feature component in the marked image.
  • a server comprising a processor and a memory for storing processor-executable instructions, the processor implementing the instructions to:
  • receiving the auxiliary damage image returned by the client, and identifying at least one feature component included in the auxiliary damage image, the auxiliary damage image including the image information formed after the vehicle damage location in the marked image is confirmed based on an interaction operation;
  • matching the relative positional relationship in the feature correspondence library, acquiring the corresponding relationship component, and determining the damaged component in the captured image based on the relationship component.
  • a client comprising a processor and a memory for storing processor executable instructions, the processor implementing the instructions to:
  • the marked image including the image information generated after the identified first damage location is marked in the captured image;
  • the auxiliary damage image obtained after confirming the vehicle damage location is sent to the server.
  • An electronic device includes a display screen, a processor, and a memory storing processor-executable instructions that, when executed by the processor, are implemented:
  • a damaged component in the captured image is determined based on the relationship component.
  • An electronic device includes a display screen, a processor, and a memory storing processor-executable instructions that, when executed by the processor, are implemented:
  • the mark information of the first damage location in the marked image is displayed on the display screen;
  • the relative positional relationship is matched in the feature correspondence library, the corresponding relationship component is acquired, and the damaged component in the captured image is determined based on the relationship component.
  • A system for identifying a damaged component of a vehicle, comprising a first client and a first server, wherein the first client implements the processing method of any one of the client embodiments in the application scenario where the client manually marks the damage location and the server performs the identification processing, and the first server implements the processing method of any one of the server embodiments in that application scenario.
  • A system for identifying a damaged component of a vehicle, comprising a second client and a second server, wherein the second client implements the processing method of any one of the client embodiments in the application scenario where the client captures an image and the server performs initial recognition and returns it to the client for confirmation, and the second server implements the processing method of any one of the server embodiments in that application scenario.
  • The method, apparatus, server, client, and system for identifying a damaged component of a vehicle may pre-establish a feature library containing a plurality of identifiable vehicle components, and a feature correspondence library describing the relative positional relationships between those vehicle components.
  • The user can manually delineate the damaged location on the client.
  • The identifiable vehicle feature components in the image are then recognized, and their relative positions with respect to the user-delineated marking are determined based on those identifiable feature components.
  • The relative position is matched in the feature correspondence library to determine the damaged component. In this way, a simple manual operation by the user on site assists in locating the damaged position and helps the insurance company locate the damaged vehicle component, improving the accuracy and processing efficiency of identifying damaged components during damage assessment and greatly enhancing the user experience.
  • FIG. 2 is a schematic diagram of a user manually marking a damage location on a client on site in an implementation scenario of the present specification;
  • FIG. 3 is a schematic diagram of a process for determining a relative positional relationship between a feature component and a damaged area in an implementation scenario of the present specification
  • FIG. 4 is a schematic flow chart of a method of another embodiment of the method provided by the present specification.
  • FIG. 5 is a schematic diagram of a scenario in which a user on site adjusts the first damage location identified by the server, according to an embodiment of the present specification;
  • FIG. 6 is a schematic flow chart of a method for a server to identify a damaged component of a vehicle provided by the present specification;
  • FIG. 7 is a schematic flow chart of a method for identifying a damaged component of a vehicle for a client provided by the present specification
  • FIG. 8 is a schematic flow chart of another method for identifying a damaged component of a vehicle for a server provided by the present specification
  • FIG. 9 is a schematic flow chart of another method for identifying a damaged component of a vehicle for a client provided by the present specification.
  • FIG. 10 is a schematic diagram of a process flow of another embodiment of the method provided by the present specification.
  • FIG. 11 is a schematic flowchart of a process of another embodiment of the method provided by the present specification.
  • FIG. 12 is a block diagram showing a hardware structure of a server for identifying a damaged component of a vehicle according to an embodiment of the present invention
  • FIG. 13 is a block diagram showing the structure of an apparatus for identifying a damaged component of a vehicle provided by the present specification.
  • FIG. 14 is a block diagram showing another embodiment of an apparatus for identifying a damaged component of a vehicle;
  • FIG. 15 is a block diagram showing the structure of an apparatus for identifying a damaged component of a vehicle provided by the present specification;
  • FIG. 16 is a block diagram showing the structure of an apparatus for identifying a damaged component of a vehicle provided by the present specification;
  • FIG. 17 is a block diagram showing the structure of an apparatus for identifying a damaged component of a vehicle provided by the present specification;
  • FIG. 18 is a schematic structural diagram of an embodiment of a damage location confirmation module provided by the present specification.
  • FIG. 19 is a schematic structural diagram of an embodiment of an electronic device provided by the present specification.
  • The client may include a terminal device with a shooting function (at least a photographing function) used by personnel at the vehicle damage scene (who may be the owner of the accident vehicle or insurance company personnel), such as a smart phone, a tablet computer, a smart wearable device, or dedicated shooting equipment.
  • the client may have a communication module, and may communicate with a remote server to implement data transmission with the server.
  • the server may include a server on the insurance company side, and other implementation scenarios may also include a server of the intermediate platform, such as a server of a third-party auto insurance server platform that has a communication link with the insurance company server.
  • the server may include a single computer device, or may include a server cluster composed of a plurality of servers, or a server structure of a distributed system.
  • a feature library and a feature correspondence library may be established in advance.
  • The feature library may be built from a plurality of feature components of the selected vehicle, such as the left/right headlights, license plate, door handles, hubs, rearview mirrors, left/right taillights, and the like.
  • The feature components in the feature library may include single accessories that make up the vehicle, and may also include component kits combining multiple accessories, such as a front door assembly (which may include the door and the door handle).
  • In some embodiments, the feature types in the feature library may allow separate accessories and kits to exist at the same time; for example, the fender may be a feature in the feature library, and the fender assembly may also be a feature in the feature library. During subsequent recognition processing, one or more of these feature components may be captured from the image taken by the user as references for judging the relative position of the marked damage location on the vehicle.
  • a feature correspondence library may be established, and the feature correspondence library may include position relationship data between vehicle components established according to a spatial position between the vehicle components.
  • The feature correspondence library may be established based on the feature library described above; specifically, the feature components included in the feature library may be used as references to establish the correspondences between them.
  • The correspondence may include a relationship between two vehicle components, such as the relationship component between vehicle component A and vehicle component B being P1; it may also include a correspondence among three or more vehicle components, such as the relationship component in the middle of vehicle components A, B, and C being P2; or a correspondence of one vehicle component relative to multiple others, such as the relationship component at 80% of the way from vehicle component A to vehicle component E and 40% of the way from vehicle component A to vehicle component F being P3.
  • The specific positional relationship data in the feature correspondence library may include multiple types of correspondence, such as the relationship component for the region between vehicle components, the relationship component for a region in a specified orientation from a vehicle component, and the relationship component for a region at a specified proportional range from a vehicle component.
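As an illustration, the pairwise, multi-component, and proportional correspondences described above could be held as simple records and queried by reference components and relation type. This is only a sketch: the component names, relation labels, and targets P1/P2/P3 echo the text's examples, and the data format is an assumption, not the patent's actual schema.

```python
# Hypothetical in-memory feature correspondence library. Each entry maps a
# spatial relation between reference components to the "relationship
# component" occupying that region. All names/labels are illustrative.
FEATURE_CORRESPONDENCE = [
    # region between vehicle component A and vehicle component B -> P1
    {"refs": ("component_A", "component_B"), "relation": "between", "target": "P1"},
    # middle of vehicle components A, B and C -> P2
    {"refs": ("component_A", "component_B", "component_C"), "relation": "middle", "target": "P2"},
    # 80% of the way from A toward E, 40% of the way from A toward F -> P3
    {"refs": ("component_A", "component_E", "component_F"),
     "relation": "ratio", "ratios": (0.8, 0.4), "target": "P3"},
]

def lookup(refs, relation):
    """Return the relationship component for the given references and relation."""
    for entry in FEATURE_CORRESPONDENCE:
        if entry["refs"] == tuple(refs) and entry["relation"] == relation:
            return entry["target"]
    return None
```

A real library would likely use normalized component identifiers and support range or tolerance queries rather than exact matches.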
  • one component may have a different correspondence with respect to different reference components.
  • For example, the feature library may include the two front headlights, the front/rear door handles, the hubs, and the like.
  • The positional relationship data for these feature components established in the feature correspondence library may then include the following types:
  • the area 20%-60% of the way out from the "hub" is positioned as the "front fender"; the area within 0-40% of the "front door handle" is positioned as the "front door";
  • the area between 20% and 80% of the way between the two "headlights" is positioned as the "front air-intake grille"; and the like.
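The fractional-range rules above can be sketched as data plus a small matcher. The reference names, ranges, and component labels are taken from the examples in the text, while the rule format itself is an assumption.

```python
# Illustrative rules: each maps a fractional distance range from a reference
# feature component to the vehicle component occupying that region.
RULES = [
    {"ref": "hub", "range": (0.20, 0.60), "component": "front fender"},
    {"ref": "front door handle", "range": (0.00, 0.40), "component": "front door"},
]

def match_region(ref, fraction):
    """Return the component whose fractional range from `ref` covers `fraction`."""
    for rule in RULES:
        lo, hi = rule["range"]
        if rule["ref"] == ref and lo <= fraction <= hi:
            return rule["component"]
    return None
```

For instance, a marked area 35% of the way out from the hub would match the "front fender" rule.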
  • The feature components and the positional relationship data between components can be stored in the corresponding feature library and feature correspondence library in a suitable data format.
  • FIG. 1 is a schematic flow chart of an embodiment of a method for identifying a damaged component of a vehicle provided by the present specification.
  • Although the present specification provides the method operation steps or device structures shown in the following embodiments or figures, the method or device may include more steps or module units, or fewer steps or module units after partial merging, based on conventional practice or without inventive labor.
  • the execution order of the steps or the module structure of the device is not limited to the execution order or the module structure shown in the embodiment or the drawings.
  • When the device, server, or terminal product applying the method or module structure executes it, the steps may be executed sequentially or in parallel according to the method or module structure shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded processing environment, or even a distributed processing or server-cluster implementation environment).
  • This embodiment is illustrated with the application scenario in which the user uses a mobile phone to capture an image at the vehicle damage scene and sends the captured image to the insurance company for vehicle damage assessment.
  • In this scenario, the client may be the user's smart phone.
  • The user can use a smart phone running the corresponding damage-assessment application to photograph the vehicle damage, manually circle the damaged location area in the image during shooting, and then send the captured image to the auto insurance company.
  • After the server on the auto insurance company's side reads the captured image, it can recognize the feature components in the captured image and the damaged component area circled by the user.
  • The server may then acquire the relationship component corresponding to the relative positional relationship between the feature components and the area circled by the user, thereby identifying the damaged component.
  • In this way, the image uploaded by the user to the server is no longer merely the image information of the vehicle damage scene itself, but is also accompanied by the information of the location area of the damaged component manually marked by the user, so that the on-site user assists in identifying the damaged component and rapid damage assessment is achieved.
  • This scenario does not constitute a limitation on other expandable technical solutions based on the present specification.
  • For example, the implementations provided in this specification can also be applied to a scenario in which a third-party service platform interacts with the user to carry out on-site vehicle damage assessment, or to dedicated on-site equipment integrating the feature library and the feature correspondence library, which can identify the damaged component directly at the vehicle damage scene or even complete the damage assessment there.
  • the method may include:
  • S2: the client determines the damaged area based on the damage-location marking behavior in the captured image to form a marked image;
  • S6: the server identifies the feature component in the marked image, and determines the relative positional relationship between the feature component and the damaged area;
  • the server determines the damaged component in the captured image based on the relationship component.
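The server-side portion of the steps above (detect feature components, derive the marked area's relative position, match in the correspondence library) might look like the following minimal sketch. The detector, damage locator, relative-position key, and library format are all illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of the server-side flow. Detection and damage location
# are injected as callables so the example stays self-contained.
def identify_damaged_component(marked_image, detect_features, locate_damage, correspondence):
    """Detect feature components, derive the marked area's position relative
    to the nearest feature, and look it up in the correspondence library."""
    features = detect_features(marked_image)   # e.g. [("hub", (x, y)), ...]
    dx, dy = locate_damage(marked_image)       # centre of the user-marked area
    # Crude relative-position key: horizontally nearest feature plus side.
    name, (fx, fy) = min(features, key=lambda f: abs(f[1][0] - dx))
    side = "right" if dx > fx else "left"
    return correspondence.get((name, side))

# Usage with stubbed inputs (all values illustrative):
library = {("hub", "right"): "front fender"}
result = identify_damaged_component(
    None,
    lambda img: [("hub", (100, 200))],
    lambda img: (160, 210),
    library,
)  # result == "front fender"
```

A production system would use trained detection models and the richer orientation/distance relations described below rather than a left/right key.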
  • the feature component can be a component of a pre-built feature library.
  • a plurality of identifiable vehicle components can be stored in the library of features.
  • the client can be used to shoot the vehicle.
  • The user may be required to shoot according to certain shooting requirements, so that at least one identifiable feature component is included in the captured image acquired by the client, for subsequent determination of the relative positional relationship between the feature component and the user's marked area.
  • The feature components described in one embodiment of the present specification may include the vehicle components contained in the constructed feature library;
  • correspondingly, the feature correspondence library may include component positional relationship data constructed with the vehicle components in the feature library as references, the data including at least one of the following types of relationship data: the relationship component for the region between vehicle components, the relationship component for a region in a specified orientation from a vehicle component, and the relationship component for a region at a specified proportional range from a vehicle component.
  • the feature library and the feature relationship relationship library may be stored on a computer storage medium on the server side.
  • One or both of the feature library and the feature correspondence library may be stored on a separate database server or storage device, which can be queried when the auto insurance company's server identifies feature components in the captured image or performs relative positional relationship matching.
  • the damaged location can be manually marked on the image captured by the client, and the client can determine a damaged area based on the damaged location.
  • The specific marking behavior may include the user sliding a finger on the client's touch screen to circle the damaged position in the photographed photo.
  • the user may also mark the damaged position on the client by using a mouse, a magnetic/optical sensor pen or the like.
  • The damaged area determined based on the user's on-site damage-location marking may be the irregularly shaped area circled by the user, or a regularized shape derived from it, such as a rectangular damaged area.
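One simple way to regularize a user's free-hand outline into a rectangular damaged area, as described above, is to take the axis-aligned bounding box of the traced points. The point-list format here is an illustrative assumption.

```python
# Sketch: turn a free-hand outline (list of (x, y) points traced on the
# touch screen) into a rectangular damaged area via its bounding box.
def bounding_box(points):
    """Return (min_x, min_y, max_x, max_y) enclosing the traced outline."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

box = bounding_box([(12, 40), (30, 25), (18, 55), (5, 33)])  # (5, 25, 30, 55)
```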
  • FIG. 2 is a schematic diagram of a user manually marking a damage location on a client on site in an implementation scenario of the present specification.
  • The captured image at this point may be referred to as a marked image.
  • The client can send the marked image to the remote auto insurance company's server for processing.
  • The image uploaded by the user to the server is then no longer a plain vehicle damage scene image, but is accompanied by information such as the user-marked damage location, so that the on-site user assists in identifying the damaged component, thereby achieving the goal of rapid damage assessment of the on-site vehicle.
  • the server can read the image information to identify the feature component in the mark image and the damaged area of the user mark.
  • a relative positional relationship of the feature member to the damaged region can then be determined based on the image location of the feature and the damaged region.
  • the relative positional relationship described herein may include a combination of one or more of relationship data of the feature component with the damaged area, relationship data of distance, relationship data of a distance percentage, and the like.
• for example, if the damaged area P1 is on the right side of the identified feature component A, the relative positional relationship that can be determined is "the target object is on the right side of feature component A".
• more specific information can also be obtained, such as "the target object is within 10-40 cm to the right of feature component A".
  • the determination of the relative positional relationship may use an algorithm of image pixels, or other image processing methods.
  • a two-dimensional or three-dimensional coordinate system may be established in the mark image to respectively locate the position coordinates of the feature part and the damaged area, and then calculate the relative positional relationship. In this way, based on the coordinate system, the relative positional relationship can be determined more quickly and accurately.
  • the determining a relative positional relationship between the feature and the damaged area may include:
  • S80 construct a coordinate system by using a center point of the damaged area as a coordinate origin
  • S84 Determine a relative positional relationship between the feature component and the damaged area based on position coordinate data of the feature component.
  • the coordinate system constructed may be a two-dimensional coordinate system, and the center point of the damaged area is taken as the coordinate origin.
  • a three-dimensional coordinate system can be constructed to calculate the relative positional relationship in a manner closer to the actual component space form.
• it is also possible to construct a two-dimensional or three-dimensional coordinate system using one of the identified components as the coordinate origin, or even to construct another type of coordinate system, and then determine the relative positional relationship of the feature component to the damaged area.
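The coordinate-based determination above can be illustrated with a minimal sketch: place the damaged area's center at the origin of a two-dimensional image coordinate system (step S80) and derive a coarse direction and pixel distance for a feature component (step S84). All names and coordinates are assumptions for the illustration, not the patented algorithm.

```python
import math

def damage_relative_to_feature(feature_xy):
    """Where the damaged area (the coordinate origin) lies relative to a
    feature component whose center has coordinates `feature_xy` in the
    damage-centered system. Image x grows rightward, y grows downward."""
    dx, dy = -feature_xy[0], -feature_xy[1]   # origin minus feature center
    distance = math.hypot(dx, dy)
    horiz = "right" if dx > 0 else "left"
    vert = "below" if dy > 0 else "above"
    direction = horiz if abs(dx) >= abs(dy) else vert
    return {"direction": direction, "distance": distance}

# Feature component A sits 160 px to the left of the damaged area's center,
# so the damaged area is reported as being on A's right side:
rel = damage_relative_to_feature((-160.0, -10.0))
```

The returned distance can further be binned into ranges ("10-40 cm") once a pixel-to-length scale is known.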
• at least one feature component may be included in the captured image.
• the captured image can be sent to the server in real time. If the server recognizes at least one feature component in the uploaded image, the image captured by the user can be considered to meet the requirement and can be used for assisted identification of the vehicle damage.
• in a method that identifies feature components for positioning a damaged vehicle component, it is usually necessary to instruct the client user to shoot according to certain shooting requirements, or to apply some other auxiliary measures on site.
• two or three feature components may be used; the positional relationship between the feature components and the damaged area is determined in combination with the position of the damaged area, and the feature correspondence relationship library is then matched, enabling fast assisted positioning and identification of damaged components.
• the position direction of each feature component relative to the damaged area may be marked as a vector; the spatial angles may then be identified based on the relative sizes of the plurality of feature components, and the coordinate distances used to determine which feature component's (or components') region range the damaged area more closely matches. For example, where the damaged area spans several components but most of it lies between features A and B, the probability is higher that the damaged part belongs to the vehicle component within the area between features A and B.
  • the damage position can be determined more accurately.
  • the determining, according to the location coordinate data of the feature component, the relative positional relationship between the feature component and the damaged area may include:
  • S842 constructing, in the coordinate system, a second regular geometric figure including the coordinate origin and the at least two feature position coordinate data;
• S844 Calculate an area ratio of the first regular geometric figure contained within the second regular geometric figure;
• S846 Determine area range information of the damaged area between the feature components based on the area ratio of the first regular geometric figure and the coordinate distances of the feature components;
  • S848 Determine a relative positional relationship between the feature component and the damaged area based on the area range information matching.
  • the regular geometric figure generally refers to various figures abstracted from the real object, such as squares, rectangles, triangles, diamonds, trapezoids, circles, sectors, rings, and the like.
  • the damaged area manually marked by the user on the client is an irregular graphic.
• the client can convert the user-marked trajectory into a corresponding regular geometric figure (which may be referred to as a first regular geometric figure).
• for example, an irregular circle can be converted into a regular circle.
• the parameters of the first regular geometric figure, such as radius or side length, can be set adaptively according to the trajectory circled by the user or the processing requirements.
• the center point of the damaged area is shifted to the coordinate origin, and the other feature components together with the coordinate origin may constitute a corresponding geometric figure (referred to herein as a second regular geometric figure).
• the damaged area can occupy a certain area within the second regular geometric figure formed by the other components; the larger the occupied area, the more likely it is that the damaged area belongs to the vehicle component within the range between two or more feature components. Determining the positional relationship based on the occupied area, as in the present embodiment, therefore allows the damage position to be determined more accurately.
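The area-ratio idea of steps S842-S846 can be sketched numerically: approximate the first regular geometric figure by a circle, form the second figure as a triangle from the coordinate origin and two feature-component coordinates, and estimate on a grid what fraction of the circle falls inside the triangle. The geometry and sampling scheme below are illustrative assumptions, not the patent's exact computation.

```python
def point_in_triangle(p, a, b, c):
    """Barycentric sign test: True if p lies inside triangle abc."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def circle_in_triangle_ratio(center, radius, tri, steps=200):
    """Fraction of the damaged circle's area inside the triangle,
    estimated on a regular grid (a sketch of step S844's area ratio)."""
    inside = total = 0
    for i in range(steps):
        for j in range(steps):
            x = center[0] - radius + 2 * radius * (i + 0.5) / steps
            y = center[1] - radius + 2 * radius * (j + 0.5) / steps
            if (x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius ** 2:
                total += 1
                if point_in_triangle((x, y), *tri):
                    inside += 1
    return inside / total
```

A ratio near 1 would indicate the damaged area lies almost entirely within the region spanned by the two feature components.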
  • FIG. 3 is a schematic diagram of a process for determining a relative positional relationship between a feature component and a damaged area in an implementation scenario of the present specification.
• the middle circle track P is a damaged area manually marked by the user on site, and A, B, and C are feature components from the feature library identified in the mark image.
  • the center point of the irregular figure P manually circled by the user is used as a coordinate origin to establish a two-dimensional coordinate system (x, y).
• the displacement vectors a, b, and c are taken from the center points of feature components A, B, and C to the coordinate origin, respectively.
  • the displacement vectors a, b, and c can be input into the feature relation library for matching query, and the relationship components corresponding to the displacement vector in the feature correspondence relation library are determined.
• the determined relationship component can be used as the identified damaged component in the captured image taken by the client.
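The displacement-vector matching around FIG. 3 can be illustrated as follows: each measured vector from a feature component's center to the coordinate origin is compared, by cosine similarity, against direction signatures stored in a relation library, and the best-matching relationship component is returned. The library contents, names, and the cosine-similarity criterion are invented for this sketch.

```python
import math

# Hypothetical relation library: (feature name, stored direction toward the
# damaged area) -> vehicle component lying in that direction.
RELATION_LIBRARY = {
    ("A", (1.0, 0.0)): "front bumper",
    ("A", (0.0, 1.0)): "left headlight",
}

def unit(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

def match_component(feature_name, displacement):
    """Pick the library entry whose stored direction is closest (by cosine
    similarity) to the measured displacement vector."""
    u = unit(displacement)
    best, best_cos = None, -2.0
    for (name, vec), component in RELATION_LIBRARY.items():
        if name != feature_name:
            continue
        w = unit(vec)
        cos = u[0] * w[0] + u[1] * w[1]
        if cos > best_cos:
            best, best_cos = component, cos
    return best

# Displacement vector a: from feature A's center to the coordinate origin.
a = (120.0, 15.0)
best = match_component("A", a)
```

In practice the vectors b and c would be matched the same way and the results combined before the final damaged component is reported.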
• the server may select the correspondence with the highest degree of matching to the relative positional relationship to identify the damaged component.
• if positional relationship data exactly matching the relative positional relationship is not found in the feature correspondence relationship library, the positional relationship data with the highest matching degree to the relative positional relationship is obtained;
• the relationship component corresponding to the positional relationship data with the highest matching degree is then used as the relationship component matching the relative positional relationship.
• the matching degree may be confirmed according to the semantic information expressed in the relative positional relationship. For example, the two relative positional relationships "right side of the left headlight" and "left side of the right headlight" may be combined to match the region "20%-80% of the area between the two headlights", which corresponds to the front-face air-inlet grille and has the highest matching degree in the feature correspondence relationship library.
• as another example, if the relative positional relationship obtained is "close to 20%-50% of the hub", the most likely match in the feature correspondence relationship library is: in the area between the feature "front door handle" and the feature "hub", the 20%-60% of the area close to the "hub" is positioned as the "front fender", and the 0-40% of the area close to the "front door handle" is positioned as the "front door".
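The hub/front-door-handle example above suggests how percentage ranges in the feature correspondence relationship library might be queried. The sketch below encodes the quoted ranges as illustrative rules (20%-60% from the hub maps to the front fender; 0-40% from the door handle, i.e. 60%-100% from the hub, maps to the front door); all values are taken from the example, not from a real library.

```python
# Hypothetical correspondence entries for the region between the "hub" and
# the "front door handle", keyed by the fraction of the distance from the hub.
REGION_RULES = [
    ((0.20, 0.60), "front fender"),
    ((0.60, 1.00), "front door"),   # 0-40% measured from the door handle
]

def component_between(frac_from_hub):
    """Return the component whose range contains the damage position,
    expressed as a fraction of the hub-to-door-handle distance."""
    for (lo, hi), comp in REGION_RULES:
        if lo <= frac_from_hub <= hi:
            return comp
    return None
```

A damage position "close to 20%-50% of the hub" would thus resolve to the front fender.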
• the feature library and the feature correspondence relationship library may be partially or completely stored on the server side, which spares the client the database storage, query, and matching processing.
• the client can take images as required and send them to the server; the more powerful server side then identifies the feature components, determines the relative positional relationship, and queries the corresponding database to identify which components are damaged.
• the feature library or the feature correspondence relationship library may be pre-built offline: vehicle components may be selected in advance to construct the feature library, and the feature library and the corresponding feature correspondence relationship library updated; once updating/maintenance is completed, the libraries are used online.
  • This specification does not exclude that the feature library or the feature correspondence library is constructed or updated/maintained online.
• the feature library or the correspondence relationship library can also be constructed online, with the data in the databases used online at the same time for feature recognition in the captured image or for matching queries of the relative positional relationship.
• a method for identifying a damaged component of a vehicle may pre-establish a feature library of identifiable vehicle features covering a plurality of vehicle components and a feature correspondence relationship library of relative positional relationships of the vehicle components.
  • the user can manually delineate the damaged location on the client.
• the identifiable vehicle features in the image are then identified, and their relative positions to the user-marked damage location are determined based on the identified features.
• the relative position is matched in the feature correspondence relationship library to determine the damaged component. With simple manual operations by the user on site assisting in marking the damage position, the insurance company can locate the damaged vehicle component, improving the accuracy and processing efficiency of identifying damaged components during damage assessment and greatly enhancing the user experience.
  • the user can manually delineate the damaged position, and then upload it to the server, and then the server determines the damaged component.
• One or more further embodiments of the present specification also provide another method of identifying damaged parts of a vehicle.
• in this method, the user can send the original captured image to the server upon capture; the server side automatically detects the damage location, marks it, and transmits it back to the client for user confirmation. If the damage location marked by the server is correct (in some embodiments, expressed as valid), the user can directly confirm and submit it to the system for positional-relationship matching to confirm the damaged component.
• if the server's mark is incorrect (in some embodiments, expressed as invalid), the user can quickly adjust it according to the actual situation on site, for example by expanding or moving the marked area.
• in this way, the damaged area can be quickly confirmed with the manual assistance of the user on site, and the matching confirmation of the damaged component is then performed based on the positional relationship between the feature components identified by the system and the damaged area. Because the user at the scene is closer to the real vehicle damage situation, this can effectively improve the accuracy of damaged-component identification and the user's damage-assessment experience.
  • the method may include:
  • the client acquires a captured image of the vehicle, and sends the captured image to a server;
  • the server identifies a first damage location in the captured image, and marks the first damage location in the captured image to generate a marker image;
  • S112 The server sends the tag image to the client.
  • S114 The client displays the tag information indicating the first damage location in the marked image.
• S116 The client confirms a vehicle damage location based on the received interaction operation, where the vehicle damage location includes the first damage location;
  • S118 The client sends the auxiliary damage image after confirming the vehicle damage location to the server;
• S120 After receiving the auxiliary damage image, the server identifies at least one feature component included in the auxiliary damage image;
  • S122 The server determines a relative positional relationship between the feature component and a vehicle damage location in the auxiliary damage image
  • S124 The server matches the relative position relationship in a feature correspondence relationship library, and acquires a corresponding relationship component;
  • S126 The server determines the damaged component in the captured image based on the relationship component.
• personnel at the vehicle damage scene can use the client to photograph the damaged vehicle.
• the shooting may yield one or more photos, or a captured video; a video may be regarded as a sequence of continuous images, and both photos and videos can be considered types of captured image.
  • the captured image taken can be sent to the server by the client.
  • the server side can utilize a damage identification system built in advance or in real time to identify the captured image uploaded by the client.
• the damage identification system may include a damage identification algorithm constructed using a plurality of training models, such as ResNet and convolutional neural networks.
  • an algorithm model for detecting damage in an image may be constructed based on a Convolutional Neural Network (CNN) and a Region Proposal Network (RPN), combined with a pooling layer and a fully connected layer.
  • the captured image may be identified by the algorithm model, and the damage location of the vehicle in the captured image (which may be referred to herein as the first damage location) is initially identified.
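Detectors built from a CNN backbone with an RPN typically emit many overlapping box proposals, and a standard post-processing step, non-maximum suppression, keeps only the strongest non-overlapping ones before the first damage location is reported. The pure-Python sketch below shows that generic detection step; it is standard plumbing for such models, not the patent's specific algorithm, and the threshold value is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring proposals, dropping heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

The surviving top-scoring box would then be marked in the captured image as the first damage location.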
  • the server may mark the identified damage location in the captured image.
• a rectangular frame may be used to mark the first damage location identified by the server.
• a captured image in which the first damage position has been marked may be referred to as a marker image.
• the server returns the generated marker image to the client side.
• the client may display the mark information of the first damage position in the marker image, for example the rectangular frame in which the first damage position is located, as described in the above example.
• through the client, the user can see the vehicle damage location in the image as initially recognized by the server; the user can then confirm whether that location is valid according to the actual situation on site, achieving on-site assisted identification of the damaged component.
• the confirming of the vehicle damage location may include: confirming whether the marked position of the first damage location displayed in the marker image is correct and, if not, adjusting the mark information of the first damage location based on the received interaction operation.
• the adjusting of the mark information of the first damage location may include, in some implementation scenarios, adjusting the position of the mark information in the marker image, and may also include adjusting its size or shape.
  • the user can adjust the displacement, or other parameters, of the marking information of the first damage location according to the actual on-site vehicle damage situation.
• FIG. 5 is a schematic diagram of a scene in which the user adjusts on site the first damage location identified by the server in this embodiment.
• the client can display in real time the rectangular mark frame for the first damage location recognized by the server, and the user can adjust the frame's position or size by sliding a finger, dragging with the mouse, and so on, so that the frame better matches the vehicle damage location observed by the user on site, or completely covers it.
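The adjustment of the rectangular mark frame described above can be sketched as a small geometric operation: move or resize an (x, y, w, h) rectangle while clamping it to the image bounds so the mark never leaves the photo. The function and parameter names are illustrative, not part of the patent.

```python
def adjust_mark(rect, img_w, img_h, dx=0, dy=0, dw=0, dh=0):
    """Move (dx, dy) or resize (dw, dh) a rectangular damage mark
    (x, y, w, h), clamping the result to the image bounds."""
    x, y, w, h = rect
    w = max(1, min(w + dw, img_w))
    h = max(1, min(h + dh, img_h))
    x = max(0, min(x + dx, img_w - w))
    y = max(0, min(y + dy, img_h - h))
    return (x, y, w, h)
```

Dragging the frame 60 px right in a 100 px wide photo, for example, stops it flush with the right edge rather than pushing it outside.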
  • the user can also manually mark other vehicle damage locations through the client.
• the user captures and sends an image to the server as prompted, but due to the shooting angle, lighting, the server's identification algorithm, and similar factors, the server may not recognize all the vehicle damage locations in the captured image. For example, there are two damage locations A and B in the image taken by the user, but the server only recognizes damage location A. Since the user is at the scene of the vehicle damage, when the client displays only damage location A, the user can spot the missed damage location B and manually mark it on the client.
  • the confirming the location of the vehicle damage based on the received interaction operation comprises:
• S1160 Confirm the mark information of a second damage location based on the received interaction operation instruction, the second damage location including a new vehicle damage location added in the marker image.
  • the confirmed vehicle damage location at this time may include the second damage location.
• the confirming of the vehicle damage location may include adjusting and confirming the first damage location, and may also include adding the second damage location. It should be noted that even if the first damage position is not actually adjusted, operations such as confirming that the first damage position is correct or submitting it are still processes of confirming the vehicle damage location.
• the same applies to the second damage position. After confirmation, the current image information can be submitted to confirm the information of each vehicle damage location in the image.
• the marker image after the vehicle damage position is confirmed may be referred to as an auxiliary damage image, and the client may transmit it to the server by triggering "submit".
• the subsequent processing may follow the above-described manner of identifying the damaged component in the image by reference to the positional relationships between basic vehicle components.
  • the server may be configured with a feature library and a feature correspondence library.
  • the server may identify the feature components in the feature library included in the auxiliary damage image.
  • at least one feature component may generally be identified.
• the auxiliary damage image already contains at least one of the confirmed first damage position and second damage position; these positions may be taken together as the damaged area, and the relative positional relationship between the feature components and the vehicle damage location in the auxiliary damage image is calculated. The relative positional relationship may then be matched in the feature correspondence relationship library, the matching relationship component acquired, and that relationship component used to identify the damaged component in the captured image.
• for the manner of determining the relative positional relationship and matching it, reference may be made to the description of the foregoing embodiments in which the user manually circles the damaged area. Based on the foregoing method embodiments, the embodiment in which the server identifies the damage location and the client then confirms it may likewise include other implementations, which are not described again here.
• in combination with vehicle damage processing, the above embodiment may further be arranged so that, after the damaged component is determined, the client user is instructed to take detailed photos of the damaged component for accurate subsequent damage assessment, forming a maintenance plan, quotations, and the like.
• after the server identifies the damaged component, the identified damaged-component information may be sent to a designated server for further processing, including damage assessment, re-identification, or storage.
• in the embodiment of the present specification, the user can send the original captured image to the server upon capture; the server side automatically performs initial damage detection to identify the damage location, marks it, and passes it to the client user for confirmation. If the damage location marked by the server is correct, the user can directly confirm and submit it to the system for positional-relationship matching to confirm the damaged component; if the server's mark is incorrect, the user can quickly adjust it or add missed damage locations according to the actual situation on site.
• in this way, the damaged area can be quickly confirmed with the manual assistance of the user on site, and the matching confirmation of the damaged component is then performed based on the positional relationship between the feature components identified by the system and the damaged area. Because the user at the scene is closer to the real vehicle damage situation, this can effectively improve the accuracy of damaged-component identification and the user's damage-assessment experience, assist the insurance company in locating and matching the damaged vehicle component, improve the accuracy and processing efficiency of identifying damaged components during damage assessment, and greatly improve the user experience.
  • the above embodiment describes a plurality of method embodiments for identifying damaged parts of a vehicle of the present specification from the perspective of client-server interaction. Based on the above described embodiments of client-server interaction, the present specification can also provide an embodiment of a method that can be used by a server to identify damaged components of a vehicle. Specifically, an implementation is as shown in FIG. 6, and the method may include:
  • S200 Receive a markup image uploaded by a client, where the markup image includes a damaged area determined based on a behavior of a damage location mark in the captured image;
• S220 Identify a feature component in the marker image, and determine a relative positional relationship between the feature component and the damaged area based on the image positions of the feature component and the damaged area;
• S240 Match the relative positional relationship in a feature correspondence relationship library and acquire a matched relationship component;
• S260 Determine a damaged component in the captured image based on the relationship component.
  • the feature library and the feature correspondence library may be pre-established on the server side.
• the feature component may include a vehicle component included in the constructed feature library;
• the feature correspondence relationship library includes component positional relationship data constructed with the vehicle components in the feature library as reference anchors, where the component positional relationship data includes at least one kind of relationship data among: the relationship component to which the area between vehicle components belongs, the relationship component to which the area in a specified orientation of a vehicle component belongs, and the relationship component to which a specified-percentage area range between vehicle components belongs.
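One possible, purely illustrative shape for the two libraries described above: feature-library entries pair a component name with a recognition handle, and correspondence-library records tie an anchor component, a direction, and a distance-percentage span to the relationship component that region belongs to. All field names and values are assumptions for the sketch, not the patent's schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FeatureComponent:
    name: str           # e.g. "hub"
    template_id: str    # handle to the recognition model/template

@dataclass
class PositionRelation:
    anchor: str                  # reference feature component
    direction: str               # e.g. "right", "between"
    other: Optional[str]         # second anchor for "between" relations
    span: Tuple[float, float]    # distance-percentage range in the region
    component: str               # relationship component the region belongs to

FEATURE_LIBRARY = {"hub": FeatureComponent("hub", "tpl-001")}
CORRESPONDENCE_LIBRARY = [
    PositionRelation("hub", "between", "front door handle", (0.2, 0.6),
                     "front fender"),
]
```

A matching query would then filter the records by anchor and direction before comparing the measured relative position against each span.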
• the feature library and the correspondence relationship library may also be used online in real time.
• at least one of the feature library and the correspondence relationship library may be data information stored in a database on another server or in memory.
  • the processing of determining, by the server, the relative positional relationship between the feature component and the damaged area may include:
  • S222 construct a coordinate system by using a center point of the damaged area as a coordinate origin;
  • S226 Determine a relative positional relationship between the feature component and the damaged area based on position coordinate data of the feature component.
• a specified number of feature components may be selected to calculate the relative positional relationship, which reduces the complexity of the positional relationships between components, speeds matching, and improves processing efficiency. Specifically, in another embodiment of the method, the relative positional relationship may be determined based on the area occupied within each region range by the damaged area circled by the user.
  • the determining, by the server, the relative positional relationship between the feature component and the damaged area based on the location coordinate data of the feature component may include:
  • S2282 Constructing, in the coordinate system, a second regular geometric figure including the coordinate origin and at least two feature position coordinate data;
• S2286 Determine area range information of the damaged area between the feature components based on the area ratio of the first regular geometric figure and the coordinate distances of the feature components;
  • S2288 Determine a relative positional relationship between the feature component and the damaged area based on the area range information matching.
• if the server does not find positional relationship data exactly matching the relative positional relationship, it obtains the positional relationship data with the highest matching degree, and the relationship component corresponding to that data is used as the relationship component matching the relative positional relationship.
• for specifics of the foregoing server-side embodiment of the method for identifying a damaged vehicle component, reference may be made to the description of the foregoing embodiments in which the client and the server interact.
  • the present specification can also provide an embodiment of a method that can be used on the client side to identify damaged parts of the vehicle.
  • an implementation is as shown in FIG. 7, and the method may include:
• S320 Determine a damaged area based on a damage position marking behavior in the captured image to form a marker image;
  • S340 Send the mark image to a server, so that the server identifies the damaged component based on a relative positional relationship between the damaged area and the feature part in the mark image.
• with the method for identifying a damaged vehicle component implemented on the client or server side, the user can manually circle the damage position after photographing the vehicle with the client, and the server side can then determine the damaged component based on the positional relationship.
  • an identifiable vehicle feature library including a plurality of vehicle components and a feature correspondence relationship library of relative positional relationship of the vehicle components may be established in advance.
  • the user can manually delineate the damaged location on the client.
• the identifiable vehicle features in the image are then identified, and their relative positions to the user-marked damage location are determined based on the identified features.
• the relative position is matched in the feature correspondence relationship library to determine the damaged component. With simple manual operations by the user on site assisting in marking the damage position, the insurance company can locate the damaged vehicle component, improving the accuracy and processing efficiency of identifying damaged components during damage assessment and greatly enhancing the user experience.
• the above embodiments of the present specification further provide an implementation in which, after the client takes an image, the server first recognizes the damage location and then transmits the mark to the client user for confirmation.
  • the present specification further provides a method for identifying a damaged component of the vehicle on the server side. Specifically, an implementation is as shown in FIG. 8. The method may include:
  • S400 Acquire a captured image uploaded by the client, identify a first damage location in the captured image, and mark the first damage location in the captured image to generate a marker image;
  • S420 Send the tag image to the client.
• S440 Receive an auxiliary damage image returned by the client and identify at least one feature component included in it, where the auxiliary damage image includes the image information formed after the vehicle damage position in the marker image is confirmed based on an interaction operation;
• S460 Determine a relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image;
• S480 Match the relative positional relationship in the feature correspondence relationship library, acquire a corresponding relationship component, and determine a damaged component in the captured image based on the relationship component.
• after receiving the marker image, the client can display it to the user for viewing.
• the user can confirm the vehicle damage location based on comparison with the actual on-site vehicle damage situation; the mark can be adjusted, or submitted directly without adjustment.
• a missed damage location (which may be referred to as the second damage location) may be manually circled on the client. In this way, the damage location can be identified more accurately based on the user's on-site assisted observation.
  • the auxiliary damage image further includes mark information of the second damage position, and the second damage position includes a new vehicle damage position added in the mark image.
• corresponding to the client/server-interaction implementation in which, after the client takes an image, the server first recognizes the damage location and then transmits the mark to the client user for confirmation, the present specification also provides a method on the client side.
  • a method of identifying damaged parts of a vehicle on the client side is as shown in FIG. 9, and the method may include:
  • S500 Acquire a captured image of the vehicle, and send the captured image to a server;
  • S520 Receive a mark image returned by the server, where the mark image includes image information generated after marking the identified first damage position in the captured image;
  • S540 Display mark information indicating a first damage location in the mark image
  • S560 confirming a vehicle damage location based on the received interaction operation, the vehicle damage location including the first damage location;
  • S580 Send the auxiliary damage image after confirming the vehicle damage position to the server.
• through the client, the user can see the vehicle damage location in the image as initially recognized by the server; the user can then confirm whether that location is valid according to the actual situation on site, achieving on-site assisted identification of the damaged component.
  • the confirming the vehicle damage location may include:
  • S562 confirm whether the mark position of the first damage position in the displayed mark image is correct; and if not, adjust the mark information of the first damage position based on the received interaction operation.
  • the adjusting of the mark information of the first damage location may, in some implementation scenarios, include adjusting the position of the mark information in the marker image, and may also include adjusting the size or shape of the mark information.
  • the user can adjust the displacement, or other parameters, of the marking information of the first damage location according to the actual on-site vehicle damage situation.
  • the user can also manually mark other vehicle damage locations through the client.
  • in another embodiment, the confirming the vehicle damage location based on the received interaction operation may include: confirming mark information of a second damage location based on a received interaction operation instruction, the second damage location including a new vehicle damage location added in the marker image.
  • in the implementation scenario of this embodiment, the user can send the original captured image to the server at the time of shooting; the server side automatically performs initial damage detection to identify the damage position, marks it, and passes the marked damage location to the client user for confirmation. If the damage location marked by the server is correct, the user can directly confirm and submit it to the system for positional-relationship matching to confirm the damaged component. If the server's mark is incorrect, the user can quickly adjust it, or add a missed damage location, according to the actual situation on site.
  • in this way, the damaged area can be quickly confirmed with the manual assistance of the user on site, and the damaged component is then matched and confirmed based on the positional relationship between the feature components and the damaged area identified by the system. Taking advantage of the fact that the user at the scene is closer to the real vehicle damage, this can effectively improve the accuracy of damaged-component identification and the user's loss assessment experience, assist the insurance company in locating and matching the damaged vehicle components, and improve the accuracy and processing efficiency of identifying damaged components in loss assessment, greatly improving the user experience.
  • the client photographs at the vehicle damage scene
  • the user assists in marking the damage location
  • the remote server recognizes the location of the damaged component based on the positional relationship between the feature component and the damaged area.
  • the foregoing processing of capturing an image, delineating a damaged portion, identifying a feature, and matching a positional relationship may be performed by a terminal device on one side, for example, by a dedicated client.
  • the image is taken on site, and the dedicated client is provided with a feature library and a feature correspondence library.
  • the user can manually circle the damaged area on the dedicated client, and the dedicated client can then identify the feature components, determine the relative positional relationship, and so on; if the dedicated client cannot recognize them directly, the image can be sent to the server for processing.
  • the present specification may also provide another method for identifying a damaged component of a vehicle, and the damaged component may be directly identified on the spot according to the auxiliary mark of the user.
  • the method may include:
  • S600 Acquire a captured image of the vehicle;
  • S620 Determine a damaged area based on a damage-location marking behavior in the captured image to form a marker image;
  • S640 Identify a feature component in the mark image, and determine a relative positional relationship between the feature component and the damaged area based on an image position of the feature component and the damaged area;
  • S660 Match the relative positional relationship in a feature correspondence library to acquire a corresponding relationship component;
  • S680 Determine a damaged component in the captured image based on the relationship component.
  • based on the description of the foregoing embodiment, in which the client first captures an image and the recognition result is fed back to the client for confirmation, the present specification may further provide another embodiment, and the method may include:
  • S700 Acquire a captured image of the vehicle, identify a first damage location in the captured image, and mark the first damage location in the captured image to generate a marker image;
  • S720 Display mark information of the first damage location in the mark image
  • S740 Confirming a vehicle damage location based on the received interaction operation to form an auxiliary damage image, where the vehicle damage location includes the first damage location;
  • S760 Identify at least one feature component included in the auxiliary damage image; determine a relative positional relationship between the feature component and a vehicle damage location in the auxiliary damage image;
  • S780 Match the relative positional relationship in the feature correspondence relationship library, acquire a corresponding relationship component, and determine a damaged component in the captured image based on the relationship component.
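The steps S700 to S780 above can be sketched as a single pipeline on one device. The following is a minimal illustration in which the damage detector, the user-confirmation step, the feature recognizer, the relation encoder, and the library matcher are passed in as stand-in callables; every name below is an illustrative assumption, not an API defined by this specification.

```python
# Minimal sketch of steps S700-S780 on a single device. The detector,
# confirmation step, feature recognizer, relation encoder, and library
# matcher are hypothetical callables supplied by the caller.

def identify_damaged_component(image, detect_damage, confirm,
                               recognize_features, relate, match):
    # S700: automatically detect a first damage location and mark it
    first_damage = detect_damage(image)
    # S720/S740: show the mark; the user confirms or adjusts it
    confirmed_damage = confirm(first_damage)
    # S760: recognize feature components and relate them to the damage
    features = recognize_features(image)
    relation = relate(features, confirmed_damage)
    # S780: match the relation in the feature correspondence library
    return match(relation)
```

In practice each callable would be a trained detector or a library lookup; passing them in keeps the pipeline testable.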
  • according to the foregoing description of the method in which the client and the server interact, a processing method integrating the operations of the client and the server at the vehicle damage scene may also be included, for example, confirming mark information of the second damage location based on a received interaction operation instruction, and the like.
  • FIG. 12 is a block diagram showing the hardware structure of a server for identifying a damaged component of a vehicle according to an embodiment of the present invention.
  • server 10 may include one or more processors 102 (only one is shown; processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. It will be understood by those skilled in the art that the structure shown in FIG. 12 is merely illustrative: server 10 may also include more or fewer components than those shown in FIG. 12, for example other processing hardware such as a GPU (Graphics Processing Unit), or may have a configuration different from that shown in FIG. 12.
  • the memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method in the embodiments of the present specification; the processor 102 performs various functional applications and data processing, that is, implements the above method for identifying a damaged component of a vehicle, by running the software programs and modules stored in the memory 104.
  • Memory 104 may include high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 104 may further include memory located remotely relative to processor 102, which may be connected to server 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission module 106 is configured to receive or transmit data via a network.
  • a specific example of the above network may include a wireless network provided by the communication provider of server 10.
  • the transport module 106 includes a Network Interface Controller (NIC) that can be connected to other network devices through a base station to communicate with the Internet.
  • the transmission module 106 can be a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • the present specification also provides an apparatus for identifying a damaged component of a vehicle.
  • the apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients, etc. that use the methods described in the embodiments of the present specification, combined with any hardware necessary for implementation.
  • the processing device in one embodiment provided by this specification is as described in the following embodiments.
  • the apparatus described in the following embodiments is preferably implemented in software, although implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
  • FIG. 13 is a schematic diagram of a module structure of an apparatus for identifying a damaged component of a vehicle, which may be used on the server side, and may include:
  • the receiving module 20 is configured to receive a markup image uploaded by the client, where the markup image includes a damaged area determined based on a behavior of the damage location mark in the captured image;
  • the positional relationship determining module 21 may be configured to identify a feature component in the marker image, and determine a relative positional relationship between the feature component and the damaged area based on the image positions of the feature component and the damaged area;
  • the matching module 22 is configured to match the relative positional relationship in the feature correspondence relationship library to obtain a corresponding relationship component
  • the component identification module 23 can be configured to determine a damaged component in the captured image based on the relationship component.
  • FIG. 14 is a block diagram showing another embodiment of an apparatus for identifying a damaged component of a vehicle, and another embodiment of the apparatus may further include:
  • a feature library 24, which can be used to store identifiable vehicle components;
  • the feature correspondence library 25 may be configured to store component positional relationship data constructed using the vehicle components in the feature library as reference benchmarks, the component positional relationship data including at least one of: relationship data of the relationship component to which an area between vehicle components belongs, relationship data of the relationship component to which an area in a specified orientation of a vehicle component belongs, and relationship data of the relationship component to which an area of a specified proportion between vehicle components belongs.
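One way to picture the feature correspondence library described above is as a lookup table keyed by feature components plus a qualitative relation, mapping to the component ("relationship component") that the region belongs to. The sketch below is a hypothetical illustration; all component names and relation labels are assumptions, not values from the specification.

```python
# Hypothetical feature correspondence library:
# (feature_a, feature_b, relation) -> relationship component
FEATURE_CORRESPONDENCE = {
    # region between two feature components
    ("left_headlight", "left_front_wheel", "between"): "left_front_fender",
    ("front_grille", "windshield", "between"): "hood",
    # region in a specified orientation relative to one feature component
    ("left_front_wheel", None, "above"): "left_front_fender",
    ("rear_window", None, "below"): "trunk_lid",
    # region at a specified proportion between two feature components
    ("front_grille", "windshield", "ratio_0.5"): "hood",
}

def lookup_relationship_component(feature_a, feature_b, relation):
    """Return the component a damaged region belongs to, or None if unknown."""
    return FEATURE_CORRESPONDENCE.get((feature_a, feature_b, relation))
```

A real library would likely be larger and parameterized per vehicle model; the table form only shows how a relative-position query can resolve to a component.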
  • in one embodiment, when the positional relationship determining module 21 determines the relative positional relationship between the feature component and the damaged area, the specific processing may include:
  • a relative positional relationship of the feature component to the damaged area is determined based on position coordinate data of the feature component.
  • the location relationship determining module 21 may include:
  • the feature selection unit 210 may be configured to: when the number N of feature components identified in the marker image is greater than 3, select K of the N feature components as references and determine the relative positional relationship between the damaged area and the K feature components, where 2 ≤ K ≤ 3.
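The K-of-N selection and the coordinate-based relative position determination can be sketched as follows. This is a minimal illustration under assumptions the specification does not fix: features are reduced to centroid coordinates, the K references are chosen as the nearest to the damaged area, and the relative position is encoded as an (angle, distance) pair; all function names are illustrative.

```python
import math

def select_reference_features(features, damage_center, k=3):
    """Given N identified feature components (name -> (x, y) image centroid),
    pick the K closest to the damaged area, clamped to 2 <= K <= 3 as in
    the text above."""
    k = max(2, min(3, k))
    ranked = sorted(
        features.items(),
        key=lambda item: math.dist(item[1], damage_center),
    )
    return ranked[:k]

def relative_positions(selected, damage_center):
    """Describe the damaged area relative to each selected feature component
    as an (angle_degrees, distance) pair, one plausible encoding of the
    'relative positional relationship'."""
    encoding = {}
    for name, (x, y) in selected:
        dx, dy = damage_center[0] - x, damage_center[1] - y
        encoding[name] = (round(math.degrees(math.atan2(dy, dx)), 1),
                          round(math.hypot(dx, dy), 1))
    return encoding
```

Choosing the nearest features is only one heuristic; the specification leaves the selection criterion open.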
  • in another embodiment, the positional relationship determining module 21 may determine the relative positional relationship between the feature component and the damaged area based on the position coordinate data of the feature component.
  • in another embodiment of the apparatus, if the matching module 22 does not find positional relationship data matching the relative positional relationship in the feature correspondence library, positional relationship data with the highest matching degree to the relative positional relationship is acquired; and
  • the relationship component corresponding to the positional relationship data with the highest matching degree is used as the relationship component that matches the relative positional relationship.
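The fallback described above, exact match first, otherwise the entry with the highest matching degree, can be sketched as below. The encoding of a relation as a numeric tuple and the use of negative Euclidean distance as the "matching degree" are assumptions for illustration; the specification does not prescribe a concrete metric.

```python
def match_relation(query, library):
    """Exact match first; otherwise fall back to the library entry whose
    relation tuple is closest to the query (highest 'matching degree').
    `library` maps a numeric relation tuple -> relationship component."""
    if query in library:
        return library[query]
    # matching degree: negative Euclidean distance between relation tuples
    def score(candidate):
        return -sum((a - b) ** 2 for a, b in zip(query, candidate)) ** 0.5
    best = max(library, key=score)
    return library[best]
```

Returning the nearest entry guarantees the module always produces a candidate component, mirroring the "highest matching degree" behavior of the matching module 22.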
  • FIG. 15 is a schematic structural diagram of an apparatus for identifying a damaged component of a vehicle, which may be used on the client side, and may include:
  • the image acquisition module 30 can be configured to acquire a captured image of the vehicle
  • the position marking module 31 is configured to determine a damaged area based on the damage position marking behavior in the captured image to form a mark image;
  • the image sending module 32 may be configured to send the mark image to a server, so that the server identifies the damaged component based on a relative positional relationship between the damaged area and the feature part in the mark image.
  • FIG. 16 is a schematic structural diagram of a module of an apparatus for identifying a damaged component of a vehicle, which may be used on the server side, and may include:
  • the image marking module 40 may be configured to acquire a captured image uploaded by the client, identify a first damage location in the captured image, and mark the first damage location in the captured image to generate a marker image;
  • a tag sending module 41 configured to send the tag image to the client
  • the auxiliary interaction module 42 may be configured to receive an auxiliary damage image returned by the client and identify at least one feature component included in the auxiliary damage image, where the auxiliary damage image includes image information formed after the vehicle damage location is confirmed in the marker image based on an interaction operation;
  • the position determining module 43 is configured to determine a relative positional relationship between the feature component and a vehicle damage location in the auxiliary damage image
  • the component identification module 44 is configured to match the relative positional relationship in the feature correspondence relationship library, acquire a corresponding relationship component, and determine a damaged component in the captured image based on the relationship component.
  • in another embodiment, the auxiliary interaction module 42 may further receive mark information of a second damage location in the auxiliary damage image, where the second damage location includes a new vehicle damage location added in the marker image.
  • FIG. 17 is a schematic structural diagram of an apparatus for identifying a damaged component of a vehicle, which may be used on the client side, and may include:
  • the first image sending module 50 may be configured to acquire a captured image of the vehicle and send the captured image to a server;
  • the mark receiving module 51 is configured to receive a mark image returned by the server, where the mark image includes image information generated after marking the identified first damage position in the captured image;
  • a mark display module 52, which can be used to display mark information of the first damage location in the marker image;
  • the damage location confirmation module 53 can be configured to confirm a vehicle damage location based on the received interaction operation, the vehicle damage location including the first damage location;
  • the second image transmitting module 54 can be configured to send the auxiliary damage image after confirming the vehicle damage location to the server.
  • the damage location confirmation module 53 may include:
  • the first adjusting unit 530 can be configured to confirm whether the marking position of the first damage location in the displayed marker image is correct; and if not, adjust the marking information of the first damage location based on the received interaction operation .
  • the damage location confirmation module 53 may include:
  • the second adjusting unit 532 can be configured to confirm mark information of a second damage location based on a received interaction operation instruction, where the second damage location includes a new vehicle damage location added in the marker image.
  • FIG. 18 is a schematic structural diagram of an embodiment of a damage location confirmation module provided by the present specification.
  • the first adjustment unit 530 and the second adjustment unit 532 described above may be simultaneously included.
  • the method for identifying a damaged component of a vehicle provided in the embodiments of the present specification may be implemented by a processor executing corresponding program instructions in a computer, for example implemented in C++ under a Windows operating system on a PC, or under other operating systems such as Linux, Android, or iOS.
  • the server may include a processor and a memory for storing processor-executable instructions, and when executing the instructions the processor implements:
  • receiving a marker image uploaded by the client, the marker image including a damaged area determined based on a damage-location marking behavior in a captured image;
  • identifying a feature component in the marker image, and determining a relative positional relationship between the feature component and the damaged area based on the image positions of the feature component and the damaged area;
  • matching the relative positional relationship in a feature correspondence library to acquire a corresponding relationship component;
  • determining a damaged component in the captured image based on the relationship component.
  • with the above server, the user can manually circle the damaged area on the client; the server then identifies the feature components and, based on the relative positional relationship between the damaged area and the feature components, determines the actually damaged component.
  • the present specification further provides a client, which may include a processor and a memory for storing processor-executable instructions, and when executing the instructions the processor implements:
  • acquiring a captured image of the vehicle;
  • determining a damaged area based on a damage-location marking behavior in the captured image to form a marker image;
  • sending the marker image to a server, so that the server identifies the damaged component based on the relative positional relationship between the damaged area and a feature component in the marker image.
  • the above instructions may be stored in a variety of computer readable storage media.
  • the computer readable storage medium may include physical means for storing information, which may be digitized and stored in a medium utilizing electrical, magnetic or optical means.
  • the computer readable storage medium of this embodiment may include: means for storing information using electrical energy, such as various types of memory (e.g., RAM, ROM); means for storing information using magnetic energy, such as hard disks, floppy disks, magnetic tape, magnetic core memory, bubble memory, and USB flash drives; and means for storing information optically, such as CDs or DVDs.
  • of course, readable storage media may also include other forms, such as quantum memories, graphene memories, and the like.
  • a client or server for identifying a damaged component of a vehicle may pre-establish an identifiable vehicle feature library including a plurality of vehicle components and a feature correspondence library of relative positional relationships of the vehicle components.
  • the user can manually delineate the damaged location on the client.
  • the identifiable vehicle features in the image are then recognized, and their relative positions to the mark circled by the user are determined based on these identifiable features.
  • this relative position is further matched in the feature relationship library to determine the damaged component, so that the damaged location is located with the assistance of simple manual operations by the user on site, the insurance company is assisted in locating and matching the damaged vehicle components, and the accuracy and processing efficiency of identifying damaged components in loss assessment are improved, greatly enhancing the user experience.
  • the server may include a processor and a memory for storing processor-executable instructions, and when executing the instructions the processor implements:
  • acquiring a captured image uploaded by the client, identifying a first damage location in the captured image, and marking the first damage location in the captured image to generate a marker image;
  • sending the marker image to the client;
  • receiving an auxiliary damage image returned by the client, and identifying at least one feature component included in the auxiliary damage image, the auxiliary damage image including image information formed after the vehicle damage location is confirmed in the marker image based on an interaction operation;
  • determining a relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image;
  • matching the relative positional relationship in a feature correspondence library to acquire a corresponding relationship component, and determining a damaged component in the captured image based on the relationship component.
  • the client may include a processor and a memory for storing processor-executable instructions, and when executing the instructions the processor implements:
  • acquiring a captured image of the vehicle and sending the captured image to a server;
  • receiving a marker image returned by the server, the marker image including image information generated after marking the identified first damage location in the captured image;
  • displaying mark information of the first damage location in the marker image;
  • confirming a vehicle damage location based on a received interaction operation, the vehicle damage location including the first damage location;
  • sending the auxiliary damage image, after the vehicle damage location is confirmed, to the server.
  • with the above client and server, the user can send the original captured image to the server at the time of shooting; the server side automatically performs initial damage detection, marks the damage position, and passes the marked damage location to the client user for confirmation. If the damage location marked by the server is correct, the user can directly confirm and submit it to the system for positional-relationship matching to confirm the damaged component; if the server's mark is incorrect, the user can quickly adjust it, or add a missed damage location, according to the actual situation on site. In this way, the damaged area can be quickly confirmed with the manual assistance of the user on site, and the damaged component is then matched and confirmed based on the positional relationship between the feature components and the damaged area identified by the system. Taking advantage of the fact that the user at the scene is closer to the real vehicle damage, this can effectively improve the accuracy of damaged-component identification and the user's loss assessment experience, assist the insurance company in locating and matching the damaged vehicle components, and improve the accuracy and processing efficiency of identifying damaged components in loss assessment, greatly improving the user experience.
  • an electronic device including a display screen, a processor, and a memory that stores processor-executable instructions.
  • the electronic device may include a field-dedicated device integrated with the feature library and the feature correspondence library, which can directly identify the damaged component, or further complete the vehicle loss assessment, while shooting at the vehicle damage scene.
  • FIG. 19 is a schematic structural diagram of an embodiment of an electronic device provided by the present specification.
  • the display screen may include a device for displaying information content such as a touch screen, a liquid crystal display, a projection device, and the like.
  • the type of electronic device may include a mobile terminal, a dedicated car insurance device, a vehicle interaction device, a personal computer, and the like.
  • when the processor executes the instructions, it may implement: acquiring a captured image of the vehicle; determining a damaged area based on a damage-location marking behavior in the captured image to form a marker image; identifying a feature component in the marker image and determining a relative positional relationship between the feature component and the damaged area; matching the relative positional relationship in a feature correspondence library to acquire a corresponding relationship component; and determining a damaged component in the captured image based on the relationship component.
  • another electronic device may likewise include a display screen, a processor, and a memory storing processor-executable instructions.
  • when the processor executes the instructions, it may implement:
  • acquiring a captured image of the vehicle, identifying a first damage location in the captured image, and marking the first damage location in the captured image to generate a marker image;
  • displaying mark information of the first damage location in the marker image on the display screen;
  • confirming a vehicle damage location based on a received interaction operation to form an auxiliary damage image, the vehicle damage location including the first damage location;
  • identifying at least one feature component included in the auxiliary damage image, and determining a relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image;
  • matching the relative positional relationship in a feature correspondence library to acquire a corresponding relationship component, and determining a damaged component in the captured image based on the relationship component.
  • the present specification also provides a system for identifying damaged components of a vehicle based on a method or apparatus embodiment in which the aforementioned client interacts with the server to identify a damaged component of the vehicle.
  • the system may include a first client and a first server, where the first client may implement the processing method of any of the client embodiments in the application scenario in which the client manually marks the damage location and the server performs the identification processing;
  • the first server may implement the processing method of any of the server embodiments in the same application scenario.
  • the system may include a second client and a second server, where the second client may implement the processing method of any of the client embodiments in the application scenario in which the client captures an image and the server initially recognizes the damage and returns it to the client for confirmation;
  • the second server may implement the processing method of any of the server embodiments in the same application scenario.
  • embodiments of the present specification are not limited to conforming to industry communication standards, standard image data processing protocols, communication protocols, or the standard data models/templates described herein.
  • implementations slightly modified from certain industry standards, or using custom approaches, or adapted from the embodiments described above, may also achieve the same, equivalent, or similar effects as the above embodiments, or predictable effects after such variation.
  • embodiments obtained by applying such modified or varied data acquisition, storage, judgment, and processing methods may still fall within the scope of the optional embodiments of this specification.
  • the controller can be implemented in any suitable manner; for example, the controller can take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor.
  • examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the memory's control logic.
  • by logically programming the method steps, the controller can also achieve the same functions in the form of logic gates, switches, ASICs, programmable logic controllers, embedded microcontrollers, and the like.
  • a controller can be considered a hardware component, and the means for implementing various functions included therein can also be considered as a structure within the hardware component.
  • a device for implementing various functions may therefore be regarded both as a software module implementing the method and as a structure within a hardware component.
  • the system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • a typical implementation device is a computer.
  • the computer can be, for example, a personal computer, a laptop computer, a vehicle-mounted human-machine interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • for convenience of description, the above devices are described as divided into various modules by function. Of course, when implementing the embodiments of the present specification, the functions of the modules may be implemented in the same or multiple pieces of software and/or hardware, and a module implementing one function may also be implemented by a combination of multiple sub-modules or sub-units.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media include persistent and non-persistent, removable and non-removable media; information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • As defined herein, computer readable media do not include transitory computer readable media, such as modulated data signals and carrier waves.
  • embodiments of the present specification can be provided as a method, system, or computer program product.
  • embodiments of the present specification can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • embodiments of the present specification can take the form of a computer program product embodied on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • Embodiments of the present description can be described in the general context of computer-executable instructions executed by a computer, such as a program module.
  • program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • Embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.

Abstract

The embodiments of this specification disclose a method, apparatus, server, client, and system for identifying damaged vehicle parts. In the method, a feature library of multiple recognizable vehicle parts and a feature correspondence library describing the relative positional relationships of those parts can be built in advance. While capturing loss-assessment images, the user can manually circle the damaged position on the client. The server can then recognize the recognizable vehicle features in the image and determine their positions relative to the mark circled by the user. This relative position is further matched in the feature correspondence library to determine the damaged part, so that a simple manual operation by the on-site user assists in locating the damage, helps the insurance company match the damaged vehicle part, and improves the accuracy and efficiency of identifying damaged parts during loss assessment.

Description

Method, apparatus, server, client, and system for identifying damaged vehicle parts — Technical Field
The embodiments of this specification belong to the technical field of computer data processing, and in particular relate to a method, apparatus, server, client, and system for identifying damaged vehicle parts.
Background
Motor vehicle insurance, i.e., automobile insurance (or simply "auto insurance"), is a type of commercial insurance that covers liability for personal injury or property loss caused to a motor vehicle by natural disasters or accidents. With economic development, the number of motor vehicles keeps growing, and auto insurance has become one of the largest lines of property insurance business in China.
When an insured vehicle is involved in a traffic accident, the insurance company usually begins with an on-site survey and loss assessment. Vehicle loss assessment affects subsequent repair, appraisal, and other technical and financial matters, and is a crucial step in the whole auto-insurance service. Driven by technological progress and the business need for fast loss assessment and claims settlement, remote survey-based loss assessment of accident vehicles has become increasingly common: the insurance company (an assessor, a surveyor, or an AI loss-assessment system) determines the scope and degree of damage from photographs of the damaged vehicle taken on site with the owner's mobile phone (or other terminal device), and then works out a repair plan and an assessment. Because vehicle owners often lack auto-insurance knowledge or are limited in photography skills, the insurance company, when using the damage photos taken on site with the owner's phone, frequently cannot distinguish the damaged part, or receives large numbers of redundant, useless photos, which affects the efficiency and accuracy of loss assessment.
Therefore, the industry urgently needs a solution that can identify damaged parts in images more accurately.
Summary
The embodiments of this specification aim to provide a method, apparatus, server, client, and system for identifying damaged vehicle parts that assist in locating the damage by recognizing the positions of vehicle features in an image relative to a mark circled by the user, thereby improving the accuracy and efficiency of identifying damaged parts during loss assessment and greatly improving the user experience.
The method, apparatus, server, client, and system for identifying damaged vehicle parts provided by the embodiments of this specification are implemented in the following ways.
A method for identifying a damaged vehicle part, the method comprising:
a client acquiring a captured image of a vehicle;
the client determining a damaged region based on a damage-position marking action performed on the captured image, to form a marked image;
the client sending the marked image to a server;
the server recognizing a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on their image positions;
the server matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
the server determining the damaged part in the captured image based on the related part.
A method for identifying a damaged vehicle part, the method comprising:
receiving a marked image uploaded by a client, the marked image including a damaged region determined based on a damage-position marking action performed on a captured image;
recognizing a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on their image positions;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
determining the damaged part in the captured image based on the related part.
A method for identifying a damaged vehicle part, the method comprising:
acquiring a captured image of a vehicle;
determining a damaged region based on a damage-position marking action performed on the captured image, to form a marked image;
sending the marked image to a server, so that the server identifies the damaged part based on the relative positional relationship between the damaged region and a feature component in the marked image.
A method for identifying a damaged vehicle part, the method comprising:
a client acquiring a captured image of a vehicle and sending it to a server;
the server recognizing a first damage position in the captured image and marking the first damage position in the captured image to generate a marked image;
the server sending the marked image to the client;
the client displaying the marking information of the first damage position in the marked image;
the client confirming the vehicle damage positions based on received interactive operations, the vehicle damage positions including the first damage position;
the client sending the auxiliary damage image obtained after the vehicle damage positions are confirmed to the server;
the server, after receiving the auxiliary damage image, recognizing at least one feature component included in it;
the server determining the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
the server matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
the server determining the damaged part in the captured image based on the related part.
A method for identifying a damaged vehicle part, the method comprising:
acquiring a captured image uploaded by a client, recognizing a first damage position in the captured image, and marking the first damage position in the captured image to generate a marked image;
sending the marked image to the client;
receiving an auxiliary damage image returned by the client and recognizing at least one feature component included in it, the auxiliary damage image including the image information formed after the vehicle damage positions are confirmed in the marked image through interactive operations;
determining the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determining the damaged part in the captured image based on the related part.
A method for identifying a damaged vehicle part, the method comprising:
acquiring a captured image of a vehicle and sending it to a server;
receiving a marked image returned by the server, the marked image including the image information generated after a recognized first damage position is marked in the captured image;
displaying the marking information of the first damage position in the marked image;
confirming the vehicle damage positions based on received interactive operations, the vehicle damage positions including the first damage position;
sending the auxiliary damage image obtained after the vehicle damage positions are confirmed to the server.
A method for identifying a damaged vehicle part, the method comprising:
acquiring a captured image of a vehicle;
determining a damaged region based on a damage-position marking action performed on the captured image, to form a marked image;
recognizing a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on their image positions;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
determining the damaged part in the captured image based on the related part.
A method for identifying a damaged vehicle part, the method comprising:
acquiring a captured image of a vehicle, recognizing a first damage position in the captured image, and marking the first damage position in the captured image to generate a marked image;
displaying the marking information of the first damage position in the marked image;
confirming the vehicle damage positions based on received interactive operations to form an auxiliary damage image, the vehicle damage positions including the first damage position;
recognizing at least one feature component included in the auxiliary damage image, and determining the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determining the damaged part in the captured image based on the related part.
An apparatus for identifying a damaged vehicle part, the apparatus comprising:
a receiving module, configured to receive a marked image uploaded by a client, the marked image including a damaged region determined based on a damage-position marking action performed on a captured image;
a positional relationship determination module, configured to recognize a feature component in the marked image and determine a relative positional relationship between the feature component and the damaged region based on their image positions;
a matching module, configured to match the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
a part identification module, configured to determine the damaged part in the captured image based on the related part.
An apparatus for identifying a damaged vehicle part, the apparatus comprising:
an image acquisition module, configured to acquire a captured image of a vehicle;
a position marking module, configured to determine a damaged region based on a damage-position marking action performed on the captured image, to form a marked image;
an image sending module, configured to send the marked image to a server, so that the server identifies the damaged part based on the relative positional relationship between the damaged region and a feature component in the marked image.
An apparatus for identifying a damaged vehicle part, the apparatus comprising:
an image marking module, configured to acquire a captured image uploaded by a client, recognize a first damage position in the captured image, and mark the first damage position in the captured image to generate a marked image;
a mark sending module, configured to send the marked image to the client;
an auxiliary interaction module, configured to receive an auxiliary damage image returned by the client and recognize at least one feature component included in it, the auxiliary damage image including the image information formed after the vehicle damage positions are confirmed in the marked image through interactive operations;
a position determination module, configured to determine the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
a part identification module, configured to match the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determine the damaged part in the captured image based on the related part.
An apparatus for identifying a damaged vehicle part, the apparatus comprising:
a first image sending module, configured to acquire a captured image of a vehicle and send it to a server;
a mark receiving module, configured to receive a marked image returned by the server, the marked image including the image information generated after a recognized first damage position is marked in the captured image;
a mark display module, configured to display the marking information of the first damage position in the marked image;
a damage position confirmation module, configured to confirm the vehicle damage positions based on received interactive operations, the vehicle damage positions including the first damage position;
a second image sending module, configured to send the auxiliary damage image obtained after the vehicle damage positions are confirmed to the server.
A server comprising a processor and a memory for storing processor-executable instructions, wherein when executing the instructions the processor implements:
receiving a marked image uploaded by a client, the marked image including a damaged region determined based on a damage-position marking action performed on a captured image;
recognizing a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on their image positions;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
determining the damaged part in the captured image based on the related part.
A client comprising a processor and a memory for storing processor-executable instructions, wherein when executing the instructions the processor implements:
acquiring a captured image of a vehicle;
determining a damaged region based on a damage-position marking action performed on the captured image, to form a marked image;
sending the marked image to a server, so that the server identifies the damaged part based on the relative positional relationship between the damaged region and a feature component in the marked image.
A server comprising a processor and a memory for storing processor-executable instructions, wherein when executing the instructions the processor implements:
acquiring a captured image uploaded by a client, recognizing a first damage position in the captured image, and marking the first damage position in the captured image to generate a marked image;
sending the marked image to the client;
receiving an auxiliary damage image returned by the client and recognizing at least one feature component included in it, the auxiliary damage image including the image information formed after the vehicle damage positions are confirmed in the marked image through interactive operations;
determining the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determining the damaged part in the captured image based on the related part.
A client comprising a processor and a memory for storing processor-executable instructions, wherein when executing the instructions the processor implements:
acquiring a captured image of a vehicle and sending it to a server;
receiving a marked image returned by the server, the marked image including the image information generated after a recognized first damage position is marked in the captured image;
displaying the marking information of the first damage position in the marked image;
confirming the vehicle damage positions based on received interactive operations, the vehicle damage positions including the first damage position;
sending the auxiliary damage image obtained after the vehicle damage positions are confirmed to the server.
An electronic device comprising a display screen, a processor, and a memory storing processor-executable instructions, wherein when executing the instructions the processor implements:
acquiring a captured image of a vehicle;
determining a damaged region based on a damage-position marking action performed on the captured image through the display screen, to form a marked image;
recognizing a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on their image positions;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
determining the damaged part in the captured image based on the related part.
An electronic device comprising a display screen, a processor, and a memory storing processor-executable instructions, wherein when executing the instructions the processor implements:
acquiring a captured image of a vehicle, recognizing a first damage position in the captured image, and marking the first damage position in the captured image to generate a marked image;
displaying the marking information of the first damage position in the marked image on the display screen;
confirming the vehicle damage positions based on received interactive operations to form an auxiliary damage image, the vehicle damage positions including the first damage position;
recognizing at least one feature component included in the auxiliary damage image, and determining the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determining the damaged part in the captured image based on the related part.
A system for identifying a damaged vehicle part, comprising a first client and a first server, wherein, in the application scenario of this specification in which the damage position is manually marked on the client and recognized and processed by the server, the first client implements the processing method of any one of the client embodiments, and the first server implements the processing method of any one of the server embodiments.
A system for identifying a damaged vehicle part, comprising a second client and a second server, wherein, in the application scenario of this specification in which the client captures an image, the server performs preliminary recognition, and the result is returned to the client for confirmation, the second client implements the processing method of any one of the client embodiments, and the second server implements the processing method of any one of the server embodiments.
With the method, apparatus, server, client, and system for identifying damaged vehicle parts provided by the embodiments of this specification, a feature library of multiple recognizable vehicle parts and a feature correspondence library describing the relative positional relationships of those parts can be built in advance. While capturing loss-assessment images, the user can manually circle the damaged position on the client. The recognizable vehicle features in the image can then be recognized, and their positions relative to the mark circled by the user can be determined. This relative position is further matched in the feature correspondence library to determine the damaged part, so that a simple manual operation by the on-site user assists in locating the damage, helps the insurance company match the damaged vehicle part, improves the accuracy and efficiency of identifying damaged parts during loss assessment, and greatly improves the user experience.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of this specification or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some of the embodiments recorded in this specification, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic processing flowchart of an embodiment of the method described in this specification;
FIG. 2 is a schematic diagram of a user manually marking a damage position on the client at the scene in an implementation scenario of this specification;
FIG. 3 is a schematic diagram of the process of determining the relative positional relationship between a feature component and a damaged region in an implementation scenario of this specification;
FIG. 4 is a schematic method flowchart of another embodiment of the method provided by this specification;
FIG. 5 is a schematic diagram of a scenario in which a user adjusts, at the scene, the first damage position recognized by the server, according to an embodiment of this specification;
FIG. 6 is a schematic flowchart of a method for identifying damaged vehicle parts, provided by this specification, for use on a server;
FIG. 7 is a schematic flowchart of a method for identifying damaged vehicle parts, provided by this specification, for use on a client;
FIG. 8 is a schematic flowchart of another server-side method for identifying damaged vehicle parts provided by this specification;
FIG. 9 is a schematic flowchart of another client-side method for identifying damaged vehicle parts provided by this specification;
FIG. 10 is a schematic processing flowchart of another embodiment of the method provided by this specification;
FIG. 11 is a schematic processing flowchart of another embodiment of the method provided by this specification;
FIG. 12 is a hardware structure block diagram of a server for identifying damaged vehicle parts according to an embodiment of the present invention;
FIG. 13 is a schematic module structure diagram of an embodiment of an apparatus for identifying damaged vehicle parts provided by this specification;
FIG. 14 is a schematic module structure diagram of another embodiment of an apparatus for identifying damaged vehicle parts provided by this specification;
FIG. 15 is a schematic module structure diagram of an embodiment of an apparatus for identifying damaged vehicle parts provided by this specification;
FIG. 16 is a schematic module structure diagram of an embodiment of an apparatus for identifying damaged vehicle parts provided by this specification;
FIG. 17 is a schematic module structure diagram of an embodiment of an apparatus for identifying damaged vehicle parts provided by this specification;
FIG. 18 is a schematic structure diagram of an embodiment of a damage position confirmation module provided by this specification;
FIG. 19 is a schematic structure diagram of an embodiment of an electronic device provided by this specification.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the drawings. The described embodiments are obviously only some, rather than all, of the embodiments of this specification. All other embodiments obtained by those of ordinary skill in the art based on one or more embodiments of this specification without creative effort shall fall within the scope of protection of the embodiments of this specification.
An implementation provided by this specification can be applied to a client/server system architecture. The client can include a terminal device with a shooting function (at least a photographing function) used by on-site personnel at the damage scene (either the owner of the accident vehicle or insurance company personnel), such as a smartphone, a tablet computer, a smart wearable device, or a dedicated shooting device. The client can have a communication module and can establish a communication connection with a remote server to transmit data to and from the server. The server can include a server on the insurance company side; in other implementation scenarios it can also include a server of an intermediate platform, for example a server of a third-party auto-insurance service platform that has a communication link with the insurance company's server. The server can be a single computer device, a server cluster composed of multiple servers, or the server structure of a distributed system.
In one or more embodiments of this specification, a feature library and a feature correspondence library can be built in advance. The feature library can be built from multiple selected feature components of a vehicle, for example the left/right front headlight, license plate, door handle, wheel hub, rear-view mirror, left/right tail light, and so on. A feature component in the feature library can be a single accessory of the vehicle, or a complete assembly composed of multiple accessories, for example a front door (which can include the door and the door handle). In some embodiments, single accessories and complete assemblies are allowed to coexist as feature component types in the feature library; for example, a fender can be a feature component in the library, and so can a fender assembly. During subsequent recognition, one or more of these features can be captured from the image taken by the user and serve as reference benchmarks for judging the relative position, on the vehicle, of the marked damage position.
At the same time, a feature correspondence library can be built, which can contain positional relationship data between vehicle parts established according to their spatial positions. In some embodiments of this specification, the feature correspondence library can be built based on the aforementioned feature library; specifically, correspondences can be established using the feature components contained in the feature library as reference benchmarks. A correspondence can involve two vehicle parts, for example the related part between vehicle part A and vehicle part B is P1; it can involve three or more vehicle parts, for example the related part in the central region among vehicle parts A, B, and C is P2; or it can relate one vehicle part to several others, for example the related part located at 80% of the distance from vehicle part A to vehicle part E and at 40% of the distance from vehicle part A to vehicle part F is P3. In a specific embodiment of this specification, the positional relationship data in the feature correspondence library can include various correspondences, such as the related part occupying the region between vehicle parts, the related part occupying the region in a specified direction of a vehicle part, and the related part occupying a region defined by a specified proportion between vehicle parts. Of course, in some embodiments, one part can have different correspondences relative to different reference parts.
In a specific example, if the feature library includes feature components such as the two front headlights, the front/rear door handles, and the wheel hub, the positional relationship data established in the feature correspondence library can include entries of the following types:
between the two front headlights is the "front bumper";
between the two door handles is the "rear door";
or, in the region between the feature component "front door handle" and the feature "wheel hub", the region at 20%-60% of the distance from the "wheel hub" is located as the "front fender", and the region at 0-40% of the distance from the "front door handle" is located as the "front door";
or, the region at 20%-80% of the distance between the two "front headlights" is located as the "front grille", and so on.
These feature components and the positional relationship data between parts can be stored in the corresponding feature library and feature correspondence library in some data format.
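Although no particular storage format is prescribed, correspondence entries like the examples above could be kept, for instance, as a small directional lookup table. The following Python sketch is an illustration only: the part names, the intervals, and the function `lookup_part` are assumptions for this example, not part of the original disclosure.

```python
# Illustrative sketch: correspondence entries stored as a directional lookup
# table. Each entry maps a pair of reference parts plus a fractional interval
# (measured from the first reference part toward the second) to the related
# part occupying that region.
CORRESPONDENCE_DB = [
    (("left_headlight", "right_headlight"), (0.2, 0.8), "front grille"),
    (("wheel_hub", "front_door_handle"), (0.2, 0.6), "front fender"),
    (("front_door_handle", "wheel_hub"), (0.0, 0.4), "front door"),
    (("front_door_handle", "rear_door_handle"), (0.0, 1.0), "rear door"),
]

def lookup_part(from_part, to_part, fraction):
    """Return the related part whose interval, measured from `from_part`
    toward `to_part`, contains `fraction`; None if nothing matches."""
    for refs, (lo, hi), part in CORRESPONDENCE_DB:
        if refs == (from_part, to_part) and lo <= fraction <= hi:
            return part
    return None
```

For example, a damaged region found 30% of the way from the wheel hub toward the front door handle would be attributed to the front fender under the entries above.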
An implementation of this specification is described below by taking a specific application scenario as an example. Specifically, FIG. 1 is a schematic flowchart of an embodiment of the method for identifying damaged vehicle parts provided by this specification. Although this specification provides the method operation steps or apparatus structures shown in the following embodiments or drawings, the method or apparatus can, based on conventional or non-creative effort, include more operation steps or module units, or fewer after partial combination. For steps or structures with no logically necessary causal relationship, the execution order of the steps or the module structure of the apparatus is not limited to those shown in the embodiments or drawings of this specification. When the method or module structure is applied in an actual apparatus, server, or terminal product, it can be executed sequentially or in parallel according to the method or module structure shown in the embodiments or drawings (for example, in an environment with parallel processors or multi-threaded processing, or even in an implementation environment with distributed processing or server clusters).
This embodiment is illustrated with the application scenario of on-site user-assisted loss assessment, in which the user takes photos at the damage scene with a mobile phone and sends the captured images to the insurance company for vehicle loss assessment. In this example scenario, the client can be the smartphone used by the user. When a vehicle accident occurs, the user can photograph the vehicle damage with a smartphone on which the corresponding loss-assessment application is installed, manually circle the damaged position region in the captured image during shooting, and then send the captured image to the auto-insurance company. After reading the captured image, the server on the insurance company side can recognize the feature components in the captured image and the damaged position region circled by the user. The server can match the relative positional relationship of the user-circled region with respect to the feature components against the feature correspondence library to obtain the related part corresponding to that relative positional relationship, thereby identifying the damaged part. In this way, the image the user uploads to the server is no longer merely image information of the damage scene itself; it can additionally carry the information of the damaged-part position region manually marked by the user, achieving the purpose of on-site user-assisted identification of the damaged part and fast loss assessment. Of course, the description of the following embodiments does not limit other technical solutions that can be extended based on this specification. For example, in other implementation scenarios, the implementations provided by this specification can also be applied to scenarios in which a third-party service platform interacts with the user to perform on-site vehicle loss assessment, or to a dedicated on-site device integrating the feature library and the feature correspondence library, which identifies the damaged part, or further completes the loss assessment, directly while shooting at the damage scene. Specifically, as shown in FIG. 1, in an embodiment of the method for identifying damaged vehicle parts provided by this specification, the method can include:
S0: a client acquires a captured image of a vehicle;
S2: the client determines a damaged region based on a damage-position marking action performed on the captured image, forming a marked image;
S4: the client sends the marked image to a server;
S6: the server recognizes a feature component in the marked image and determines the relative positional relationship between the feature component and the damaged region;
S8: the server matches the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
S10: the server determines the damaged part in the captured image based on the related part.
The feature component can be a part in a pre-built feature library. As described above, the feature library can store multiple recognizable vehicle parts. When at the damage scene, the user can photograph the vehicle with the client. The user can be asked to shoot according to certain shooting requirements, so that the captured image acquired by the client contains at least one recognizable feature component for subsequently determining the relative positional relationship between the feature component and the region marked by the user. In an embodiment of this specification, the feature component can include a vehicle part contained in the built feature library.
Correspondingly, in this embodiment, the feature correspondence library can include part positional relationship data built with the vehicle parts in the feature library as reference benchmarks, the part positional relationship data including at least one of the following kinds of relationship data: the related part occupying the region between vehicle parts, the related part occupying the region in a specified direction of a vehicle part, and the related part occupying a region defined by a specified proportion between vehicle parts.
In one or more embodiments of this specification, the feature library and the feature correspondence library can be stored on a computer storage medium on the server side. In other implementations, one or both of them can be stored on a separate database server or storage device, which can be queried when the auto-insurance company recognizes feature components in captured images or matches relative positional relationships.
After shooting images as required, the user can manually mark the damaged position on the image acquired by the client, and the client can determine a damaged region range based on the damaged position. The specific marking action can include the user circling the damage position in the photo by sliding a finger on the client's touch screen; of course, the user can also mark the damage position on the client indirectly with a mouse, a magnetic/optical stylus, or the like. The damaged region determined from the user's on-site damage-position marking can be the irregularly shaped region circled by the user, or a regularized region after correction, such as a rectangular damaged region. FIG. 2 is a schematic diagram of a user manually marking a damage position on the client at the scene in an implementation scenario of this specification.
After the user marks the damage position on the captured image, the captured image at this point can be called a marked image. The client can send the marked image to the remote server of the auto-insurance company for processing. The image the user uploads to the server is thus no longer merely an image of the damage scene, but carries information such as the damage position marked by the user, achieving on-site user-assisted identification of the damaged part and, in turn, fast on-site vehicle loss assessment.
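The conversion of the freehand marking stroke into a corrected, regular damaged region mentioned above could be sketched as follows. This is an illustration only (the patent does not specify how the client regularizes the stroke): here the stroke is simply reduced to an axis-aligned bounding box and its center point.

```python
# Illustrative sketch: turn the user's freehand marking stroke (a list of
# touch points in pixel coordinates) into a regularized damaged region,
# represented as an axis-aligned bounding box plus its center point.
def stroke_to_region(points):
    """points: [(x, y), ...] touch points of the marking stroke.
    Returns ((x_min, y_min, x_max, y_max), (center_x, center_y))."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    box = (min(xs), min(ys), max(xs), max(ys))
    center = ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)
    return box, center
```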
After receiving the marked image, the server can read the image information and recognize the feature components in the marked image and the damaged region marked by the user. It can then determine the relative positional relationship between the feature components and the damaged region based on their image positions. The relative positional relationship here can include one or a combination of relationship data on direction, on distance, and on distance percentage between the feature component and the damaged region. In a specific example, if the damaged region P1 is to the right of the recognized feature component A, the determined relative positional relationship can be "the target object is to the right of feature component A"; or, combined with some image algorithms, other more specific information can be obtained, such as "the target object is in the region 10-40 cm directly to the right of feature component A".
The relative positional relationship can be determined using image-pixel algorithms or other image processing methods. In an embodiment provided by this specification, a two- or three-dimensional coordinate system can be established in the marked image, the position coordinates of the feature components and the damaged region can be located separately, and their relative positional relationship can then be computed. In this way, based on the coordinate system, the relative positional relationship can be computed more quickly and precisely. In a specific embodiment of the method, determining the relative positional relationship between the feature component and the damaged region can include:
S80: constructing a coordinate system with the center point of the damaged region as the coordinate origin;
S82: determining the position coordinate data of each feature component in the coordinate system;
S84: determining the relative positional relationship between the feature components and the damaged region based on the position coordinate data of the feature components.
In the application scenario of this embodiment, the constructed coordinate system can be a two-dimensional coordinate system with the center point of the damaged region as the coordinate origin. This specification does not exclude other implementations: a three-dimensional coordinate system can be constructed to compute the relative positional relationship in a manner closer to the actual spatial form of the parts. Or, in other embodiments, since reference between objects is mutual, a two- or three-dimensional coordinate system can also be constructed with a recognized damaged part as the coordinate origin; a dual coordinate system can even be constructed to determine the relative positional relationship between the feature components and the damaged region.
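Steps S80-S84 could be sketched as below in two dimensions. This is a minimal illustration under assumed pixel coordinates; the helper names, the vector representation, and the nearest-K selection heuristic (one possible reading of selecting K of N features) are assumptions for illustration, not the definitive implementation.

```python
import math

# Illustrative sketch of S80-S84: build a 2-D coordinate system with the
# damaged region's center as the origin, and express each recognized feature
# component as a displacement vector (dx, dy, distance, bearing) relative to it.
def relative_positions(damage_center, feature_centers):
    """damage_center: (x, y); feature_centers: {name: (x, y)} in pixels.
    Returns {name: (dx, dy, distance, angle_degrees)} with angles measured
    counter-clockwise from the +x axis."""
    cx, cy = damage_center
    out = {}
    for name, (fx, fy) in feature_centers.items():
        dx, dy = fx - cx, fy - cy
        out[name] = (dx, dy, math.hypot(dx, dy),
                     math.degrees(math.atan2(dy, dx)))
    return out

def pick_reference_features(positions, k=3):
    """When many feature components are recognized, keep only the k nearest
    ones (an assumed heuristic for choosing K reference features, 2 <= k <= 3)
    to limit conflicting positional constraints."""
    return dict(sorted(positions.items(), key=lambda kv: kv[1][2])[:k])
```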
In some embodiments, the captured image can contain one feature component. For example, when shooting at the damage scene, the user can send the captured images to the server in real time; if the server recognizes at least one feature component in an uploaded image, the image can be considered to meet the requirements and be usable for assisted damage identification. Generally, in practical applications, if recognizing a single feature component is set as sufficient for locating the damaged part, the client user usually needs to be instructed to shoot according to certain requirements, or other on-site auxiliary measures are needed. In another embodiment of the method provided by this specification, two or three feature components can be used: the positional relationship between the feature components and the damaged region is determined in combination with the position of the damaged region, and the feature relationship is then matched in the feature correspondence library, enabling fast assisted localization and identification of the damaged part. Specifically, in another embodiment of the method of this specification:
S800: if the number N of feature components recognized in the marked image is greater than 3, K of the N feature components are selected to determine the relative positional relationship between the damaged region and the K feature reference components, where 2≤K≤3.
In a specific implementation of an embodiment of this specification, vectors can be used to mark the position and direction of a feature component relative to the damaged region; the spatial angle can then be recognized based on the relative sizes of the areas of multiple feature components and, combined with coordinate distances, it can be determined which region range between one or more feature components the damaged region best matches. For example, if the damaged region spans the regions of several damaged parts but most of it lies between feature components A and B, the damaged part is more likely to be a vehicle part within the region between A and B. Determining the positional relationship based on occupied area in this way can locate the damage more precisely. Specifically, in another embodiment of the method of this specification, determining the relative positional relationship between the feature components and the damaged region based on the position coordinate data of the feature components can include:
S840: converting the damaged region into a corresponding first regular geometric figure according to the shape of the damaged region;
S842: constructing, in the coordinate system, a second regular geometric figure that includes the coordinate origin and the position coordinate data of at least two feature components;
S844: computing, for each second regular geometric figure, the area proportion of the first regular geometric figure contained in it;
S846: determining, based on the area proportion of the first regular geometric figure and the coordinate distances of the feature components, the region range information of the damaged region between the feature components;
S848: determining, by matching based on the region range information, the relative positional relationship between the feature components and the damaged region.
Regular geometric figures generally refer to the figures abstracted from physical objects, such as squares, rectangles, triangles, rhombuses, trapezoids, circles, sectors, and rings. The damaged region range manually marked by the user on the client is usually an irregular figure; in this embodiment it can be converted into a corresponding regular geometric figure (here called the first regular geometric figure) according to the trajectory marked by the user. For example, as shown in FIG. 3, an irregular circle can be converted into a regular circle. The parameters of the first regular geometric figure, such as radius and side length, can be set adaptively according to the trajectory circled by the user or the processing requirements. In this embodiment, the center point of the damaged region is placed at the coordinate origin, and the lines connecting the other feature components with the origin can form corresponding geometric figures (here called second regular geometric figures). The damaged region can thus occupy a certain area within the second regular geometric figure formed by the other parts; the larger the occupied area, the more likely the damaged region belongs to a vehicle part within the region between two or more feature components. Therefore, this embodiment, which determines the positional relationship based on occupied area, can locate the damage more precisely.
FIG. 3 is a schematic diagram of the process of determining the relative positional relationship between feature components and the damaged region in an implementation scenario of this specification. In FIG. 3, the circular trajectory P in the middle is the damaged region manually marked by the user on the client at the scene, and A, B, and C are feature components from the feature library recognized in the marked image. A two-dimensional coordinate system (x, y) is then established with the center point of the irregular figure P manually circled by the user as the coordinate origin. The displacement vectors a, b, and c from the center points of feature components A, B, and C to the coordinate origin are further obtained. The displacement vectors a, b, and c can then be input into the feature correspondence library for matching queries to determine the related part corresponding to these displacement vectors in the feature correspondence library. The determined related part can serve as the identified damaged part in the captured image acquired by the client.
In this embodiment, when multiple (here, more than 3) feature components are recognized in the marked image, only two or three of them can be used to determine the relative positional relationship with the damaged region. This reduces the information interference (or influence, or even contradiction) among the multiple (generally more than 3) positional relationships produced when many feature components participate in the computation, and can effectively balance the accuracy of damaged-part identification with the efficiency of matching queries in the feature correspondence library. Of course, on this basis, when building or maintaining the positional relationships of vehicle parts in the feature correspondence library, the positional relationship between two or three parts and the target object can likewise be used for description, reducing the amount of positional relationship data stored in the library and improving query efficiency.
In another embodiment, if the server finds no data information matching the relative positional relationship in the feature correspondence library, it can select the correspondence with the highest matching degree to the relative positional relationship to identify and confirm the damaged part. Specifically, in another embodiment of the method: if no positional relationship data matching the relative positional relationship is found in the feature correspondence library, the positional relationship data with the highest matching degree to the relative positional relationship is obtained;
the related part corresponding to the positional relationship data with the highest matching degree is taken as the related part matching the relative positional relationship.
The matching-degree processing can be confirmed according to the semantic information expressed in the relative positional relationship. For example, the two relative positional relationships "right of the left headlight" and "left of the right headlight" can match "the region at 20%-80% of the distance between the two front headlights is the front grille", with the highest matching degree in the feature correspondence library. Or, in another example, if the obtained relative positional relationship is "the region at 20%-50% of the distance from the 'wheel hub'", the entry with the highest matching degree may be the one in the feature correspondence library stating that, in the region between the feature component "front door handle" and the feature "wheel hub", the region at 20%-60% of the distance from the "wheel hub" is located as the "front fender", while the matching degree with the entry locating the region at 0-40% of the distance from the "front door handle" as the "front door" is second.
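A minimal sketch of the highest-matching-degree fallback described above could score each stored fractional interval by its overlap with the queried interval and return the related part with the best score. The entries and the overlap-based scoring rule are illustrative assumptions, not the prescribed matching-degree measure.

```python
# Illustrative sketch: score stored intervals by overlap with the query
# interval; the entry with the largest overlap wins.
def interval_overlap(a, b):
    """Length of the overlap between intervals a = (lo, hi) and b = (lo, hi)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def best_match(query_interval, entries):
    """entries: [((lo, hi), related_part), ...], all measured from the same
    reference part as query_interval. Returns the best-scoring related part."""
    return max(entries, key=lambda e: interval_overlap(query_interval, e[0]))[1]
```

For the "20%-50% from the wheel hub" query above, the fender interval (0.2, 0.6) overlaps by 0.3 while the front-door region, expressed from the hub as (0.6, 1.0), overlaps by 0, so the front fender would be selected.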
In the implementation provided by this embodiment, the feature library and the feature correspondence library can be stored partly or entirely on the server side, which saves the client the database storage, querying, matching, and other processing. The client can capture images as required and send them to the server; the more capable server side then recognizes the feature components, determines the relative positional relationships, queries the corresponding databases, and identifies which part is damaged.
In one or more embodiments of this specification, the feature library or the feature correspondence library can be generated offline in advance: vehicle parts can be selected beforehand to build the feature library, the feature correspondence library can be updated in step with updates of the feature library, and the libraries can be used online after the update/maintenance is completed. This specification does not exclude building or updating/maintaining the feature library or the feature correspondence library online: with sufficient computing power, the libraries can be built online, and the data information in them can be used online synchronously for feature recognition on captured images or for matching queries on relative positional relationships.
With the method for identifying damaged vehicle parts provided by the above embodiments of this specification, a feature library of multiple recognizable vehicle parts and a feature correspondence library of the relative positional relationships of those parts can be built in advance. While capturing loss-assessment images, the user can manually circle the damaged position on the client. The recognizable vehicle features in the image can then be recognized, and their positions relative to the mark circled by the user can be determined. This relative position is further matched in the feature correspondence library to determine the damaged part, so that a simple manual operation by the on-site user assists in locating the damage, helps the insurance company match the damaged vehicle part, improves the accuracy and efficiency of identifying damaged parts during loss assessment, and greatly improves the user experience.
In the above embodiments, after the user captures an image at the damage scene, the user can manually circle the damaged position and upload it to the server, which then judges the damaged part. One or more other embodiments of this specification further provide another method for identifying damaged vehicle parts. In these other embodiments, the user can send the originally captured images to the server while shooting; the server side automatically performs preliminary damage recognition to find the damaged position, marks it, and sends it to the client user for confirmation. If the damaged position marked by the server is correct (in some embodiments, this can be expressed as being regarded as valid), the user can directly confirm and submit it to the system for positional relationship matching, and the damaged part is then confirmed. If the server's marking is incorrect (in some embodiments, regarded as invalid), the user can quickly adjust it according to the actual scene, for example enlarging the marked area or moving the marked region. Thus, in these other embodiments of this specification, the damaged region can likewise be quickly confirmed with the user's manual assistance at the scene, and the damaged part can then be matched and confirmed based on the positional relationship between the feature components recognized by the system and the damaged region. Because the user at the damage scene is closer to the real damage situation, this can effectively improve the accuracy of damaged-part identification and the user's loss-assessment experience. Specifically, as shown in FIG. 4, in another embodiment of the method provided by this specification, the method can include:
S100: a client acquires a captured image of a vehicle and sends it to a server;
S110: the server recognizes a first damage position in the captured image and marks it in the captured image to generate a marked image;
S112: the server sends the marked image to the client;
S114: the client displays the marking information of the first damage position in the marked image;
S116: the client confirms the vehicle damage positions based on received interactive operations, the vehicle damage positions including the first damage position;
S118: the client sends the auxiliary damage image obtained after the vehicle damage positions are confirmed to the server;
S120: after receiving the auxiliary damage image, the server recognizes at least one feature component included in it;
S122: the server determines the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
S124: the server matches the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
S126: the server determines the damaged part in the captured image based on the related part.
In this embodiment, on-site personnel at the damage scene can photograph the damaged vehicle with the client. It should be noted that the shooting can produce one or more photos, or a video; in some embodiments of this specification, a video can be regarded as a kind of continuous image, and photos or videos can be regarded as one type of captured image. The captured images can be sent by the client to the server.
The server side can use a damage recognition system, built in advance or in real time, to recognize the captured images uploaded by the client. The damage recognition system can include damage recognition algorithms built with various training models, such as Re-Net or convolutional neural networks. In a specific example, an algorithm model for recognizing damage in images can be built based on a Convolutional Neural Network (CNN) and a Region Proposal Network (RPN), combined with pooling layers, fully connected layers, and the like. After acquiring the captured image, the server can use this algorithm model to recognize it and preliminarily identify the damage position of the vehicle in the captured image (here called the first damage position). The server can mark the recognized damage position in the captured image, for example by circling the recognized first damage position with a rectangular box. For ease of description in this embodiment, the captured image with the first damage position marked in it can be called a marked image. The server returns the generated marked image to the client side.
After receiving the marked image, the client can display the marking information of the first damage position in the marked image, for example the rectangular box around the first damage position as described in the example above. The user can thus see, through the client, the vehicle damage position in the image preliminarily recognized by the server, and can then confirm, according to the actual scene, whether the vehicle damage position recognized by the server is valid, achieving on-site user-assisted identification of the damaged part.
In an embodiment of this specification, confirming the vehicle damage position can include:
confirming whether the marked position of the first damage position in the marked image displayed by the client is correct; and, if not, adjusting the marking information of the first damage position based on received interactive operations.
Adjusting the marking information of the first damage position can, in some implementation scenarios, include adjusting the position of the marking information in the marked image, or adjusting the size or shape of the marking information. The user can adjust the displacement or other parameters of the marking information of the first damage position according to the actual vehicle damage at the scene.
FIG. 5 is a schematic diagram of a scenario in this embodiment in which the user adjusts, at the scene, the first damage position recognized by the server. The client can display in real time the rectangular marking box of the first damage position recognized by the server, and the user can adjust the position or size of the rectangular marking box on the client by sliding or dragging with a finger or mouse, so that the position of the box better matches the vehicle damage position observed by the user at the scene, or so that the box completely covers it.
In another embodiment, the user can also manually mark other vehicle damage positions through the client. In some implementation scenarios, after the user shoots an image as prompted and sends it to the server, the server may not recognize all the vehicle damage positions in the captured image, due to the shooting angle, lighting, the server's own recognition algorithm, or other reasons. For example, the image taken by the user contains two damage positions A and B, but the server recognizes only damage position A. Since the user is at the damage scene, when the client displays only the server-recognized damage position A, the user can notice that the missed damage position B has not been recognized. The user can then manually mark damage position B on the client. In this way, the damage positions in the image can be confirmed with the assistance of the user's actual on-site observation, greatly improving the identification precision of damaged vehicle parts. Specifically, in another embodiment of the method provided by this specification, confirming the vehicle damage positions based on received interactive operations includes:
S1160: confirming the marking information of a second damage position based on received interactive operation instructions, the second damage position including a new vehicle damage position added in the marked image.
Of course, the vehicle damage positions confirmed at this point can include the second damage position. In one or more embodiments of this specification, confirming the vehicle damage positions can include adjusting and confirming the first damage position, and can also include the processing of adding the second damage position. It should be noted that, in the processing of confirming the vehicle damage positions, even if the first damage position is not actually adjusted, processing that confirms the first damage position, such as confirming that it is correct or submitting it, belongs to the process of confirming the vehicle damage positions; the same applies to the second damage position. When the user has adjusted the first damage position, or has chosen not to adjust it, and has also confirmed that there is no missed second damage position, the current image information can be submitted, confirming the information of each vehicle damage position in the image. At this point, the marked image after the vehicle damage positions are confirmed can be called an auxiliary damage image, and the client can send the auxiliary damage image to the server by triggering "Submit".
After the server of the insurance company or of a third-party loss-assessment service receives the auxiliary damage image, the subsequent processing can follow the aforementioned way of identifying the damaged part in the image by reference to the positional relationships between basic vehicle parts. Specifically, for example, the server can be provided with a feature library and a feature correspondence library. The server can recognize the feature components of the feature library included in the auxiliary damage image; of course, to determine the relative positional relationship with the damaged region, at least one feature component can generally be recognized. Since the auxiliary damage image already includes at least one of the recognized first damage position or the second damage position, the vehicle damage positions (the first and second damage positions) included in the auxiliary damage image can be taken as the damaged region, and the relative positional relationships between the feature components and these vehicle damage positions in the auxiliary damage image can be computed. Further, the relative positional relationship can be matched in the feature correspondence library to obtain the related part matching it, and the matched related part can be taken as the identified damaged part in the captured image.
For the specific processing of recognizing feature components, determining relative positional relationships, and matching relative positional relationships described above, refer to the descriptions of the aforementioned method embodiments in which the user manually circles the damaged region. According to those descriptions, the implementation of this specification in which the server first recognizes the damage position and the client then confirms it can also include other embodiments, which are not detailed here one by one.
Of course, in combination with vehicle loss-assessment processing, the above embodiments can further be configured so that, after the damaged part is determined, the client user is instructed to take detail photos of the damaged part, for subsequent precise loss-assessment processing, working out a repair plan, quotation, and so on. Alternatively, after identifying the damaged part, the server can send the information of the identified damaged part to a designated server for further processing, including loss assessment, re-recognition, or storage.
With the other method for identifying damaged vehicle parts provided by the above embodiments of this specification, the user can send the originally captured images to the server while shooting; the server side automatically performs preliminary damage recognition of the damaged position, marks it, and sends it to the client user for confirmation. If the damaged position marked by the server is correct, the user can directly confirm and submit it to the system for positional relationship matching and then confirmation of the damaged part. If the marking is incorrect, the user can quickly adjust it or add missed damage positions according to the actual scene. The damaged region can thus be quickly confirmed with the user's on-site manual assistance, and the damaged part can be matched and confirmed based on the positional relationship between the system-recognized feature components and the damaged region. Because the user at the damage scene is closer to the real damage situation, this can effectively improve the accuracy of damaged-part identification and the user's loss-assessment experience, help the insurance company match the damaged vehicle part, improve the accuracy and efficiency of identifying damaged parts during loss assessment, and greatly improve the user experience.
The above embodiments describe several method embodiments of this specification for identifying damaged vehicle parts from the perspective of client-server interaction. Based on those descriptions, this specification can also provide a method embodiment for identifying damaged vehicle parts that can be used on a server. Specifically, in an embodiment, as shown in FIG. 6, the method can include:
S200: receiving a marked image uploaded by a client, the marked image including a damaged region determined based on a damage-position marking action performed on a captured image;
S220: recognizing a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on their image positions;
S240: matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
S260: determining the damaged part in the captured image based on the related part.
In an embodiment, the feature library and the feature correspondence library can be built in advance on the server side. Specifically, the feature component can include a vehicle part contained in the built feature library.
Correspondingly, the feature correspondence library includes part positional relationship data built with the vehicle parts in the feature library as reference benchmarks, the part positional relationship data including at least one of the following kinds of relationship data: the related part occupying the region between vehicle parts, the related part occupying the region in a specified direction of a vehicle part, and the related part occupying a region defined by a specified proportion between vehicle parts.
Of course, in other embodiments, the feature library and the correspondence library can also be built and used online in real time; or at least one of them can be data information stored in a database on another server or storage device.
In another embodiment of the method, the specific processing by which the server determines the relative positional relationship between the feature component and the damaged region can include:
S222: constructing a coordinate system with the center point of the damaged region as the coordinate origin;
S224: determining the position coordinate data of each feature component in the coordinate system;
S226: determining the relative positional relationship between the feature components and the damaged region based on the position coordinate data of the feature components.
In another embodiment, when the number of feature components the server recognizes in the marked image exceeds a certain threshold, a specified number of them can be selected to compute the relative positional relationship. This reduces the complexity of the relative positional relationships between parts, enables fast matching, and improves processing efficiency. Specifically, in another embodiment of the method:
S228: if the number N of feature components recognized in the marked image is greater than 3, K of the N feature components are selected to determine the relative positional relationship between the damaged region and the K feature reference components, where 2≤K≤3.
In another embodiment of the method provided by this specification, the relative positional relationship can be determined according to the sizes of the areas that the user-circled damaged region occupies between the feature components. Specifically, in another embodiment of the method, determining by the server the relative positional relationship between the feature components and the damaged region based on their position coordinate data can include:
S2280: converting the damaged region into a corresponding first regular geometric figure according to the shape of the damaged region;
S2282: constructing, in the coordinate system, a second regular geometric figure that includes the coordinate origin and the position coordinate data of at least two feature components;
S2284: computing, for each second regular geometric figure, the feature area of the first regular geometric figure contained in it;
S2286: determining, based on the size of the feature area and the coordinate distances of the feature components, the region range information of the damaged region between the feature components;
S2288: determining, by matching based on the region range information, the relative positional relationship between the feature components and the damaged region.
In another embodiment, if the server finds no positional relationship data matching the relative positional relationship in the feature correspondence library, it obtains the positional relationship data with the highest matching degree to the relative positional relationship;
the related part corresponding to the positional relationship data with the highest matching degree is taken as the related part matching the relative positional relationship.
For the specific details of the above server-side method embodiments for identifying damaged vehicle parts, refer to the descriptions of the aforementioned embodiments of client-server interaction. Of course, based on those descriptions, this specification can also provide a method embodiment for identifying damaged vehicle parts that can be used on the client side. Specifically, in an embodiment, as shown in FIG. 7, the method can include:
S300: acquiring a captured image of a vehicle;
S320: determining a damaged region based on a damage-position marking action performed on the captured image, to form a marked image;
S340: sending the marked image to a server, so that the server identifies the damaged part based on the relative positional relationship between the damaged region and a feature component in the marked image.
With the above methods for identifying damaged vehicle parts implemented on the client or server side alone, the user can photograph or video the damaged position on the client, manually circle the damage position, and the server side then judges and identifies the damaged part based on positional relationships. With the method for identifying damaged vehicle parts provided by the embodiments of this specification, a feature library of multiple recognizable vehicle parts and a feature correspondence library of their relative positional relationships can be built in advance. While capturing loss-assessment images, the user can manually circle the damaged position on the client. The recognizable vehicle features in the image can then be recognized, and their positions relative to the mark circled by the user can be determined. This relative position is further matched in the feature correspondence library to determine the damaged part, so that a simple manual operation by the on-site user assists in locating the damage, helps the insurance company match the damaged vehicle part, improves the accuracy and efficiency of identifying damaged parts during loss assessment, and greatly improves the user experience.
The above embodiments of this specification also provide a client-server interactive implementation in which, after the client captures an image, the server first recognizes the damaged part once, marks it, and sends it to the client user for confirmation. Based on that interactive implementation, this specification also provides a method for identifying damaged vehicle parts that can be used on the server side. Specifically, in an embodiment, as shown in FIG. 8, the method can include:
S400: acquiring a captured image uploaded by a client, recognizing a first damage position in the captured image, and marking the first damage position in the captured image to generate a marked image;
S420: sending the marked image to the client;
S440: receiving an auxiliary damage image returned by the client and recognizing at least one feature component included in it, the auxiliary damage image including the image information formed after the vehicle damage positions are confirmed in the marked image through interactive operations;
S460: determining the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
S480: matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determining the damaged part in the captured image based on the related part.
After the marked image is sent to the client, the client can display it to the user. The user can confirm the vehicle damage position by comparison with the actual damage at the scene, either adjusting it or confirming and submitting it directly without adjustment. In another embodiment, if the user finds other damage positions that the server has not recognized and processed, the user can manually circle the missed damage position (which can be called the second damage position) on the client. Determining the damaged position with the aid of the user's on-site observation in this way allows the damaged part to be identified more precisely. Specifically, in another embodiment of the method provided by this specification:
S442: the auxiliary damage image further includes marking information of a second damage position, the second damage position including a new vehicle damage position added in the marked image.
Based on the above embodiments providing the client-server interactive implementation in which, after the client captures an image, the server first recognizes the damaged part once, marks it, and sends it to the client user for confirmation, this specification also provides a method for identifying damaged vehicle parts that can be used on the client side. Specifically, in an embodiment, as shown in FIG. 9, the method can include:
S500: acquiring a captured image of a vehicle and sending it to a server;
S520: receiving a marked image returned by the server, the marked image including the image information generated after a recognized first damage position is marked in the captured image;
S540: displaying the marking information of the first damage position in the marked image;
S560: confirming the vehicle damage positions based on received interactive operations, the vehicle damage positions including the first damage position;
S580: sending the auxiliary damage image obtained after the vehicle damage positions are confirmed to the server.
As described above, the user can see through the client the vehicle damage position in the image preliminarily recognized by the server, and can then confirm, according to the actual scene, whether the vehicle damage position recognized by the server is valid, achieving on-site user-assisted identification of the damaged part. In another embodiment of this specification, confirming the vehicle damage position can include:
S562: confirming whether the marked position of the first damage position in the displayed marked image is correct; and, if not, adjusting the marking information of the first damage position based on received interactive operations.
Adjusting the marking information of the first damage position can, in some implementation scenarios, include adjusting the position of the marking information in the marked image, or adjusting its size or shape. The user can adjust the displacement or other parameters of the marking information of the first damage position according to the actual vehicle damage at the scene.
In another embodiment, the user can also manually mark other vehicle damage positions through the client. In another embodiment of the method provided by this specification, confirming the vehicle damage positions based on received interactive operations can include:
S564: confirming the marking information of a second damage position based on received interactive operation instructions, the second damage position including a new vehicle damage position added in the marked image.
For the specific implementation processes of the above client-side or server-side embodiments, refer to the descriptions of the aforementioned related method embodiments, which are not repeated here.
With the other method for identifying damaged vehicle parts provided by the above embodiments of this specification, the user can send the originally captured images to the server while shooting; the server side automatically performs preliminary damage recognition, marks the damaged position, and sends it to the client user for confirmation. If the marked position is correct, the user can directly confirm and submit it to the system for positional relationship matching and then confirmation of the damaged part; if not, the user can quickly adjust it or add missed damage positions according to the actual scene. The damaged region can thus be quickly confirmed with the user's on-site manual assistance, and the damaged part can be matched and confirmed based on the positional relationship between the system-recognized feature components and the damaged region. Because the user at the damage scene is closer to the real damage situation, this can effectively improve the accuracy of damaged-part identification and the user's loss-assessment experience, help the insurance company match the damaged vehicle part, improve the accuracy and efficiency of identifying damaged parts during loss assessment, and greatly improve the user experience.
The above embodiments describe implementations in which the client shoots at the damage scene, the user assists in marking the damage position, and a remote server identifies the damaged part based on the positional relationship between the feature components and the damaged region. As described above, in some other embodiments, the processing of capturing images, circling the damaged part, recognizing feature components, matching positional relationships, and so on can also be completed by a terminal device on one side. For example, a dedicated client provided with the feature library and the feature correspondence library can capture images at the scene; the user can manually circle the damaged region on the dedicated client, and the dedicated client itself can recognize the feature components, determine the relative positional relationships, and so on. Without needing to send anything to a server, the dedicated client can locally identify the damaged part directly, or further complete the subsequent loss-assessment work. Therefore, this specification can also provide another method for identifying damaged vehicle parts, which can identify the damaged part directly at the scene based on the user's auxiliary marking. Specifically, the method can include:
S600: acquiring a captured image of a vehicle;
S620: determining a damaged region based on a damage-position marking action performed on the captured image, to form a marked image;
S640: recognizing a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on their image positions;
S660: matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
S680: determining the damaged part in the captured image based on the related part.
Of course, based on the description of the embodiment in which the server performs initial recognition after the client shoots and feeds the result back to the client for adjustment, this specification can also provide another embodiment. Specifically, the method can include:
S700: acquiring a captured image of a vehicle, recognizing a first damage position in the captured image, and marking the first damage position in the captured image to generate a marked image;
S720: displaying the marking information of the first damage position in the marked image;
S740: confirming the vehicle damage positions based on received interactive operations to form an auxiliary damage image, the vehicle damage positions including the first damage position;
S760: recognizing at least one feature component included in the auxiliary damage image, and determining the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
S780: matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determining the damaged part in the captured image based on the related part.
It should be noted that the above processing methods of the embodiments of this specification, which integrate the work of the client and the server at the damage scene, can also include other implementations according to the descriptions of the aforementioned method embodiments of client-server interaction, for example confirming the marking information of a second damage position based on received interactive operation instructions. For specific implementations, refer to the descriptions of the related method embodiments, which are not detailed here one by one.
The embodiments of the above methods in this specification are all described in a progressive manner; for identical or similar parts among the embodiments, reference can be made to one another, and each embodiment focuses on its differences from the others. For related details, refer to the partial descriptions of the method embodiments.
The method embodiments provided by the embodiments of this application can be executed in a mobile terminal, a computer terminal, a server, or a similar computing apparatus. Taking execution on a server as an example, FIG. 12 is a hardware structure block diagram of a server for identifying damaged vehicle parts according to an embodiment of the present invention. As shown in FIG. 12, the server 10 can include one or more processors 102 (only one is shown in the figure; the processor 102 can include, but is not limited to, a processing apparatus such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. Those of ordinary skill in the art can understand that the structure shown in FIG. 12 is only schematic and does not limit the structure of the above electronic apparatus. For example, the server 10 can include more or fewer components than shown in FIG. 12, for example other processing hardware such as a GPU (Graphics Processing Unit), or have a configuration different from that shown in FIG. 12.
The memory 104 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method for identifying damaged vehicle parts in the embodiments of the present invention. By running the software programs and modules stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the above-described processing method. The memory 104 can include high-speed random access memory and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 can further include memory remotely located relative to the processor 102, and such remote memory can be connected to the computer terminal 10 through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission module 106 is used to receive or send data via a network. Specific examples of the above network can include a wireless network provided by the communication provider of the computer terminal 10. In one example, the transmission module 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station and can thereby communicate with the Internet. In one example, the transmission module 106 can be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
Based on the method described above, this specification also provides an apparatus for identifying damaged vehicle parts. The apparatus can include systems (including distributed systems), software (applications), modules, components, servers, clients, and so on that use the methods of the embodiments of this specification, combined with the necessary implementation hardware. Based on the same innovative concept, the processing apparatus in an embodiment provided by this specification is as described in the following embodiments. Since the implementation solution by which the apparatus solves the problem is similar to that of the method, for the specific implementation of the processing apparatus of the embodiments of this specification, reference can be made to the implementation of the aforementioned method, and repetitions are not described again. Although the apparatuses described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated. Specifically, FIG. 13 is a schematic module structure diagram of an embodiment of an apparatus for identifying damaged vehicle parts, provided by this specification, that can be used on the server side, which can specifically include:
a receiving module 20, which can be configured to receive a marked image uploaded by a client, the marked image including a damaged region determined based on a damage-position marking action performed on a captured image;
a positional relationship determination module 21, which can be configured to recognize a feature component in the marked image and determine a relative positional relationship between the feature component and the damaged region based on their image positions;
a matching module 22, which can be configured to match the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
a part identification module 23, which can be configured to determine the damaged part in the captured image based on the related part.
FIG. 14 is a schematic module structure diagram of another embodiment of an apparatus for identifying damaged vehicle parts provided by this specification. In another embodiment, the apparatus can further include:
a feature library 24, which can be configured to store vehicle parts of a vehicle;
a feature correspondence library 25, which can be configured to store part positional relationship data built with the vehicle parts in the feature library as reference benchmarks, the part positional relationship data including at least one of the following kinds of relationship data: the related part occupying the region between vehicle parts, the related part occupying the region in a specified direction of a vehicle part, and the related part occupying a region defined by a specified proportion between vehicle parts.
In another embodiment of the apparatus provided by this specification, the specific processing by which the positional relationship determination module 21 determines the relative positional relationship between the feature component and the damaged region can include:
constructing a coordinate system with the center point of the damaged region as the coordinate origin;
determining the position coordinate data of each feature component in the coordinate system;
determining the relative positional relationship between the feature components and the damaged region based on the position coordinate data of the feature components.
In another embodiment of the apparatus provided by this specification, the positional relationship determination module 21 can include:
a feature selection unit 210, which can be configured to, when the number N of feature components recognized in the marked image is greater than 3, select K of the N feature components to determine the relative positional relationship between the damaged region and the K feature reference components, where 2≤K≤3.
In another embodiment of the apparatus provided by this specification, determining by the positional relationship determination module 21 the relative positional relationship between the feature components and the damaged region based on their position coordinate data can include:
converting the damaged region into a corresponding first regular geometric figure according to the shape of the damaged region;
constructing, in the coordinate system, a second regular geometric figure that includes the coordinate origin and the position coordinate data of at least two feature components;
computing, for each second regular geometric figure, the feature area of the first regular geometric figure contained in it;
determining, based on the size of the feature area and the coordinate distances of the feature components, the region range information of the damaged region between the feature components;
determining, by matching based on the region range information, the relative positional relationship between the feature components and the damaged region.
In another embodiment of the apparatus provided by this specification, if the matching module 22 finds no positional relationship data matching the relative positional relationship in the feature correspondence library, it obtains the positional relationship data with the highest matching degree to the relative positional relationship; and,
the related part corresponding to the positional relationship data with the highest matching degree is taken as the related part matching the relative positional relationship.
FIG. 15 is a schematic module structure diagram of an embodiment of an apparatus for identifying damaged vehicle parts, provided by this specification, that can be used on the client side, which can specifically include:
an image acquisition module 30, which can be configured to acquire a captured image of a vehicle;
a position marking module 31, which can be configured to determine a damaged region based on a damage-position marking action performed on the captured image, to form a marked image;
an image sending module 32, which can be configured to send the marked image to a server, so that the server identifies the damaged part based on the relative positional relationship between the damaged region and a feature component in the marked image.
Based on the descriptions of the aforementioned method embodiments, this specification also provides another apparatus for identifying damaged vehicle parts that can be used on the server side. FIG. 16 is a schematic module structure diagram of such an apparatus embodiment, which can specifically include:
an image marking module 40, which can be configured to acquire a captured image uploaded by a client, recognize a first damage position in the captured image, and mark the first damage position in the captured image to generate a marked image;
a mark sending module 41, which can be configured to send the marked image to the client;
an auxiliary interaction module 42, which can be configured to receive an auxiliary damage image returned by the client and recognize at least one feature component included in it, the auxiliary damage image including the image information formed after the vehicle damage positions are confirmed in the marked image through interactive operations;
a position determination module 43, which can be configured to determine the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
a part identification module 44, which can be configured to match the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determine the damaged part in the captured image based on the related part.
In another embodiment of the apparatus provided by this specification, the auxiliary damage image received by the auxiliary interaction module 42 can further include marking information of a second damage position, the second damage position including a new vehicle damage position added in the marked image.
Based on the descriptions of the aforementioned method embodiments, this specification also provides another apparatus for identifying damaged vehicle parts that can be used on the client side. FIG. 17 is a schematic module structure diagram of such an apparatus embodiment, which can specifically include:
a first image sending module 50, which can be configured to acquire a captured image of a vehicle and send it to a server;
a mark receiving module 51, which can be configured to receive a marked image returned by the server, the marked image including the image information generated after a recognized first damage position is marked in the captured image;
a mark display module 52, which can be configured to display the marking information of the first damage position in the marked image;
a damage position confirmation module 53, which can be configured to confirm the vehicle damage positions based on received interactive operations, the vehicle damage positions including the first damage position;
a second image sending module 54, which can be configured to send the auxiliary damage image obtained after the vehicle damage positions are confirmed to the server.
In another embodiment of the apparatus provided by this specification, the damage position confirmation module 53 can include:
a first adjustment unit 530, which can be configured to confirm whether the marked position of the first damage position in the displayed marked image is correct, and, if not, adjust the marking information of the first damage position based on received interactive operations.
In another embodiment of the apparatus provided by this specification, the damage position confirmation module 53 can include:
a second adjustment unit 532, which can be configured to confirm the marking information of a second damage position based on received interactive operation instructions, the second damage position including a new vehicle damage position added in the marked image.
FIG. 18 is a schematic structure diagram of an embodiment of a damage position confirmation module provided by this specification; in some embodiments, it can include both the first adjustment unit 530 and the second adjustment unit 532 described above.
For the specific implementation of the apparatuses described in the above embodiments, refer to the descriptions of the related method embodiments, which are not repeated here.
The methods for identifying damaged vehicle parts provided by the embodiments of this specification can be implemented in a computer by a processor executing the corresponding program instructions, for example implemented on a PC in C++ under a Windows operating system, implemented with the application design languages corresponding to other systems such as Linux, Android, or iOS together with the necessary hardware, or implemented with processing logic based on a quantum computer. Specifically, in an embodiment in which a server provided by this specification implements the above method, the server can include a processor and a memory for storing processor-executable instructions, and when executing the instructions the processor implements:
receiving a marked image uploaded by a client, the marked image including a damaged region determined based on a damage-position marking action performed on a captured image;
recognizing a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on their image positions;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
determining the damaged part in the captured image based on the related part.
The user can manually circle the damaged region on the client; the server then recognizes the feature components and confirms the damaged part based on the relative positional relationship between the damaged region and the feature components.
Based on the aforementioned method embodiments, this specification also provides a client, which can include a processor and a memory for storing processor-executable instructions, and when executing the instructions the processor implements:
acquiring a captured image of a vehicle;
determining a damaged region based on a damage-position marking action performed on the captured image, to form a marked image;
sending the marked image to a server, so that the server identifies the damaged part based on the relative positional relationship between the damaged region and a feature component in the marked image.
The above instructions can be stored in a variety of computer-readable storage media. A computer-readable storage medium can include a physical apparatus for storing information; the information can be digitized and then stored in media using electrical, magnetic, optical, or other means. The computer-readable storage medium of this embodiment can include: apparatus that stores information electrically, such as various kinds of memory, e.g., RAM and ROM; apparatus that stores information magnetically, such as hard disks, floppy disks, magnetic tapes, magnetic core memory, magnetic bubble memory, and USB flash drives; and apparatus that stores information optically, such as CDs or DVDs. Of course, there are also readable storage media of other forms, such as quantum memory and graphene memory. The instructions in the apparatus, server, client, or system described below are the same as described above.
With the client or server for identifying damaged vehicle parts provided by the embodiments of this specification, a feature library of multiple recognizable vehicle parts and a feature correspondence library of their relative positional relationships can be built in advance. While capturing loss-assessment images, the user can manually circle the damaged position on the client. The recognizable vehicle features in the image can then be recognized, and their positions relative to the mark circled by the user can be determined. This relative position is further matched in the feature correspondence library to determine the damaged part, so that a simple manual operation by the on-site user assists in locating the damage, helps the insurance company match the damaged vehicle part, improves the accuracy and efficiency of identifying damaged parts during loss assessment, and greatly improves the user experience.
In another server embodiment, after the user captures images, the server can also first recognize the damaged region, mark it, and return it to the user for confirmation. The user can quickly make adjustments according to the actual scene, such as adding or deleting damaged regions, or adjusting the position, size, or shape of a damaged region recognized by the server. Specifically, the server can include a processor and a memory for storing processor-executable instructions, and when executing the instructions the processor implements:
acquiring a captured image uploaded by a client, recognizing a first damage position in the captured image, and marking the first damage position in the captured image to generate a marked image;
sending the marked image to the client;
receiving an auxiliary damage image returned by the client and recognizing at least one feature component included in it, the auxiliary damage image including the image information formed after the vehicle damage positions are confirmed in the marked image through interactive operations;
determining the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determining the damaged part in the captured image based on the related part.
In another client embodiment, the client can include a processor and a memory for storing processor-executable instructions, and when executing the instructions the processor implements:
acquiring a captured image of a vehicle and sending it to a server;
receiving a marked image returned by the server, the marked image including the image information generated after a recognized first damage position is marked in the captured image;
displaying the marking information of the first damage position in the marked image;
confirming the vehicle damage positions based on received interactive operations, the vehicle damage positions including the first damage position;
sending the auxiliary damage image obtained after the vehicle damage positions are confirmed to the server.
With the other client and server for identifying damaged vehicle parts provided by the above embodiments of this specification, the user can send the originally captured images to the server while shooting; the server side automatically performs preliminary damage recognition, marks the damaged position, and sends it to the client user for confirmation. If the marked position is correct, the user can directly confirm and submit it to the system for positional relationship matching and then confirmation of the damaged part; if not, the user can quickly adjust it or add missed damage positions according to the actual scene. The damaged region can thus be quickly confirmed with the user's on-site manual assistance, and the damaged part can be matched and confirmed based on the positional relationship between the system-recognized feature components and the damaged region. Because the user at the damage scene is closer to the real damage situation, this can effectively improve the accuracy of damaged-part identification and the user's loss-assessment experience, help the insurance company match the damaged vehicle part, improve the accuracy and efficiency of identifying damaged parts during loss assessment, and greatly improve the user experience.
Based on the foregoing, the embodiments of this specification also provide an electronic device comprising a display screen, a processor, and a memory storing processor-executable instructions. The electronic device can include a dedicated on-site device integrating the feature library and the feature correspondence library, which can identify the damaged part, or further complete the vehicle loss assessment, directly while shooting at the damage scene. FIG. 19 is a schematic structure diagram of an embodiment of an electronic device provided by this specification. The display screen can include a touch screen, a liquid crystal display, a projection device, or another device that displays information content. The types of the electronic device can include a mobile terminal, a dedicated auto-insurance device, an in-vehicle interaction device, a personal computer, and so on. When executing the instructions, the processor can implement:
acquiring a captured image of a vehicle;
determining a damaged region based on a damage-position marking action performed on the captured image through the display screen, to form a marked image;
recognizing a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on their image positions;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part;
determining the damaged part in the captured image based on the related part.
Another electronic device embodiment provided by this specification can likewise include a display screen, a processor, and a memory storing processor-executable instructions. When executing the instructions, the processor can implement:
acquiring a captured image of a vehicle, recognizing a first damage position in the captured image, and marking the first damage position in the captured image to generate a marked image;
displaying the marking information of the first damage position in the marked image on the display screen;
confirming the vehicle damage positions based on received interactive operations to form an auxiliary damage image, the vehicle damage positions including the first damage position;
recognizing at least one feature component included in the auxiliary damage image, and determining the relative positional relationship between the feature component and the vehicle damage positions in the auxiliary damage image;
matching the relative positional relationship against a feature correspondence library to obtain a corresponding related part, and determining the damaged part in the captured image based on the related part.
It should be noted that, according to the descriptions of the related method embodiments, the apparatuses and electronic devices described above in the embodiments of this specification can also include other implementations. For specific implementations, refer to the descriptions of the method embodiments, which are not detailed here one by one.
The embodiments in this specification are all described in a progressive manner; for identical or similar parts among them, reference can be made to one another, and each embodiment focuses on its differences from the others. In particular, for the hardware-plus-program embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for related details refer to the partial descriptions of the method embodiments.
Specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing are also possible or may be advantageous.
Based on the descriptions of the aforementioned method or apparatus embodiments of identifying damaged vehicle parts through client-server interaction, this specification also provides a system for identifying damaged vehicle parts. The system can include a first client and a first server; in the application scenario of this specification in which the damage position is manually marked on the client and recognized and processed by the server, the first client can implement the processing method of any one of the client embodiments, and the first server can implement the processing method of any one of the server embodiments.
This specification also provides another system for identifying damaged vehicle parts. The system can include a second client and a second server; in the application scenario of this specification in which the client captures an image, the server performs preliminary recognition, and the result is returned to the client for confirmation, the second client can implement the processing method of any one of the client embodiments, and the second server can implement the processing method of any one of the server embodiments.
Although this application provides the method operation steps described in the embodiments or flowcharts, more or fewer operation steps can be included based on conventional or non-creative effort. The order of steps listed in the embodiments is only one of many possible execution orders and does not represent the only one. When an actual apparatus or client product executes them, the steps can be executed sequentially or in parallel according to the methods shown in the embodiments or drawings (for example, in an environment with parallel processors or multi-threaded processing).
Although the content of the embodiments of this specification mentions operations and data descriptions such as building the feature library or feature correspondence library, marking the damage position with a rectangular box, determining positional relationships based on area sizes, preliminarily recognizing the damage position with a convolutional neural network, and other data acquisition, position arrangement, interaction, computation, and judgment, the embodiments of this specification are not limited to situations that necessarily comply with industry communication standards, standard image data processing protocols, communication protocols, standard data models/templates, or the situations described in the embodiments of this specification. Implementations slightly modified on the basis of certain industry standards, or on the basis of implementations described in a custom manner or in the embodiments, can also achieve implementation effects identical, equivalent, or similar to the above embodiments, or foreseeable after variation. Embodiments obtained by applying such modified or varied methods of data acquisition, storage, judgment, and processing can still fall within the scope of the optional implementations of this specification.
在20世纪90年代,对于一个技术的改进可以很明显地区分是硬件上的改进(例如,对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。然而,随着技术的发展,当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此,不能说一个方法流程的改进就不能用硬件实体模块来实现。例如,可编程逻辑器件(Programmable Logic Device,PLD)(例如现场可编程门阵列(Field Programmable Gate Array,FPGA))就是这样一种集成电路,其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上,而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且,如今,取代手工地制作集成电路芯片,这种编程也多半改用“逻辑编译器(logic compiler)”软件来实现,它与程序开发撰写时所用的软件编译器相类似,而要编译之前的原始代码也得用特定的编程语言来撰写,此称之为硬件描述语言(Hardware Description Language,HDL),而HDL也并非仅有一种,而是有许多种,如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等,目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚,只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中,就可以很容易得到实现该逻辑方法流程的硬件电路。
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC 625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而 对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机。具体的,计算机例如可以为个人计算机、膝上型计算机、车载人机交互设备、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。
虽然本说明书实施例提供了如实施例或流程图所述的方法操作步骤,但基于常规或者无创造性的手段可以包括更多或者更少的操作步骤。实施例中列举的步骤顺序仅仅为众多步骤执行顺序中的一种方式,不代表唯一的执行顺序。在实际中的装置或终端产品执行时,可以按照实施例或者附图所示的方法顺序执行或者并行执行(例如并行处理器或者多线程处理的环境,甚至为分布式数据处理环境)。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、产品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、产品或者设备所固有的要素。在没有更多限制的情况下,并不排除在包括所述要素的过程、方法、产品或者设备中还存在另外的相同或等同要素。
为了描述的方便,描述以上装置时以功能分为各种模块分别描述。当然,在实施本说明书实施例时可以把各模块的功能在同一个或多个软件和/或硬件中实现,也可以将实现同一功能的模块由多个子模块或子单元的组合实现等。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each procedure and/or block in the flowcharts and/or block diagrams, and combinations of procedures and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce means for implementing the functions specified in one or more procedures of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, the instruction means implementing the functions specified in one or more procedures of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device thereby provide steps for implementing the functions specified in one or more procedures of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and can implement information storage by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
A person skilled in the art should understand that the embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, the embodiments of this specification can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments of this specification can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The embodiments of this specification can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The embodiments of this specification can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference can be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, they are described relatively simply, and for relevant parts reference can be made to the description of the method embodiments. In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", or the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of this specification. In this specification, illustrative uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics can be combined in a suitable manner in any one or more embodiments or examples. In addition, a person skilled in the art can combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict one another.
The above descriptions are merely embodiments of this specification and are not intended to limit the embodiments of this specification. Various modifications and variations of the embodiments of this specification are possible for a person skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the embodiments of this specification shall fall within the scope of the claims of the embodiments of this specification.

Claims (36)

  1. A method for identifying a damaged component of a vehicle, the method comprising:
    a client acquiring a captured image of a vehicle;
    the client determining a damaged region based on a damage-location marking operation in the captured image, to form a marked image;
    the client sending the marked image to a server;
    the server identifying a feature component in the marked image and determining a relative positional relationship between the feature component and the damaged region based on the image positions of the feature component and the damaged region;
    the server matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component; and
    the server determining, based on the related component, a damaged component in the captured image.
  2. A method for identifying a damaged component of a vehicle, the method comprising:
    receiving a marked image uploaded by a client, the marked image including a damaged region determined based on a damage-location marking operation in a captured image;
    identifying a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on the image positions of the feature component and the damaged region;
    matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component; and
    determining, based on the related component, a damaged component in the captured image.
  3. The method according to claim 2, wherein the feature component comprises a vehicle component included in a constructed feature library; and
    correspondingly, the feature correspondence library comprises component positional-relationship data constructed with the vehicle components in the feature library as reference benchmarks, the component positional-relationship data including at least one of: a related component to which the region between vehicle components belongs, a related component to which a region in a specified direction of a vehicle component belongs, and a related component to which a region range of a specified proportion between vehicle components belongs.
  4. The method according to claim 2, wherein determining the relative positional relationship between the feature component and the damaged region comprises:
    constructing a coordinate system with the center point of the damaged region as the coordinate origin;
    determining position coordinate data of each feature component in the coordinate system; and
    determining the relative positional relationship between the feature component and the damaged region based on the position coordinate data of the feature component.
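The coordinate construction of claim 4 can be sketched in a few lines of Python. This is an illustrative sketch only: the claim does not prescribe any data format, so the axis-aligned `Box` type, its field names, and the use of box centers as component positions are all assumptions made here for the example.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box in image pixel coordinates (hypothetical format)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def center(self):
        return ((self.x_min + self.x_max) / 2.0, (self.y_min + self.y_max) / 2.0)

def feature_coordinates(damage: Box, features: dict) -> dict:
    """Express each feature component's center in a coordinate system whose
    origin is the center point of the damaged region, as claim 4 describes."""
    ox, oy = damage.center()
    return {name: (box.center()[0] - ox, box.center()[1] - oy)
            for name, box in features.items()}
```

For a damaged region centred at (100, 50) and a headlight box centred at (20, 15), the headlight's coordinates relative to the damage would be (-80, -35); such offsets are what the subsequent relative-position matching operates on.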
  5. The method according to claim 4, wherein, if the number N of feature components identified in the marked image is greater than 3, K of the N feature components are selected to determine the relative positional relationship between the damaged region and the K feature reference components, where 2≤K≤3.
  6. The method according to claim 5, wherein determining the relative positional relationship between the feature component and the damaged region based on the position coordinate data of the feature component comprises:
    converting the damaged region into a corresponding first regular geometric figure according to the shape of the damaged region;
    constructing, in the coordinate system, a second regular geometric figure that includes the coordinate origin and the position coordinate data of at least two feature components;
    calculating the feature area of the first regular geometric figure contained within the second regular geometric figure;
    determining region-range information of the damaged region among the feature components based on the feature area and the coordinate distances of the feature components; and
    determining, by matching based on the region-range information, the relative positional relationship between the feature component and the damaged region.
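One way to read claim 6: reduce the damaged region to a regular shape (here, a circle at the origin), form a triangle from the origin and two feature-component coordinates, and measure how much of the circle falls inside the triangle. The sketch below is hypothetical: the claim does not fix which regular figures are used or how the contained area is computed, and the grid-sampling approximation is chosen only to keep the example self-contained.

```python
import math

def _sign(a, b, c):
    """Cross-product sign used for the half-plane test."""
    return (a[0] - c[0]) * (b[1] - c[1]) - (b[0] - c[0]) * (a[1] - c[1])

def point_in_triangle(pt, p, q, r):
    """True when pt lies inside (or on an edge of) triangle p-q-r."""
    d1, d2, d3 = _sign(pt, p, q), _sign(pt, q, r), _sign(pt, r, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def circle_area_in_triangle(radius, p, q, r, steps=400):
    """Approximate the area of a circle centred at the origin (the damaged
    region reduced to a 'first regular geometric figure') that lies inside
    triangle p-q-r (a 'second regular geometric figure' built from the origin
    and two feature-component coordinates), by uniform grid sampling."""
    inside = total = 0
    for i in range(steps):
        for j in range(steps):
            x = -radius + 2 * radius * (i + 0.5) / steps
            y = -radius + 2 * radius * (j + 0.5) / steps
            if x * x + y * y <= radius * radius:
                total += 1
                if point_in_triangle((x, y), p, q, r):
                    inside += 1
    return (inside / total) * math.pi * radius ** 2
```

A triangle that covers the whole circle yields the full area of roughly pi times r squared, while one covering only the right half-plane yields about half of it; that containment ratio is the kind of region-range information the claim then matches against the feature correspondence library.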
  7. The method according to claim 2, wherein, if no positional-relationship data matching the relative positional relationship is found in the feature correspondence library, the positional-relationship data having the highest matching degree with the relative positional relationship is obtained; and
    the related component corresponding to the positional-relationship data having the highest matching degree is taken as the related component matching the relative positional relationship.
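The fallback of claim 7 (exact lookup, else the entry with the highest matching degree) maps naturally onto a dictionary plus a scoring function. Everything here is hypothetical scaffolding: the key format, the part names, and the token-overlap similarity are stand-ins for whatever the feature correspondence library actually stores.

```python
def match_relation_part(relative_key, correspondence_db, similarity):
    """Look up a positional relationship in the feature correspondence library;
    when no entry matches exactly, fall back to the entry with the highest
    matching degree, as claim 7 describes."""
    if relative_key in correspondence_db:
        return correspondence_db[relative_key]
    best_key = max(correspondence_db, key=lambda k: similarity(relative_key, k))
    return correspondence_db[best_key]

def token_overlap(a, b):
    """Toy matching degree: number of tokens shared between two keys."""
    return len(set(a) & set(b))
```

With a two-entry library, a query of `("left_headlight", "front_bumper")` hits exactly, while `("left_headlight", "hood")` has no exact entry and falls back to the closest one.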
  8. A method for identifying a damaged component of a vehicle, the method comprising:
    acquiring a captured image of a vehicle;
    determining a damaged region based on a damage-location marking operation in the captured image, to form a marked image; and
    sending the marked image to a server, so that the server identifies a damaged component based on the relative positional relationship between the damaged region and a feature component in the marked image.
  9. A method for identifying a damaged component of a vehicle, the method comprising:
    a client acquiring a captured image of a vehicle and sending the captured image to a server;
    the server identifying a first damage location in the captured image and marking the first damage location in the captured image to generate a marked image;
    the server sending the marked image to the client;
    the client displaying marking information of the first damage location in the marked image;
    the client confirming a vehicle damage location based on a received interactive operation, the vehicle damage location including the first damage location;
    the client sending an auxiliary damage image, obtained after the vehicle damage location is confirmed, to the server;
    the server, after receiving the auxiliary damage image, identifying at least one feature component included in the auxiliary damage image;
    the server determining a relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image;
    the server matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component; and
    the server determining, based on the related component, a damaged component in the captured image.
  10. A method for identifying a damaged component of a vehicle, the method comprising:
    obtaining a captured image uploaded by a client, identifying a first damage location in the captured image, and marking the first damage location in the captured image to generate a marked image;
    sending the marked image to the client;
    receiving an auxiliary damage image returned by the client, and identifying at least one feature component included in the auxiliary damage image, the auxiliary damage image including image information formed after a vehicle damage location is confirmed in the marked image based on an interactive operation;
    determining a relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image; and
    matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component, and determining, based on the related component, a damaged component in the captured image.
  11. The method according to claim 10, wherein the auxiliary damage image further includes marking information of a second damage location, the second damage location including a new vehicle damage location added in the marked image.
  12. A method for identifying a damaged component of a vehicle, the method comprising:
    acquiring a captured image of a vehicle and sending the captured image to a server;
    receiving a marked image returned by the server, the marked image including image information generated after an identified first damage location is marked in the captured image;
    displaying marking information of the first damage location in the marked image;
    confirming a vehicle damage location based on a received interactive operation, the vehicle damage location including the first damage location; and
    sending an auxiliary damage image, obtained after the vehicle damage location is confirmed, to the server.
  13. The method according to claim 12, wherein confirming the vehicle damage location comprises:
    confirming whether the marked position of the first damage location in the displayed marked image is correct; and, if not, adjusting the marking information of the first damage location based on a received interactive operation.
  14. The method according to claim 12, wherein confirming the vehicle damage location based on the received interactive operation comprises:
    confirming marking information of a second damage location based on a received interactive operation instruction, the second damage location including a new vehicle damage location added in the marked image.
  15. A method for identifying a damaged component of a vehicle, the method comprising:
    acquiring a captured image of a vehicle;
    determining a damaged region based on a damage-location marking operation in the captured image, to form a marked image;
    identifying a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on the image positions of the feature component and the damaged region;
    matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component; and
    determining, based on the related component, a damaged component in the captured image.
  16. A method for identifying a damaged component of a vehicle, the method comprising:
    acquiring a captured image of a vehicle, identifying a first damage location in the captured image, and marking the first damage location in the captured image to generate a marked image;
    displaying marking information of the first damage location in the marked image;
    confirming a vehicle damage location based on a received interactive operation to form an auxiliary damage image, the vehicle damage location including the first damage location;
    identifying at least one feature component included in the auxiliary damage image, and determining a relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image; and
    matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component, and determining, based on the related component, a damaged component in the captured image.
  17. An apparatus for identifying a damaged component of a vehicle, the apparatus comprising:
    a receiving module, configured to receive a marked image uploaded by a client, the marked image including a damaged region determined based on a damage-location marking operation in a captured image;
    a positional-relationship determination module, configured to identify a feature component in the marked image and determine a relative positional relationship between the feature component and the damaged region based on the image positions of the feature component and the damaged region;
    a matching module, configured to match the relative positional relationship in a feature correspondence library to obtain a corresponding related component; and
    a component identification module, configured to determine, based on the related component, a damaged component in the captured image.
  18. The apparatus according to claim 17, further comprising:
    a feature library, configured to store vehicle components of a vehicle; and
    a feature correspondence library, configured to store component positional-relationship data constructed with the vehicle components in the feature library as reference benchmarks, the component positional-relationship data including at least one of: a related component to which the region between vehicle components belongs, a related component to which a region in a specified direction of a vehicle component belongs, and a related component to which a region range of a specified proportion between vehicle components belongs.
  19. The apparatus according to claim 17, wherein the positional-relationship determination module determining the relative positional relationship between the feature component and the damaged region comprises:
    constructing a coordinate system with the center point of the damaged region as the coordinate origin;
    determining position coordinate data of each feature component in the coordinate system; and
    determining the relative positional relationship between the feature component and the damaged region based on the position coordinate data of the feature component.
  20. The apparatus according to claim 19, wherein the positional-relationship determination module comprises:
    a feature selection unit, configured to, when the number N of feature components identified in the marked image is greater than 3, select K of the N feature components to determine the relative positional relationship between the damaged region and the K feature reference components, where 2≤K≤3.
  21. The apparatus according to claim 20, wherein the positional-relationship determination module determining the relative positional relationship between the feature component and the damaged region based on the position coordinate data of the feature component comprises:
    converting the damaged region into a corresponding first regular geometric figure according to the shape of the damaged region;
    constructing, in the coordinate system, a second regular geometric figure that includes the coordinate origin and the position coordinate data of at least two feature components;
    calculating the feature area of the first regular geometric figure contained within the second regular geometric figure;
    determining region-range information of the damaged region among the feature components based on the feature area and the coordinate distances of the feature components; and
    determining, by matching based on the region-range information, the relative positional relationship between the feature component and the damaged region.
  22. The apparatus according to claim 18, wherein, if the matching module finds no positional-relationship data matching the relative positional relationship in the feature correspondence library, the positional-relationship data having the highest matching degree with the relative positional relationship is obtained; and
    the related component corresponding to the positional-relationship data having the highest matching degree is taken as the related component matching the relative positional relationship.
  23. An apparatus for identifying a damaged component of a vehicle, the apparatus comprising:
    an image acquisition module, configured to acquire a captured image of a vehicle;
    a location marking module, configured to determine a damaged region based on a damage-location marking operation in the captured image, to form a marked image; and
    an image sending module, configured to send the marked image to a server, so that the server identifies a damaged component based on the relative positional relationship between the damaged region and a feature component in the marked image.
  24. An apparatus for identifying a damaged component of a vehicle, the apparatus comprising:
    an image marking module, configured to obtain a captured image uploaded by a client, identify a first damage location in the captured image, and mark the first damage location in the captured image to generate a marked image;
    a marking sending module, configured to send the marked image to the client;
    an auxiliary interaction module, configured to receive an auxiliary damage image returned by the client and identify at least one feature component included in the auxiliary damage image, the auxiliary damage image including image information formed after a vehicle damage location is confirmed in the marked image based on an interactive operation;
    a location determination module, configured to determine a relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image; and
    a component identification module, configured to match the relative positional relationship in a feature correspondence library to obtain a corresponding related component, and determine, based on the related component, a damaged component in the captured image.
  25. The apparatus according to claim 24, wherein the auxiliary damage image received by the auxiliary interaction module further includes marking information of a second damage location, the second damage location including a new vehicle damage location added in the marked image.
  26. An apparatus for identifying a damaged component of a vehicle, the apparatus comprising:
    a first image sending module, configured to acquire a captured image of a vehicle and send the captured image to a server;
    a marking receiving module, configured to receive a marked image returned by the server, the marked image including image information generated after an identified first damage location is marked in the captured image;
    a marking display module, configured to display marking information of the first damage location in the marked image;
    a damage location confirmation module, configured to confirm a vehicle damage location based on a received interactive operation, the vehicle damage location including the first damage location; and
    a second image sending module, configured to send an auxiliary damage image, obtained after the vehicle damage location is confirmed, to the server.
  27. The apparatus according to claim 26, wherein the damage location confirmation module comprises:
    a first adjustment unit, configured to confirm whether the marked position of the first damage location in the displayed marked image is correct, and, if not, adjust the marking information of the first damage location based on a received interactive operation.
  28. The apparatus according to claim 27, wherein the damage location confirmation module comprises:
    a second adjustment unit, configured to confirm marking information of a second damage location based on a received interactive operation instruction, the second damage location including a new vehicle damage location added in the marked image.
  29. A server, comprising a processor and a memory configured to store processor-executable instructions, wherein when executing the instructions the processor implements:
    receiving a marked image uploaded by a client, the marked image including a damaged region determined based on a damage-location marking operation in a captured image;
    identifying a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on the image positions of the feature component and the damaged region;
    matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component; and
    determining, based on the related component, a damaged component in the captured image.
  30. A client, comprising a processor and a memory configured to store processor-executable instructions, wherein when executing the instructions the processor implements:
    acquiring a captured image of a vehicle;
    determining a damaged region based on a damage-location marking operation in the captured image, to form a marked image; and
    sending the marked image to a server, so that the server identifies a damaged component based on the relative positional relationship between the damaged region and a feature component in the marked image.
  31. A server, comprising a processor and a memory configured to store processor-executable instructions, wherein when executing the instructions the processor implements:
    obtaining a captured image uploaded by a client, identifying a first damage location in the captured image, and marking the first damage location in the captured image to generate a marked image;
    sending the marked image to the client;
    receiving an auxiliary damage image returned by the client, and identifying at least one feature component included in the auxiliary damage image, the auxiliary damage image including image information formed after a vehicle damage location is confirmed in the marked image based on an interactive operation;
    determining a relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image; and
    matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component, and determining, based on the related component, a damaged component in the captured image.
  32. A client, comprising a processor and a memory configured to store processor-executable instructions, wherein when executing the instructions the processor implements:
    acquiring a captured image of a vehicle and sending the captured image to a server;
    receiving a marked image returned by the server, the marked image including image information generated after an identified first damage location is marked in the captured image;
    displaying marking information of the first damage location in the marked image;
    confirming a vehicle damage location based on a received interactive operation, the vehicle damage location including the first damage location; and
    sending an auxiliary damage image, obtained after the vehicle damage location is confirmed, to the server.
  33. An electronic device, comprising a display screen, a processor, and a memory storing processor-executable instructions, wherein when executing the instructions the processor implements:
    acquiring a captured image of a vehicle;
    determining a damaged region based on a damage-location marking operation performed on the captured image on the display screen, to form a marked image;
    identifying a feature component in the marked image, and determining a relative positional relationship between the feature component and the damaged region based on the image positions of the feature component and the damaged region;
    matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component; and
    determining, based on the related component, a damaged component in the captured image.
  34. An electronic device, comprising a display screen, a processor, and a memory storing processor-executable instructions, wherein when executing the instructions the processor implements:
    acquiring a captured image of a vehicle, identifying a first damage location in the captured image, and marking the first damage location in the captured image to generate a marked image;
    displaying, on the display screen, marking information of the first damage location in the marked image;
    confirming a vehicle damage location based on a received interactive operation to form an auxiliary damage image, the vehicle damage location including the first damage location;
    identifying at least one feature component included in the auxiliary damage image, and determining a relative positional relationship between the feature component and the vehicle damage location in the auxiliary damage image; and
    matching the relative positional relationship in a feature correspondence library to obtain a corresponding related component, and determining, based on the related component, a damaged component in the captured image.
  35. A system for identifying a damaged component of a vehicle, comprising a first client and a first server, wherein
    the first server implements the steps of the method according to any one of claims 2 to 7; and
    the first client implements the steps of the method according to claim 8.
  36. A system for identifying a damaged component of a vehicle, comprising a second client and a second server, wherein
    the second server implements the steps of the method according to claim 10 or 11; and
    the second client implements the steps of the method according to any one of claims 12 to 14.
PCT/CN2018/107217 2017-11-21 2018-09-25 Method, apparatus, server, client and system for identifying a damaged part of a vehicle WO2019100839A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP18881645.8A EP3716195A4 (en) 2017-11-21 2018-09-25 METHOD AND DEVICE FOR IDENTIFYING DAMAGED VEHICLE PARTS, SERVER, CLIENT DEVICE AND SYSTEM
SG11202004704QA SG11202004704QA (en) 2017-11-21 2018-09-25 Method and apparatus for identifying a damaged part of a vehicle, server, client and system
US16/879,367 US11341746B2 (en) 2017-11-21 2020-05-20 Method and apparatus for identifying a damaged part of a vehicle, server, client and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711161926.9 2017-11-21
CN201711161926.9A CN108090838B (zh) 2017-11-21 2017-11-21 Method, apparatus, server, client and system for identifying a damaged part of a vehicle

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/879,367 Continuation US11341746B2 (en) 2017-11-21 2020-05-20 Method and apparatus for identifying a damaged part of a vehicle, server, client and system

Publications (1)

Publication Number Publication Date
WO2019100839A1 true WO2019100839A1 (zh) 2019-05-31

Family

ID=62172293

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/107217 WO2019100839A1 (zh) 2017-11-21 2018-09-25 Method, apparatus, server, client and system for identifying a damaged part of a vehicle

Country Status (6)

Country Link
US (1) US11341746B2 (zh)
EP (1) EP3716195A4 (zh)
CN (1) CN108090838B (zh)
SG (1) SG11202004704QA (zh)
TW (1) TWI686746B (zh)
WO (1) WO2019100839A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473418A (zh) * 2019-07-25 2019-11-19 Ping An Technology (Shenzhen) Co., Ltd. Dangerous road section identification method and apparatus, server, and storage medium

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090838B (zh) 2017-11-21 2020-09-29 Alibaba Group Holding Limited Method, apparatus, server, client and system for identifying a damaged part of a vehicle
CN108875648A (zh) * 2018-06-22 2018-11-23 Shenyuan Hengji Technology Co., Ltd. Method for real-time vehicle damage and component detection based on a mobile phone video stream
CN109145903A (zh) * 2018-08-22 2019-01-04 Alibaba Group Holding Limited Image processing method and apparatus
CN110569856B (zh) * 2018-08-24 2020-07-21 Alibaba Group Holding Limited Sample labeling method and apparatus, and damage category identification method and apparatus
CN110570316A (zh) 2018-08-31 2019-12-13 Alibaba Group Holding Limited Method and apparatus for training a damage identification model
CN110567728B (zh) * 2018-09-03 2021-08-20 Advanced New Technologies Co., Ltd. Method, apparatus and device for identifying a user's shooting intention
WO2020051545A1 (en) * 2018-09-07 2020-03-12 Alibaba Group Holding Limited Method and computer-readable storage medium for generating training samples for training a target detector
CN110569700B (zh) * 2018-09-26 2020-11-03 Advanced New Technologies Co., Ltd. Method and apparatus for optimizing damage identification results
CN109410270B (zh) * 2018-09-28 2020-10-27 Baidu Online Network Technology (Beijing) Co., Ltd. Damage assessment method, device, and storage medium
CN109359676A (zh) * 2018-10-08 2019-02-19 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for generating vehicle damage information
CN109900702A (zh) * 2018-12-03 2019-06-18 Alibaba Group Holding Limited Processing method, apparatus, device, server and system for vehicle damage detection
TWI734349B (zh) * 2019-08-19 2021-07-21 VIA Technologies, Inc. Neural network image recognition system, and neural network building system and method used therein
CN110660000A (zh) * 2019-09-09 2020-01-07 Ping An Technology (Shenzhen) Co., Ltd. Data prediction method, apparatus, device, and computer-readable storage medium
CN111209957B (zh) * 2020-01-03 2023-07-18 Ping An Technology (Shenzhen) Co., Ltd. Vehicle component identification method and apparatus, computer device, and storage medium
AU2021204872A1 (en) * 2020-01-03 2022-08-04 Tractable Ltd Method of determining damage to parts of a vehicle
CN112000829B (zh) * 2020-09-03 2023-05-30 iFLYTEK Co., Ltd. Consultation response method, apparatus, device, and storage medium
US11769120B2 (en) * 2020-10-14 2023-09-26 Mitchell International, Inc. Systems and methods for improving user experience during damage appraisal
CN112802156A (zh) * 2020-12-31 2021-05-14 Shandong Aobang Traffic Facilities Engineering Co., Ltd. Region identification method and system based on four-point transformation
CN113361457A (zh) * 2021-06-29 2021-09-07 Beijing Baidu Netcom Science and Technology Co., Ltd. Image-based vehicle damage assessment method, apparatus, and system
JP2023536213A (ja) * 2021-06-29 2023-08-24 Beijing Baidu Netcom Science and Technology Co., Ltd. Image-based vehicle damage assessment method, apparatus, and system
US11726587B2 (en) * 2021-11-03 2023-08-15 Htc Corporation Virtual image display system and pointing direction control method of control device thereof
US20230153975A1 (en) * 2021-11-16 2023-05-18 Solera Holdings, Llc Transfer of damage markers from images to 3d vehicle models for damage assessment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268783A (zh) * 2014-05-30 2015-01-07 Audatex Information Systems (China) Co., Ltd. Vehicle damage assessment and valuation method, apparatus, and terminal device
CN106203644A (zh) * 2016-08-09 2016-12-07 Shenzhen Yongxingyuan Technology Co., Ltd. Vehicle damage assessment method and apparatus
CN106600422A (zh) * 2016-11-24 2017-04-26 Ping An Property & Casualty Insurance Company of China, Ltd. Intelligent vehicle insurance damage assessment method and system
CN108090838A (zh) * 2017-11-21 2018-05-29 Alibaba Group Holding Limited Method, apparatus, server, client and system for identifying a damaged part of a vehicle

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3808182B2 (ja) * 1997-08-28 2006-08-09 Tsubasa System Co., Ltd. Vehicle repair cost estimation system and recording medium storing a repair cost estimation program
WO2003023557A2 (en) * 2001-09-06 2003-03-20 Wtd Technologies, Inc. Accident evidence recording method
US10748216B2 (en) * 2013-10-15 2020-08-18 Audatex North America, Inc. Mobile system for generating a damaged vehicle insurance estimate
US20160071258A1 (en) * 2014-09-08 2016-03-10 Guy L. McClung, III Responsibility system, ID and Tracking of items and debris including retread tires
CN105550756B (zh) * 2015-12-08 2017-06-16 Youyi Business Management Chengdu Co., Ltd. Rapid vehicle damage assessment method based on simulated vehicle damage
CN105719188B (zh) * 2016-01-22 2017-12-26 Ping An Technology (Shenzhen) Co., Ltd. Method and server for insurance claim anti-fraud based on the consistency of multiple pictures
US10692050B2 (en) * 2016-04-06 2020-06-23 American International Group, Inc. Automatic assessment of damage and repair costs in vehicles
CN106127747B (zh) * 2016-06-17 2018-10-16 Shi Fang Deep-learning-based vehicle surface damage classification method and apparatus
CN106600421A (zh) * 2016-11-21 2017-04-26 Ping An Property & Casualty Insurance Company of China, Ltd. Intelligent vehicle insurance damage assessment method and system based on picture recognition
CN106780048A (zh) * 2016-11-28 2017-05-31 Ping An Property & Casualty Insurance Company of China, Ltd. Self-service claim settlement method, apparatus, and system for intelligent vehicle insurance
CN107340120B (zh) * 2016-12-05 2019-02-19 Anhui Jianghuai Automobile Group Corp., Ltd. Vehicle seat neck injury identification method and system
CN107358596B (zh) * 2017-04-11 2020-09-18 Alibaba Group Holding Limited Image-based vehicle damage assessment method and apparatus, electronic device, and system
CN107194323B (zh) * 2017-04-28 2020-07-03 Alibaba Group Holding Limited Vehicle damage assessment image acquisition method, apparatus, server, and terminal device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3716195A4 *


Also Published As

Publication number Publication date
EP3716195A4 (en) 2021-08-04
EP3716195A1 (en) 2020-09-30
SG11202004704QA (en) 2020-06-29
CN108090838B (zh) 2020-09-29
TW201926130A (zh) 2019-07-01
CN108090838A (zh) 2018-05-29
TWI686746B (zh) 2020-03-01
US20200349353A1 (en) 2020-11-05
US11341746B2 (en) 2022-05-24

Similar Documents

Publication Publication Date Title
WO2019100839A1 (zh) Method, apparatus, server, client and system for identifying a damaged part of a vehicle
WO2019214313A1 (zh) Interactive processing method and apparatus for vehicle damage assessment, processing device, and client
US20210058608A1 (en) Method and apparatus for generating three-dimensional (3d) road model
CN111325796B (zh) Method and apparatus for determining the pose of a vision device
WO2021128777A1 (en) Method, apparatus, device, and storage medium for detecting travelable region
JP6364049B2 (ja) Vehicle contour detection method and apparatus based on point cloud data, storage medium, and computer program
WO2022078467A1 (zh) Automatic robot recharging method and apparatus, robot, and storage medium
CN110245552B (zh) Interactive processing method, apparatus, device, and client for capturing vehicle damage images
US20180189577A1 (en) Systems and methods for lane-marker detection
WO2023016271A1 (zh) Pose determination method, electronic device, and readable storage medium
JP2022016908A (ja) Bird's-eye-view image generation device, bird's-eye-view image generation system, and automatic parking device
JP2020057387A (ja) Vehicle positioning method, vehicle positioning apparatus, electronic device, and computer-readable storage medium
CN114097006A (zh) Cross-modality sensor data alignment
CN113706633B (zh) Method and apparatus for determining three-dimensional information of a target object
Mariotti et al. Spherical formulation of geometric motion segmentation constraints in fisheye cameras
CN114091521B (zh) Vehicle heading angle detection method, apparatus, device, and storage medium
WO2021175119A1 (zh) Method and apparatus for acquiring 3D information of a vehicle
KR20190060679A (ko) Method and apparatus for learning the pose of a moving object
US20200410261A1 (en) Object identification in data relating to signals that are not human perceptible
CN114267041A (zh) Method and apparatus for recognizing objects in a scene
Qian et al. Survey on fish-eye cameras and their applications in intelligent vehicles
WO2022142890A1 (zh) Data processing method and related apparatus
CN116007637B (zh) Positioning apparatus and method, in-vehicle device, vehicle, and computer program product
JP2022544348A (ja) Method and system for identifying an object
US11688094B1 (en) Method and system for map target tracking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18881645

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018881645

Country of ref document: EP

Effective date: 20200622