CN110570388A - Method, device and equipment for detecting components of vehicle

Method, device and equipment for detecting components of vehicle

Info

Publication number
CN110570388A
CN110570388A
Authority
CN
China
Prior art keywords
component
position relation
vehicle
acquiring
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811012746.9A
Other languages
Chinese (zh)
Inventor
王剑
程丹妮
郭昕
程远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201811012746.9A
Publication of CN110570388A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 Insurance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques

Abstract

Embodiments of this specification provide a method, an apparatus, and a device for detecting components of a vehicle. A captured picture of the vehicle is acquired. The area where each component is located, together with the corresponding category, is detected in the captured picture according to a target detection algorithm. For the area where each component is located, the positional relationship rules related to that component are acquired. The related components having a relative positional relationship with the component are determined according to those rules. The actual positional relationship between the component and the related components is acquired. The orientation of the component is determined according to the actual positional relationship and the rules, and the category of the area where the component is located is corrected according to that orientation.

Description

Method, device and equipment for detecting components of vehicle
Technical Field
One or more embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, an apparatus, and a device for detecting a component of a vehicle.
Background
In the conventional technology, when a vehicle is damaged, a damage picture of the vehicle is acquired first. A target picture matching the damage picture is then retrieved from an image library. Finally, the damage result of the vehicle is determined according to the damage result associated with the target picture.
Disclosure of Invention
One or more embodiments of this specification describe a method, an apparatus, and a device for detecting components of a vehicle, which can improve the accuracy of component detection.
In a first aspect, a component detection method for a vehicle is provided, including:
acquiring a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle;
detecting, in the captured picture, the area where each component is located and the corresponding category according to a target detection algorithm;
acquiring, for the area where each component is located, the positional relationship rules related to the component;
determining, according to the positional relationship rules, the related components having a relative positional relationship with the component;
acquiring the actual positional relationship between the component and the related components;
determining the orientation of the component according to the actual positional relationship and the positional relationship rules; and
correcting the category of the area where the component is located according to the orientation of the component.
In a second aspect, a component detection apparatus for a vehicle is provided, including:
an acquisition unit, configured to acquire a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle;
a detection unit, configured to detect, in the captured picture acquired by the acquisition unit, the area where each component is located and the corresponding category according to a target detection algorithm;
the acquisition unit being further configured to acquire, for the area where each component is located, the positional relationship rules related to the component;
a determination unit, configured to determine, according to the positional relationship rules acquired by the acquisition unit, the related components having a relative positional relationship with the component;
the acquisition unit being further configured to acquire the actual positional relationship between the component and the related components determined by the determination unit;
the determination unit being further configured to determine the orientation of the component according to the actual positional relationship and the positional relationship rules acquired by the acquisition unit; and
a correction unit, configured to correct the category of the area where the component is located according to the orientation of the component determined by the determination unit.
In a third aspect, a component detection device for a vehicle is provided, including:
a receiver, configured to acquire a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle; and
at least one processor, configured to detect, in the captured picture, the area where each component is located and the corresponding category according to a target detection algorithm; acquire, for the area where each component is located, the positional relationship rules related to the component; determine, according to the positional relationship rules, the related components having a relative positional relationship with the component; acquire the actual positional relationship between the component and the related components; determine the orientation of the component according to the actual positional relationship and the positional relationship rules; and correct the category of the area where the component is located according to the orientation of the component.
In the method, apparatus, and device for detecting components of a vehicle provided by one or more embodiments of this specification, a captured picture of the vehicle is acquired. The area where each component is located, together with the corresponding category, is detected in the captured picture according to a target detection algorithm. For the area where each component is located, the positional relationship rules related to the component are acquired. The related components having a relative positional relationship with the component are determined according to those rules. The actual positional relationship between the component and the related components is acquired. The orientation of the component is determined according to the actual positional relationship and the rules, and the category of the area where the component is located is corrected according to that orientation. In other words, after the areas and categories have been detected by the target detection algorithm, the orientation of each component can be determined from the relative positional relationships between components, and the category of the corresponding area can be corrected accordingly. The accuracy of component detection for the vehicle can thus be improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present disclosure; those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic view of an application scenario of the component detection method for a vehicle provided in this specification;
Fig. 2 is a flowchart of a component detection method for a vehicle according to one embodiment of this specification;
Fig. 3a is the first schematic diagram of the areas where components are located and their corresponding categories, as provided herein;
Fig. 3b is the second schematic diagram of the areas where components are located and their corresponding categories, as provided herein;
Fig. 4 is a schematic view of a component detection apparatus for a vehicle provided in one embodiment of this specification;
Fig. 5 is a schematic diagram of a component detection device for a vehicle provided in this specification.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
The component detection method for a vehicle provided by one or more embodiments of this specification can be applied to the scenario shown in Fig. 1. In Fig. 1, the picture splitting module 20 may be configured to screen the captured pictures collected by data collectors (including C-end users and the loss assessment personnel of insurance companies). For example, captured pictures unrelated to the vehicle (e.g., background pictures) and abnormal pictures (e.g., pictures covering multiple vehicles at the same time) can be filtered out, yielding the screened captured pictures.
The component detection module 40 is configured to detect, in the screened captured pictures, the area where each component is located and the corresponding category. The area here can also be understood as the position of the component in the captured picture. For example, the areas where the components are located, and their corresponding categories, may be detected by a target detection algorithm. The orientation of each component can then be determined according to the relative positional relationships between components, and the category of the corresponding area can be corrected according to that orientation.
The vehicle damage assessment module 60 is configured to locate the damaged components of the vehicle and the degree of damage according to the areas and the corrected categories, and to produce a repair plan automatically. The cost of repair often differs when the same kind of component sits in a different orientation. For example, both front and rear doors are doors, yet the repair price of a front door may differ from that of a rear door; the repair prices of front and rear bumpers likewise differ. Therefore, in this specification, when the vehicle damage assessment module 60 identifies the damaged part based on the corrected category, the accuracy of locating the damaged part can be improved, which in turn improves the accuracy of the damage assessment result.
Fig. 2 is a flowchart of a component detection method for a vehicle according to an embodiment of this specification. The execution subject of the method may be a device with processing capability: a server, a system, or a module; for example, it may be the component detection module 40 in Fig. 1. As shown in Fig. 2, the method may specifically include the following steps.
Step 202: acquire a captured picture of the vehicle.
The captured picture may be obtained by the picture splitting module 20 screening the pictures collected by data collectors (including C-end users and the loss assessment personnel of insurance companies). It may be a picture of a particular vehicle, covering a plurality of that vehicle's components. Components here may include, but are not limited to, doors, bumpers, license plates, fenders, headlights, tires, and the like.
Step 204: detect, in the captured picture, the area where each component is located and the corresponding category according to a target detection algorithm.
The target detection algorithm here may include, but is not limited to, the Faster Region-based Convolutional Neural Network (Faster-RCNN), the Region-based Fully Convolutional Network (RFCN), the Single Shot MultiBox Detector (SSD), YOLO, and the like. Detecting the areas where the vehicle's components are located with a target detection algorithm can improve the accuracy of area detection.
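As a non-authoritative illustration of this step, the sketch below runs a Faster-RCNN detector from torchvision over a captured picture. The patent does not prescribe any framework; the label map, the score threshold, and the assumption of a model fine-tuned on vehicle-component data are all hypothetical.

```python
# A minimal sketch of step-204-style detection, assuming a Faster-RCNN model
# fine-tuned on vehicle-component data (the label map below is hypothetical;
# a stock COCO-pretrained model knows nothing about fenders).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

COMPONENT_CLASSES = ["background", "fender", "tire", "bumper", "headlight",
                     "grille", "center grille", "license plate", "emblem",
                     "fog light", "cover"]

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_components(image_path, score_threshold=0.5):
    """Return (box, category) pairs, one per detected component area."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]  # dict with "boxes", "labels", "scores"
    detections = []
    for box, label, score in zip(output["boxes"], output["labels"],
                                 output["scores"]):
        if score >= score_threshold:
            detections.append((box.tolist(), COMPONENT_CLASSES[int(label)]))
    return detections
```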
Optionally, the target detection algorithm may first be trained on a number of sample pictures, and the areas where the components are located, together with their categories, may then be detected in the captured picture by the trained algorithm; this can improve the accuracy of area detection. Note that a sample picture may cover one or more components of a vehicle. For each sample picture, the area where each component is located and the category of that area (e.g., the component name) may be manually labelled in advance.
In one example, the areas and corresponding categories detected in the captured picture by the target detection algorithm may be as shown in Fig. 3a. In Fig. 3a, rectangular boxes indicate the areas where the vehicle's components are located, i.e., the positions of the components. An area may be represented by four-dimensional coordinates, e.g., (x, y, w, h), where x is the abscissa of the area's upper-left vertex, y is the ordinate of that vertex, w is the area's width, and h is its height. As can be seen from Fig. 3a, the categories of the areas may be: fender, tire, bumper, headlight, center grille, license plate, emblem, fog light, cover, and so on.
It should be understood that Fig. 3a shows only some of the area categories. In practical applications, the category of an area can also be a door, a grille, and so on; these are not enumerated here. Furthermore, the rectangular boxes in the figures are only one way of representing areas. In practice an area may also be expressed in another shape, such as a parallelogram, which this specification does not limit.
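Since detectors commonly emit corner-format boxes while the text above describes areas as (x, y, w, h), a one-line conversion can bridge the two; this trivial sketch is an illustration, not part of the patent.

```python
def xyxy_to_xywh(box):
    """Convert a corner-format box (x1, y1, x2, y2) into the (x, y, w, h)
    area representation described above: upper-left vertex, width, height."""
    x1, y1, x2, y2 = box
    return (x1, y1, x2 - x1, y2 - y1)
```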
Step 206: for the area where each component is located, acquire the positional relationship rules related to the component.
Optionally, when there are multiple areas, the areas may be sorted according to a predefined sorting rule before the positional relationship rules are acquired. This can improve the efficiency of determining the components' orientations.
In one example, the predefined sorting rule may be: determine the orientation of the headlight first, and then determine the orientations of the bumper, the tire, the fender, and so on in sequence. For example, assume that step 204 detected three areas, area A, area B, and area C, corresponding to a tire, a bumper, and a headlight respectively. Sorting the three areas by the predefined rule then gives: area C, area B, area A, and the orientations of the components corresponding to areas C, B, and A are determined in that order.
It should be understood that the above predefined sorting rule is for exemplary purposes only; this specification is in no way limited to the specific examples described herein.
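A minimal sketch of such a sorting rule follows: areas whose category appears earlier in a priority list are handled first. The exact priority order is an assumption; the source only fixes that the headlight comes first, followed by the bumper, the tire, or the fender.

```python
# Hypothetical priority order for the predefined sorting rule.
PRIORITY = ["headlight", "bumper", "tire", "fender"]

def sort_areas(detections):
    """Order (box, category) pairs so high-priority components come first."""
    def rank(detection):
        _, category = detection
        return PRIORITY.index(category) if category in PRIORITY else len(PRIORITY)
    return sorted(detections, key=rank)

# E.g. areas for [tire, bumper, headlight] (areas A, B, C) sort to
# [headlight, bumper, tire] (areas C, B, A), matching the example above.
```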
In step 206, the positional relationship rules related to a component may be read from a predefined relative positional relationship rule base, which records the positional relationship rules related to each component of the vehicle.
In one example, the predefined relative positional relationship rule base may be as shown in Table 1.
Table 1
In Table 1, rule 1 is the positional relationship rule related to the component: headlight. Rule 2 is the rule related to the component: bumper. Rules 3 and 4 are the rules related to the component: tire. Rules 5 and 6 are the rules related to the component: fender.
Taking area A from the foregoing example, the acquired positional relationship rules are: rule 3 and rule 4.
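The body of Table 1 is not reproduced in this text, so the sketch below reconstructs a plausible rule base from the worked example alone: rules 3 and 4 together must yield "right front" for a tire that intersects the bumper and lies to its right. Every rule body here is an assumption; only the grouping of rule numbers to components follows the description.

```python
# Hypothetical rule base for step 206. Each rule names the related component
# and, where reconstructable, the relation to test and the orientation word
# it implies.
RULE_BASE = {
    # Rule 1 relates the headlight to the vehicle body itself (per the text);
    # its test and implication are not given in the source.
    "headlight": [{"rule": 1, "related": "body"}],
    # Rule 2 relates to the bumper; its contents are not given in the source.
    "bumper": [{"rule": 2, "related": "body"}],
    # Rules 3 and 4, reconstructed from the worked example: intersecting the
    # bumper implies "front", lying to its right implies "right".
    "tire": [
        {"rule": 3, "related": "bumper", "relation": "intersects", "implies": "front"},
        {"rule": 4, "related": "bumper", "relation": "right_of", "implies": "right"},
    ],
    # Rules 5 and 6 relate to the fender; contents not given in the source.
    "fender": [{"rule": 5, "related": "tire"}, {"rule": 6, "related": "tire"}],
}

def get_rules(category):
    """Read the positional relationship rules related to one component."""
    return RULE_BASE.get(category, [])
```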
Step 208: determine, according to the positional relationship rules, the related components having a relative positional relationship with the component.
When the acquired positional relationship rules are rule 3 and rule 4, for example, the related component having a relative positional relationship with the current component may be: the bumper. Note that the related component may also be the vehicle body itself. For example, when the current component is the headlight and the acquired positional relationship rule is rule 1, the determined related component may be: the vehicle body itself.
Of course, in practical applications, if the orientation of a related component has already been determined, that orientation may also be acquired. Continuing the foregoing example: when determining the orientation of the component corresponding to area A, since the orientation of the component corresponding to area B has already been determined, the orientation of that component (i.e., the bumper), e.g., front, can also be acquired.
Step 210: acquire the actual positional relationship between the component and the related components.
In one implementation, the actual positional relationship between the component and a related component may be determined from the positional relationship between the areas in which the two components are located in the captured picture. For example, it may be that the areas where the two components are located intersect, that one component is to the left of the other, that one component is to the right of the other, and so on.
Continuing the foregoing example, the acquired actual positional relationship may be: the area where the tire is located intersects the area where the front bumper is located, and the tire is to the right of the front bumper.
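A minimal sketch of how the two relations used in this example might be computed from (x, y, w, h) areas: the intersection test is a standard axis-aligned overlap check, while comparing horizontal centres for left/right is an assumption.

```python
def intersects(a, b):
    """True if two (x, y, w, h) areas overlap (axis-aligned box test)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def actual_relations(a, b):
    """Relations that area a bears to area b, as used in the example above."""
    relations = set()
    if intersects(a, b):
        relations.add("intersects")
    a_centre = a[0] + a[2] / 2  # horizontal centre of area a
    b_centre = b[0] + b[2] / 2
    if a_centre > b_centre:
        relations.add("right_of")
    elif a_centre < b_centre:
        relations.add("left_of")
    return relations
```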
Step 212: determine the orientation of the component according to the actual positional relationship and the positional relationship rules.
Taking the actual positional relationship in the foregoing example and combining it with the contents of rule 3 and rule 4 acquired above, the orientation of the tire can be determined as: right front.
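Continuing the sketches above, one plausible reading of "combining the contents of rule 3 and rule 4" is that each matched rule contributes one word of the orientation; the combination logic below is an assumption.

```python
def determine_orientation(rules, relations):
    """Combine each matched rule's implied word into one orientation string,
    e.g. rules 3 ("front") and 4 ("right") matched together -> "right front"."""
    words = [rule["implies"] for rule in rules
             if rule.get("relation") in relations and rule.get("implies")]
    # Put the left/right word first so the result reads "right front".
    words.sort(key=lambda w: 0 if w in ("left", "right") else 1)
    return " ".join(words) if words else None

# Usage with the earlier sketches:
# determine_orientation(get_rules("tire"), actual_relations(tire_box, bumper_box))
```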
It is understood that when there are multiple areas, steps 206, 208, 210, and 212 are performed repeatedly for each of them.
Step 214: correct the category of the area where the component is located according to the orientation of the component.
In this specification, the corrected categories may include, but are not limited to: left front door, right front door, left rear door, right rear door, front bumper, rear bumper, left front fender, right front fender, left rear fender, right rear fender, left front tire, right front tire, left rear tire, right rear tire, grille, left headlight, right headlight, center grille, license plate, emblem, front cover, and the like.
Taking Fig. 3a as an example, the categories of the areas may be corrected as follows: bumper -> front bumper, tire -> right front tire, fender -> right front fender, cover -> front cover, fog light -> right fog light, headlight -> right headlight. The result of correcting the categories of the areas in Fig. 3a is shown in Fig. 3b.
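Finally, the correction of step 214 can be sketched as prefixing the detected category with the determined orientation, which reproduces the example corrections above (bumper -> front bumper, tire -> right front tire). Treating the corrected name as a plain string concatenation is an assumption.

```python
def correct_category(category, orientation):
    """Correct an area's category using the component's orientation,
    e.g. correct_category("tire", "right front") -> "right front tire"."""
    return f"{orientation} {category}" if orientation else category
```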
Note that once the areas where the vehicle's components are located and their corresponding categories have been detected in the captured picture, the damaged components of the vehicle and the degree of damage can be located and a repair plan produced automatically, realizing automatic damage assessment for the vehicle.
In the component detection method for a vehicle provided by one embodiment of this specification, after the area where each component is located and the corresponding category are detected by the target detection algorithm, the orientation of each component may be determined from the relative positional relationships between components, and the category of the corresponding area may be corrected according to that orientation. The accuracy of component detection for the vehicle can thus be improved. Furthermore, in a vehicle damage scenario the cost of repair often differs when the same kind of component sits in a different orientation: both front and rear doors are doors, yet their repair prices may differ, as do those of front and rear bumpers. Therefore, when the damaged part is identified based on the corrected category, the accuracy of locating the damaged part can be improved, which in turn improves the accuracy of the damage assessment result.
Corresponding to the above component detection method for a vehicle, an embodiment of this specification further provides a component detection apparatus for a vehicle. As shown in Fig. 4, the apparatus includes:
an acquisition unit 402, configured to acquire a captured picture of the vehicle, the captured picture covering a plurality of components of the vehicle;
a detection unit 404, configured to detect, in the captured picture acquired by the acquisition unit 402, the area where each component is located and the corresponding category according to a target detection algorithm.
The target detection algorithm here may include any of the following: the Faster Region-based Convolutional Neural Network (Faster-RCNN), the Region-based Fully Convolutional Network (RFCN), the Single Shot MultiBox Detector (SSD), and YOLO. In addition, the categories may include, but are not limited to: door, bumper, fender, tire, grille, headlight, center grille, license plate, emblem, cover, and the like.
The acquisition unit 402 is further configured to acquire, for the area where each component is located, the positional relationship rules related to the component.
The acquisition unit 402 may specifically be configured to:
read the positional relationship rules related to the component from a predefined relative positional relationship rule base, the rule base recording the positional relationship rules related to each component of the vehicle.
A determination unit 406 is configured to determine, according to the positional relationship rules acquired by the acquisition unit 402, the related components having a relative positional relationship with the component.
The acquisition unit 402 is further configured to acquire the actual positional relationship between the component and the related components determined by the determination unit 406.
The determination unit 406 is further configured to determine the orientation of the component according to the actual positional relationship and the positional relationship rules acquired by the acquisition unit 402.
A correction unit 408 is configured to correct the category of the area where the component is located according to the orientation of the component determined by the determination unit 406.
The corrected categories may include one or more of: left front door, right front door, left rear door, right rear door, front bumper, rear bumper, left front fender, right front fender, left rear fender, right rear fender, left front tire, right front tire, left rear tire, right rear tire, grille, left headlight, right headlight, center grille, license plate, emblem, front cover, and the like.
Optionally, the apparatus may further include:
a sorting unit 410, configured to sort the multiple areas according to a predefined sorting rule.
The determination unit 406 is then specifically configured to:
select, in order from the sorted areas, the area where each component is located, and acquire the positional relationship rules related to the component.
The functions of the functional modules of the apparatus in the above embodiment may be implemented through the steps of the above method embodiment; the specific working process of the apparatus provided in this embodiment is therefore not repeated here.
In the component detection apparatus for a vehicle provided by one embodiment of this specification, the acquisition unit 402 acquires a captured picture of the vehicle. The detection unit 404 detects, in the captured picture, the areas where the components are located and their corresponding categories according to the target detection algorithm. The acquisition unit 402 acquires, for the area where each component is located, the positional relationship rules related to the component. The determination unit 406 determines, according to those rules, the related components having a relative positional relationship with the component. The acquisition unit 402 acquires the actual positional relationship between the component and the related components. The determination unit 406 determines the orientation of the component according to the actual positional relationship and the rules. The correction unit 408 corrects the category of the area where the component is located according to the orientation of the component. The accuracy of component detection for the vehicle can thus be improved.
The component detection apparatus for a vehicle provided by this embodiment may be a sub-module or sub-unit of the component detection module 40 in Fig. 1.
Corresponding to the above component detection method for a vehicle, an embodiment of this specification further provides a component detection device for a vehicle. As shown in Fig. 5, the device may include:
a receiver 502, configured to acquire a captured picture of the vehicle, the captured picture covering a plurality of components of the vehicle; and
at least one processor 504, configured to detect, in the captured picture, the area where each component is located and the corresponding category according to a target detection algorithm; acquire, for the area where each component is located, the positional relationship rules related to the component; determine, according to the positional relationship rules, the related components having a relative positional relationship with the component; acquire the actual positional relationship between the component and the related components; determine the orientation of the component according to the actual positional relationship and the positional relationship rules; and correct the category of the area where the component is located according to the orientation of the component.
The component detection device for a vehicle provided by one embodiment of this specification can improve the accuracy of component detection.
Fig. 5 shows an example in which the component detection device provided by this embodiment is a server. In practical applications, the device may also be a terminal; this specification does not limit it.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a server. Of course, the processor and the storage medium may also reside as discrete components in a server.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
The embodiments, objects, technical solutions, and advantages of this specification are described in further detail above. It should be understood that the above are only specific embodiments of this specification and are not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made on the basis of the technical solutions of this specification should be included within its scope.

Claims (13)

1. A component detection method for a vehicle, comprising:
acquiring a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle;
detecting, in the captured picture, the area where each component is located and the corresponding category according to a target detection algorithm;
acquiring, for the area where each component is located, the positional relationship rules related to the component;
determining, according to the positional relationship rules, the related components having a relative positional relationship with the component;
acquiring the actual positional relationship between the component and the related components;
determining the orientation of the component according to the actual positional relationship and the positional relationship rules; and
correcting the category of the area where the component is located according to the orientation of the component.
2. The method of claim 1, wherein the target detection algorithm comprises any of: the Faster Region-based Convolutional Neural Network (Faster-RCNN), the Region-based Fully Convolutional Network (RFCN), the Single Shot MultiBox Detector (SSD), and YOLO.
3. The method of claim 1, wherein, when there are multiple areas, the method further comprises:
sorting the multiple areas according to a predefined sorting rule;
wherein acquiring, for the area where each component is located, the positional relationship rules related to the component comprises:
selecting, in order from the sorted areas, the area where each component is located, and acquiring the positional relationship rules related to the component.
4. The method of claim 1, wherein acquiring the positional relationship rules related to the component comprises:
reading the positional relationship rules related to the component from a predefined relative positional relationship rule base, the predefined relative positional relationship rule base recording the positional relationship rules related to each component of the vehicle.
5. The method of any of claims 1-4, wherein the categories comprise one or more of: door, bumper, fender, tire, grille, headlight, center grille, license plate, emblem, and cover.
6. The method of any of claims 1-4, wherein the corrected categories comprise one or more of: left front door, right front door, left rear door, right rear door, front bumper, rear bumper, left front fender, right front fender, left rear fender, right rear fender, left front tire, right front tire, left rear tire, right rear tire, grille, left headlight, right headlight, center grille, license plate, emblem, and front cover.
7. A component detection apparatus for a vehicle, comprising:
an acquisition unit, configured to acquire a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle;
a detection unit, configured to detect, in the captured picture acquired by the acquisition unit, the area where each component is located and the corresponding category according to a target detection algorithm;
the acquisition unit being further configured to acquire, for the area where each component is located, the positional relationship rules related to the component;
a determination unit, configured to determine, according to the positional relationship rules acquired by the acquisition unit, the related components having a relative positional relationship with the component;
the acquisition unit being further configured to acquire the actual positional relationship between the component and the related components determined by the determination unit;
the determination unit being further configured to determine the orientation of the component according to the actual positional relationship and the positional relationship rules acquired by the acquisition unit; and
a correction unit, configured to correct the category of the area where the component is located according to the orientation of the component determined by the determination unit.
8. The apparatus of claim 7, wherein the target detection algorithm comprises any of: the Faster Region-based Convolutional Neural Network (Faster-RCNN), the Region-based Fully Convolutional Network (RFCN), the Single Shot MultiBox Detector (SSD), and YOLO.
9. The apparatus of claim 7, further comprising:
a sorting unit, configured to sort the multiple areas according to a predefined sorting rule;
wherein the determination unit is specifically configured to:
select, in order from the sorted areas, the area where each component is located, and acquire the positional relationship rules related to the component.
10. The apparatus of claim 7, wherein the acquisition unit is specifically configured to:
read the positional relationship rules related to the component from a predefined relative positional relationship rule base, the predefined relative positional relationship rule base recording the positional relationship rules related to each component of the vehicle.
11. The apparatus of any of claims 7-10, wherein the categories comprise one or more of: door, bumper, fender, tire, grille, headlight, center grille, license plate, emblem, and cover.
12. The apparatus of any of claims 7-10, wherein the corrected categories comprise one or more of: left front door, right front door, left rear door, right rear door, front bumper, rear bumper, left front fender, right front fender, left rear fender, right rear fender, left front tire, right front tire, left rear tire, right rear tire, grille, left headlight, right headlight, center grille, license plate, emblem, and front cover.
13. A component detection device for a vehicle, comprising:
a receiver, configured to acquire a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle; and
at least one processor, configured to detect, in the captured picture, the area where each component is located and the corresponding category according to a target detection algorithm; acquire, for the area where each component is located, the positional relationship rules related to the component; determine, according to the positional relationship rules, the related components having a relative positional relationship with the component; acquire the actual positional relationship between the component and the related components; determine the orientation of the component according to the actual positional relationship and the positional relationship rules; and correct the category of the area where the component is located according to the orientation of the component.
CN201811012746.9A 2018-08-31 2018-08-31 Method, device and equipment for detecting components of vehicle Pending CN110570388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811012746.9A CN110570388A (en) 2018-08-31 2018-08-31 Method, device and equipment for detecting components of vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811012746.9A CN110570388A (en) 2018-08-31 2018-08-31 Method, device and equipment for detecting components of vehicle

Publications (1)

Publication Number Publication Date
CN110570388A true CN110570388A (en) 2019-12-13

Family

ID=68772407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811012746.9A Pending CN110570388A (en) 2018-08-31 2018-08-31 Method, device and equipment for detecting components of vehicle

Country Status (1)

Country Link
CN (1) CN110570388A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209957A (en) * 2020-01-03 2020-05-29 平安科技(深圳)有限公司 Vehicle part identification method and device, computer equipment and storage medium
CN114708323A (en) * 2022-03-10 2022-07-05 西安电子科技大学广州研究院 Object posture detection method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780603A (en) * 2016-12-09 2017-05-31 宇龙计算机通信科技(深圳)有限公司 Vehicle checking method, device and electronic equipment
CN107358596A (en) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device, electronic equipment and system
CN107403424A (en) * 2017-04-11 2017-11-28 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device and electronic equipment
CN107452025A (en) * 2017-08-18 2017-12-08 成都通甲优博科技有限责任公司 Method for tracking target, device and electronic equipment
WO2018036277A1 (en) * 2016-08-22 2018-03-01 平安科技(深圳)有限公司 Method, device, server, and storage medium for vehicle detection

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018036277A1 (en) * 2016-08-22 2018-03-01 平安科技(深圳)有限公司 Method, device, server, and storage medium for vehicle detection
CN106780603A (en) * 2016-12-09 2017-05-31 宇龙计算机通信科技(深圳)有限公司 Vehicle checking method, device and electronic equipment
CN107358596A (en) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device, electronic equipment and system
CN107403424A (en) * 2017-04-11 2017-11-28 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device and electronic equipment
CN107452025A (en) * 2017-08-18 2017-12-08 成都通甲优博科技有限责任公司 Method for tracking target, device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209957A (en) * 2020-01-03 2020-05-29 平安科技(深圳)有限公司 Vehicle part identification method and device, computer equipment and storage medium
CN111209957B (en) * 2020-01-03 2023-07-18 平安科技(深圳)有限公司 Vehicle part identification method, device, computer equipment and storage medium
CN114708323A (en) * 2022-03-10 2022-07-05 西安电子科技大学广州研究院 Object posture detection method and device

Similar Documents

Publication Publication Date Title
CN110569697A (en) Method, device and equipment for detecting components of vehicle
CN107577988B (en) Method, device, storage medium and program product for realizing side vehicle positioning
US20170277979A1 (en) Identifying defect on specular surfaces
US10853936B2 (en) Failed vehicle estimation system, failed vehicle estimation method and computer-readable non-transitory storage medium
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN110569693B (en) Vehicle body color recognition method and device
WO2016169790A1 (en) Camera extrinsic parameters estimation from image lines
JP6021689B2 (en) Vehicle specification measurement processing apparatus, vehicle specification measurement method, and program
CN107392080B (en) Image evaluation method and electronic device thereof
CN110751012B (en) Target detection evaluation method and device, electronic equipment and storage medium
CN110570388A (en) Method, device and equipment for detecting components of vehicle
US20160207473A1 (en) Method of calibrating an image detecting device for an automated vehicle
CN111027535A (en) License plate recognition method and related equipment
CN110097108B (en) Method, device, equipment and storage medium for identifying non-motor vehicle
CN111079782A (en) Vehicle transaction method and system, storage medium and electronic device
CN111605481A (en) Congestion car following system and terminal based on look around
CN110539748A (en) congestion car following system and terminal based on look around
JP4784932B2 (en) Vehicle discrimination device and program thereof
US11579271B2 (en) LIDAR noise removal apparatus and Lidar noise removal method thereof
CN112489466B (en) Traffic signal lamp identification method and device
CN114120309A (en) Instrument reading identification method and device and computer equipment
CN110569694A (en) Method, device and equipment for detecting components of vehicle
US20150109421A1 (en) Stereo-approach distance and speed meter
CN108388875B (en) Method and device for checking road surface related line and storage medium
CN115206130B (en) Parking space detection method, system, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40018750

Country of ref document: HK

TA01 Transfer of patent application right

Effective date of registration: 20200924

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200924

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Fourth floor, One Capital Place, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20191213