WO2020088076A1 - Image annotation method, apparatus and system - Google Patents

Image annotation method, apparatus and system

Info

Publication number
WO2020088076A1
Authority
WO
WIPO (PCT)
Prior art keywords
damage
image
car
vehicle
attribute information
Prior art date
Application number
PCT/CN2019/103592
Other languages
English (en)
French (fr)
Inventor
周凡 (Zhou Fan)
Original Assignee
阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Publication of WO2020088076A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/64Analysis of geometric attributes of convexity or concavity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Definitions

  • One or more embodiments of this specification relate to the field of intelligent recognition technology, and in particular to an image annotation method, device, and system.
  • The purpose of one or more embodiments of this specification is to provide an image annotation method, device, and system that automatically generate the annotation data required for model training by combining the visual image obtained by a camera device with physical attribute information obtained by a physical detection method. This removes the need to manually mark damage in car damage images and achieves pixel-level damage labeling of car damage images, which improves the efficiency and accuracy of car damage image labeling and benefits model training based on deep learning.
  • One or more embodiments of this specification provide an image annotation method, including:
  • the vehicle damage image is annotated with damage to generate vehicle damage sample data for training a vehicle damage recognition model.
  • One or more embodiments of this specification provide an image annotation device, including:
  • a first acquiring module configured to acquire a damaged image of a preset damage area on the target vehicle captured by the camera device;
  • a second obtaining module configured to obtain physical attribute information obtained by scanning the preset damage area based on a physical detection method
  • the image tagging module is used for tagging the car damage image according to the physical attribute information, and generating car damage sample data for training a car damage recognition model.
  • One or more embodiments of this specification provide an image annotation system, including: a camera device, a physical detection device, and the above-mentioned image annotation device, where the camera device and the physical detection device are both connected to the image annotation device;
  • the camera device is used to photograph the preset damage area on the target vehicle to obtain a car damage image, and to transmit the car damage image to the image annotation device;
  • the physical detection device is configured to scan the preset damage area based on a physical detection method to obtain physical attribute information, and to transmit the physical attribute information to the image annotation device;
  • the image annotation device is configured to receive the car damage image and the physical attribute information, and to generate car damage sample data for training a car damage recognition model according to the car damage image and the physical attribute information.
  • One or more embodiments of this specification provide an image annotation device, including: a processor; and
  • a memory arranged to store computer-executable instructions, which when executed, causes the processor to:
  • the vehicle damage image is annotated with damage to generate vehicle damage sample data for training a vehicle damage recognition model.
  • One or more embodiments of this specification provide a storage medium for storing computer-executable instructions. When the executable instructions are executed, the following processes are implemented:
  • the vehicle damage image is annotated with damage to generate vehicle damage sample data for training a vehicle damage recognition model.
  • The image annotation method, device, and system in one or more embodiments of this specification obtain the car damage image of the preset damage area captured by the camera device; obtain the physical attribute information obtained by scanning the preset damage area based on a physical detection method; and, according to the physical attribute information, annotate the car damage image to generate car damage sample data.
  • In this way, the annotation data required for model training is automatically generated, without manually labeling the damage situation of the car damage image; pixel-level damage labeling of the car damage image can also be realized, which improves the labeling efficiency and accuracy of car damage images, so that massive and accurate labeled sample data can be provided for deep-learning-based model training and a car damage recognition model with higher recognition accuracy can be trained.
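  • The three-step flow summarized above (capture, scan, annotate) can be sketched as follows; every name in this sketch is an illustrative assumption, not the patent's actual interface:

```python
# Minimal sketch of the annotation pipeline summarized above; function
# and parameter names are assumptions for illustration only.
def make_sample(capture, scan, label, damage_area):
    image = capture(damage_area)         # 1. car damage image from the camera device
    physics = scan(damage_area)          # 2. physical attribute information
    annotations = label(image, physics)  # 3. per-pixel damage annotation
    return {"image": image, "annotations": annotations}

# Stub callables stand in for the camera device, physical detection
# device, and image annotation device.
sample = make_sample(
    capture=lambda area: f"photo_of_{area}",
    scan=lambda area: f"depth_and_thermal_of_{area}",
    label=lambda img, phys: {"per_pixel": "damage labels"},
    damage_area="front_bumper",
)
```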
  • FIG. 1 is a schematic diagram of a first application scenario of an image annotation method provided by one or more embodiments of this specification;
  • FIG. 2 is a first schematic flowchart of an image tagging method provided by one or more embodiments of this specification
  • FIG. 3 is a second schematic flowchart of an image annotation method provided by one or more embodiments of this specification.
  • FIG. 4a is a schematic diagram of the implementation principle of three-dimensional surface image acquisition by a lidar device in an image annotation method provided by one or more embodiments of this specification;
  • FIG. 4b is a schematic diagram of the implementation principle of surface thermal imaging image acquisition by an infrared thermal imaging device in an image annotation method provided by one or more embodiments of this specification;
  • FIG. 5 is a third schematic flowchart of an image annotation method provided by one or more embodiments of this specification.
  • FIG. 6a is a schematic diagram of a second application scenario of an image annotation method provided by one or more embodiments of this specification.
  • FIG. 6b is a schematic diagram of a third application scenario of an image annotation method provided by one or more embodiments of this specification.
  • FIG. 7 is a fourth schematic flowchart of an image annotation method provided by one or more embodiments of this specification.
  • FIG. 8 is a fifth schematic flowchart of an image annotation method provided by one or more embodiments of this specification.
  • FIG. 9a is a schematic diagram of the first module composition of an image annotation device provided by one or more embodiments of this specification.
  • FIG. 9b is a schematic diagram of a second module composition of an image annotation device provided by one or more embodiments of this specification.
  • FIG. 10 is a schematic diagram of a specific structure of an image annotation system provided by one or more embodiments of this specification.
  • FIG. 11 is a schematic diagram of a specific structure of an image tagging device provided by one or more embodiments of the present specification.
  • One or more embodiments of this specification provide an image annotation method, device, and system.
  • By combining a visual image obtained by a camera device with physical attribute information obtained by a physical detection method, the annotation data required for training a model is automatically generated. There is no need to manually mark the damage of the car damage image, pixel-level damage labeling of the car damage image can be realized, and the efficiency and accuracy of car damage image labeling are improved; massive and accurate labeled sample data can thus be provided for deep-learning-based model training, so that a more accurate car damage recognition model can be trained.
  • FIG. 1 is a schematic diagram of an application scenario of an image annotation method provided by one or more embodiments of the present specification.
  • the system includes: a camera device, a physical detection device, and an image annotation device, where the camera device and the physical detection device are both connected to the image annotation device. The camera device may be a digital camera or other equipment with a photographing function; the physical detection device may be a lidar device, an infrared thermal imaging device, etc.; and the image annotation device may be a background server used for labeling car damage images. The specific process of image annotation is as follows:
  • The operator initially sets the data collection range according to the damaged part of the vehicle, that is, sets the initial shooting range of the camera device and the initial detection range of the physical detection device. For example, if the front bumper of the vehicle is damaged, that is, the preset damage area is the front hemisphere of the vehicle, then the shooting range and the physical detection range are both set to the front hemisphere of the vehicle;
  • a camera device is used to photograph the preset damage area on the target vehicle to obtain a car damage image, and the car damage image is transmitted to the image annotation device. The camera device may be installed on an adjustable gimbal, through which the shooting conditions of the camera device are adjusted;
  • the physical detection device scans the preset damage area based on the physical detection method to obtain physical attribute information, and transmits the physical attribute information to the image annotation device.
  • the physical detection device is also provided on the adjustable gimbal, and the relative position of the physical detection device and the camera device remains unchanged, ensuring that the car damage image and the corresponding physical attribute information are obtained simultaneously under the same shooting conditions;
  • the image annotation device performs damage labeling on the car damage image according to the acquired physical attribute information and generates car damage sample data for training the car damage recognition model. In this way, there is no need to manually label the damage of the car damage image, pixel-level damage labeling of car damage images can be realized, the efficiency and accuracy of car damage image labeling are improved, and massive and accurate labeled sample data can be provided for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained.
  • FIG. 2 is a first schematic flowchart of an image tagging method provided by one or more embodiments of the present specification.
  • the method in FIG. 2 can be performed by the image annotation device in FIG. 1; as shown in FIG. 2, the method includes at least the following steps:
  • the camera device photographs the preset damage area on the target vehicle to obtain a visual image for the preset damage area, and transmits the visual image to the image annotation device;
  • the physical detection device scans the preset damage area on the target vehicle to obtain physical attribute information for the preset damage area, and transmits the physical attribute information to the image annotation device;
  • the physical detection device may be a laser radar device, and correspondingly, the physical detection method may be a laser radar detection method;
  • the physical detection device may also be an infrared thermal imaging device, and correspondingly, the physical detection method may also be an infrared detection method;
  • the physical detection device may also be a combination of a lidar device and an infrared thermal imaging device.
  • the physical detection method may also be a combination of a lidar detection method and an infrared detection method;
  • the physical detection device may be a device that uses other physical detection methods to scan and collect physical attribute information
  • the car damage sample data may include: the car damage image and annotation data for the car damage image, where the annotation data may include: the damage situation of each pixel in the car damage image;
  • the physical attribute information corresponding to the car damage image is obtained; according to the physical attribute information, the damage situation of each pixel in the car damage image is determined; and the determined damage situation of each pixel in the car damage image is used as the annotation data for the car damage image. The damage situation of a pixel may be data indicating whether the pixel is damaged, data indicating the size of the damage at the pixel, or data indicating the degree of damage at the pixel.
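  • The alternative per-pixel representations above (damaged/undamaged flag, damage size, damage degree) could be carried in one small record type; this is a hypothetical sketch, not a structure defined in the specification:

```python
from dataclasses import dataclass

# Hypothetical per-pixel annotation record; the class and field names
# are assumptions chosen to mirror the options named in the text.
@dataclass
class PixelDamage:
    is_damaged: bool     # whether the pixel is damaged
    size: float = 0.0    # size of the damage at the pixel
    degree: float = 0.0  # degree of damage at the pixel

# One annotation row per pixel of the car damage image.
annotation = [PixelDamage(True, size=1.5, degree=0.4), PixelDamage(False)]
```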
  • In this way, the annotation data required for model training is automatically generated without manually labeling the damage situation of the car damage image; pixel-level damage labeling of car damage images can also be achieved, which improves the labeling efficiency and accuracy of car damage images, thereby providing massive and accurate labeled sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be trained.
  • The physical attribute information obtained by lidar detection technology is the three-dimensional depth information of each pixel; based on this three-dimensional depth information, the degree of deformation of the damaged surface can be evaluated, that is, the concave and deformed positions of the preset damage area can be identified with high precision. The physical attribute information obtained by infrared detection technology is the surface thermal imaging information of each pixel; because different materials differ in infrared thermal imaging, the detected surface thermal imaging information can determine the surface material distribution of the preset damage area, so based on the surface thermal imaging information the degree of scratching of the damaged surface can be evaluated, that is, the range of scratch damage of the preset damage area can be identified with high accuracy. Combining different physical detection techniques to evaluate the damage of the preset damage area can therefore improve the accuracy of assessing the damage situation of the damaged surface.
  • Based on this, the above S202 of obtaining the physical attribute information obtained by scanning the above-mentioned preset damage area based on the physical detection method specifically includes:
  • S2021 Obtain three-dimensional depth information obtained by scanning the preset damage area on the target vehicle using the lidar device;
  • the lidar device scans the preset damage area on the target vehicle to obtain three-dimensional depth information for the preset damage area, and transmits the three-dimensional depth information to the image annotation device;
  • the lidar device includes: a first processing unit, a laser emitting unit, and a laser receiving unit;
  • the laser emitting unit is used to emit a laser beam (that is, a detection signal) toward the preset damage area; the laser beam reflects after reaching the preset damage area; the laser receiving unit receives the reflected beam (that is, the target echo) returned by the preset damage area and transmits the target echo to the first processing unit; the first processing unit compares the received target echo reflected from the preset damage area with the detection signal transmitted to the preset damage area to generate the three-dimensional depth information for the preset damage area. Based on the three-dimensional depth information, a three-dimensional surface map representing the depth information of each pixel in the car damage image can be drawn, so that the relative position information of each point on the damaged surface of the preset damage area on the target vehicle is detected by emitting a laser beam;
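  • Comparing the emitted detection signal with the received target echo is, in essence, pulsed time-of-flight ranging; a minimal sketch (the timing values are illustrative, not from the specification):

```python
# Pulsed time-of-flight ranging: the processing unit can recover the
# distance to each scanned point from the pulse round-trip time.
C = 299_792_458.0  # speed of light in m/s

def range_from_echo(t_emit_s: float, t_receive_s: float) -> float:
    """Distance to the reflecting surface; the beam travels there and back."""
    return C * (t_receive_s - t_emit_s) / 2.0

# A ~6.67 ns round trip corresponds to roughly 1 m of range; repeating
# this per scanned point yields the three-dimensional surface map.
r = range_from_echo(0.0, 6.67e-9)
```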
  • S2022 Obtain surface thermal imaging information obtained by scanning the preset damage area on the target vehicle using the infrared thermal imaging device;
  • the infrared thermal imaging device scans the preset damage area on the target vehicle to obtain surface thermal imaging information for the preset damage area, and transmits the surface thermal imaging information to the image annotation device;
  • the infrared thermal imaging device includes: a second processing unit, an infrared emitting unit, and an infrared receiving unit;
  • the infrared emitting unit is used to emit an infrared beam (that is, a detection signal) toward the preset damage area; the infrared beam reflects after reaching the preset damage area; the infrared receiving unit receives the reflected beam (that is, the target echo) returned by the preset damage area and transmits the target echo to the second processing unit; the second processing unit compares the received target echo reflected from the preset damage area with the detection signal transmitted to the preset damage area to generate the surface thermal imaging information for the preset damage area;
  • based on the surface thermal imaging information, a surface thermal imaging map representing the radiant energy of each pixel in the car damage image can be drawn, so that the material distribution of the damaged surface in the preset damage area on the target vehicle is detected by emitting an infrared beam;
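  • Because different materials image differently in the infrared, scratch (paint-loss) regions can be separated by how far each pixel's radiant energy deviates from that of intact paint; a sketch under assumed, illustrative values (the nominal energy and tolerance are not from the specification):

```python
import numpy as np

# Flag pixels whose radiant energy deviates from the nominal response of
# painted bodywork, indicating exposed metal or primer.
NOMINAL_PAINT_ENERGY = 1.0  # assumed nominal reading for intact paint
TOLERANCE = 0.15            # assumed deviation tolerance

def scratch_mask(thermal_map: np.ndarray) -> np.ndarray:
    """True where the surface material differs from intact paint."""
    return np.abs(thermal_map - NOMINAL_PAINT_ENERGY) > TOLERANCE

thermal = np.array([[1.0, 0.98],
                    [0.6, 1.3]])  # per-pixel radiant energy readings
mask = scratch_mask(thermal)
```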
  • the above S203 of performing damage annotation on the above-mentioned car damage image according to the acquired physical attribute information and generating car damage sample data for training the car damage recognition model specifically includes:
  • S2031 according to the acquired three-dimensional depth information and surface thermal imaging information, mark the damage of the above-mentioned car damage image, and generate car damage sample data for training the car damage recognition model;
  • according to the acquired three-dimensional depth information, the depth information of each pixel in the car damage image is determined; for each pixel, the unevenness (that is, deformation) of the pixel is determined according to the depth information of that pixel;
  • according to the obtained surface thermal imaging information, the radiant energy corresponding to each pixel in the car damage image is determined; for each pixel, the scratching situation (that is, the paint-loss situation) of the pixel is determined according to the radiant energy corresponding to that pixel;
  • In this way, the two physical detection dimensions of lidar detection technology and infrared detection technology are combined to simultaneously and comprehensively identify the degree of deformation and the degree of scratching of the preset damage area, which improves the accuracy of damage labeling of the car damage image and helps improve the recognition accuracy of the car damage recognition model trained on such images.
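  • Fusing the two detection dimensions per pixel might look like the following; the label encoding and dent threshold are assumptions for illustration:

```python
import numpy as np

# Combine the lidar-derived deformation signal with the infrared-derived
# scratch signal into one label map per pixel:
# 0 = intact, 1 = deformed only, 2 = scratched only, 3 = both.
def label_pixels(depth_dev_mm: np.ndarray, scratched: np.ndarray,
                 dent_threshold_mm: float = 2.0) -> np.ndarray:
    deformed = np.abs(depth_dev_mm) > dent_threshold_mm  # assumed threshold
    return deformed.astype(int) + 2 * scratched.astype(int)

labels = label_pixels(np.array([[0.1, 5.0]]),
                      np.array([[True, True]]))
```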
  • The shooting conditions may affect the quality of the obtained car damage image and physical attribute information, which may reduce the accuracy of damage labeling of the preset damage area. Therefore, in order to improve the accuracy of assessing the damage of the preset damage area, while the camera device collects car damage images and the physical detection device collects physical attribute information, the shooting conditions are adjusted based on preset adjustment rules, so that multiple car damage images and multiple pieces of physical attribute information of the preset damage area can be obtained under different shooting conditions. Based on this, as shown in FIG. 5, the above S201 of obtaining a car damage image of the preset damage area on the target vehicle captured by the camera device specifically includes:
  • S2011 Acquire a set of car damage images for a preset damage area on the target vehicle, where the set of car damage images includes: using a camera device to capture multiple car damage images under different shooting conditions;
  • The above shooting conditions include at least one of: the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damaged area.
  • the shooting orientation may include: shooting angle and shooting direction;
  • the lighting parameters may include: the number of light sources and the lighting conditions. Therefore, different shooting conditions may differ in at least one of the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, and the lighting parameters of the shooting environment;
  • each shooting condition corresponds to one car damage image of the preset damage area; that is to say, the obtained car damage images are all actually captured, rather than derived from an original image through image processing. The feature distribution of actually captured images better matches real scenes, which yields a better training effect for deep learning;
  • the above-mentioned preset adjustment rule may be determined according to step information input by the operator or setting information of an automatic shooting device that changes the light, distance, and angle; for the same damage area, a group of hundreds to thousands of car damage pictures can thus be shot and annotated. For example, for the same damage area, the shooting conditions may be adjusted according to rules such as moving 30 cm to the left or right, changing the angle by 10 degrees, and increasing the light from 500 lumens to 3000 lumens in steps of 100 lumens, and a car damage image and physical attribute information are collected under each shooting condition;
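  • The example rule above defines a grid of shooting conditions; enumerating it (the tuple layout is an assumption) shows how a few adjustment axes already yield hundreds of annotated shots per damage area:

```python
from itertools import product

# Axes taken from the example in the text: lateral offsets of 30 cm,
# angle changes of 10 degrees, and light from 500 to 3000 lumens in
# 100-lumen steps.
offsets_cm = [-30, 0, 30]
angles_deg = [-10, 0, 10]
lumens = list(range(500, 3001, 100))

conditions = list(product(offsets_cm, angles_deg, lumens))
# 3 * 3 * 26 = 234 shooting conditions, each paired with one car damage
# image plus its lidar and infrared scans.
```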
  • alternatively, the above-mentioned preset adjustment rules may be determined based on the recognition accuracy of the car damage recognition model; the shooting conditions can be optimized by modifying the preset adjustment rules, thereby optimizing the finally obtained car damage sample data and further improving the recognition accuracy of the car damage recognition model.
  • the adjustable gimbal is adjusted by the control device, so that the shooting orientation of the camera device is adjusted through the adjustable gimbal. If a wheeled or crawler-type walking mechanism is installed under the adjustable gimbal, the control device can control the adjustable gimbal to move forward, backward, left, and right in the work site according to control instructions, so that the relative position of the camera device and the target vehicle is adjusted through the adjustable gimbal;
  • the control device may be a separate control device or a control module provided in the image annotation device;
  • a light adjustment device is set at a specified position on the work site, and the control device controls the light intensity emitted by the light adjustment device according to the preset shooting parameters to adjust the light parameters of the shooting environment where the camera device is located, thereby enabling the camera device to shoot under different light parameters Corresponding car damage image;
  • in addition, the physical attribute information corresponding to each car damage image needs to be collected; the physical detection methods of the lidar detection method and the infrared detection method are still taken as examples.
  • the above S2021 obtains the three-dimensional depth information obtained by scanning the preset damage area on the target vehicle using the lidar device, which specifically includes:
  • S20211 Obtain a three-dimensional depth information set for a preset damage area on a target vehicle, where the three-dimensional depth information set includes: multiple three-dimensional surface maps scanned under different shooting conditions using a lidar detection method;
  • each three-dimensional surface map in the three-dimensional depth information set (that is, the three-dimensional depth information collected by the lidar device) is obtained under a specific shooting condition;
  • the above S2022 acquiring the surface thermal imaging information obtained by scanning the preset damage area on the target vehicle using the infrared thermal imaging device specifically includes:
  • S20221 Acquire a surface thermal imaging information set for a preset damage area on the target vehicle, where the surface thermal imaging information set includes: multiple surface thermal imaging images scanned under different shooting conditions using infrared detection;
  • not only are the car damage image of the preset damage area collected by the camera device and the three-dimensional depth information (that is, the three-dimensional surface map) of the preset damage area collected by the lidar device, but the surface thermal imaging information (that is, the surface thermal imaging map) of the preset damage area is also collected by the infrared thermal imaging device; therefore, each surface thermal imaging map in the surface thermal imaging information set is obtained under a specific shooting condition;
  • the above S203 adds damage annotation to the above-mentioned car damage image according to the acquired three-dimensional depth information and surface thermography information, and generates car damage sample data for training the car damage recognition model, which specifically includes:
  • S20311 according to the acquired three-dimensional depth information set and surface thermal imaging information set, respectively mark the damage of the car damage image under each shooting condition to generate car damage sample data for training the car damage recognition model;
  • for the preset damage area under a certain shooting condition, the car damage image, three-dimensional surface map, and surface thermal imaging map corresponding to that shooting condition are obtained, and the correspondence among the shooting condition, the car damage image, the three-dimensional surface map, and the surface thermal imaging map is established; based on the car damage image, three-dimensional surface map, and surface thermal imaging map acquired for the preset damage area under each shooting condition, the car damage sample data for the preset damage area is determined.
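  • The correspondence described above could be held in a record like the following; all names here are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Any

# One sample record ties a shooting condition to the three captures made
# under it: the visual image, the lidar surface map, and the thermal map.
@dataclass
class DamageSample:
    condition: tuple       # e.g. (offset_cm, angle_deg, lumens)
    car_damage_image: Any
    surface_map_3d: Any
    thermal_map: Any

samples = {}

def record(condition, image, depth_map, thermal_map):
    """Index every capture of the preset damage area by its condition."""
    samples[condition] = DamageSample(condition, image, depth_map, thermal_map)

record((-30, 0, 500), "img_001", "depth_001", "thermal_001")
```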
  • the control device synchronously adjusts the camera device and the physical detection device through the adjustable gimbal; therefore, under the same shooting condition, the car damage image of the preset damage area is collected by the camera device at the same time as the physical attribute information of the preset damage area is collected by the physical detection device;
  • the shooting conditions are adjusted based on the preset adjustment rules, so that multiple car damage images and multiple pieces of physical attribute information of the preset damage area can be obtained under different shooting conditions, which helps improve the recognition accuracy of the car damage recognition model trained with the sample data;
  • Fig. 6b is a schematic diagram of the third application scenario of the image annotation method, specifically:
  • a wireless positioning device is set at a specified position on the work site.
  • the wireless positioning device acquires first position information before the target vehicle moves and second position information after the movement, determines the actual movement distance of the target vehicle according to the first position information and the second position information, compares the actual movement distance with the theoretical movement distance, and determines whether the movement error of the target vehicle meets a preset condition; if not, it sends a corresponding prompt message to the control device, so that the control device can accurately locate the target vehicle;
  • the wireless positioning device may be a positioning device based on radio signals, a positioning device based on Bluetooth signals, or a positioning device based on lidar.
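  • The movement-error check amounts to comparing measured displacement against the commanded distance; a sketch, where the 2 cm tolerance is an illustrative assumption rather than a value from the specification:

```python
import math

MAX_ERROR_M = 0.02  # assumed tolerance on the vehicle's movement error

def movement_ok(pos_before, pos_after, theoretical_distance_m):
    """True when the actual displacement matches the commanded move."""
    actual = math.dist(pos_before, pos_after)
    return abs(actual - theoretical_distance_m) <= MAX_ERROR_M

ok = movement_ok((0.0, 0.0), (0.31, 0.0), 0.30)   # 1 cm off: within tolerance
bad = movement_ok((0.0, 0.0), (0.35, 0.0), 0.30)  # 5 cm off: prompt the control device
```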
  • the above S203 of performing damage labeling on the car damage image according to the acquired physical attribute information and generating car damage sample data for training the car damage recognition model specifically includes:
  • the depth information of each pixel in the car damage image of the preset damage area is determined according to the acquired three-dimensional surface map of the preset damage area; the deformation (that is, the unevenness) of each pixel in the car damage image is then determined according to the depth information of each pixel;
  • the physical attribute information is surface thermal imaging information
  • S2033 Determine the determined damage situation of each pixel and the car damage image as car damage sample data for training the car damage recognition model
  • the damage situation of each pixel in the car damage image is used as the annotation data for the car damage image, the correspondence between the car damage image and the annotation data is established, and the correspondence, the car damage image, and the annotation data are input into the machine learning model to be trained, which is a supervised learning model.
  • the car damage image of the preset damage area on the target vehicle includes: multiple car damage images taken under different shooting conditions; and the above physical attribute information for the preset damage area includes: multiple pieces of physical attribute information scanned under different shooting conditions;
  • the above S2032 determines the damage situation of each pixel in the car damage image of the preset damage area according to the acquired physical attribute information, which specifically includes:
  • the damage of each pixel in the car damage image is determined according to the physical attribute information obtained under the shooting conditions corresponding to the car damage image.
  • the shooting conditions are adjusted based on preset adjustment rules, so that multiple car damage images and multiple pieces of physical attribute information of the preset damage area are obtained under different shooting conditions. Therefore, when annotating the car damage images, it is necessary, for each car damage image, to determine the physical attribute information obtained under the shooting condition corresponding to that car damage image, and then to determine the damage situation of each pixel in the car damage image according to that physical attribute information;
  • the process of determining the damage of each pixel in the car damage image based on the physical attribute information is specifically as follows:
  • the physical attribute information is three-dimensional depth information
  • the physical attribute information is surface thermography information
  • each car damage image captured under a certain shooting condition, together with the damage situation of each pixel in that image, is used as one piece of car damage sample data; therefore, for the same preset damage area, multiple pieces of car damage sample data under different shooting conditions are obtained;
  • each piece of car damage sample data includes: multiple lines of annotation data on the damage situation for the same car damage image captured under the same shooting condition, where each line of annotation data includes statistical data on the damage situation of one pixel, for example, the degree of denting, the degree of scratching, etc.; each piece of car damage sample data may also include: overall damage statistics for the car damage image, a repair plan for the preset damage area, the damage type of the preset damage area, etc., where the repair plan for the preset damage area may be determined based on annotation information from relevant personnel.
  • the basic data collected for a preset damage area of the target vehicle, that is, the correspondence between the shooting conditions, the car damage images and the physical attribute information, is shown in Table 1 below:
  • at least one of the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, and the lighting parameters of the shooting environment differs between the shooting condition identified as 0001 and the shooting condition identified as 0002; the car damage image identified as AAAA, the three-dimensional surface map identified as 1aaaa, and the surface thermal imaging information identified as 2aaaa are all acquired under the shooting condition identified as 0001.
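The Table 1 correspondence can be modelled as a keyed record set; the identifiers 0001, AAAA, 1aaaa and 2aaaa follow the text, while the contents of the 0002 record (BBBB, 1bbbb, 2bbbb) are hypothetical placeholders:

```python
# Correspondence between shooting conditions, car damage images and
# physical attribute information (cf. Table 1). Identifiers 0001/AAAA/
# 1aaaa/2aaaa follow the text; the 0002 record is a hypothetical example.
capture_records = {
    "0001": {"image": "AAAA", "surface_3d": "1aaaa", "thermal": "2aaaa"},
    "0002": {"image": "BBBB", "surface_3d": "1bbbb", "thermal": "2bbbb"},
}

def attributes_for_image(image_id):
    """Look up the physical attribute info captured under the same
    shooting condition as the given car damage image."""
    for condition_id, rec in capture_records.items():
        if rec["image"] == image_id:
            return condition_id, rec["surface_3d"], rec["thermal"]
    return None
```

This is the lookup that annotation needs: given a car damage image, it returns the shooting condition and the physical attribute information collected under that same condition.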
  • the car damage image of a preset damage area of the target vehicle is annotated for damage, and the generated annotation data for the damage area is shown in Table 2 below:
  • the car damage sample data is input to a preset machine learning model, and the machine learning model is trained to obtain a car damage recognition model, where the machine learning model may be a machine learning model based on a supervised learning mode.
  • after the above-mentioned car damage image is annotated according to the acquired physical attribute information to generate sample data for training the car damage recognition model, the method also includes:
  • the model parameters in the machine learning model based on the supervised learning mode are updated based on the above car damage sample data to obtain a car damage recognition model with updated model parameters; then, after a car damage image to be recognized is obtained, the damage situation in that image is identified using the car damage recognition model, and automatic damage assessment is performed on the vehicle based on the damage situation determined for the image.
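A minimal supervised-training sketch of this parameter-update step, using a per-pixel logistic model as a stand-in for the (unspecified) car damage recognition model; the feature representation, model form, learning rate and epoch count are all assumptions for illustration:

```python
import numpy as np

def train_damage_model(features, labels, lr=0.5, epochs=500):
    """Update model parameters from annotated car damage sample data.

    `features`: (n_pixels, n_features) array, e.g. image intensities;
    `labels`: (n_pixels,) array of 0/1 damage annotations derived from
    the physical attribute information. Returns learned weights and bias."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))      # sigmoid
        grad = p - labels                 # cross-entropy gradient
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def predict_damage(features, w, b):
    """Per-pixel damage probability for a car damage image to be recognized."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))
```

After training, `predict_damage` plays the role of the updated recognition model: it scores each pixel of a new car damage image, and the scores would then drive the automatic damage assessment.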
  • the image annotation method in one or more embodiments of this specification obtains a car damage image of a preset damage area captured by a camera device, obtains physical attribute information by scanning the preset damage area based on a physical detection method, and performs damage annotation on the car damage image according to the physical attribute information to generate car damage sample data.
  • the annotation data required for training the model is automatically generated without manually annotating the damage situation of the car damage image; pixel-level damage annotation of the car damage image can also be achieved, which improves the annotation efficiency and accuracy of car damage images, so that massive and accurate annotated sample data can be provided for deep-learning-based model training, and a car damage recognition model with higher recognition accuracy can be obtained.
  • FIG. 9a is a schematic diagram of the module composition of an image annotation device provided by one or more embodiments of this specification;
  • the first acquiring module 901 is configured to acquire a car damage image of a preset damage area on the target vehicle captured by the camera device; and,
  • the second obtaining module 902 is configured to obtain physical attribute information obtained by scanning the preset damage area based on a physical detection method
  • the image tagging module 903 is configured to perform damage tagging on the car damage image according to the physical attribute information, and generate car damage sample data for training a car damage recognition model.
  • the annotation data required for training the model is automatically generated without manually annotating the damage situation of the car damage image; pixel-level damage annotation of the car damage image can also be achieved, which improves the annotation efficiency and accuracy of car damage images, thereby providing massive and accurate annotated sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be obtained.
  • the second obtaining module 902 is specifically used to:
  • the first obtaining module 901 is specifically used to:
  • the second obtaining module 902 is specifically used to:
  • the set of physical attribute information includes: a plurality of physical attribute information obtained by scanning under different shooting conditions using a physical detection method.
  • the image annotation module 903 is specifically used to:
  • the damage situation of each pixel point and the vehicle damage image are determined as vehicle damage sample data for training a vehicle damage recognition model.
  • the vehicle damage image of the preset damage area on the target vehicle includes: multiple vehicle damage images captured under different shooting conditions;
  • the image annotation module 903 is further specifically used for:
  • the damage situation of each pixel in the car damage image is determined according to the physical attribute information obtained under the shooting conditions corresponding to the car damage image.
  • the device further includes a model training module 904 for:
  • a machine learning method is used to train the machine learning model based on the vehicle damage sample data to obtain a vehicle damage recognition model.
  • the shooting conditions include: at least one of the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damaged area.
  • the relative position of the camera device and the target vehicle is obtained by controlling the movement of the target vehicle based on a positioning device with centimeter-level precise positioning capability.
  • the image annotation device in one or more embodiments of this specification obtains a car damage image of a preset damage area captured by a camera device, obtains physical attribute information by scanning the preset damage area based on a physical detection method, and performs damage annotation on the car damage image according to the physical attribute information to generate car damage sample data.
  • the annotation data required for training the model is automatically generated without manually annotating the damage situation of the car damage image; pixel-level damage annotation of the car damage image can also be achieved, which improves the annotation efficiency and accuracy of car damage images, so that massive and accurate annotated sample data can be provided for deep-learning-based model training, and a car damage recognition model with higher recognition accuracy can be obtained.
  • FIG. 10 is a schematic diagram of an image annotation system provided by one or more embodiments of this specification. As shown in FIG. 10, the system includes:
  • the above-mentioned camera device 10 is used to capture a car damage image of a preset damage area on the target vehicle, and transmit the car damage image to the image annotation device 30;
  • the above-mentioned physical detection device 20 is configured to obtain physical attribute information by scanning the preset damage area based on a physical detection method, and transmit the physical attribute information to the image annotation device 30;
  • the above-mentioned image annotation device 30 is used to receive the car damage image and the physical attribute information, and generate car damage sample data for training a car damage recognition model based on the car damage image and the physical attribute information.
  • the method further includes:
  • the above-mentioned image annotation device 30 is used to input the generated car damage sample data to a machine learning model based on a supervised learning mode for training, so as to obtain a car damage recognition model.
  • the model parameters in the machine learning model based on the supervised learning mode are updated based on the above car damage sample data to obtain a car damage recognition model with updated model parameters; then, after a car damage image to be recognized is obtained, the damage situation in that image is identified using the car damage recognition model, and automatic damage assessment is performed on the vehicle based on the damage situation determined for the image.
  • the annotation data required for training the model is automatically generated without manually annotating the damage situation of the car damage image; pixel-level damage annotation of the car damage image can also be achieved, which improves the annotation efficiency and accuracy of car damage images, thereby providing massive and accurate annotated sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be obtained.
  • the physical attribute information obtained by lidar detection technology is the three-dimensional depth information of each pixel; based on this depth information, the degree of deformation of the damaged surface can be evaluated, that is, the dented and deformed positions of the preset damage area can be identified with high precision. The physical attribute information obtained by infrared detection technology is the surface thermal imaging information of each pixel; because different materials differ in infrared thermal imaging, the detected surface thermal imaging information can be used to determine the surface material distribution of the preset damage area, so that, based on the surface thermal imaging information, the degree of scratching of the damaged surface can be evaluated, that is, the extent of scratch damage in the preset damage area can be identified with high accuracy;
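The two physical signals can be fused into per-pixel damage labels roughly as follows; the label codes, the thresholds, and the idea of a single "paint signature" thermal value are hypothetical simplifications of the material-distribution analysis described above:

```python
import numpy as np

# Hypothetical per-pixel label codes for the annotation data.
INTACT, DENT, SCRATCH = 0, 1, 2

def label_pixels(depth_dev, thermal, paint_signature,
                 dent_thr=0.003, thermal_thr=1.5):
    """Fuse lidar and infrared scans into per-pixel damage labels.

    `depth_dev`: per-pixel deviation from the intact surface (lidar);
    `thermal`: per-pixel surface thermal imaging values (infrared);
    `paint_signature`: expected thermal value of the intact paint.
    Dents are detected from depth deviation, scratches from a thermal
    signature that differs from the paint (exposed primer or metal
    radiates differently)."""
    labels = np.full(np.shape(depth_dev), INTACT, dtype=int)
    labels[np.abs(thermal - paint_signature) > thermal_thr] = SCRATCH
    labels[np.abs(depth_dev) > dent_thr] = DENT   # depth evidence wins
    return labels
```

The resulting label array is exactly the pixel-level annotation data that the embodiment pairs with each car damage image.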
  • the above-mentioned physical detection devices include: lidar devices and / or infrared thermal imaging devices;
  • the above-mentioned lidar device is used to scan the preset damaged area with a laser beam to obtain three-dimensional depth information, and transmit the three-dimensional depth information to the image annotation device;
  • the above-mentioned infrared thermal imaging device is used to scan the preset damaged area with infrared rays to obtain surface thermal imaging information, and transmit the surface thermal imaging information to the image annotation device.
  • the shooting conditions may affect the quality of the obtained car damage images and physical attribute information, which may in turn reduce the accuracy of the damage annotation of the preset damage area. Therefore, in order to improve the accuracy of assessing the damage of the preset damage area, in the process of the camera device collecting the car damage images and the physical detection device collecting the physical attribute information, the shooting conditions are adjusted based on preset adjustment rules, so that multiple car damage images and multiple pieces of physical attribute information of the preset damage area can be obtained under different shooting conditions. To this end, the system further includes an adjustable gimbal, where the camera device and the physical detection device are both mounted on the adjustable gimbal and the relative position of the camera device and the physical detection device remains unchanged;
  • the above-mentioned adjustable gimbal is used to adjust the shooting conditions of the camera device and the physical detection device;
  • the above-mentioned camera device is used to shoot a plurality of car damage images under different shooting conditions on the preset damage area on the target vehicle, and transmit the plurality of car damage images to the image tagging device;
  • the above-mentioned physical detection device is used to obtain multiple pieces of physical attribute information by scanning the preset damage area under different shooting conditions using a physical detection method, and transmit the multiple pieces of physical attribute information to the image annotation device.
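The capture loop implied by the gimbal arrangement might look like the following; all device interfaces (`adjust`, `shoot`, `scan`) are hypothetical stand-ins, since the specification does not define them:

```python
# Sketch of the capture loop: the adjustable gimbal steps through preset
# shooting conditions while the camera and the physical detector stay
# co-located, so every image is paired with a matching physical scan.

def capture_under_conditions(gimbal, camera, detector, conditions):
    """For each preset shooting condition, adjust the gimbal and collect a
    (condition, image, physical_info) triple for the same damage area."""
    samples = []
    for cond in conditions:
        gimbal.adjust(cond)            # orientation / position / lighting
        image = camera.shoot()         # car damage image
        physical = detector.scan()     # e.g. lidar or infrared scan
        samples.append((cond, image, physical))
    return samples
```

Because camera and detector move together, the loop guarantees the per-condition correspondence that Table 1 records.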
  • the system further includes: a light adjustment device;
  • the above lighting adjustment device is used to adjust the lighting parameters of the shooting environment where the camera device is located.
  • the above system also includes: a positioning device with centimeter-level precise positioning capability;
  • the above positioning device is used for positioning the relative position of the imaging device and the target vehicle.
  • the image annotation system in one or more embodiments of this specification obtains a car damage image of a preset damage area captured by a camera device, obtains physical attribute information by scanning the preset damage area based on a physical detection method, and performs damage annotation on the car damage image according to the physical attribute information to generate car damage sample data.
  • the annotation data required for training the model is automatically generated without manually annotating the damage situation of the car damage image; pixel-level damage annotation of the car damage image can also be achieved, which improves the annotation efficiency and accuracy of car damage images, so that massive and accurate annotated sample data can be provided for deep-learning-based model training, and a car damage recognition model with higher recognition accuracy can be obtained.
  • one or more embodiments of this specification also provide an image annotation device, which is used to perform the above image annotation method, as shown in FIG. 11.
  • the image annotation device may vary considerably due to different configurations or performance, and may include one or more processors 1101 and a memory 1102, where the memory 1102 may store one or more application programs or data.
  • the memory 1102 may be short-term storage or persistent storage.
  • the application program stored in the memory 1102 may include one or more modules (not shown), and each module may include a series of computer-executable instructions in the image annotation device.
  • the processor 1101 may be configured to communicate with the memory 1102 and execute a series of computer-executable instructions in the memory 1102 on the image annotation device.
  • the image tagging device may further include one or more power supplies 1103, one or more wired or wireless network interfaces 1104, one or more input / output interfaces 1105, one or more keyboards 1106, and so on.
  • the image annotation device includes a memory and one or more programs, where the one or more programs are stored in the memory and may include one or more modules, and each module may include a series of computer-executable instructions for the image annotation device, configured to be executed by one or more processors.
  • the one or more programs include the following computer-executable instructions:
  • the vehicle damage image is annotated with damage to generate vehicle damage sample data for training a vehicle damage recognition model.
  • the annotation data required for training the model is automatically generated without manually annotating the damage situation of the car damage image; pixel-level damage annotation of the car damage image can also be achieved, which improves the annotation efficiency and accuracy of car damage images, thereby providing massive and accurate annotated sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be obtained.
  • the acquiring physical attribute information obtained by scanning the preset damage area based on a physical detection method includes:
  • the acquiring the vehicle damage image of the preset damage area on the target vehicle captured by the camera device includes:
  • the acquiring physical attribute information obtained by scanning the preset damage area based on a physical detection method includes:
  • the set of physical attribute information includes: a plurality of physical attribute information obtained by scanning under different shooting conditions using a physical detection method.
  • the damage labeling of the car damage image according to the physical attribute information to generate car damage sample data for training a car damage recognition model includes:
  • the damage situation of each pixel point and the vehicle damage image are determined as vehicle damage sample data for training a vehicle damage recognition model.
  • the vehicle damage images of the preset damage area on the target vehicle include: multiple vehicle damage images captured under different shooting conditions;
  • the determining the damage situation of each pixel in the car damage image according to the physical attribute information includes:
  • the damage situation of each pixel in the car damage image is determined according to the physical attribute information obtained under the shooting conditions corresponding to the car damage image.
  • the method further includes:
  • a machine learning method is used to train the machine learning model based on the vehicle damage sample data to obtain a vehicle damage recognition model.
  • the shooting conditions include: at least one of the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damaged area.
  • the relative position of the camera device and the target vehicle is obtained by controlling the movement of the target vehicle based on a positioning device with a centimeter-level precise positioning capability.
  • the image annotation device in one or more embodiments of this specification acquires a car damage image of a preset damage area captured by a camera device, acquires physical attribute information by scanning the preset damage area based on a physical detection method, and performs damage annotation on the car damage image according to the physical attribute information to generate car damage sample data.
  • the annotation data required for training the model is automatically generated without manually annotating the damage situation of the car damage image; pixel-level damage annotation of the car damage image can also be achieved, which improves the annotation efficiency and accuracy of car damage images, so that massive and accurate annotated sample data can be provided for deep-learning-based model training, and a car damage recognition model with higher recognition accuracy can be obtained.
  • one or more embodiments of this specification also provide a storage medium for storing computer-executable instructions; in a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, etc.
  • the vehicle damage image is annotated with damage to generate vehicle damage sample data for training a vehicle damage recognition model.
  • the annotation data required for training the model is automatically generated without manually annotating the damage situation of the car damage image; pixel-level damage annotation of the car damage image can also be achieved, which improves the annotation efficiency and accuracy of car damage images, thereby providing massive and accurate annotated sample data for deep-learning-based model training, so that a car damage recognition model with higher recognition accuracy can be obtained.
  • the acquiring physical attribute information obtained by scanning the preset damage area based on a physical detection method includes:
  • the acquiring the vehicle damage image of the preset damage area on the target vehicle captured by the camera device includes:
  • the acquiring physical attribute information obtained by scanning the preset damage area based on a physical detection method includes:
  • the set of physical attribute information includes: a plurality of physical attribute information obtained by scanning under different shooting conditions using a physical detection method.
  • the vehicle damage image is annotated according to the physical attribute information to generate vehicle damage sample data for training a vehicle damage recognition model, including:
  • the damage situation of each pixel point and the vehicle damage image are determined as vehicle damage sample data for training a vehicle damage recognition model.
  • the vehicle damage image of the preset damage area on the target vehicle includes: multiple vehicle damage images captured under different shooting conditions;
  • the determining the damage situation of each pixel in the car damage image according to the physical attribute information includes:
  • the damage situation of each pixel in the car damage image is determined according to the physical attribute information obtained under the shooting conditions corresponding to the car damage image.
  • the method further includes:
  • a machine learning method is used to train the machine learning model based on the vehicle damage sample data to obtain a vehicle damage recognition model.
  • the shooting conditions include: at least one of the shooting orientation of the camera device, the relative position of the camera device and the target vehicle, the lighting parameters of the shooting environment, and other on-site environmental factors that affect the visual characteristics of the damaged area.
  • the relative position of the camera device and the target vehicle is obtained by controlling the movement of the target vehicle based on a positioning device with centimeter-level precise positioning capability.
  • the vehicle damage image of the preset damage area captured by the camera device is acquired; physical attribute information obtained by scanning the preset damage area based on a physical detection method is acquired; and damage annotation is performed on the vehicle damage image according to the physical attribute information to generate vehicle damage sample data.
  • the annotation data required for training the model is automatically generated without manually annotating the damage situation of the car damage image; pixel-level damage annotation of the car damage image can also be achieved, which improves the annotation efficiency and accuracy of car damage images, so that massive and accurate annotated sample data can be provided for deep-learning-based model training, and a car damage recognition model with higher recognition accuracy can be obtained.
  • an improvement of a technology can be clearly distinguished as an improvement in hardware (for example, an improvement of a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement of a method flow). However, with the development of technology, the improvement of many method flows today can be regarded as a direct improvement of a hardware circuit structure. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized by a hardware entity module.
  • for example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. Such programming is mostly implemented with a Hardware Description Language (HDL); commonly used HDLs include ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
  • the controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor and a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller.
  • examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; the memory controller can also be implemented as part of the control logic of the memory.
  • in addition to implementing the controller in the form of pure computer-readable program code, it is entirely possible to logically program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for implementing various functions can also be regarded as structures within the hardware component; or even, the devices for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
  • the system, device, module or unit explained in the above embodiments may be specifically implemented by a computer chip or entity, or implemented by a product with a certain function.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • one or more embodiments of this specification may be provided as a method, system, or computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • these computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • the computing device includes one or more processors (CPUs), input / output interfaces, network interfaces, and memory.
  • the memory may include non-permanent memory in computer-readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology.
  • the information may be computer readable instructions, data structures, modules of programs, or other data.
  • examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
  • as defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
  • one or more embodiments of this specification may be provided as a method, system, or computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • computer usable storage media including but not limited to disk storage, CD-ROM, optical storage, etc.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • program modules may also be practiced in a distributed computing environment in which tasks are performed by remote processing devices connected through a communication network.
  • program modules may be located in local and remote computer storage media including storage devices.


Abstract

一种图像标注方法、装置及系统,方法包括:获取利用摄像装置拍摄得到的预设损伤区域的车损图像(S201);以及,获取基于物理探测方式针对预设损伤区域进行扫描得到的物理属性信息(S202);根据物理属性信息对车损图像进行损伤标注,生成车损样本数据(S203)。通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。

Description

一种图像标注方法、装置及系统 技术领域
本说明书一个或多个实施例涉及智能识别技术领域，尤其涉及一种图像标注方法、装置及系统。
背景技术
目前，随着社会经济的快速增长和高收入人群的增多，由于车辆给人们的出行带来了很大的方便，车辆的拥有数量不断增长，交通事故的发生也越来越频繁。为了减少因交通事故带来的损失，人们通常定期给车辆缴纳必要的车险，这样车主发生车辆事故后可以提出理赔申请，保险公司则需要对车辆的损伤程度进行评估，以确定需要修复的项目清单以及赔付金额。具体的，需要专业定损人员对现场采集的车损图像进行综合分析，进而对车辆碰撞修复进行科学系统的估损定价。
当前，为了对车损图像中的损伤情况进行快速识别，通常采用基于深度学习的方式来识别车辆损伤程度，即基于预先训练好的车损识别模型对现场采集的车损图像进行智能识别，自动输出识别得到的损伤程度分析结果。其中，在训练得到车损识别模型时，需要获取大量标注好的车损样本数据，通常针对每个子问题需要10万~1000万量级的标注数据，即事先对各种类型、材质的损伤进行标注，标明车损图像中各子区域对应的损伤程度。现有技术中对采集的大量车损图像进行人工标注，存在标注效率低、人工成本高、人为因素影响大、准确度低的问题，难以在短时间内产生训练模型所需的大量标注数据。
因此,需要提供一种效率高、准确度高、人工成本低的车损图像标注方法及装置。
发明内容
本说明书一个或多个实施例的目的是提供一种图像标注方法、装置及系统,通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
为解决上述技术问题,本说明书一个或多个实施例是这样实现的:
本说明书一个或多个实施例提供了一种图像标注方法,包括:
获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
本说明书一个或多个实施例提供了一种图像标注装置,包括:
第一获取模块,用于获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
第二获取模块,用于获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
图像标注模块,用于根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
本说明书一个或多个实施例提供了一种图像标注系统,包括:摄像装置、物理探测装置和上述图像标注装置,其中,所述摄像装置和所述物理探测装置均与所述图像标注装置相连接;
所述摄像装置，用于对目标车辆上预设损伤区域进行拍摄得到车损图像，并将所述车损图像传输至所述图像标注装置；
所述物理探测装置，用于基于物理探测方式对所述预设损伤区域进行扫描得到物理属性信息，并将所述物理属性信息传输至所述图像标注装置；
所述图像标注装置，用于接收所述车损图像和所述物理属性信息，并根据所述车损图像和所述物理属性信息生成用于训练车损识别模型的车损样本数据。
本说明书一个或多个实施例提供了一种图像标注设备,包括:处理器;以及
被安排成存储计算机可执行指令的存储器,所述可执行指令在被执行时使所述处理器:
获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
本说明书一个或多个实施例提供了一种存储介质,用于存储计算机可执行指令,所述可执行指令在被执行时实现以下流程:
获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
本说明书一个或多个实施例中的图像标注方法、装置及系统,获取利用摄像装置拍摄得到的预设损伤区域的车损图像;以及,获取基于物理探测方式针对预设损伤区域进行扫描得到的物理属性信息;根据该物理属性信息对车损图像进行损伤标注,生成车损样本数据。通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
附图说明
为了更清楚地说明本说明书一个或多个实施例或现有技术中的技术方案，下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍，显而易见地，下面描述中的附图仅仅是本说明书一个或多个中记载的一些实施例，对于本领域普通技术人员来讲，在不付出创造性劳动的前提下，还可以根据这些附图获得其他的附图。
图1为本说明书一个或多个实施例提供的图像标注方法的第一种应用场景示意图;
图2为本说明书一个或多个实施例提供的图像标注方法的第一种流程示意图;
图3为本说明书一个或多个实施例提供的图像标注方法的第二种流程示意图;
图4a为本说明书一个或多个实施例提供的图像标注方法中激光雷达装置采集三维表面图的实现原理示意图;
图4b为本说明书一个或多个实施例提供的图像标注方法中红外热成像装置采集表面热成像图的实现原理示意图；
图5为本说明书一个或多个实施例提供的图像标注方法的第三种流程示意图;
图6a为本说明书一个或多个实施例提供的图像标注方法的第二种应用场景示意图;
图6b为本说明书一个或多个实施例提供的图像标注方法的第三种应用场景示意图;
图7为本说明书一个或多个实施例提供的图像标注方法的第四种流程示意图;
图8为本说明书一个或多个实施例提供的图像标注方法的第五种流程示意图;
图9a为本说明书一个或多个实施例提供的图像标注装置的第一种模块组成示意图;
图9b为本说明书一个或多个实施例提供的图像标注装置的第二种模块组成示意图;
图10为本说明书一个或多个实施例提供的图像标注系统的具体结构示意图;
图11为本说明书一个或多个实施例提供的图像标注设备的具体结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本说明书一个或多个实施例中的技术方案，下面将结合本说明书一个或多个实施例中的附图，对本说明书一个或多个实施例中的技术方案进行清楚、完整地描述，显然，所描述的实施例仅仅是本说明书一个或多个实施例中的一部分实施例，而不是全部的实施例。基于本说明书一个或多个实施例，本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例，都应当属于本说明书一个或多个实施例保护的范围。
本说明书一个或多个实施例提供了一种图像标注方法、装置及系统,通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
图1为本说明书一个或多个实施例提供的图像标注方法的应用场景示意图，如图1所示，该系统包括：摄像装置、物理探测装置和图像标注装置，其中，该摄像装置和物理探测装置均与图像标注装置相连接；该摄像装置可以是数码相机等具有拍照功能的设备，该物理探测装置可以是激光雷达装置、红外热成像装置等等，该图像标注装置可以是用于对车损图像进行标注的后台服务器，其中，图像标注的具体过程为：
(1)将具有预设损伤区域的目标车辆驶入工作场地内工作台的指定位置,具体的,操作人员根据车辆受损部位,对数据采集范围进行初始设定,即设置摄像装置的初始拍摄范围,以及设置物理探测装置的初始探测范围,例如,车辆的前保险杠受损,即预设损伤区域为车辆前半球,则将拍摄范围和物理探测范围设置为车辆前半球;
(2)通过摄像装置对目标车辆上预设损伤区域进行拍摄得到车损图像,并将该车损图像传输至图像标注装置,其中,摄像装置可以设置于可调节云台上,通过可调节云台调整摄像装置的拍摄条件;
(3)通过物理探测装置基于物理探测方式对上述预设损伤区域进行扫描得到物理属性信息,并将该物理属性信息传输至图像标注装置,其中,物理探测装置也设置于可调节云台上,且物理探测装置与摄像装置的相对位置保持不变,保证在同一拍摄条件下同时获取车损图像和对应的物理属性信息;
(4)图像标注装置根据获取到的物理属性信息对车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据,这样无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
图2为本说明书一个或多个实施例提供的图像标注方法的第一种流程示意图,图2中的方法能够由图1中的图像标注装置执行,如图2所示,该方法至少包括以下步骤:
S201,获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;
具体的,摄像装置对目标车辆上预设损伤区域进行拍照,得到针对该预设损伤区域的视觉图像,并将该视觉图像传输至图像标注装置;
S202,获取基于物理探测方式针对上述预设损伤区域进行扫描得到的物理属性信息;
具体的,物理探测装置对目标车辆上预设损伤区域进行扫描,得到针对该预设损伤区域的物理属性信息,并将该物理属性信息传输至图像标注装置;
其中,上述物理探测装置可以是激光雷达装置,对应的,上述物理探测方式可以是激光雷达探测方式;
上述物理探测装置还可以是红外热成像装置，对应的，上述物理探测方式还可以是红外线探测方式；
上述物理探测装置也可以是激光雷达装置和红外热成像装置的组合,对应的,上述物理探测方式也可以是激光雷达探测方式和红外线探测方式的结合;
另外,物理探测装置又可以是采用其他物理探测方式扫描采集物理属性信息的装置;
S203,根据获取到的物理属性信息对上述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据,其中,该车损样本数据可以包括:车损图像和针对该车损图像的标注数据,该标注数据可以包括:车损图像中各像素点的损伤情况;
具体的,针对每个车损图像,获取与该车损图像对应的物理属性信息,根据该物理属信息,确定车损图像中各像素点的损伤情况,将确定出的车损图像中各像素点的损伤情况作为针对该车损图像的标注数据;其中,该像素点的损伤情况可以是表征像素点是否损伤的数据,也可以是表征像素点损伤大小的数据,还可以是表征像素点损伤程度的数据。
本说明书一个或多个实施例中,通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
其中，利用激光雷达探测技术得到的物理属性信息为各像素点的三维深度信息，基于该三维深度信息能够对损伤表面的变形程度进行评估，即能够对预设损伤区域的凹陷、破损的位置进行高精度识别，而利用红外线探测技术得到的物理属性信息为各像素点的表面热成像信息，由于不同材质的红外线热成像存在一定差异，因此，结合探测到的表面热成像信息可以确定预设损伤区域的表面材质分布，因而基于该表面热成像信息能够对损伤表面的剐蹭程度进行评估，即能够对预设损伤区域的剐蹭损伤的范围进行高精度识别；
进一步的,考虑到不同物理探测技术对预设损伤区域的损伤评估侧重点不同,因此,通过不同物理探测技术相结合的方式来对预设损伤区域的损伤情况进行评估,能够提高对损伤表面的损伤情况的评估准确度,基于此,如图3所示,以激光雷达探测技术和红外线探测技术相结合的方式为例,上述S202获取基于物理探测方式针对上述预设损伤区域进行扫描得到的物理属性信息,具体包括:
S2021,获取利用激光雷达装置针对目标车辆上预设损伤区域进行扫描得到的三维深度信息;
具体的,激光雷达装置对目标车辆上预设损伤区域进行扫描,得到针对该预设损伤区域的三维深度信息,并将该三维深度信息传输至图像标注装置;
如图4a所示,激光雷达装置包括:第一处理单元、激光发射单元和激光接收单元;
具体的,激光发射单元用于向预设损伤区域发射激光光束(即探测信号),激光光束到达预设损伤区域后将产生反射,激光接收单元接收预设损伤区域返回的反射光束(即目标回波),并将该目标回波传输至第一处理单元,第一处理单元将接收到的从预设损伤区域反射回来的目标回波与向预设损伤区域发射的探测信号进行比对,生成针对预设损伤区域的三维深度信息,基于三维深度信息可以绘制得到用于表征车损图像中各像素点的深度信息的三维表面图,实现以发射激光光束探测目标车辆上预设损伤区域的损伤表面中各点的相对位置信息;
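上述"将目标回波与探测信号进行比对"以得到深度信息的过程，其核心是飞行时间(Time of Flight)测距。下面给出一个简化的Python示意(仅为原理演示，函数名与参数均为本文假设，并非实际激光雷达设备的接口)：

```python
# 飞行时间(ToF)测距示意：由激光发射时刻与回波接收时刻之差估算距离
C = 299_792_458.0  # 光速，单位：米/秒


def tof_distance(t_emit, t_echo):
    """激光往返目标一次，因此距离为 光速 x 时间差 / 2，单位：米。"""
    return (t_echo - t_emit) * C / 2.0


# 例如：时间差为20纳秒时，目标距离约为3米
```

对预设损伤区域逐点进行上述测距，即可得到正文所述的、表征各像素点深度信息的三维表面图。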
S2022,获取利用红外热成像装置针对目标车辆上预设损伤区域进行扫描得到的表面热成像信息;
具体的,红外热成像装置对目标车辆上预设损伤区域进行扫描,得到针对该预设损伤区域的表面热成像信息,并将该表面热成像信息传输至图像标注装置;
如图4b所示,红外热成像装置包括:第二处理单元、红外线发射单元和红外线接收单元;
具体的，红外线发射单元用于向预设损伤区域发射红外线光束(即探测信号)，红外线光束到达预设损伤区域后将产生反射，红外线接收单元接收预设损伤区域返回的反射光束(即目标回波)，并将该目标回波传输至第二处理单元，第二处理单元将接收到的从预设损伤区域反射回来的目标回波与向预设损伤区域发射的探测信号进行比对，生成针对预设损伤区域的表面热成像信息，基于表面热成像信息可以绘制得到用于表征车损图像中各像素点的辐射能量的表面热成像图，实现以发射红外线光束探测目标车辆上预设损伤区域的损伤表面的材质分布；
对应的,针对同时利用激光雷达探测技术和红外线探测技术对预设损伤区域进行损伤评估的情况,上述S203根据获取到的物理属性信息对上述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据,具体包括:
S2031，根据获取到的三维深度信息和表面热成像信息，对上述车损图像进行损伤标注，生成用于训练车损识别模型的车损样本数据；
具体的,根据获取到的三维深度信息,确定车损图像中各像素点的深度信息;再针对每个像素点,根据该像素点的深度信息,确定该像素点的凹凸情况(即变形情况);
根据获取到的表面热成像信息,确定车损图像中各像素点对应的辐射能量;针对每个像素点,根据该像素点对应的辐射能量,确定该像素点的剐蹭情况(即掉漆情况);
将确定出的各像素点的凹凸情况、剐蹭情况和车损图像确定为用于训练车损识别模型的车损样本数据;
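上述根据深度信息确定凹凸情况、根据辐射能量确定剐蹭情况的逐像素标注逻辑，可用如下Python代码示意(以与参考表面/参考辐射能量的偏差超过阈值作为判定条件；阈值数值与标签编码均为本文假设)：

```python
import numpy as np


def label_pixels(depth_map, depth_ref, thermal_map, thermal_ref,
                 dent_thresh=2.0, scratch_thresh=5.0):
    """根据物理属性信息生成像素级标注。

    深度与参考表面的偏差超过 dent_thresh 记为变形(凹凸)，
    辐射能量与参考值的偏差超过 scratch_thresh 记为剐蹭(掉漆)。
    返回与图像同形状的标签矩阵：0=无损伤, 1=剐蹭, 2=变形。
    """
    dent = np.abs(np.asarray(depth_map, float) - depth_ref) > dent_thresh
    scratch = np.abs(np.asarray(thermal_map, float) - thermal_ref) > scratch_thresh
    labels = np.zeros(np.shape(depth_map), dtype=np.uint8)
    labels[scratch] = 1      # 剐蹭
    labels[dent] = 2         # 变形(同时满足时以变形为准)
    return labels
```

实际系统中，参考表面可由同型号车辆完好部件的模型给出，阈值也需按探测设备的精度标定，此处数值仅为演示。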
本说明书一个或多个实施例中,结合激光雷达探测技术和红外线探测技术这两个物理探测维度,实现同时对预设损伤区域的变形程度和剐蹭程度进行综合识别,提高了车损图像的损伤标注精准度,进而有利于提高基于标注后的车损图像训练得到的识别模型的识别精度。
进一步的,考虑到拍摄条件可能会影响得到的车损图像和物理属性信息的质量,从而可能降低对预设损伤区域的损伤标注的准确度,因此,为了提高对预设损伤区域的损伤情况评估的精准度,在摄像装置采集车损图像以及物理探测装置采集物理属性信息的过程中,基于预设调节规则对拍摄条件进行调整,实现在不同拍摄条件下获取预设损伤区域的多张车损图像和多个物理属性信息,基于此,如图5所示,上述S201获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像,具体包括:
S2011,获取针对目标车辆上预设损伤区域的车损图像集合,其中,该车损图像集合包括:利用摄像装置在不同拍摄条件下拍摄得到的多张车损图像;
其中,上述拍摄条件包括:摄像装置的拍摄方位、摄像装置与目标车辆的相对位置、拍摄环境的光照参数、以及其他影响损伤区域视觉特征的现场环境因素中至少一种,该拍摄方位可以包括:拍摄角度和拍摄朝向,该光照参数可以包括:光源数量和光照情况,因此,不同拍摄条件可以是摄像装置的拍摄方位、摄像装置与目标车辆的相对位置、以及拍摄环境的光照参数中至少一项不同的多个拍摄条件,每个拍摄条件对应于针对预设损伤区域的一张车损图像;也就是说,获取到的车损图像均为实际拍摄得到的,而不是基于原始图像进行不同的图像处理得到的,其中,实际拍摄得到的图像的特征分布更加符合现实场景,这样对于深度学习来说具有更好的训练效果;
其中，上述预设调节规则可以根据操作人员输入的步长、以及自动拍摄设备对光照、距离和角度进行改变的设定信息确定，针对同一处损伤区域，自动完成一组数百至数千张车损图片的拍摄和标注，例如，对同一处损伤区域，按照每次左右移动30cm、角度改变10度、光照从500流明提升到3000流明、每次提升100流明的调节规则来调整拍摄条件，采集各拍摄条件下的车损图像和物理属性信息；
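按照正文示例的调节规则(左右移动步长30cm、角度步长10度、光照从500到3000流明、步长100流明)枚举拍摄条件的过程，可用如下Python代码示意(默认取值范围与字段命名均为本文假设)：

```python
def shooting_conditions(offsets_cm=range(0, 91, 30),
                        angles_deg=range(0, 31, 10),
                        lumens=range(500, 3001, 100)):
    """枚举全部拍摄条件组合，每个组合对应一次车损图像与物理属性信息的采集。"""
    return [
        {"offset_cm": o, "angle_deg": a, "lumen": l}
        for o in offsets_cm
        for a in angles_deg
        for l in lumens
    ]


# 上述默认步长下共得到 4 x 4 x 26 = 416 个拍摄条件，与正文"数百至数千张"的量级一致
```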
进一步的,上述预设调节规则可以是基于车损识别模型的识别准确度确定的,当车损识别模型的识别准确度不理想时,可以通过修改预设调节规则来优化拍摄条件,从而优化最终得到的车损样本数据,进而提高车损识别模型的识别准确度。
具体的,由于摄像装置设置于图1中的可调节云台上,再通过控制设备对可调节云台进行调节,以通过可调节云台调整摄像装置的拍摄方位,并且如果在可调节云台下方安装有轮式或履带式行走机构,可通过控制设备控制可调节云台在工作场地内按照控制指令前后左右移动,以通过可调节云台调整摄像装置与目标车辆的相对位置;其中,该控制设备可以是单独的控制装置,还可以是设置于图像标注装置中的控制模块;
另外,针对上述拍摄条件包括拍摄环境的光照参数的情况,需要在通过摄像装置采集针对预设损伤区域的车损图像、以及通过物理探测装置采集针对预设损伤区域的物理属性信息的过程中,调节拍摄环境的光照参数,如图6a所示,给出了图像标注方法的第二种应用场景示意图,具体为:
在工作场地的指定位置设置一光照调节装置,控制设备根据预设拍摄参数控制光照调节装置发出的光照强度,以调整摄像装置所在的拍摄环境的光照参数,进而使得摄像装置在不同光照参数下拍摄相应的车损图像;
对应的,针对每张车损图像,均需要采集与该车损图像对应的物理属性信息,针对物理属性信息的采集过程,仍以物理探测方式包括:激光雷达探测方式和红外线探测方式为例,上述S2021获取利用激光雷达装置针对目标车辆上预设损伤区域进行扫描得到的三维深度信息,具体包括:
S20211,获取针对目标车辆上预设损伤区域的三维深度信息集合,其中,该三维深度信息集合包括:利用激光雷达探测方式在不同拍摄条件下扫描得到的多个三维表面图;
具体的,针对每个拍摄条件,不仅通过摄像装置采集预设损伤区域的车损图像,同时还通过激光雷达装置采集该预设损伤区域的三维深度信息(即三维表面图),因此,三维深度信息集合中的每一个三维表面图是在一个特定拍摄条件下得到的;
对应的,上述S2022获取利用红外热成像装置针对目标车辆上预设损伤区域进行扫描得到的表面热成像信息,具体包括:
S20221,获取针对目标车辆上预设损伤区域的表面热成像信息集合,其中,该表面热成像信息集合包括:利用红外线探测方式在不同拍摄条件下扫描得到的多个表面热成像图;
具体的,针对每个拍摄条件,不仅通过摄像装置采集预设损伤区域的车损图像以及通过激光雷达装置采集该预设损伤区域的三维深度信息(即三维表面图),同时还通过红外热成像装置采集该预设损伤区域的表面热成像信息(即表面热成像图),因此,表面热成像信息集合中的每一个表面热成像图是在一个特定拍摄条件下得到的;
对应的,上述S203根据获取到的三维深度信息和表面热成像信息,对上述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据,具体包括:
S20311,根据获取到的三维深度信息集合和表面热成像信息集合,分别对各拍摄条件下的车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据;
也就是说,针对某一预设损伤区域,在某一拍摄条件下,获取该拍摄条件对应的车损图像、三维表面图和表面热成像图,并建立拍摄条件、车损图像、三维表面图和表面热成像图之间的对应关系,再根据针对预设损伤区域获取的各拍摄条件对应的车损图像、三维表面图和表面热成像图,确定针对预设损伤区域的车损样本数据。
其中,为了保证通过物理探测装置获取到的物理属性信息与通过摄像装置获取到的车损图像上的像素点一一匹配,根据摄像装置与物理探测装置的相对位置、以及摄像装置的取景框内每个像素点的拍摄范围,确定物理探测装置的扫描范围;进一步为了保证同一拍摄条件下的车损图像和物理属性信息一一对应,物理探测装置和摄像装置的相对位置不变,控制设备通过可调节云台对摄像装置和物理探测装置进行同步调整,因此,在同一拍摄条件下,同时通过摄像装置采集针对预设损伤区域的车损图像、以及通过物理探测装置采集针对预设损伤区域的物理属性信息;
另外,由于拍摄点相对目标车辆的中心点及损伤部位的坐标是已知的,基于空间几何计算,可知所拍摄的车损图像中,每一个像素点是否位于某种损伤范围内。
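上述"基于空间几何计算判断每一个像素点是否位于某种损伤范围内"的思路，可用针孔相机模型作一个简化示意(忽略相机旋转与镜头畸变，并把损伤部位近似为以已知坐标为中心、已知半径的圆形区域，均为本文假设)：

```python
import numpy as np


def pixel_in_damage(pixel_xy, cam_pos, damage_center, damage_radius,
                    focal_px, principal_point):
    """针孔模型下判断像素点是否落入损伤范围。

    将损伤中心投影到像平面，再比较像素点与投影点的像素距离；
    要求损伤中心位于相机光轴正方向(rel[2] > 0)。
    """
    rel = np.asarray(damage_center, float) - np.asarray(cam_pos, float)
    z = rel[2]
    u = focal_px * rel[0] / z + principal_point[0]   # 损伤中心的像素横坐标
    v = focal_px * rel[1] / z + principal_point[1]   # 损伤中心的像素纵坐标
    r_px = focal_px * damage_radius / z              # 损伤半径换算到像素尺度
    du, dv = pixel_xy[0] - u, pixel_xy[1] - v
    return (du * du + dv * dv) ** 0.5 <= r_px
```

对图像中逐像素调用该判断，即可得到某种损伤类型的像素级掩码。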
本说明书一个或多个实施例中，在摄像装置采集车损图像以及物理探测装置采集物理属性信息的过程中，基于预设调节规则对拍摄条件进行调整，实现在不同拍摄条件下获取预设损伤区域的多张车损图像和多个物理属性信息，针对每个拍摄条件，根据该拍摄条件下采集的物理属性信息，对在该拍摄条件下拍摄得到的车损图像进行损伤标注，生成车损样本数据，这样对于一个预设损伤区域而言，得到了在多个拍摄条件下生成的车损样本数据，从而能够提高对该预设损伤区域的损伤标注的准确度，进一步提高了基于该车损样本数据训练得到的车损识别模型的识别准确度。
其中,为了保证目标车辆的移动精度,提高摄像装置与目标车辆的相对位置的定位准确度,上述摄像装置与目标车辆的相对位置是基于具备厘米级精确定位能力的定位装置对目标车辆的移动进行控制得到的,如图6b所示,给出了图像标注方法的第三种应用场景示意图,具体为:
在工作场地的指定位置设置一无线定位装置,无线定位装置获取目标车辆移动前的第一位置信息和移动后的第二位置信息,根据第一位置信息和第二位置信息确定目标车辆的实际移动距离,将该实际移动距离与理论移动距离进行比对,确定目标车辆的移动误差是否满足预设条件,若否,则向控制设备发送相应的提示信息,以使控制设备对目标车辆进行准确定位,其中,无线定位装置可以是基于无线电信号的定位装置,也可以是基于蓝牙信号的定位装置,还可以是基于激光雷达的定位装置。
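上述"将实际移动距离与理论移动距离进行比对，确定移动误差是否满足预设条件"的判定，可用如下Python代码示意(容差取1厘米仅为体现厘米级定位精度的假设值)：

```python
def movement_error_ok(pos_before, pos_after, expected_distance_cm, tol_cm=1.0):
    """由移动前后两次定位结果计算实际移动距离，并与理论移动距离比对。

    pos_before/pos_after 为定位装置给出的坐标(单位：厘米)，
    误差不超过容差时返回 True。
    """
    actual = sum((a - b) ** 2 for a, b in zip(pos_after, pos_before)) ** 0.5
    return abs(actual - expected_distance_cm) <= tol_cm


# 误差超出容差时，应按正文所述向控制设备发送提示信息，以便对目标车辆重新准确定位
```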
其中,针对车损图像的损伤标注过程,如图7所示,上述S203根据获取到的物理属性信息对上述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据,具体包括:
S2032,根据获取到的物理属性信息,确定针对预设损伤区域的车损图像中各像素点的损伤情况;
具体的,针对物理属性信息为三维深度信息的情况,根据获取到的针对预设损伤区域的三维表面图,确定针对该预设损伤区域的车损图像中各像素点的深度信息;根据各像素点的深度信息,确定车损图像中各像素点的变形情况(即凹凸情况);
针对物理属性信息为表面热成像信息的情况,根据获取到的针对预设损伤区域的表面热成像图,确定针对该预设损伤区域的车损图像中各像素点的辐射能量;根据各像素点的辐射能量,确定车损图像中各像素点的剐蹭情况(即掉漆情况);
S2033,将确定出的各像素点的损伤情况和车损图像确定为用于训练车损识别模型的车损样本数据;
具体的,将车损图像中各像素点的损伤情况作为针对车损图像的标注数据,建立车损图像和标注数据之间的对应关系,将该对应关系、车损图像、标注数据输入至待训练的基于有监督学习模式的机器学习模型。
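上述"建立车损图像和标注数据之间的对应关系"并组织成一份车损样本数据的步骤，可用如下Python代码示意(字段命名为本文假设，标识字段与正文表1、表2中的拍摄条件标识、车损图像标识相呼应)：

```python
def build_sample(condition_id, image_id, pixel_labels):
    """组织一份车损样本数据。

    同一拍摄条件、同一车损图像下，每个像素点对应一行标注；
    pixel_labels 为 {(x, y): 损伤情况编码} 形式的字典。
    """
    rows = [
        {"condition_id": condition_id, "image_id": image_id,
         "x": x, "y": y, "damage": damage}
        for (x, y), damage in pixel_labels.items()
    ]
    return {"condition_id": condition_id, "image_id": image_id, "rows": rows}
```

生成的多份样本数据即可作为有监督学习模型的训练输入。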
其中，针对在不同拍摄条件下获取车损图像和各车损图像对应的物理属性信息的情况，即上述目标车辆上预设损伤区域的车损图像包括：在不同拍摄条件下拍摄得到的多张车损图像，以及上述针对预设损伤区域的物理属性信息包括：在不同拍摄条件下扫描得到的多份物理属性信息；
对应的,上述S2032根据获取到的物理属性信息,确定针对预设损伤区域的车损图像中各像素点的损伤情况,具体包括:
针对每张车损图像,根据在该车损图像对应的拍摄条件下得到的物理属性信息,确定该车损图像中各像素点的损伤情况。
具体的,为了提高对预设损伤区域的损伤情况评估的精准度,在摄像装置采集车损图像以及物理探测装置采集物理属性信息的过程中,基于预设调节规则对拍摄条件进行调整,实现在不同拍摄条件下获取预设损伤区域的多张车损图像和多个物理属性信息,因此,在对车损图像进行损伤标注时,需要针对每张车损图像,确定在该车损图像对应的拍摄条件下得到的物理属性信息;再根据该物理属性信息,确定该车损图像中各像素点的损伤情况;
其中,根据该物理属性信息,确定车损图像中各像素点的损伤情况的过程,具体为:
(1)针对物理属性信息为三维深度信息的情况,根据在该车损图像对应的拍摄条件下得到的三维表面图,确定该车损图像中各像素点的深度信息;根据各像素点的深度信息,确定车损图像中各像素点的变形情况(即凹凸情况);
(2)针对物理属性信息为表面热成像信息的情况,根据在该车损图像对应的拍摄条件下得到的表面热成像图,确定该车损图像中各像素点的辐射能量;根据各像素点的辐射能量,确定车损图像中各像素点的剐蹭情况(即掉漆情况);
具体的,在确定出各车损图像中各像素点的损伤情况后,在某一拍摄条件下的每一个车损图像和该车损图像中各像素点的损伤情况作为一份车损样本数据,因此,针对同一预设损伤区域,得到在不同拍摄条件下的多份车损样本数据;
其中,每份车损样本数据包括:拍摄条件相同且车损图像标识相同的多行关于损伤情况的标注数据,每行标注数据包括:一个像素点的损伤情况统计数据,例如,凸凹程度、剐蹭程度等等;其中,每份车损样本数据还包括:针对每张车损图像的整体车损统计数据、针对预设损伤区域的维修方案、针对预设损伤区域的损伤类型等,该针对预设损伤区域的维修方案可以是基于相关人员的标记信息确定的。
其中,针对目标车辆的某一预设损伤区域采集到的基础数据,即拍摄条件、车损图像与物理属性信息之间的对应关系,如下表1所示:
表1
（表1以附图形式给出，记录拍摄条件标识与车损图像标识、三维表面图标识、表面热成像图标识之间的对应关系）
具体的,标识为0001的拍摄条件与标识为0002的拍摄条件中的摄像装置的拍摄方位、摄像装置与目标车辆的相对位置、以及拍摄环境的光照参数中至少一项不同,标识为AAAA的车损图像、标识为1aaaa的三维表面图、标识为2aaaa的表面热成像图均为在标识为0001的拍摄条件下采集得到的。
其中,结合上述表1中车损图像与物理属性信息之间的对应关系,根据物理属性信息,对目标车辆的某一预设损伤区域的车损图像进行车损标注,生成的针对该预设损伤区域的标注数据,如下表2所示:
表2
（表2以附图形式给出，记录针对该预设损伤区域的车损图像中各像素点的损伤情况标注数据）
进一步的,在对实际拍摄得到的车损图像进行自动损伤标注并生成车损样本数据后,将该车损样本数据输入至预设机器学习模型,并对该机器学习模型进行训练得到车损识别模型,其中,该机器学习模型可以是基于有监督学习模式的机器学习模型,具体的,如图8所示,在S203根据获取到的物理属性信息对上述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据之后,还包括:
S204,将生成的车损样本数据输入至预设的基于有监督学习模式的机器学习模型;
S205,利用机器学习方法并基于上述车损样本数据对机器学习模型进行训练,得到车损识别模型。
具体的，基于上述车损样本数据对基于有监督学习模式的机器学习模型中的模型参数进行更新，得到模型参数更新后的车损识别模型，进而，在获取到待识别的车损图像后，利用该车损识别模型对该待识别的车损图像进行损伤情况识别，根据确定出的针对车损图像的损伤情况，对车辆进行自动定损。
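上述有监督训练流程可作如下最小化示意：这里用一个按类别求特征质心的最近质心分类器代替正文中的深度学习模型，仅用于演示"输入标注样本、更新模型参数、对待识别输入预测"的流程(实际方案应采用深度神经网络)：

```python
import numpy as np


def train_centroid_model(features, labels):
    """有监督训练的最小示意：对每个损伤类别求特征质心作为"模型参数"。"""
    model = {}
    for lab in set(labels):
        model[lab] = np.mean(
            [f for f, l in zip(features, labels) if l == lab], axis=0)
    return model


def predict(model, feature):
    """取与输入特征距离最近的质心所属类别作为预测结果。"""
    return min(model, key=lambda lab: np.linalg.norm(np.asarray(feature) - model[lab]))
```

训练阶段对应正文的"模型参数更新"，predict 则对应对待识别车损图像的损伤情况识别。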
本说明书一个或多个实施例中的图像标注方法,获取利用摄像装置拍摄得到的预设损伤区域的车损图像;以及,获取基于物理探测方式针对预设损伤区域进行扫描得到的物理属性信息;根据该物理属性信息对车损图像进行损伤标注,生成车损样本数据。通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
对应上述图2至图8描述的图像标注方法,基于相同的技术构思,本说明书一个或多个实施例还提供了一种图像标注装置,图9a为本说明书一个或多个实施例提供的图像标注装置的第一种模块组成示意图,该装置用于执行图2至图8描述的图像标注方法,如图9a所示,该装置包括:
第一获取模块901,用于获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
第二获取模块902,用于获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
图像标注模块903,用于根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
本说明书一个或多个实施例中,通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
可选地,所述第二获取模块902,具体用于:
获取利用激光雷达装置针对所述预设损伤区域进行扫描得到的三维深度信息;和/或,
获取利用红外热成像装置针对所述预设损伤区域进行扫描得到的表面热成像信息。
可选地,所述第一获取模块901,具体用于:
获取针对目标车辆上预设损伤区域的车损图像集合,其中,所述车损图像集合包括:利用摄像装置在不同拍摄条件下拍摄得到的多张车损图像;
对应的,所述第二获取模块902,具体用于:
获取针对所述预设损伤区域的物理属性信息集合,其中,所述物理属性信息集合包括:利用物理探测方式在不同拍摄条件下扫描得到的多个物理属性信息。
可选地,所述图像标注模块903,具体用于:
根据所述物理属性信息,确定所述车损图像中各像素点的损伤情况;
将所述各像素点的损伤情况和所述车损图像确定为用于训练车损识别模型的车损样本数据。
可选地,所述目标车辆上预设损伤区域的车损图像包括:在不同拍摄条件下拍摄得到的多张车损图像;
所述图像标注模块903,进一步具体用于:
针对每张所述车损图像,根据在该车损图像对应的拍摄条件下得到的所述物理属性信息,确定该车损图像中各像素点的损伤情况。
可选地,如图9b所示,所述装置还包括模型训练模块904,用于:
在生成用于训练车损识别模型的车损样本数据之后,将所述车损样本数据输入至基于有监督学习模式的机器学习模型;
利用机器学习方法并基于所述车损样本数据对所述机器学习模型进行训练,得到车损识别模型。
可选地,所述拍摄条件包括:所述摄像装置的拍摄方位、所述摄像装置与目标车辆的相对位置、拍摄环境的光照参数、以及其他影响损伤区域视觉特征的现场环境因素中至少一种。
可选地,所述摄像装置与目标车辆的相对位置是基于具备厘米级精确定位能力的定位装置对所述目标车辆的移动进行控制得到的。
本说明书一个或多个实施例中的图像标注装置,获取利用摄像装置拍摄得到的预设损伤区域的车损图像;以及,获取基于物理探测方式针对预设损伤区域进行扫描得到的物理属性信息;根据该物理属性信息对车损图像进行损伤标注,生成车损样本数据。通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
需要说明的是,本说明书中关于图像标注装置的实施例与本说明书中关于图像标注方法的实施例基于同一发明构思,因此该实施例的具体实施可以参见前述对应的图像标注方法的实施,重复之处不再赘述。
对应上述图2至图8描述的图像标注方法,基于相同的技术构思,本说明书一个或多个实施例还提供了一种图像标注系统,图10为本说明书一个或多个实施例提供的图像标注系统的结构示意图,如图10所示,该系统包括:
摄像装置10、物理探测装置20和如图9a和图9b所示的图像标注装置30,其中,该摄像装置10和物理探测装置20均与图像标注装置30相连接;
上述摄像装置10，用于对目标车辆上预设损伤区域进行拍摄得到车损图像，并将所述车损图像传输至所述图像标注装置30；
上述物理探测装置20，用于基于物理探测方式对所述预设损伤区域进行扫描得到物理属性信息，并将所述物理属性信息传输至所述图像标注装置30；
上述图像标注装置30，用于接收车损图像和物理属性信息，并根据该车损图像和物理属性信息生成用于训练车损识别模型的车损样本数据。
其中,针对图像标注装置与模型训练装置设置于同一服务器的情况,在生成用于训练车损识别模型的车损样本数据之后,还包括:
上述图像标注装置30,用于将生成的车损样本数据输入至基于有监督学习模式的机器学习模型;
利用机器学习方法并基于上述车损样本数据对上述机器学习模型进行训练,得到车损识别模型。
具体的，基于上述车损样本数据对基于有监督学习模式的机器学习模型中的模型参数进行更新，得到模型参数更新后的车损识别模型，进而，在获取到待识别的车损图像后，利用该车损识别模型对该待识别的车损图像进行损伤情况识别，根据确定出的针对车损图像的损伤情况，对车辆进行自动定损。
本说明书一个或多个实施例中,通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
其中，利用激光雷达探测技术得到的物理属性信息为各像素点的三维深度信息，基于该三维深度信息能够对损伤表面的变形程度进行评估，即能够对预设损伤区域的凹陷、破损的位置进行高精度识别，而利用红外线探测技术得到的物理属性信息为各像素点的表面热成像信息，由于不同材质的红外线热成像存在一定差异，因此，结合探测到的表面热成像信息可以确定预设损伤区域的表面材质分布，因而基于该表面热成像信息能够对损伤表面的剐蹭程度进行评估，即能够对预设损伤区域的剐蹭损伤的范围进行高精度识别；
进一步的，考虑到不同物理探测技术对预设损伤区域的损伤评估侧重点不同，因此，通过不同物理探测技术相结合的方式来对预设损伤区域的损伤情况进行评估，能够提高对损伤表面的损伤情况的评估准确度，上述物理探测装置包括：激光雷达装置和/或红外热成像装置；
上述激光雷达装置,用于利用激光光束对所述预设损伤区域进行扫描得到三维深度信息,并将所述三维深度信息传输至所述图像标注装置;
上述红外热成像装置,用于利用红外线对所述预设损伤区域进行扫描得到表面热成像信息,并将所述表面热成像信息传输至所述图像标注装置。
进一步的,考虑到拍摄条件可能会影响得到的车损图像和物理属性信息的质量,从而可能降低对预设损伤区域的损伤标注的准确度,因此,为了提高对预设损伤区域的损伤情况评估的精准度,在摄像装置采集车损图像以及物理探测装置采集物理属性信息的过程中,基于预设调节规则对拍摄条件进行调整,实现在不同拍摄条件下获取预设损伤区域的多张车损图像和多个物理属性信息,因此,所述系统还包括:可调节云台,其中,所述摄像装置和所述物理探测装置均设置于所述可调节云台上,且所述摄像装置与所述物理探测装置的相对位置保持不变;
上述可调节云台,用于调整所述摄像装置和所述物理探测装置的拍摄条件;
上述摄像装置,用于在不同拍摄条件下对目标车辆上预设损伤区域进行拍摄得到多张车损图像,并将所述多张车损图像传输至所述图像标注装置;
上述物理探测装置，用于利用物理探测方式在不同拍摄条件下对所述预设损伤区域进行扫描得到多份物理属性信息，并将所述多份物理属性信息传输至所述图像标注装置。
其中,所述系统还包括:光照调节装置;
上述光照调节装置,用于调整所述摄像装置所在的拍摄环境的光照参数。
其中,上述系统还包括:具备厘米级精确定位能力的定位装置;
上述定位装置,用于对所述摄像装置与所述目标车辆的相对位置进行定位。
本说明书一个或多个实施例中的图像标注系统，获取利用摄像装置拍摄得到的预设损伤区域的车损图像；以及，获取基于物理探测方式针对预设损伤区域进行扫描得到的物理属性信息；根据该物理属性信息对车损图像进行损伤标注，生成车损样本数据。通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合，自动生成训练模型所需标注数据，无需人工手动对车损图像进行损伤情况标注，还能够实现对车损图像进行像素级的损伤情况标注，提高了车损图像的标注效率和精度，从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据，以便训练得到识别精度更高的车损识别模型。
需要说明的是,本说明书中关于图像标注系统的实施例与本说明书中关于图像标注方法的实施例基于同一发明构思,因此该实施例的具体实施可以参见前述对应的图像标注方法的实施,重复之处不再赘述。
进一步地,对应上述图2至图8所示的方法,基于相同的技术构思,本说明书一个或多个实施例还提供了一种图像标注设备,该设备用于执行上述的图像标注方法,如图11所示。
图像标注设备可因配置或性能不同而产生比较大的差异，可以包括一个或一个以上的处理器1101和存储器1102，存储器1102中可以存储有一个或一个以上应用程序或数据。其中，存储器1102可以是短暂存储或持久存储。存储在存储器1102的应用程序可以包括一个或一个以上模块(图示未示出)，每个模块可以包括对图像标注设备中的一系列计算机可执行指令。更进一步地，处理器1101可以设置为与存储器1102通信，在图像标注设备上执行存储器1102中的一系列计算机可执行指令。图像标注设备还可以包括一个或一个以上电源1103，一个或一个以上有线或无线网络接口1104，一个或一个以上输入输出接口1105，一个或一个以上键盘1106等。
在一个具体的实施例中,图像标注设备包括有存储器,以及一个或一个以上的程序,其中一个或者一个以上程序存储于存储器中,且一个或者一个以上程序可以包括一个或一个以上模块,且每个模块可以包括对图像标注设备中的一系列计算机可执行指令,且经配置以由一个或者一个以上处理器执行该一个或者一个以上程序包含用于进行以下计算机可执行指令:
获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
本说明书一个或多个实施例中，通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合，自动生成训练模型所需标注数据，无需人工手动对车损图像进行损伤情况标注，还能够实现对车损图像进行像素级的损伤情况标注，提高了车损图像的标注效率和精度，从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据，以便训练得到识别精度更高的车损识别模型。
可选地,计算机可执行指令在被执行时,所述获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息,包括:
获取利用激光雷达装置针对所述预设损伤区域进行扫描得到的三维深度信息;
和/或,
获取利用红外热成像装置针对所述预设损伤区域进行扫描得到的表面热成像信息。
可选地,计算机可执行指令在被执行时,所述获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像,包括:
获取针对目标车辆上预设损伤区域的车损图像集合,其中,所述车损图像集合包括:利用摄像装置在不同拍摄条件下拍摄得到的多张车损图像;
对应的,所述获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息,包括:
获取针对所述预设损伤区域的物理属性信息集合,其中,所述物理属性信息集合包括:利用物理探测方式在不同拍摄条件下扫描得到的多个物理属性信息。
可选地,计算机可执行指令在被执行时,所述根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据,包括:
根据所述物理属性信息,确定所述车损图像中各像素点的损伤情况;
将所述各像素点的损伤情况和所述车损图像确定为用于训练车损识别模型的车损样本数据。
可选地,计算机可执行指令在被执行时,所述目标车辆上预设损伤区域的车损图像包括:在不同拍摄条件下拍摄得到的多张车损图像;
所述根据所述物理属性信息,确定所述车损图像中各像素点的损伤情况,包括:
针对每张所述车损图像,根据在该车损图像对应的拍摄条件下得到的所述物理属性信息,确定该车损图像中各像素点的损伤情况。
可选地,计算机可执行指令在被执行时,在生成用于训练车损识别模型的车损样本数据之后,还包括:
将所述车损样本数据输入至基于有监督学习模式的机器学习模型;
利用机器学习方法并基于所述车损样本数据对所述机器学习模型进行训练,得到车损识别模型。
可选地,计算机可执行指令在被执行时,所述拍摄条件包括:所述摄像装置的拍摄方位、所述摄像装置与目标车辆的相对位置、拍摄环境的光照参数、以及其他影响损伤区域视觉特征的现场环境因素中至少一种。
可选地,计算机可执行指令在被执行时,所述摄像装置与目标车辆的相对位置是基于具备厘米级精确定位能力的定位装置对所述目标车辆的移动进行控制得到的。
本说明书一个或多个实施例中的图像标注设备,获取利用摄像装置拍摄得到的预设损伤区域的车损图像;以及,获取基于物理探测方式针对预设损伤区域进行扫描得到的物理属性信息;根据该物理属性信息对车损图像进行损伤标注,生成车损样本数据。通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
进一步地,对应上述图2至图8所示的方法,基于相同的技术构思,本说明书一个或多个实施例还提供了一种存储介质,用于存储计算机可执行指令,一种具体的实施例中,该存储介质可以为U盘、光盘、硬盘等,该存储介质存储的计算机可执行指令在被处理器执行时,能实现以下流程:
获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
本说明书一个或多个实施例中,通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息,包括:
获取利用激光雷达装置针对所述预设损伤区域进行扫描得到的三维深度信息;
和/或,
获取利用红外热成像装置针对所述预设损伤区域进行扫描得到的表面热成像信息。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像,包括:
获取针对目标车辆上预设损伤区域的车损图像集合,其中,所述车损图像集合包括:利用摄像装置在不同拍摄条件下拍摄得到的多张车损图像;
对应的,所述获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息,包括:
获取针对所述预设损伤区域的物理属性信息集合,其中,所述物理属性信息集合包括:利用物理探测方式在不同拍摄条件下扫描得到的多个物理属性信息。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据,包括:
根据所述物理属性信息,确定所述车损图像中各像素点的损伤情况;
将所述各像素点的损伤情况和所述车损图像确定为用于训练车损识别模型的车损样本数据。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述目标车辆上预设损伤区域的车损图像包括:在不同拍摄条件下拍摄得到的多张车损图像;
所述根据所述物理属性信息,确定所述车损图像中各像素点的损伤情况,包括:
针对每张所述车损图像,根据在该车损图像对应的拍摄条件下得到的所述物理属性信息,确定该车损图像中各像素点的损伤情况。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,在生成用于训练车损识别模型的车损样本数据之后,还包括:
将所述车损样本数据输入至基于有监督学习模式的机器学习模型;
利用机器学习方法并基于所述车损样本数据对所述机器学习模型进行训练,得到车损识别模型。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述拍摄条件包括:所述摄像装置的拍摄方位、所述摄像装置与目标车辆的相对位置、拍摄环境的光照参数、以及其他影响损伤区域视觉特征的现场环境因素中至少一种。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述摄像装置与目标车辆的相对位置是基于具备厘米级精确定位能力的定位装置对所述目标车辆的移动进行控制得到的。
本说明书一个或多个实施例中的存储介质存储的计算机可执行指令在被处理器执行时,获取利用摄像装置拍摄得到的预设损伤区域的车损图像;以及,获取基于物理探测方式针对预设损伤区域进行扫描得到的物理属性信息;根据该物理属性信息对车损图像进行损伤标注,生成车损样本数据。通过将利用摄像装置得到的视觉图像和基于物理探测方式得到的物理属性信息相结合,自动生成训练模型所需标注数据,无需人工手动对车损图像进行损伤情况标注,还能够实现对车损图像进行像素级的损伤情况标注,提高了车损图像的标注效率和精度,从而能够为基于深度学习进行模型训练的过程提供海量、精准的标注样本数据,以便训练得到识别精度更高的车损识别模型。
在20世纪90年代，对于一个技术的改进可以很明显地区分是硬件上的改进(例如，对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。然而，随着技术的发展，当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此，不能说一个方法流程的改进就不能用硬件实体模块来实现。例如，可编程逻辑器件(Programmable Logic Device，PLD)(例如现场可编程门阵列(Field Programmable Gate Array，FPGA))就是这样一种集成电路，其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统"集成"在一片PLD上，而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且，如今，取代手工地制作集成电路芯片，这种编程也多半改用"逻辑编译器(logic compiler)"软件来实现，它与程序开发撰写时所用的软件编译器相类似，而要编译之前的原始代码也得用特定的编程语言来撰写，此称之为硬件描述语言(Hardware Description Language，HDL)，而HDL也并非仅有一种，而是有许多种，如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等，目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚，只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中，就可以很容易得到实现该逻辑方法流程的硬件电路。
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC 625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机。具体的,计算机例如可以为个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本说明书一个或多个时可以把各单元的功能在同一个或多个软件和/或硬件中实现。
本领域内的技术人员应明白,本说明书一个或多个的实施例可提供为方法、系统、或计算机程序产品。因此,本说明书一个或多个可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本说明书一个或多个可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本说明书一个或多个是参照根据本说明书一个或多个实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
本领域技术人员应明白,本说明书一个或多个的实施例可提供为方法、系统或计算机程序产品。因此,本说明书一个或多个可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本说明书一个或多个可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本说明书一个或多个可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本说明书一个或多个,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本说明书一个或多个的实施例而已,并不用于限制本说明书一个或多个。对于本领域技术人员来说,本说明书一个或多个可以有各种更改和变化。凡在本说明书一个或多个的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本说明书一个或多个的权利要求范围之内。

Claims (23)

  1. 一种图像标注方法,其特征在于,包括:
    获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
    获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
    根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
  2. 根据权利要求1所述的方法,其特征在于,所述获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息,包括:
    获取利用激光雷达装置针对所述预设损伤区域进行扫描得到的三维深度信息;
    和/或,
    获取利用红外热成像装置针对所述预设损伤区域进行扫描得到的表面热成像信息。
  3. 根据权利要求1所述的方法,其特征在于,所述获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像,包括:
    获取针对目标车辆上预设损伤区域的车损图像集合,其中,所述车损图像集合包括:利用摄像装置在不同拍摄条件下拍摄得到的多张车损图像;
    对应的,所述获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息,包括:
    获取针对所述预设损伤区域的物理属性信息集合,其中,所述物理属性信息集合包括:利用物理探测方式在不同拍摄条件下扫描得到的多个物理属性信息。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据,包括:
    根据所述物理属性信息,确定所述车损图像中各像素点的损伤情况;
    将所述各像素点的损伤情况和所述车损图像确定为用于训练车损识别模型的车损样本数据。
  5. 根据权利要求4所述的方法,其特征在于,所述目标车辆上预设损伤区域的车损图像包括:在不同拍摄条件下拍摄得到的多张车损图像;
    所述根据所述物理属性信息,确定所述车损图像中各像素点的损伤情况,包括:
    针对每张所述车损图像,根据在该车损图像对应的拍摄条件下得到的所述物理属性信息,确定该车损图像中各像素点的损伤情况。
  6. 根据权利要求1所述的方法,其特征在于,在生成用于训练车损识别模型的车损样本数据之后,还包括:
    将所述车损样本数据输入至基于有监督学习模式的机器学习模型;
    利用机器学习方法并基于所述车损样本数据对所述机器学习模型进行训练,得到车损识别模型。
  7. 根据权利要求3所述的方法,其特征在于,所述拍摄条件包括:所述摄像装置的拍摄方位、所述摄像装置与目标车辆的相对位置、拍摄环境的光照参数、以及其他影响损伤区域视觉特征的现场环境因素中至少一种。
  8. 根据权利要求7所述的方法,其特征在于,所述摄像装置与目标车辆的相对位置是基于具备厘米级精确定位能力的定位装置对所述目标车辆的移动进行控制得到的。
  9. 一种图像标注装置,其特征在于,包括:
    第一获取模块,用于获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
    第二获取模块,用于获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
    图像标注模块,用于根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
  10. 根据权利要求9所述的装置,其特征在于,所述第二获取模块,具体用于:
    获取利用激光雷达装置针对所述预设损伤区域进行扫描得到的三维深度信息;
    和/或,
    获取利用红外热成像装置针对所述预设损伤区域进行扫描得到的表面热成像信息。
  11. 根据权利要求9所述的装置,其特征在于,所述第一获取模块,具体用于:
    获取针对目标车辆上预设损伤区域的车损图像集合,其中,所述车损图像集合包括:利用摄像装置在不同拍摄条件下拍摄得到的多张车损图像;
    对应的,所述第二获取模块,具体用于:
    获取针对所述预设损伤区域的物理属性信息集合,其中,所述物理属性信息集合包括:利用物理探测方式在不同拍摄条件下扫描得到的多个物理属性信息。
  12. 根据权利要求9所述的装置,其特征在于,所述图像标注模块,具体用于:
    根据所述物理属性信息,确定所述车损图像中各像素点的损伤情况;
    将所述各像素点的损伤情况和所述车损图像确定为用于训练车损识别模型的车损样本数据。
  13. 根据权利要求12所述的装置,其特征在于,所述目标车辆上预设损伤区域的车损图像包括:在不同拍摄条件下拍摄得到的多张车损图像;
    所述图像标注模块,进一步具体用于:
    针对每张所述车损图像,根据在该车损图像对应的拍摄条件下得到的所述物理属性信息,确定该车损图像中各像素点的损伤情况。
  14. 根据权利要求9所述的装置,其特征在于,所述装置还包括模型训练模块,用于:
    在生成用于训练车损识别模型的车损样本数据之后,将所述车损样本数据输入至基于有监督学习模式的机器学习模型;
    利用机器学习方法并基于所述车损样本数据对所述机器学习模型进行训练,得到车损识别模型。
  15. 根据权利要求11所述的装置,其特征在于,所述拍摄条件包括:所述摄像装置的拍摄方位、所述摄像装置与目标车辆的相对位置、拍摄环境的光照参数、以及其他影响损伤区域视觉特征的现场环境因素中至少一种。
  16. 根据权利要求15所述的装置,其特征在于,所述摄像装置与目标车辆的相对位置是基于具备厘米级精确定位能力的定位装置对所述目标车辆的移动进行控制得到的。
  17. 一种图像标注系统,其特征在于,所述系统包括:摄像装置、物理探测装置和如权利要求9至16任一项所述的图像标注装置;
    其中,所述摄像装置和所述物理探测装置均与所述图像标注装置相连接;
    所述摄像装置，用于对目标车辆上预设损伤区域进行拍摄得到车损图像，并将所述车损图像传输至所述图像标注装置；
    所述物理探测装置，用于基于物理探测方式对所述预设损伤区域进行扫描得到物理属性信息，并将所述物理属性信息传输至所述图像标注装置；
    所述图像标注装置，用于接收所述车损图像和所述物理属性信息，并根据所述车损图像和所述物理属性信息生成用于训练车损识别模型的车损样本数据。
  18. 根据权利要求17所述的系统，其特征在于，所述物理探测装置包括：激光雷达装置和/或红外热成像装置；
    其中,所述激光雷达装置,用于利用激光光束对所述预设损伤区域进行扫描得到三维深度信息,并将所述三维深度信息传输至所述图像标注装置;
    所述红外热成像装置,用于利用红外线对所述预设损伤区域进行扫描得到表面热成像信息,并将所述表面热成像信息传输至所述图像标注装置。
  19. 根据权利要求17所述的系统,其特征在于,所述系统还包括:可调节云台;
    其中,所述摄像装置和所述物理探测装置均设置于所述可调节云台上,且所述摄像装置与所述物理探测装置的相对位置保持不变;
    所述可调节云台,用于调整所述摄像装置和所述物理探测装置的拍摄条件;
    所述摄像装置,用于在不同拍摄条件下对目标车辆上预设损伤区域进行拍摄得到多张车损图像,并将所述多张车损图像传输至所述图像标注装置;
    所述物理探测装置，用于利用物理探测方式在不同拍摄条件下对所述预设损伤区域进行扫描得到多份物理属性信息，并将所述多份物理属性信息传输至所述图像标注装置。
  20. 根据权利要求19所述的系统,其特征在于,所述系统还包括:光照调节装置;
    所述光照调节装置,用于调整所述摄像装置所在的拍摄环境的光照参数。
  21. 根据权利要求19所述的系统,其特征在于,所述系统还包括:具备厘米级精确定位能力的定位装置;
    所述定位装置,用于对所述摄像装置与所述目标车辆的相对位置进行定位。
  22. 一种图像标注设备,其特征在于,包括:
    处理器;以及
    被安排成存储计算机可执行指令的存储器,所述可执行指令在被执行时使所述处理器:
    获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
    获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
    根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
  23. 一种存储介质,用于存储计算机可执行指令,其特征在于,所述可执行指令在被执行时实现以下流程:
    获取利用摄像装置拍摄得到的目标车辆上预设损伤区域的车损图像;以及,
    获取基于物理探测方式针对所述预设损伤区域进行扫描得到的物理属性信息;
    根据所述物理属性信息对所述车损图像进行损伤标注,生成用于训练车损识别模型的车损样本数据。
PCT/CN2019/103592 2018-10-31 2019-08-30 一种图像标注方法、装置及系统 WO2020088076A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811282580.2A CN109615649A (zh) 2018-10-31 2018-10-31 一种图像标注方法、装置及系统
CN201811282580.2 2018-10-31

Publications (1)

Publication Number Publication Date
WO2020088076A1 true WO2020088076A1 (zh) 2020-05-07

Family

ID=66002877

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103592 WO2020088076A1 (zh) 2018-10-31 2019-08-30 一种图像标注方法、装置及系统

Country Status (3)

Country Link
CN (1) CN109615649A (zh)
TW (1) TW202018664A (zh)
WO (1) WO2020088076A1 (zh)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615649A (zh) * 2018-10-31 2019-04-12 阿里巴巴集团控股有限公司 一种图像标注方法、装置及系统
CN110148013B (zh) * 2019-04-22 2023-01-24 创新先进技术有限公司 一种用户标签分布预测方法、装置及系统
CN110263190B (zh) * 2019-05-06 2023-10-20 菜鸟智能物流控股有限公司 一种数据处理方法、装置、设备和机器可读介质
US10783643B1 (en) 2019-05-27 2020-09-22 Alibaba Group Holding Limited Segmentation-based damage detection
CN110264444B (zh) * 2019-05-27 2020-07-17 阿里巴巴集团控股有限公司 基于弱分割的损伤检测方法及装置
CN110490960B (zh) * 2019-07-11 2023-04-07 创新先进技术有限公司 一种合成图像生成方法及装置
CN112307236A (zh) * 2019-07-24 2021-02-02 阿里巴巴集团控股有限公司 一种数据标注方法及其装置
CN112434548B (zh) * 2019-08-26 2024-06-04 杭州海康威视数字技术股份有限公司 一种视频标注方法及装置
CN110503705B (zh) * 2019-08-29 2023-10-17 上海鹰瞳医疗科技有限公司 图像标注方法和设备
CN112528710B (zh) * 2019-09-19 2024-04-09 上海海拉电子有限公司 一种路面探测的方法、装置、电子设备及存储介质
CN114972810B (zh) * 2022-03-28 2023-11-28 慧之安信息技术股份有限公司 基于深度学习的图像采集标注的方法
CN114965487B (zh) * 2022-06-10 2024-06-14 招商局重庆交通科研设计院有限公司 一种隧道典型病害自动监测设备的标定方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3038094A1 (fr) * 2015-06-24 2016-12-30 Sidexa Gestion documentaire pour la reparation automobile
CN106780048A (zh) * 2016-11-28 2017-05-31 中国平安财产保险股份有限公司 一种智能车险的自助理赔方法、自助理赔装置及系统
CN106874840A (zh) * 2016-12-30 2017-06-20 东软集团股份有限公司 车辆信息识别方法及装置
CN107194398A (zh) * 2017-05-10 2017-09-22 平安科技(深圳)有限公司 车损部位的识别方法及系统
CN108550080A (zh) * 2018-03-16 2018-09-18 阿里巴巴集团控股有限公司 物品定损方法及装置
CN108710881A (zh) * 2018-05-23 2018-10-26 中国民用航空总局第二研究所 神经网络模型、候选目标区域生成方法、模型训练方法
CN109615649A (zh) * 2018-10-31 2019-04-12 阿里巴巴集团控股有限公司 一种图像标注方法、装置及系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914692B (zh) * 2017-04-28 2023-07-14 创新先进技术有限公司 车辆定损图像获取方法及装置
CN108171708B (zh) * 2018-01-24 2021-04-30 北京威远图易数字科技有限公司 车辆定损方法与系统


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523615A (zh) * 2020-05-08 2020-08-11 深源恒际科技有限公司 Pipeline closed-loop process method for professional damage annotation of vehicle appearance
CN111523615B (zh) * 2020-05-08 2024-03-26 北京深智恒际科技有限公司 Pipeline closed-loop process method for professional damage annotation of vehicle appearance
CN111724371A (zh) * 2020-06-19 2020-09-29 联想(北京)有限公司 Data processing method, apparatus and electronic device
CN111724371B (zh) * 2020-06-19 2023-05-23 联想(北京)有限公司 Data processing method, apparatus and electronic device
CN111767862A (zh) * 2020-06-30 2020-10-13 广州文远知行科技有限公司 Vehicle annotation method, apparatus, computer device and readable storage medium
CN112712121A (zh) * 2020-12-30 2021-04-27 浙江智慧视频安防创新中心有限公司 Deep-neural-network-based image recognition model training method, apparatus and storage medium
CN112712121B (zh) * 2020-12-30 2023-12-05 浙江智慧视频安防创新中心有限公司 Image recognition model training method, apparatus and storage medium
CN113706552A (zh) * 2021-07-27 2021-11-26 北京三快在线科技有限公司 Method and apparatus for generating semantic segmentation annotation data for laser reflectance base maps
CN113658345A (zh) * 2021-08-18 2021-11-16 杭州海康威视数字技术股份有限公司 Sample annotation method and apparatus
CN114140430A (zh) * 2021-11-30 2022-03-04 北京比特易湃信息技术有限公司 Deep-learning-based vehicle damage reporting method

Also Published As

Publication number Publication date
CN109615649A (zh) 2019-04-12
TW202018664A (zh) 2020-05-16

Similar Documents

Publication Publication Date Title
WO2020088076A1 (zh) Image annotation method, apparatus and system
Choi et al. KAIST multi-spectral day/night data set for autonomous and assisted driving
CN108764187B (zh) Method, apparatus, device, storage medium and acquisition entity for extracting lane lines
WO2022083402A1 (zh) Obstacle detection method and apparatus, computer device and storage medium
US20210058608A1 (en) Method and apparatus for generating three-dimensional (3d) road model
WO2022078467A1 (zh) Automatic robot recharging method and apparatus, robot and storage medium
CN108345831A (zh) Method, apparatus and electronic device for road image segmentation based on point cloud data
TW201837786A (zh) Image-based vehicle damage assessment method, apparatus, electronic device and system
WO2016184152A1 (zh) Measurement method, apparatus, mobile terminal and storage medium
US20230177724A1 (en) Vehicle to infrastructure extrinsic calibration system and method
US20200175257A1 (en) Method and device for face selection, recognition and comparison
WO2019001001A1 (zh) Obstacle information acquisition apparatus and method
CN101907490A (zh) Method for measuring micro light spot intensity distribution based on two-dimensional subdivision
CN111337010B (zh) Positioning method and positioning apparatus for movable device, and electronic device
CN105025219A (zh) Image acquisition method
EP4250245A1 (en) System and method for determining a viewpoint of a traffic camera
CN106442539B (zh) Method for measuring workpiece surface defects using image information
US20230415786A1 (en) System and method for localization of anomalous phenomena in assets
CN115205806A (zh) Method and apparatus for generating an object detection model, and autonomous driving vehicle
WO2022083529A1 (zh) Data processing method and apparatus
CN114997264A (zh) Training data generation, model training, and detection methods, apparatuses and electronic device
CN103024259A (zh) Imaging device and control method of imaging device
CN112364793A (zh) Target detection and fusion method for a vehicle environment with long- and short-focal-length multiple cameras
Anand et al. BEV Approach Based Efficient Object Detection using YoloV4 for LiDAR Point Cloud
Ahrnbom et al. Seg2Pose: Pose Estimations from Instance Segmentation Masks in One or Multiple Views for Traffic Applications.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19880316

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19880316

Country of ref document: EP

Kind code of ref document: A1