WO2019192257A1 - 检测物品损伤的方法和装置、物品损伤检测器 (Method and Apparatus for Detecting Article Damage, and Article Damage Detector) - Google Patents


Info

Publication number: WO2019192257A1 (PCT/CN2019/073837)
Authority: WO (WIPO, PCT)
Prior art keywords: model, sub, damage, module, image
Application number
PCT/CN2019/073837
Other languages: English (en), French (fr)
Inventors: 刘永超, 章海涛, 郭玉锋
Original Assignee: 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Priority to SG11202005454UA
Priority to EP19781812.3A (EP3716206B1)
Publication of WO2019192257A1
Priority to US16/888,568 (US10929717B2)

Classifications

    • G06T 7/0006: Industrial image inspection using a design-rule based approach
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06F 18/2155: Generating training patterns; bootstrap methods, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06F 18/2413: Classification techniques relating to the classification model, based on distances to training or reference patterns
    • G06F 18/253: Fusion techniques of extracted features
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/0004: Industrial image inspection
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30108: Industrial image inspection
    • G06V 2201/06: Recognition of objects for industrial automation
    • G06V 2201/07: Target detection
    • Y02W 90/00: Enabling technologies or technologies with a potential or indirect contribution to greenhouse gas [GHG] emissions mitigation

Definitions

  • the present specification relates to the field of data processing technologies, and in particular, to a method and apparatus for detecting damage to an article, and an article damage detector.
  • the present specification provides a method for detecting damage to an article, including:
  • the detection model includes a first sub-model and a second sub-model: the first sub-model identifies the features of each image, and the feature processing result of each image is input into the second sub-model; the second sub-model performs time-series correlation processing on the feature processing results to obtain the damage detection result. The first sub-model and the second sub-model are jointly trained using samples marked with item damage.
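The cascade described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: a single linear layer stands in for the deep CNN first sub-model, a vanilla recurrent cell stands in for the LSTM second sub-model, and all dimensions and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 4 images per sequence, 64-dim image vectors,
# 16-dim feature results, 8-dim recurrent state, 3 damage classes.
T, D_IN, D_FEAT, D_HID, N_CLASSES = 4, 64, 16, 8, 3

# First sub-model (stand-in for a deep CNN): one linear layer + ReLU.
W1 = rng.normal(0, 0.1, (D_IN, D_FEAT))

def first_submodel(image_vec):
    """Produce the 'feature processing result' for one image."""
    return np.maximum(W1.T @ image_vec, 0.0)

# Second sub-model (stand-in for an RNN/LSTM): a vanilla recurrent cell
# that correlates the per-image results across the time series.
Wx = rng.normal(0, 0.1, (D_FEAT, D_HID))
Wh = rng.normal(0, 0.1, (D_HID, D_HID))
Wo = rng.normal(0, 0.1, (D_HID, N_CLASSES))

def second_submodel(feature_seq):
    h = np.zeros(D_HID)
    for f in feature_seq:           # time-series correlation, step by step
        h = np.tanh(Wx.T @ f + Wh.T @ h)
    logits = Wo.T @ h               # one score per damage class
    return 1.0 / (1.0 + np.exp(-logits))   # per-class damage probability

def detect(images):
    """Full cascade: per-image features, then time-series correlation."""
    return second_submodel([first_submodel(x) for x in images])

images = rng.normal(size=(T, D_IN))    # stand-in for time-ordered angled shots
probs = detect(images)
print(probs.shape)
```

The design point the sketch shows is the interface between the sub-models: the first produces one feature result per image, and the second consumes them in time order to emit a single result for the whole sequence.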
  • the specification also provides a device for detecting damage to an article, comprising:
  • An image sequence acquiring unit configured to acquire at least two images having a time series relationship and reflecting the detected object at different angles;
  • a detection model application unit configured to input the image into the detection model according to time series to obtain a damage detection result;
  • the detection model includes a first sub-model and a second sub-model, the first sub-model identifying characteristics of each image,
  • the feature processing result of each image is input into the second sub-model, and the second sub-model performs time-series correlation processing on the feature processing result to obtain the damage detection result;
  • the first sub-model and the second sub-model are jointly trained using samples marked with item damage.
  • a computer device provided by the present specification includes a memory and a processor; the memory stores a computer program executable by the processor, and when the processor runs the computer program, the steps of the method for detecting damage to an article are performed.
  • the present specification provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps described above in the method of detecting damage to an item.
  • the specification also provides an item damage detector comprising:
  • a shooting module configured to generate, according to shooting instructions from the calculation and control module, at least two images of the detected item having a time-series relationship;
  • a motion realization module configured to implement, according to movement instructions from the calculation and control module, relative motion between the camera of the shooting module and the detected item;
  • a calculation and control module configured to cause, through movement instructions and shooting instructions, the shooting module to generate at least two images that have a time-series relationship and reflect the detected item at different angles, and to acquire a damage detection result based on those images.
  • in the embodiments of the present specification, images that have a time-series relationship and reflect the detected item from different angles are input into the detection model; the first sub-model identifies the features of each image and passes the feature processing results to the second sub-model, which correlates the per-image results according to the time series to obtain the damage detection result. Because images taken from different angles reflect the real state of the item more fully, and the time-series correlation of the per-image results forms a more uniform and complete detection result, the embodiments of the present specification can greatly improve the accuracy of damage detection.
  • the calculation and control module causes the motion realization module to perform the relative motion between the camera and the detected item through movement instructions, and causes the shooting module to generate two or more images of the detected item with a time-series relationship through shooting instructions; the damage detection result is then obtained from the generated images by the method or apparatus for detecting article damage in the present specification, which makes damage detection more convenient and greatly improves its accuracy.
  • FIG. 1 is a flow chart of a method for detecting damage of an article in an embodiment of the present specification
  • FIG. 2 is a hardware configuration diagram of an apparatus for operating a method for detecting damage of an article in the embodiment of the present specification, or a device for detecting a damage of an article in the embodiment of the present specification;
  • FIG. 3 is a logical structural diagram of an apparatus for detecting damage of an article in the embodiment of the present specification
  • FIG. 4 is a schematic structural view of an article damage detector in the embodiment of the present specification.
  • Fig. 5 is a schematic structural view of a detection model in an application example of the present specification.
  • the embodiments of the present specification propose a new method for detecting article damage, constructing a detection model from a cascaded first sub-model and second sub-model. The first sub-model takes as input images of the detected item photographed from different angles and generated in time series, obtains a feature processing result for each image, and outputs it to the second sub-model; the second sub-model performs time-series correlation of the per-image feature processing results to obtain the damage detection result. Damage to the detected item can thus be discovered more comprehensively from different angles, and the damage detected in each image can be integrated into a uniform detection result through time-series processing, which greatly improves the accuracy of damage detection.
  • the method embodiments of the present specification for detecting article damage can run on any device having computing and storage capabilities, such as a mobile phone, tablet computer, PC (Personal Computer), notebook, or server; the functions in the method embodiments can also be implemented by logical nodes running on two or more devices.
  • a machine learning model, referred to in the present specification as the detection model, is used to perform article damage detection.
  • the detection model includes two cascaded sub-models: the first sub-model identifies the features of each image and generates a feature processing result for each image; the feature processing results of the images are input into the second sub-model according to the time series, and the second sub-model performs time-series correlation processing on them to obtain the damage detection result.
  • the first sub-model can be any machine learning algorithm; algorithms that excel at feature extraction and processing, such as a deep convolutional neural network, often achieve better results. The second sub-model can be any machine learning algorithm capable of processing time-series data, such as an RNN (Recurrent Neural Network) or an LSTM (Long Short-Term Memory) network.
  • the detection model in the embodiments of the present specification is a supervised learning model, and the entire detection model is trained using samples marked with item damage.
  • joint training is performed on the first sub-model and the second sub-model: the training error of the overall model is fed back simultaneously to both sub-models for parameter updating, so that the parameters of the two sub-models are optimized together and the overall predictive ability of the detection model is optimal.
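Joint training, as opposed to training the two sub-models separately, means the overall error drives updates to both parameter sets at once. A toy sketch with two linear stages and hand-computed gradients (all names and dimensions are illustrative; a real implementation would use an autodiff framework):

```python
import numpy as np

rng = np.random.default_rng(1)
T, D, K, C = 5, 8, 4, 2          # images per sample, input, feature, class dims

# Linear stand-ins for the two sub-models (first: per-image, second: pooled).
A = rng.normal(0, 0.1, (K, D))   # "first sub-model" parameters
B = rng.normal(0, 0.1, (C, K))   # "second sub-model" parameters

x = rng.normal(size=(T, D))      # one training sample: T time-ordered images
label = np.array([1.0, 0.0])     # marked damage classes (sample annotation)

def forward():
    feats = x @ A.T              # first sub-model: per-image feature results
    m = feats.mean(axis=0)       # second sub-model: correlate across time
    y = B @ m                    # overall prediction
    return m, y

losses = []
lr = 0.05
for _ in range(20):
    m, y = forward()
    losses.append(float(np.sum((y - label) ** 2)))
    # Joint training: the overall error is propagated back into BOTH
    # sub-models, and both parameter sets are updated simultaneously.
    g_y = 2.0 * (y - label)              # dL/dy
    g_B = np.outer(g_y, m)               # gradient for the second sub-model
    g_m = B.T @ g_y                      # error fed back through the cascade
    g_A = np.outer(g_m, x.mean(axis=0))  # gradient for the first sub-model
    B -= lr * g_B
    A -= lr * g_A

print(losses[0], losses[-1])
```

The essential property is visible in the loop: a single loss value produces gradients for both `A` and `B` in the same step, so the feature extractor adapts to what the time-series stage needs and vice versa.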
  • each sample includes at least two images of the item with time series.
  • the form of the damage detection result is determined according to the requirements of the actual application scenario, and is not limited.
  • the damage detection result may be a classification result indicating whether there is damage on the detected item, the degree of a certain type of damage on the detected item, a classification result over two or more damage types, or the degree of each of two or more damage types on the detected item.
  • the type of damage may include: scratches, breakage, stains, foreign matter attachment, and the like.
  • the sample data is marked in the form chosen for the damage detection result, and the trained detection model then produces damage detection results in that form.
  • the feature processing result output by the first sub-model to the second sub-model includes the information used by the second sub-model to generate the damage detection result.
  • the feature processing result output by the first sub-model may be the damage detection result of each single image, or any other tensor capable of reflecting the damage detection information of a single image; this is not limited.
  • for example, the feature processing result may be a per-damage-type classification result for a single image of the detected item, generated by the first sub-model through feature extraction and damage discovery on that image and fusion of their results; or it may be a tensor carrying the detection information of each damage type on a single image, in which case the second sub-model obtains the classification result of the detected item for each damage type after performing time-series correlation over the detection information of two or more images.
  • when the first sub-model of the detection model uses a deep convolutional neural network algorithm, the output of the last convolutional or pooling layer (that is, the layer before the fully connected layers) may be used as the feature processing result; alternatively, the output of a fully connected layer, or the output of the output prediction layer (Output Predictions), may be used as the feature processing result.
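The three tap points can be illustrated with a tiny stand-in network; the layer sizes and weights here are arbitrary, and the point is only that any of the intermediate activations can serve as the feature processing result:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny stand-in network: conv -> pool -> fully connected -> predictions.
# (A real deep CNN stacks many such layers; one of each is enough to show
# where the "feature processing result" can be tapped.)
conv_k = rng.normal(0, 0.3, 3)          # 1-D conv kernel, width 3
W_fc = rng.normal(0, 0.1, (7, 5))       # fully connected layer
W_out = rng.normal(0, 0.1, (5, 3))      # output prediction layer, 3 classes

def forward(signal, tap="pool"):
    """Run the net and return the activation named by `tap`."""
    conv = np.convolve(signal, conv_k, mode="valid")       # last conv layer
    pool = np.maximum(conv, 0)[: 2 * (len(conv) // 2)]
    pool = pool.reshape(-1, 2).max(axis=1)                 # last pooling layer
    fc = np.tanh(pool @ W_fc)                              # fully connected
    exp = np.exp(fc @ W_out)
    pred = exp / exp.sum()                                 # output predictions
    return {"conv": conv, "pool": pool, "fc": fc, "pred": pred}[tap]

signal = rng.normal(size=16)
print(forward(signal, "pool").shape)   # tensor before the FC layers
print(forward(signal, "fc").shape)     # FC-layer output
print(forward(signal, "pred").shape)   # per-class predictions
```

Tapping earlier (pool) hands the second sub-model a richer but less interpreted tensor; tapping at the prediction layer hands it per-image damage scores, matching the two alternatives the text describes.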
  • Step 110 Acquire at least two images having a time series relationship and reflecting the detected items at different angles.
  • the at least two images that have a time-series relationship and reflect the detected item at different angles may be photos taken continuously of the detected item while it is moved, a video recorded of the detected item while it is moved (a video consists of multiple images arranged in chronological order), photos of the detected item taken continuously by a moving camera, a video of the detected item recorded by a moving camera, other photos or videos shot while the shooting angle is continuously changed, or a combination of the above.
  • the at least two images having a time-series relationship and reflecting the detected item at different angles may be generated automatically, for example by the article damage detector in the present specification, or generated by a person with a hand-held photographing device (such as a mobile phone); this is not limited.
  • the device running the method for detecting article damage in this embodiment may generate the above images itself, receive the images from other devices, or read the images from a predetermined storage location; this is not limited.
  • for example, a mobile phone may use its camera to take photos or record a video to generate the images; as another example, the method in this embodiment may run on the server of an App (application), and a client device with a camera uploads multiple photos or a recorded video to the server.
  • Step 120 Input an image into the detection model according to the time series to obtain a damage detection result.
  • the trained detection model is used here: the acquired images are input into the detection model according to the time series, and the damage detection result is obtained.
  • some damage cannot be captured by the camera at certain angles, so shooting at different angles reduces the chance that damage on the item is omitted from the images.
  • after the damage detection result is obtained, a damage detection report can be generated automatically and a valuation of the detected item can be estimated. The form of the damage detection report, the specific manner of generating it, and the specific algorithm used for the valuation can be implemented with reference to the prior art and are not described again.
  • in this embodiment, the detection model is constructed by cascading the first sub-model and the second sub-model; images that have a time-series relationship and reflect the detected item from different angles are input into the detection model, the first sub-model outputs the feature processing result of each image to the second sub-model, and the second sub-model performs time-series correlation of the per-image feature processing results to obtain the damage detection result. Images from different angles thus reveal the damage on the detected item more comprehensively, and the damage found in each image is integrated, through time-series processing, into a uniform and more accurate detection result.
  • the embodiment of the present specification also provides a device for detecting damage of an article.
  • the device can be implemented by software, or can be implemented by hardware or a combination of hardware and software.
  • taking software implementation as an example, the apparatus is formed by the CPU (Central Processing Unit) of the device where it is located reading the corresponding computer program into memory and running it. In terms of hardware, the device where the apparatus is located typically also includes other hardware such as chips for transmitting and receiving wireless signals, and/or boards for implementing network communication.
  • FIG. 3 is a schematic diagram of the apparatus for detecting article damage in an embodiment of the present specification, which includes an image sequence acquiring unit and a detection model application unit, wherein: the image sequence acquiring unit is configured to acquire at least two images that have a time-series relationship and reflect the detected item at different angles;
  • the detection model application unit is configured to input the images into the detection model according to the time series to obtain the damage detection result;
  • the detection model includes a first sub-model and a second sub-model; the first sub-model identifies the features of each image, the feature processing result of each image is input into the second sub-model, and the second sub-model performs time-series correlation processing on the feature processing results to obtain the damage detection result;
  • the first sub-model and the second sub-model are jointly trained using samples marked with item damage.
  • the first sub-model adopts a deep convolutional neural network algorithm, and the second sub-model adopts a Long Short-Term Memory (LSTM) network algorithm.
  • the second sub-model may adopt an LSTM algorithm based on an Attention mechanism.
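An attention mechanism over the recurrent outputs can be sketched as a learned softmax weighting of the per-step states; the scoring vector here is random for illustration, where a trained model would learn it:

```python
import numpy as np

rng = np.random.default_rng(3)
T, H = 6, 8                        # time steps (images), hidden size

hidden = rng.normal(size=(T, H))   # per-step outputs of the recurrent model
w_att = rng.normal(size=H)         # scoring vector (learned in a real model)

def attend(states, w):
    """Weight each time step by its relevance before the final decision."""
    scores = states @ w                        # one scalar score per step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over time steps
    context = weights @ states                 # weighted sum of states
    return context, weights

context, weights = attend(hidden, w_att)
print(weights.round(3), context.shape)
```

Intuitively, attention lets the model put more weight on the frames (angles) where a given damage is clearly visible, instead of treating every time step equally.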
  • the at least two images having a time-series relationship and reflecting the detected item at different angles include at least one of the following: photos taken continuously of the detected item while it is moved, a video recorded of the detected item while it is moved, photos of the detected item taken continuously by a moving camera, and a video of the detected item recorded by a moving camera.
  • the damage detection result includes: classification results for a plurality of damage types.
  • the feature processing result of each image includes: a per-damage-type classification result for a single image of the detected item, generated by the first sub-model through feature extraction, damage discovery, and feature fusion.
  • Embodiments of the present specification provide a computer device including a memory and a processor.
  • the memory stores a computer program executable by the processor; when the processor runs the stored computer program, the steps of the method for detecting article damage in the embodiments of the present specification are performed. A detailed description of the individual steps can be found in the preceding sections and is not repeated.
  • Embodiments of the present specification provide a computer readable storage medium having stored thereon computer programs that, when executed by a processor, perform various steps of a method of detecting damage to an item in an embodiment of the present specification. A detailed description of the various steps of the method for detecting damage to an item can be found in the previous section and will not be repeated.
  • the embodiments of the present specification also propose a new article damage detector: the calculation and control module instructs the shooting module to continuously photograph or record the detected item while the relative motion between the camera and the detected item is performed, thereby conveniently and quickly generating multiple images of the detected item that have a time-series relationship and a changing angle; damage detection is then performed on these images by the method or apparatus for detecting article damage in the embodiments of the present specification, yielding a more accurate detection result.
  • a structure of the article damage detector in the embodiment of the present specification is as shown in FIG. 4, and includes a calculation and control module, a motion realization module, and a photographing module.
  • the calculation and control module includes a CPU, memory, storage, a communication sub-module, and the like; the CPU reads the program from storage into memory and generates movement instructions and shooting instructions, and the communication sub-module sends the movement instructions to the motion realization module and the shooting instructions to the shooting module.
  • the photographing module includes a camera.
  • after receiving a shooting instruction issued by the calculation and control module, the shooting module performs continuous photographing or video recording and, according to the instruction, generates at least two images of the detected item having a time-series relationship.
  • a shooting instruction can carry one or more shooting-related parameters, such as the delay time before shooting, the time interval of continuous shooting, the number of photos to take continuously, and the length of the video recording; these can be set according to the needs of the actual application scenario and are not limited.
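One hypothetical way to represent such an instruction in code is a small record carrying the parameters listed above; the field names and the photo/video mode rule are assumptions for illustration, not part of the patent:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ShootingInstruction:
    """Parameters a shooting instruction might carry (illustrative names)."""
    delay_s: float = 0.0                  # delay before shooting starts
    interval_s: float = 0.5               # time between continuous shots
    num_photos: Optional[int] = None      # photo count (photo mode)
    video_len_s: Optional[float] = None   # recording length (video mode)

    def validate(self):
        # assumed rule: exactly one of photo mode / video mode is selected
        if (self.num_photos is None) == (self.video_len_s is None):
            raise ValueError("choose either photo mode or video mode")
        return self

photo_cmd = ShootingInstruction(delay_s=1.0, interval_s=0.2, num_photos=8).validate()
print(asdict(photo_cmd))
```

A structured record like this serializes cleanly (e.g. via `asdict`) for the communication sub-module to transmit to the shooting module.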
  • the calculation and control module can also stop the photographing operation of the photographing module by transmitting an instruction to stop photographing.
  • the shooting module can save the generated image in a predetermined storage location or send it to the calculation and control module, which is also not limited.
  • the motion realization module is configured to implement relative motion between the camera of the photographing module and the detected object according to the movement instruction of the calculation and control module.
  • depending on factors such as the size and weight of the detected item and the precision requirements placed on the article damage detector in the actual application scenario, the motion realization module can implement the relative motion by moving the detected item, moving the camera, or moving both simultaneously.
  • in some implementations, the motion realization module includes an item motion sub-module on which the detected item is placed. After receiving a movement instruction from the calculation and control module, the item motion sub-module performs lifting, shifting, and/or rotation so that the detected item moves according to the instruction. In this case the camera may be fixed, or may move according to a movement instruction different from the motion track of the detected item.
  • in other implementations, the motion realization module includes a camera motion sub-module on which the camera is mounted. After receiving a movement instruction from the calculation and control module, the camera motion sub-module performs lifting, shifting, and/or rotation so that the camera moves according to the instruction. In this case the detected item may be fixed, or may move according to a movement instruction different from the motion track of the camera.
  • the movement instructions issued by the calculation and control module can carry a number of movement-related parameters, such as displacement length, lifting height, rotation angle, and movement speed; these can be set according to the requirements of the actual application scenario, the specific implementation of the motion realization module, and the like, and are not limited. In addition, the calculation and control module can make the motion realization module stop the relative movement of the detected item and the camera by sending a stop-moving instruction.
  • when performing article damage detection, the calculation and control module sends movement instructions to the motion realization module so that the detected item and the camera move relative to each other, and issues shooting instructions to the shooting module so that it generates at least two images that have a time-series relationship and reflect the detected item at different angles.
  • the calculation and control module acquires a damage detection result obtained by the method or apparatus for detecting damage of the article in the embodiment of the present specification based on the generated image.
  • the computing and control module can locally operate a method or apparatus for detecting damage to an item in embodiments of the present specification.
  • the calculation and control module inputs the generated image into the detection model in time series, and the output of the detection model is the damage detection result.
  • alternatively, the method or apparatus for detecting article damage in the embodiments of the present specification runs on a server; the calculation and control module of the article damage detector uploads the generated images to the server according to the time series, the server inputs the images into the detection model in sequence, and the output of the detection model is returned to the calculation and control module.
  • a light source module may be added to the article damage detector, with a light control sub-module added to the calculation and control module; the light control sub-module sends light source instructions to the light source module through the communication sub-module, and the light source module provides suitable lighting for the shooting module according to those instructions, improving the quality of the generated images. For example, the calculation and control module can issue a light source instruction carrying parameters such as lighting angle and brightness according to the light conditions of the current environment, and the light source module controls one or more lighting fixtures to meet the lighting requirements of the shot.
  • if the motion realization module in the above scenario includes a camera motion sub-module, the lighting fixture of the light source module and the camera of the shooting module may both be mounted on it; when the camera motion sub-module lifts, shifts, and/or rotates according to a movement instruction, the camera and the fixture move together, so that the lighting always matches the shot for a better shooting effect.
  • the calculation and control module can also generate a test report based on the damage detection result, perform an evaluation of the detected item, and the like.
  • it can be seen that, while the motion realization module performs the relative motion between the camera and the detected item according to movement instructions, the calculation and control module makes the shooting module photograph the detected item through shooting instructions; two or more images of the detected item with a time-series relationship and a changing angle are thus generated quickly and conveniently, and the method or apparatus for detecting article damage in the present specification obtains a more accurate detection result based on those images.
  • in an application example, an online used-mobile-device recycler places the damage detector in a busy public place, where users can operate it themselves to obtain a recycling estimate for a used mobile device.
  • the mobile device can be a mobile phone, a tablet, a notebook, or the like.
  • the damage detector has a built-in detection model, whose structure is shown in FIG. 5.
  • the detection model includes a deep convolutional neural network sub-model (a first sub-model) and an LSTM sub-model (a second sub-model).
  • the detection model takes multiple images with a time-series relationship as input. First, the deep convolutional neural network sub-model extracts the features of each image according to the time series, identifies the target mobile device from the extracted features, and performs damage discovery on it; it then fuses the originally extracted features with the features obtained after damage discovery, to avoid possible feature loss during target recognition and damage discovery, and generates a single-image damage classification result from the fused features.
  • the deep convolutional neural network sub-model inputs the single-image damage classification results into the LSTM sub-model according to the time series; the LSTM sub-model performs temporal correlation on the successive single-image results, merging the same damage found on different single images, and outputs a damage classification result that completely reflects the state of the detected mobile device. The LSTM sub-model can use the Attention mechanism to obtain a better time-series correlation effect.
  • the damage classification results include scratches, breakage, and foreign matter attachment.
  • when training the detection model, each training sample is labeled with a value for each damage class: 0 (no damage of this type) or 1 (damage of this type); the deep convolutional neural network sub-model and the LSTM sub-model are jointly trained with such samples.
  • when damage detection is performed with the trained detection model, the output for each damage class is the probability that the detected mobile device has that type of damage.
  • the damage detector includes a calculation and control module, a motion realization module, a photographing module, and a light source module.
  • the detection model is saved in the memory of the calculation and control module.
  • the server of the online used-mobile-device recycler can update the stored program (including the detection model) online by communicating with the calculation and control module.
  • the motion realization module includes a platform on which the mobile device is placed; the platform can rotate according to movement instructions from the calculation and control module.
  • the camera of the shooting module and the luminaire of the light source module are fixed around the platform.
  • the damage detector prompts the user to place the mobile device on the platform.
  • the calculation and control module determines the illumination brightness to use according to the ambient light, and issues a light source instruction to the light source module, which lights the luminaires at the illumination intensity specified in the instruction.
  • the calculation and control module issues a movement instruction to the motion realization module to rotate the platform 360 degrees, and a shooting instruction to the shooting module so that it records a video of the object on the platform while the platform rotates.
  • the shooting module saves the recorded video in local storage.
  • after the motion realization module and the shooting module finish, the calculation and control module instructs the light source module to turn off the lights, reads the recorded video from local storage, and inputs its frames into the detection model in time order to obtain classification results for each damage type on the detected mobile device.
  • the calculation and control module calculates a valuation of the detected mobile device based on the damage classification results and the device's model, configuration, and other information, and displays it to the user.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent storage in computer-readable media, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media include both persistent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible to a computing device.
  • as defined herein, computer readable media do not include transitory computer readable media, such as modulated data signals and carriers.
  • embodiments of the present specification can be provided as a method, system, or computer program product.
  • embodiments of the present specification can take the form of an entirely hardware embodiment, an entirely software embodiment or a combination of software and hardware.
  • embodiments of the present specification can take the form of a computer program product embodied on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer usable program code embodied therein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Analysis (AREA)

Abstract

This specification provides a method for detecting item damage, including: acquiring at least two images that have a time-series relationship and reflect a detected item from different angles; inputting the images into a detection model in time order to obtain a damage detection result. The detection model includes a first sub-model and a second sub-model; the first sub-model identifies the features of each image, the feature processing result of each image is input into the second sub-model, and the second sub-model performs temporal association processing on the feature processing results to obtain the damage detection result. The first sub-model and the second sub-model are obtained by joint training on samples labeled with item damage.

Description

Method and apparatus for detecting item damage, and item damage detector. Technical field
This specification relates to the field of data processing, and in particular to a method and an apparatus for detecting damage to an item, and to an item damage detector.
Background
As living standards rise, many items are replaced more and more frequently. Taking mobile phones as an example, old phones superseded by new devices usually sit idle in users' hands, which wastes resources. Second-hand recycling lets discarded items regain use value and enter new links of the industrial chain, so that resources are better integrated and potential environmental pollution is reduced.
The rise of artificial intelligence has made online recycling over the Internet a new business model. Online recycling usually judges the degree of damage of an item from pictures of it, and treats that judgment as an important factor in valuation. Whether damage detection is accurate has a large influence on the gap between the item's true value and its appraised value. Improving the accuracy of item damage detection is therefore very important to the development of the online recycling industry.
Summary of the invention
In view of this, this specification provides a method for detecting item damage, including:
acquiring at least two images that have a time-series relationship and reflect a detected item from different angles;
inputting the images into a detection model in time order to obtain a damage detection result; the detection model includes a first sub-model and a second sub-model; the first sub-model identifies the features of each image, the feature processing result of each image is input into the second sub-model, and the second sub-model performs temporal association processing on the feature processing results to obtain the damage detection result; the first sub-model and the second sub-model are obtained by joint training on samples labeled with item damage.
This specification also provides an apparatus for detecting item damage, including:
an image sequence acquisition unit, configured to acquire at least two images that have a time-series relationship and reflect a detected item from different angles;
a detection model application unit, configured to input the images into a detection model in time order to obtain a damage detection result; the detection model includes a first sub-model and a second sub-model; the first sub-model identifies the features of each image, the feature processing result of each image is input into the second sub-model, and the second sub-model performs temporal association processing on the feature processing results to obtain the damage detection result; the first sub-model and the second sub-model are obtained by joint training on samples labeled with item damage.
A computer device provided by this specification includes a memory and a processor; the memory stores a computer program runnable by the processor; when the processor runs the computer program, the steps of the above method for detecting item damage are performed.
A computer-readable storage medium provided by this specification stores a computer program; when the computer program is run by a processor, the steps of the above method for detecting item damage are performed.
This specification also provides an item damage detector, including:
a shooting module, configured to generate, according to shooting instructions from a calculation and control module, at least two images of a detected item that have a time-series relationship;
a motion realization module, configured to realize, according to movement instructions from the calculation and control module, relative motion between the camera of the shooting module and the detected item;
a calculation and control module, configured to cause, through movement instructions and shooting instructions, the shooting module to generate at least two images that have a time-series relationship and reflect the detected item from different angles, and to obtain a damage detection result based on the images; the damage detection result is generated by the above method or apparatus for detecting item damage.
As can be seen from the above technical solutions, in the method and apparatus embodiments of this specification, images that have a time-series relationship and reflect the detected item from different angles are input into the detection model; the first sub-model of the detection model identifies the features of each image, and after feature processing the results are input into the second sub-model, which associates the feature processing results of the individual images in time order to obtain the damage detection result. Because images from different angles reflect the true condition of the item more comprehensively, and temporally associating the per-image feature processing results yields a more unified and complete detection result, the embodiments of this specification can greatly improve the accuracy of damage detection.
In the item damage detector embodiments of this specification, the calculation and control module uses movement instructions to make the motion realization module move the camera relative to the detected item, while at the same time using shooting instructions to make the shooting module generate two or more time-ordered images of the detected item, and a damage detection result generated by the method or apparatus of this specification is obtained based on the generated images; this makes item damage detection more convenient while greatly improving its accuracy.
Brief description of the drawings
Figure 1 is a flowchart of a method for detecting item damage in an embodiment of this specification;
Figure 2 is a hardware structure diagram of a device running the method for detecting item damage in an embodiment of this specification, or of a device hosting the apparatus for detecting item damage in an embodiment of this specification;
Figure 3 is a logical structure diagram of an apparatus for detecting item damage in an embodiment of this specification;
Figure 4 is a schematic structural diagram of an item damage detector in an embodiment of this specification;
Figure 5 is a schematic structural diagram of a detection model in an application example of this specification.
Detailed description
The embodiments of this specification propose a new method for detecting item damage, in which a detection model is built from a cascaded first sub-model and second sub-model. The first sub-model takes as input images of the detected item shot from different angles and generated in time order, produces a feature processing result for each image, and outputs it to the second sub-model; the second sub-model temporally associates the feature processing results of the individual images and produces the damage detection result. Damage on the detected item can thus be discovered more comprehensively from images at different angles, while temporal processing integrates the damage found on the individual images into a unified detection result, greatly improving the accuracy of damage detection.
The method embodiments of this specification can run on any device with computing and storage capabilities, such as a mobile phone, tablet, PC (Personal Computer), notebook, or server; the functions of the method embodiments can also be realized by logical nodes running on two or more devices.
The embodiments of this specification use a machine learning model that takes at least two time-ordered images as input, referred to in this specification as the detection model, to perform item damage detection. The detection model includes two cascaded sub-models: the first sub-model identifies the features of each image and generates a feature processing result for each image; the feature processing results of the images are input into the second sub-model in time order, and the second sub-model performs temporal association processing on them to obtain the damage detection result.
The first sub-model can be any machine learning algorithm; algorithms that excel at feature extraction and processing, such as a deep convolutional neural network, usually give better results. The second sub-model can be any machine learning algorithm capable of processing time-series data, such as an RNN (Recurrent Neural Network) or LSTM (Long Short-Term Memory) network. Where the second sub-model uses an LSTM algorithm, an LSTM based on the Attention mechanism can yield more accurate damage detection results.
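As a concrete, deliberately simplified illustration of this cascade, the sketch below stands in for a trained deep CNN with a stub feature extractor and replaces the LSTM with a fixed attention-weighted pooling over time. Every function here is an illustrative assumption, not the patented implementation.

```python
import numpy as np

# Toy stand-ins: a real system would use a trained deep CNN (first
# sub-model) and a trained, attention-based LSTM (second sub-model).

def extract_features(image):
    """First sub-model stub: map one image to a fixed-length feature vector."""
    seed = int(np.sum(image)) % (2**32)          # deterministic pseudo-features
    return np.random.default_rng(seed).standard_normal(8)

def per_frame_scores(features, w):
    """Per-image damage scores, one logit per damage class."""
    return features @ w                          # shape (num_classes,)

def associate_over_time(score_seq):
    """Second sub-model stub: attention-weighted association over time."""
    scores = np.stack(score_seq)                 # (T, num_classes)
    attn = np.exp(scores.max(axis=1))            # crude per-frame attention
    attn = attn / attn.sum()
    fused = (attn[:, None] * scores).sum(axis=0)
    return 1.0 / (1.0 + np.exp(-fused))          # per-class probabilities

rng = np.random.default_rng(0)
frames = [rng.integers(0, 255, size=(16, 16)) for _ in range(5)]  # time-ordered
w = rng.standard_normal((8, 3))                                   # 3 damage classes
probs = associate_over_time([per_frame_scores(extract_features(f), w) for f in frames])
assert probs.shape == (3,) and np.all((probs > 0) & (probs < 1))
```

The point of the sketch is the data flow only: one feature processing result per frame, then a single temporally associated per-class result for the whole sequence.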
The detection model in the embodiments of this specification is a supervised learning model; the whole detection model is trained with samples labeled with item damage. In other words, the first sub-model and the second sub-model undergo joint training: the training error of the overall model is fed back into both sub-models at the same time for parameter updates, optimizing the parameters of both sub-models simultaneously so that the overall predictive power of the detection model is optimal. Besides the labeled item damage, each sample includes at least two time-ordered images of the item.
The form of the damage detection result is determined by the needs of the actual application scenario and is not limited. For example, the damage detection result can be a classification of whether the detected item is damaged, the degree of one type of damage on the item, a classification of whether the item has two or more types of damage, or the degrees of two or more types of damage on the item. Damage types can include scratches, breakage, stains, foreign-matter attachment, and so on. Sample data can be labeled in the chosen form of the damage detection result, and the trained detection model then produces results in that form.
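Under per-class 0/1 labeling, joint training amounts to minimizing one multi-label loss over the whole cascade. A minimal numpy sketch of that loss follows; the specific logits and labels are made up for illustration.

```python
import numpy as np

# One training sample: per-class 0/1 labels (e.g. scratch, breakage, stain).
label = np.array([1.0, 0.0, 1.0])          # has scratch and stain, no breakage
logits = np.array([2.0, -1.5, 0.5])        # raw per-class model outputs

probs = 1.0 / (1.0 + np.exp(-logits))      # independent sigmoid per class

# Binary cross-entropy summed over classes; in joint training this single
# loss is back-propagated through both sub-models at once.
bce = -np.sum(label * np.log(probs) + (1 - label) * np.log(1 - probs))

assert probs.shape == label.shape
assert bce > 0
```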
The feature processing result that the first sub-model outputs to the second sub-model contains the information the second sub-model uses to generate the damage detection result. It can be the damage detection result of each single image, or any other tensor that reflects the damage detection information of a single image; this is not limited.
For example, if the damage detection result output by the detection model is a classification over one or more damage classes (i.e., the probability that the detected item carries each class of damage), the feature processing result can be the single-image per-class classification result generated by the first sub-model after performing feature extraction and damage discovery on each image and fusing the extracted features with the damage discovery results; it can also be a tensor carrying the per-class damage detection information of a single image of the detected item, from which the second sub-model, after temporal association over the detection information of two or more images, obtains the per-class classification result for the detected item.
As another example, suppose the first sub-model uses a deep convolutional neural network. The first sub-model can use as the feature processing result the output of the last convolutional or pooling layer of the network (i.e., the output before the fully connected layer and the output prediction layer), the output of the fully connected layer, or the output of the output prediction layer (Output Predictions).
In the embodiments of this specification, the flow of the method for detecting item damage is shown in Figure 1.
Step 110: acquire at least two images that have a time-series relationship and reflect the detected item from different angles.
These images can be photos shot continuously of the moving detected item, a video recorded of the moving detected item (a video consists of multiple images arranged in time order), photos shot continuously of the detected item by a moving camera, a video of the detected item recorded by a moving camera, other sets of two or more photos or videos shot while the shooting angle changes continuously, or any combination of the above.
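Whatever the sources, the inputs must preserve their time-series relationship. A trivial sketch of assembling one time-ordered sequence from photos and video frames; the `(timestamp, image)` representation is an assumption for illustration.

```python
# Hypothetical in-memory representation: each capture is (timestamp, image).
photos = [(2.0, "photo_A"), (0.5, "photo_B")]
video_frames = [(1.0, "frame_0"), (1.5, "frame_1")]

# Merge both sources and keep the time-series relationship by sorting on
# the capture timestamp before feeding the detection model.
sequence = sorted(photos + video_frames)
images_in_time_order = [img for _, img in sequence]

assert images_in_time_order == ["photo_B", "frame_0", "frame_1", "photo_A"]
```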
The images can be generated automatically, for example by the item damage detector of this specification, or manually with a hand-held shooting device (such as a mobile phone); this is not limited.
The device on which the method of this embodiment runs can generate the above images itself, receive them from another device, or read them from a predetermined storage location; this is not limited. For example, when the method runs on a mobile phone, the phone's camera can take photos or record video to generate the images; as another example, the method can run on the server side of an App (application), with the App's client uploading multiple photos or a recorded video to the server.
Step 120: input the images into the detection model in time order to obtain the damage detection result.
With a trained detection model, inputting the acquired images into the model in time order yields the damage detection result.
Some damage on a detected item cannot be captured by the camera at certain angles; shooting from different angles reduces the omission of item damage from the images, and the more angles and the more complete the coverage, the more truly the images reflect the actual condition of the item. Because the damage visible in each image may differ (for example, image 1 captures damages A, B, and C while image 2 captures damages B and D), after the first sub-model performs damage discovery, the second sub-model links the same damage found across these time-ordered images, producing a complete, unified view of all the damage on the detected item, thereby improving the accuracy of damage detection.
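The temporal association step can be pictured as reconciling per-frame observations of the same damage. The sketch below uses a fixed max-merge rule purely for illustration; in the described model the LSTM learns this association rather than applying a hand-written rule.

```python
import numpy as np

# Per-frame, per-class damage confidences from the first sub-model.
# Frame 1 sees classes A and C clearly; frame 2 sees B; frame 3 sees A again.
frame_scores = np.array([
    [0.9, 0.1, 0.8],   # frame 1
    [0.2, 0.7, 0.1],   # frame 2
    [0.8, 0.2, 0.2],   # frame 3
])

# A trivial temporal association: the same damage observed in several
# frames is counted once, at its strongest observation.
merged = frame_scores.max(axis=0)

assert merged.tolist() == [0.9, 0.7, 0.8]
```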
In addition, based on the damage detection result, a damage detection report can be generated automatically and the detected item can be appraised. The form of the report, the specific way it is generated, and the specific valuation algorithm can follow the prior art and are not described again.
It can be seen that in the method embodiments of this specification, a detection model is built from cascaded first and second sub-models; images that have a time-series relationship and reflect the detected item from different angles are input into the detection model; the first sub-model outputs the feature processing result of each image to the second sub-model, which temporally associates the per-image feature processing results to obtain the damage detection result. Damage on the detected item is thus discovered more comprehensively through images from different angles, while temporal processing integrates the damage found on the individual images into a complete, unified, and more accurate detection result.
Corresponding to the above flow, the embodiments of this specification also provide an apparatus for detecting item damage. The apparatus can be implemented in software, in hardware, or in a combination of both. Taking a software implementation as an example, as a logical apparatus it is formed by the CPU (Central Processing Unit) of its host device reading the corresponding computer program instructions into memory and running them. At the hardware level, besides the CPU, memory, and storage shown in Figure 2, the device hosting the apparatus typically also includes other hardware such as chips for wireless signal transmission and reception, and/or boards for network communication.
Figure 3 shows an apparatus for detecting item damage provided by an embodiment of this specification, including an image sequence acquisition unit and a detection model application unit. The image sequence acquisition unit acquires at least two images that have a time-series relationship and reflect the detected item from different angles; the detection model application unit inputs the images into the detection model in time order to obtain the damage detection result. The detection model includes a first sub-model and a second sub-model; the first sub-model identifies the features of each image, the feature processing result of each image is input into the second sub-model, and the second sub-model performs temporal association processing on the feature processing results to obtain the damage detection result; the first sub-model and the second sub-model are obtained by joint training on samples labeled with item damage.
Optionally, the first sub-model uses a deep convolutional neural network algorithm, and the second sub-model uses a long short-term memory (LSTM) algorithm.
Optionally, the second sub-model uses an LSTM algorithm based on the Attention mechanism.
Optionally, the at least two images that have a time-series relationship and reflect the detected item from different angles include at least one of: photos shot continuously of the moving detected item, a video recorded of the moving detected item, photos of the detected item shot continuously by a moving camera, or a video of the detected item recorded by a moving camera.
In one example, the damage detection result includes a classification result over one or more damage types.
In the above example, the feature processing result of each image includes the single-image damage type classification result of the detected item generated by the first sub-model after feature extraction, damage discovery, and feature fusion for each image.
An embodiment of this specification provides a computer device including a memory and a processor. The memory stores a computer program runnable by the processor; when the processor runs the stored computer program, it performs the steps of the method for detecting item damage in the embodiments of this specification. For a detailed description of those steps, see the preceding content; it is not repeated here.
An embodiment of this specification provides a computer-readable storage medium storing computer programs; when run by a processor, these computer programs perform the steps of the method for detecting item damage in the embodiments of this specification. For a detailed description of those steps, see the preceding content; it is not repeated here.
The embodiments of this specification propose a new item damage detector in which the calculation and control module instructs the motion realization module to move the camera relative to the detected item while instructing the shooting module to continuously photograph or record the detected item, thereby conveniently and quickly generating multiple time-ordered images of the detected item with changing angles; damage detection is then performed on these images with the method or apparatus of the embodiments of this specification, producing a more accurate detection result.
One structure of the item damage detector in the embodiments of this specification is shown in Figure 4; it includes a calculation and control module, a motion realization module, and a shooting module.
The calculation and control module includes a CPU, memory, storage, a communication sub-module, and so on. The CPU reads the program in storage into memory and runs it, generating movement instructions and shooting instructions; the communication sub-module sends the movement instructions to the motion realization module and the shooting instructions to the shooting module.
The shooting module includes a camera. After receiving a shooting instruction from the calculation and control module, it continuously photographs or records video of the detected item, generating at least two time-ordered images of the detected item according to the instruction. A shooting instruction can carry one or more shooting-related parameters, such as the delay before shooting starts, the interval between consecutive photos, the number of photos to take, or the duration of video recording; these can be set according to the needs of the actual application scenario and are not limited. The calculation and control module can also stop the shooting module by sending a stop-shooting instruction. The shooting module can save the generated images in a predetermined storage location or send them to the calculation and control module; this likewise is not limited.
The motion realization module realizes the relative motion between the camera of the shooting module and the detected item according to the movement instructions of the calculation and control module. Depending on factors such as the size and weight of the detected item and how portable the detector needs to be in the actual application scenario, the motion realization module can achieve this relative motion by moving the detected item, moving the camera, or moving both.
In one example, the motion realization module includes an item motion sub-module on which the detected item is placed. After receiving a movement instruction from the calculation and control module, the item motion sub-module lifts, shifts, and/or rotates according to the instruction, moving the detected item as instructed. In this example the camera can be fixed, or can move along a trajectory different from that of the detected item according to movement instructions.
In another example, the motion realization module includes a camera motion sub-module on which the camera is mounted. After receiving a movement instruction from the calculation and control module, the camera motion sub-module lifts, shifts, and/or rotates according to the instruction, moving the camera as instructed. In this example the detected item can be fixed, or can move along a trajectory different from that of the camera according to movement instructions.
A movement instruction issued by the calculation and control module can carry several movement-related parameters, which can be set according to the needs of the actual application scenario and the concrete implementation of the motion realization module and are not limited. For example, a movement instruction can include a displacement length, a lift height, a rotation angle, a movement speed, and so on. The calculation and control module can also stop the relative motion between the detected item and the camera by sending a stop-moving instruction.
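The movement and shooting instructions described above could be encoded, for example, as simple message types. The parameter sets come from the text; the field names, units, and encoding are illustrative assumptions, since the specification deliberately leaves them open.

```python
from dataclasses import dataclass

# Hypothetical message formats for module-to-module instructions.

@dataclass
class MoveInstruction:
    displacement_mm: float = 0.0   # displacement length
    lift_mm: float = 0.0           # lift height
    rotation_deg: float = 0.0      # rotation angle
    speed: float = 1.0             # movement speed (arbitrary units)
    stop: bool = False             # models the separate stop-moving command

@dataclass
class ShootInstruction:
    delay_s: float = 0.0           # delayed start of shooting
    interval_s: float = 0.5        # interval between consecutive photos
    photo_count: int = 0           # number of photos (0 = record video)
    video_duration_s: float = 0.0  # duration of video recording
    stop: bool = False             # models the separate stop-shooting command

# E.g. the 360-degree platform rotation with video recording from the
# application example:
turn = MoveInstruction(rotation_deg=360.0, speed=0.5)
record = ShootInstruction(video_duration_s=15.0)
assert turn.rotation_deg == 360.0 and record.photo_count == 0
```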
When performing item damage detection, the calculation and control module issues movement instructions to the motion realization module to move the detected item relative to the camera, and issues shooting instructions to the shooting module so that it generates at least two images that have a time-series relationship and reflect the detected item from different angles. Based on the generated images, the calculation and control module obtains a damage detection result produced by the method or apparatus of the embodiments of this specification.
In one implementation, the calculation and control module can run the method or apparatus of the embodiments of this specification locally: it inputs the generated images into the detection model in time order, and the model's output is the damage detection result.
In another implementation, the method or apparatus of the embodiments of this specification runs on a server. The calculation and control module of the item damage detector uploads the generated images to the server in time order; the server inputs the images into the detection model in time order and returns the model's output to the calculation and control module.
In some application scenarios, a light source module can be added to the item damage detector and a lighting control sub-module added to the calculation and control module. The lighting control sub-module issues light source instructions to the light source module through the communication sub-module, and the light source module provides suitable lighting for the shooting module according to the instructions, improving the quality of the generated images. The calculation and control module can issue light source instructions carrying parameters such as illumination angle and illumination brightness according to the current ambient light, for the light source module to control one or more luminaires so as to meet the lighting requirements of the shoot.
If the motion realization module in such a scenario includes a camera motion sub-module, both the luminaires of the light source module and the camera of the shooting module can be mounted on it; when the camera motion sub-module lifts, shifts, and/or rotates according to movement instructions, the camera and the luminaires move together, so that the lighting exactly matches the shot for a better shooting effect.
The calculation and control module can also generate a detection report from the damage detection result, appraise the detected item, and so on.
It can be seen that in the item damage detector embodiments of this specification, the calculation and control module uses movement instructions to make the motion realization module move the camera relative to the detected item while using shooting instructions to make the shooting module photograph it, quickly and conveniently generating two or more time-ordered images of the detected item with changing angles; a damage detection result produced by the method or apparatus of this specification is then obtained from the generated images, yielding a more accurate detection result.
Specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired result. In addition, the processes depicted in the figures do not necessarily require the specific or sequential order shown to achieve the desired result. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
In one application example of this specification, an online second-hand mobile device recycler places the damage detector in a busy public place, where users can operate it themselves to obtain a recycling estimate for a second-hand mobile device. The mobile device can be a mobile phone, a tablet, a notebook, and so on.
The damage detector has a built-in, fully trained detection model whose structure is shown in Figure 5. The detection model includes a deep convolutional neural network sub-model (a first sub-model) and an LSTM sub-model (a second sub-model).
The detection model takes multiple time-ordered images as input. The deep convolutional neural network sub-model first extracts features from each image in time order; the target mobile device is then identified within the extracted features, and damage discovery is performed on it; the originally extracted features are then fused with the post-damage-discovery features, to avoid feature loss that may occur during target recognition and damage discovery, and a single-image damage classification result is generated from the fused features.
The deep convolutional neural network sub-model feeds the single-image damage classification results into the LSTM sub-model in time order; the LSTM sub-model performs temporal association across the consecutive single-image results, merging the same damage appearing in different images, and outputs a damage classification result that fully reflects the state of the detected mobile device. The LSTM sub-model can use the Attention mechanism to obtain a better temporal association effect.
In this application example, the damage classes are scratches, breakage, and foreign-matter attachment. When training the detection model, each training sample is labeled with a value for each damage class: 0 (no damage of this type) or 1 (damage of this type); a number of such samples are used to jointly train the deep convolutional neural network sub-model and the LSTM sub-model. When the trained detection model performs damage detection, the output for each damage class is the probability that the detected mobile device has that type of damage.
The damage detector includes a calculation and control module, a motion realization module, a shooting module, and a light source module. The detection model is stored in the storage of the calculation and control module. The recycler's server can update the stored program (including the detection model) online by communicating with the calculation and control module.
The motion realization module includes a platform on which the mobile device is placed; the platform can rotate according to movement instructions from the calculation and control module. The camera of the shooting module and the luminaires of the light source module are fixed around the platform.
After the user starts the second-hand mobile device valuation and enters the device's model, configuration, and other information, the damage detector prompts the user to place the device on the platform. Once the device is in place, the calculation and control module determines the illumination brightness to use from the ambient light and issues a light source instruction to the light source module, which lights the luminaires at the illumination intensity specified in the instruction. The calculation and control module issues a movement instruction to the motion realization module to rotate the platform 360 degrees, and a shooting instruction to the shooting module so that it records a video of the object on the platform while the platform rotates. The shooting module saves the recorded video in local storage.
After the motion realization module and the shooting module finish, the calculation and control module instructs the light source module to turn off the lights, reads the recorded video from local storage, and inputs its frames into the detection model in time order, obtaining classification results for each damage type on the detected mobile device. From the damage classification results and the device's model, configuration, and other information, the calculation and control module computes a valuation of the detected mobile device and displays it to the user.
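The kiosk flow just described can be summarized as a small control loop. Every step below is a stub standing in for real hardware and the trained detection model, and the valuation formula is invented purely for illustration.

```python
# A highly simplified sketch of the kiosk's control loop.

DAMAGE_CLASSES = ("scratch", "breakage", "foreign_matter")

def classify_frames(frames):
    """Stand-in for the detection model: per-class damage probabilities."""
    return {c: 0.1 * (i + 1) for i, c in enumerate(DAMAGE_CLASSES)}

def estimate_price(base_price, damage_probs):
    """Toy valuation: discount the base price by the average detected damage."""
    penalty = sum(damage_probs.values()) / len(damage_probs)
    return round(base_price * (1.0 - penalty), 2)

def run_inspection(base_price):
    # 1. lights on, 2. rotate the platform 360 degrees while recording,
    # 3. lights off, 4. classify the recorded frames, 5. show the estimate.
    frames = ["frame%d" % i for i in range(360 // 10)]  # recorded-video stub
    probs = classify_frames(frames)
    return estimate_price(base_price, probs)

quote = run_inspection(base_price=200.0)
assert 0 < quote < 200.0
```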
The above are only preferred embodiments of this specification and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the scope of protection of this application.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
Memory may include non-persistent storage in computer-readable media, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible to a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carriers.
It should also be noted that the terms "include", "comprise", and any of their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes it.
Those skilled in the art should understand that the embodiments of this specification can be provided as a method, a system, or a computer program product. Accordingly, the embodiments of this specification can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of this specification can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.

Claims (20)

  1. A method for detecting item damage, comprising:
    acquiring at least two images that have a time-series relationship and reflect a detected item from different angles;
    inputting the images into a detection model in time order to obtain a damage detection result; wherein the detection model comprises a first sub-model and a second sub-model, the first sub-model identifies features of each image, the feature processing result of each image is input into the second sub-model, and the second sub-model performs temporal association processing on the feature processing results to obtain the damage detection result; the first sub-model and the second sub-model are obtained by joint training on samples labeled with item damage.
  2. The method according to claim 1, wherein the first sub-model uses a deep convolutional neural network algorithm, and the second sub-model uses a long short-term memory (LSTM) algorithm.
  3. The method according to claim 2, wherein the second sub-model uses an LSTM algorithm based on the Attention mechanism.
  4. The method according to claim 1, wherein the at least two images that have a time-series relationship and reflect the detected item from different angles comprise at least one of: photos shot continuously of the moving detected item, a video recorded of the moving detected item, photos of the detected item shot continuously by a moving camera, or a video of the detected item recorded by a moving camera.
  5. The method according to claim 1, wherein the damage detection result comprises a classification result over one or more damage types.
  6. The method according to claim 5, wherein the feature processing result of each image comprises the single-image damage type classification result of the detected item generated by the first sub-model after feature extraction, damage discovery, and feature fusion for each image.
  7. An apparatus for detecting item damage, comprising:
    an image sequence acquisition unit, configured to acquire at least two images that have a time-series relationship and reflect a detected item from different angles;
    a detection model application unit, configured to input the images into a detection model in time order to obtain a damage detection result; wherein the detection model comprises a first sub-model and a second sub-model, the first sub-model identifies features of each image, the feature processing result of each image is input into the second sub-model, and the second sub-model performs temporal association processing on the feature processing results to obtain the damage detection result; the first sub-model and the second sub-model are obtained by joint training on samples labeled with item damage.
  8. The apparatus according to claim 7, wherein the first sub-model uses a deep convolutional neural network algorithm, and the second sub-model uses a long short-term memory (LSTM) algorithm.
  9. The apparatus according to claim 8, wherein the second sub-model uses an LSTM algorithm based on the Attention mechanism.
  10. The apparatus according to claim 7, wherein the at least two images that have a time-series relationship and reflect the detected item from different angles comprise at least one of: photos shot continuously of the moving detected item, a video recorded of the moving detected item, photos of the detected item shot continuously by a moving camera, or a video of the detected item recorded by a moving camera.
  11. The apparatus according to claim 7, wherein the damage detection result comprises a classification result over one or more damage types.
  12. The apparatus according to claim 11, wherein the feature processing result of each image comprises the single-image damage type classification result of the detected item generated by the first sub-model after feature extraction, damage discovery, and feature fusion for each image.
  13. A computer device, comprising a memory and a processor; the memory stores a computer program runnable by the processor; when the processor runs the computer program, the steps of any one of claims 1 to 6 are performed.
  14. A computer-readable storage medium storing a computer program; when the computer program is run by a processor, the steps of any one of claims 1 to 6 are performed.
  15. An item damage detector, comprising:
    a shooting module, configured to generate, according to shooting instructions from a calculation and control module, at least two images of a detected item that have a time-series relationship;
    a motion realization module, configured to realize, according to movement instructions from the calculation and control module, relative motion between the camera of the shooting module and the detected item;
    a calculation and control module, configured to cause, through movement instructions and shooting instructions, the shooting module to generate at least two images that have a time-series relationship and reflect the detected item from different angles, and to obtain a damage detection result based on the images; the damage detection result is generated by the method of any one of claims 1 to 6.
  16. The item damage detector according to claim 15, wherein the calculation and control module obtaining the damage detection result based on the images comprises: the calculation and control module uploading the images to a server and receiving a damage detection result generated by the server using the method of any one of claims 1 to 6; or,
    running the method of any one of claims 1 to 6 locally to obtain the damage detection result.
  17. The item damage detector according to claim 15, wherein the motion realization module comprises: an item motion sub-module on which the detected item is placed and which lifts, shifts, and/or rotates according to movement instructions; or,
    a camera motion sub-module on which the camera of the shooting module is mounted and which lifts, shifts, and/or rotates according to movement instructions.
  18. The item damage detector according to claim 15, further comprising: a light source module, configured to provide lighting for the shooting module according to light source instructions from the calculation and control module;
    the calculation and control module further comprises: a lighting control sub-module, configured to send light source instructions to the light source module so that it provides lighting suitable for shooting.
  19. The item damage detector according to claim 18, wherein the motion realization module comprises: a camera motion sub-module on which the camera of the shooting module and the luminaires of the light source module are mounted and which lifts, shifts, and/or rotates according to movement instructions.
  20. The item damage detector according to claim 15, wherein the detected item comprises a mobile device.
PCT/CN2019/073837 2018-04-03 2019-01-30 Method and apparatus for detecting item damage, and item damage detector WO2019192257A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SG11202005454UA SG11202005454UA (en) 2018-04-03 2019-01-30 Article damage detection method and apparatus and article damage detector
EP19781812.3A EP3716206B1 (en) 2018-04-03 2019-01-30 Method and apparatus for measuring item damage, and item damage measurement device
US16/888,568 US10929717B2 (en) 2018-04-03 2020-05-29 Article damage detection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810295312.8A CN108921811B (zh) 2018-04-03 2018-04-03 检测物品损伤的方法和装置、物品损伤检测器
CN201810295312.8 2018-04-03

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/888,568 Continuation US10929717B2 (en) 2018-04-03 2020-05-29 Article damage detection

Publications (1)

Publication Number Publication Date
WO2019192257A1 true WO2019192257A1 (zh) 2019-10-10

Family

ID=64402822

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073837 WO2019192257A1 (zh) 2018-04-03 2019-01-30 Method and apparatus for detecting item damage, and item damage detector

Country Status (6)

Country Link
US (1) US10929717B2 (zh)
EP (1) EP3716206B1 (zh)
CN (1) CN108921811B (zh)
SG (1) SG11202005454UA (zh)
TW (1) TWI696123B (zh)
WO (1) WO2019192257A1 (zh)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019021729A1 (ja) * 2017-07-25 2019-01-31 富士フイルム株式会社 損傷図作成方法、損傷図作成装置、損傷図作成システム、及び記録媒体
KR101916347B1 (ko) * 2017-10-13 2018-11-08 주식회사 수아랩 딥러닝 기반 이미지 비교 장치, 방법 및 컴퓨터 판독가능매체에 저장된 컴퓨터 프로그램
CN108921811B (zh) * 2018-04-03 2020-06-30 阿里巴巴集团控股有限公司 检测物品损伤的方法和装置、物品损伤检测器
WO2019209059A1 (en) * 2018-04-25 2019-10-31 Samsung Electronics Co., Ltd. Machine learning on a blockchain
CN109344819A (zh) * 2018-12-13 2019-02-15 深源恒际科技有限公司 基于深度学习的车辆损伤识别方法
US10885625B2 (en) 2019-05-10 2021-01-05 Advanced New Technologies Co., Ltd. Recognizing damage through image analysis
CN110569703B (zh) * 2019-05-10 2020-09-01 阿里巴巴集团控股有限公司 计算机执行的从图片中识别损伤的方法及装置
CN110443191A (zh) * 2019-08-01 2019-11-12 北京百度网讯科技有限公司 用于识别物品的方法和装置
CN110782920B (zh) * 2019-11-05 2021-09-21 广州虎牙科技有限公司 音频识别方法、装置及数据处理设备
CN111768381A (zh) * 2020-06-29 2020-10-13 北京百度网讯科技有限公司 零部件缺陷检测方法、装置及电子设备
CN112507820A (zh) * 2020-11-25 2021-03-16 北京旷视机器人技术有限公司 自动盘点货物的方法、装置、系统和电子设备
CN112634245A (zh) * 2020-12-28 2021-04-09 广州绿怡信息科技有限公司 损耗检测模型训练方法、损耗检测方法及装置
CN113033469B (zh) * 2021-04-14 2024-04-02 杭州电力设备制造有限公司 工器具损坏识别方法、装置、设备、系统及可读存储介质
CN114283290B (zh) * 2021-09-27 2024-05-03 腾讯科技(深圳)有限公司 图像处理模型的训练、图像处理方法、装置、设备及介质
US20230244994A1 (en) * 2022-02-03 2023-08-03 Denso Corporation Machine learning generation for real-time location
CN118130508B (zh) * 2024-04-29 2024-07-02 南方电网调峰调频发电有限公司检修试验分公司 励磁浇筑母线质量检测方法、装置和计算机设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160318A (zh) * 2015-08-31 2015-12-16 北京旷视科技有限公司 基于面部表情的测谎方法及系统
US20160284019A1 (en) * 2008-10-02 2016-09-29 ecoATM, Inc. Kiosks for evaluating and purchasing used electronic devices and related technology
CN107194323A (zh) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备
CN107704932A (zh) * 2017-08-23 2018-02-16 阿里巴巴集团控股有限公司 一种物品回收设备、装置以及电子设备
CN108921811A (zh) * 2018-04-03 2018-11-30 阿里巴巴集团控股有限公司 检测物品损伤的方法和装置、物品损伤检测器

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7809587B2 (en) * 2004-05-07 2010-10-05 International Business Machines Corporation Rapid business support of insured property using image analysis
US9842373B2 (en) * 2009-08-14 2017-12-12 Mousiki Inc. System and method for acquiring, comparing and evaluating property condition
JP6030457B2 (ja) * 2013-01-23 2016-11-24 株式会社メガチップス 画像検出装置及び制御プログラム並びに画像検出方法
US10453145B2 (en) * 2014-04-30 2019-10-22 Hartford Fire Insurance Company System and method for vehicle repair cost estimate verification
US9824453B1 (en) * 2015-10-14 2017-11-21 Allstate Insurance Company Three dimensional image scan for vehicle
CN105335710A (zh) * 2015-10-22 2016-02-17 合肥工业大学 一种基于多级分类器的精细车辆型号识别方法
US9965705B2 (en) * 2015-11-03 2018-05-08 Baidu Usa Llc Systems and methods for attention-based configurable convolutional neural networks (ABC-CNN) for visual question answering
CN105719188B (zh) * 2016-01-22 2017-12-26 平安科技(深圳)有限公司 基于多张图片一致性实现保险理赔反欺诈的方法及服务器
TWI585698B (zh) * 2016-02-26 2017-06-01 Item recycling method
EP4036852A1 (en) * 2016-03-07 2022-08-03 Assurant, Inc. Screen damage detection for devices
US10692050B2 (en) * 2016-04-06 2020-06-23 American International Group, Inc. Automatic assessment of damage and repair costs in vehicles
CN106407942B (zh) * 2016-09-27 2020-04-28 腾讯征信有限公司 一种图像处理方法及其装置
US10558750B2 (en) * 2016-11-18 2020-02-11 Salesforce.Com, Inc. Spatial attention model for image captioning
US10657707B1 (en) * 2017-01-09 2020-05-19 State Farm Mutual Automobile Insurance Company Photo deformation techniques for vehicle repair analysis
IT201700002416A1 (it) * 2017-01-11 2018-07-11 Autoscan Gmbh Apparecchiatura mobile automatizzata per il rilevamento e la classificazione dei danni sulla carrozzeria
US10628890B2 (en) * 2017-02-23 2020-04-21 International Business Machines Corporation Visual analytics based vehicle insurance anti-fraud detection
US10242401B2 (en) * 2017-03-13 2019-03-26 Ford Global Technologies, Llc Method and apparatus for enhanced rental check-out/check-in
CN107358596B (zh) * 2017-04-11 2020-09-18 阿里巴巴集团控股有限公司 一种基于图像的车辆定损方法、装置、电子设备及系统
CN107066980B (zh) * 2017-04-18 2020-04-24 腾讯科技(深圳)有限公司 一种图像变形检测方法及装置
CN108205684B (zh) * 2017-04-25 2022-02-11 北京市商汤科技开发有限公司 图像消歧方法、装置、存储介质和电子设备
CN107220667B (zh) * 2017-05-24 2020-10-30 北京小米移动软件有限公司 图像分类方法、装置及计算机可读存储介质
CN108229292A (zh) * 2017-07-28 2018-06-29 北京市商汤科技开发有限公司 目标识别方法、装置、存储介质和电子设备
CN107741781A (zh) * 2017-09-01 2018-02-27 中国科学院深圳先进技术研究院 无人机的飞行控制方法、装置、无人机及存储介质
DE102017220027A1 (de) * 2017-11-10 2019-05-16 Ford Global Technologies, Llc Verfahren zur Schadenkontrolle bei Kraftfahrzeugen
US10818014B2 (en) * 2018-07-27 2020-10-27 Adobe Inc. Image object segmentation based on temporal information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160284019A1 (en) * 2008-10-02 2016-09-29 ecoATM, Inc. Kiosks for evaluating and purchasing used electronic devices and related technology
CN105160318A (zh) * 2015-08-31 2015-12-16 北京旷视科技有限公司 基于面部表情的测谎方法及系统
CN107194323A (zh) * 2017-04-28 2017-09-22 阿里巴巴集团控股有限公司 车辆定损图像获取方法、装置、服务器和终端设备
CN107704932A (zh) * 2017-08-23 2018-02-16 阿里巴巴集团控股有限公司 一种物品回收设备、装置以及电子设备
CN108921811A (zh) * 2018-04-03 2018-11-30 阿里巴巴集团控股有限公司 检测物品损伤的方法和装置、物品损伤检测器

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3716206A4 *

Also Published As

Publication number Publication date
EP3716206A4 (en) 2021-05-05
EP3716206B1 (en) 2022-11-09
US20200293830A1 (en) 2020-09-17
CN108921811A (zh) 2018-11-30
TWI696123B (zh) 2020-06-11
CN108921811B (zh) 2020-06-30
US10929717B2 (en) 2021-02-23
TW201942795A (zh) 2019-11-01
SG11202005454UA (en) 2020-07-29
EP3716206A1 (en) 2020-09-30

Similar Documents

Publication Publication Date Title
WO2019192257A1 (zh) 检测物品损伤的方法和装置、物品损伤检测器
US11276158B2 (en) Method and apparatus for inspecting corrosion defect of ladle
US11341626B2 (en) Method and apparatus for outputting information
Li et al. Materials for masses: SVBRDF acquisition with a single mobile phone image
CN110726724A (zh) 缺陷检测方法、系统和装置
WO2021113408A1 (en) Synthesizing images from 3d models
US20180286030A1 (en) System and method for testing an electronic device
Albanese et al. Tiny machine learning for high accuracy product quality inspection
Daneshgaran et al. Use of deep learning for automatic detection of cracks in tunnels: prototype-2 developed in the 2017–2018 time period
US20240087105A1 (en) Systems and Methods for Paint Defect Detection Using Machine Learning
CN113252678A (zh) 一种移动终端的外观质检方法及设备
Maestro-Watson et al. Deflectometric data segmentation based on fully convolutional neural networks
US11493901B2 (en) Detection of defect in edge device manufacturing by artificial intelligence
Varona et al. Importance of detection for video surveillance applications
US20190370877A1 (en) Apparatus, system, and method of providing mobile electronic retail purchases
CN111860070A (zh) 识别发生改变的对象的方法和装置
CN115100689B (zh) 一种对象检测方法、装置、电子设备和存储介质
JP7332028B2 (ja) アイテムの検査を実行するための方法、装置、コンピュータプログラムおよびコンピュータ命令を含む媒体
US20230095647A1 (en) Systems and Methods for Precise Anomaly Localization Within Content Captured by a Machine Vision Camera
WO2023002632A1 (ja) 推論方法、推論システムおよび推論プログラム
Jasmitha et al. Design and Evaluation of a Real-Time Stock Inventory Management System
Xie et al. A novel defect detection system for complex freeform surface structures
Fantinel et al. Multistep hybrid learning: CNN driven by spatial–temporal features for faults detection on metallic surfaces
Koyanaka et al. Individual model identification of waste digital devices by the combination of CNN-based image recognition and measured values of mass and 3D shape features
Kallakuri et al. M-Lens an IOT-Based Deep Learning Device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19781812

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019781812

Country of ref document: EP

Effective date: 20200623

NENP Non-entry into the national phase

Ref country code: DE