CN112669271A - Object surface defect detection method, related device and computer storage medium - Google Patents
- Publication number
- CN112669271A (Application No. CN202011529483.6A)
- Authority
- CN
- China
- Prior art keywords
- deep learning
- module
- image
- learning model
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Abstract
The application discloses an object surface defect detection method, a related device, and a computer storage medium. The method comprises the following steps: acquiring an image of an object to be detected through a binocular camera module to obtain characteristic information of the image; determining, through an image analysis module, the deep learning model corresponding to the image; inputting the characteristic information of the image, through the binocular camera module, into that deep learning model in the deep learning module; and outputting a detection result through the deep learning model. In this way, surface defects of different types of objects can be accurately identified by the different deep learning models in the deep learning module, so that the user is alerted to risks such as possible defects on the surface of a target object, can purchase a satisfactory product, and has an improved user experience.
Description
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method for detecting surface defects of an object, a related apparatus, and a computer storage medium.
Background
When shopping, people generally want to buy articles that are free of defects, or have as few defects as possible. However, when consumers judge an article by eye, or compare it against a reference object they are holding, human carelessness often means that articles with surface defects are still purchased.
Disclosure of Invention
The embodiment of the application provides an object surface defect detection method, a related device and a computer storage medium.
In a first aspect, an embodiment of the present application provides an object surface defect detection method, which is used for an electronic device including a binocular camera module, an image analysis module in communication with the binocular camera module, and a deep learning module in communication with the image analysis module, and includes:
acquiring an image of an object to be detected through the binocular camera module to obtain characteristic information of the image;
determining a deep learning model corresponding to the image through the image analysis module;
inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module; wherein the deep learning module comprises a plurality of deep learning models;
and outputting a detection result through the deep learning model.
In a second aspect, an embodiment of the present application provides an object surface defect detecting apparatus, the apparatus is used for an electronic device, the electronic device includes a binocular camera module, an image analysis module in communication with the binocular camera module, and a deep learning module in communication with the image analysis module, the apparatus includes:
the obtaining module is used for collecting the image of the object to be detected through the binocular camera module to obtain the characteristic information of the image;
the determining module is used for determining a deep learning model corresponding to the image through the image analysis module;
the input module is used for inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module; wherein the deep learning module comprises a plurality of deep learning models;
and the output module is used for outputting the detection result through the deep learning model.
In a third aspect, an embodiment of the present application provides an electronic device, including: a binocular camera module, an image analysis module in communication with the binocular camera module, and a deep learning module in communication with the image analysis module,
the binocular camera module is used for collecting images of an object to be detected to obtain characteristic information of the images;
the image analysis module is used for determining a deep learning model corresponding to the image;
the deep learning module is used for inputting the characteristic information of the image sent by the binocular camera module into a deep learning model in the deep learning module; wherein the deep learning module comprises a plurality of deep learning models;
the deep learning module is also used for outputting a detection result.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fifth aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory;
wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
in the embodiment of the application, the image of the object to be detected can be collected through the binocular camera module, the characteristic information of the image is obtained, the deep learning model corresponding to the image is determined through the image analysis module, the characteristic information of the image is input into the deep learning model in the deep learning module through the binocular camera module, and the detection result is output through the deep learning model. Therefore, the defects of the surfaces of different types of objects can be accurately identified through different deep learning models in the deep learning module, so that the risks such as the defects possibly existing on the surfaces of the target objects of the user are prompted, the user can purchase a satisfactory product, and the user experience is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is an application scene diagram of a method for detecting surface defects of an object according to an embodiment of the present disclosure;
FIG. 3 is a system architecture diagram illustrating a method for detecting surface defects of an object according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a method for detecting surface defects of an object according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart illustrating another method for detecting surface defects of an object according to an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart illustrating a method for detecting surface defects of an object according to an embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of another method for detecting surface defects of an object according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an apparatus for detecting surface defects of an object according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood case by case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes an association between objects and indicates that three relationships are possible: for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Fig. 1 schematically shows a structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 1, the electronic device may include: the device comprises a binocular camera module, an image analysis module, a communication module, a deep learning module, an optical machine module, a feedback module and a control module. Wherein:
the binocular camera module is used for photographing a target object, and the binocular camera comprises two cameras, so that when the two cameras shoot the same target object, a three-dimensional model of an object can be theoretically constructed due to gaps between the two cameras, wherein the three-dimensional model comprises characteristic information of the target object.
The image analysis module is used for determining the category to which the target object belongs and determining the corresponding deep learning model according to the category to which the target object belongs.
The communication module is used for transmitting the acquired image information to the remote server and receiving information about the target object returned by the remote server, such as but not limited to, an object image of a similar category to the target object, an object image of the same category as the target object, or an object image similar or close to the target object.
The deep learning module is used for determining whether the object surface has defects according to the received information about the target object and the characteristic information of the target object. For example, if the deep learning module judges that an actual crack exists on the surface of the target object, the surface is defective; if it judges that only a crack-shaped decorative pattern exists on the surface, the surface is not defective.
The optical-mechanical module is used for receiving the detection result sent by the deep learning module and displaying the detection result to a user, and the user can further judge the target object according to the detection result fed back by the optical-mechanical module.
The feedback module is used for receiving a feedback signal sent by a user and sending a signal that the detection result is wrong to the deep learning module.
The control module is used for controlling the feedback module to send the received feedback information to the deep learning module, controlling the deep learning module to update training data in a training set and optimizing the deep learning model.
Fig. 2 schematically illustrates an application scenario diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 2, the user detects whether there is a defect on the surface of the target object 22 using the electronic device 21. Wherein:
the electronic device 21 may be AR (Augmented Reality) glasses, the target object 22 may be a cup, and specifically, the user may wear the AR glasses to detect whether there is a defect on the surface of the cup. Further, the optical-mechanical module in the AR glasses may display the detection result sent by the deep learning module, for example, including but not limited to, sending a voice message to the user, setting a feedback button on the surface of the AR glasses, or setting a touch pad on the surface of the AR glasses.
Fig. 3 is a system architecture diagram illustrating a method for detecting surface defects of an object according to an embodiment of the present application. As shown in fig. 3, the execution subject of the embodiment of the present application is a terminal, which is an electronic device with a display screen, including but not limited to: AR glasses, handheld devices, personal computers, tablet computers, in-vehicle devices, smart phones, computing devices, or other processing devices connected to a wireless modem. Terminal devices in different networks may be called different names, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, wireless communication device, user agent, cellular telephone, cordless telephone, Personal Digital Assistant (PDA), or terminal equipment in a 5th-generation mobile network or a future evolved network. The terminal system is the operating system that runs on the terminal: a program that manages and controls the terminal hardware and terminal applications, and an indispensable system application of the terminal. Such systems include but are not limited to the Android system, the iOS system, the Windows Phone (WP) system, and the Ubuntu mobile operating system.
According to some embodiments, the terminal may be connected to a server through a network, which provides a communication link between the terminal and the server. The network may include various connection types, such as wired links, wireless communication links, or fiber-optic cables. It should be understood that the numbers of terminals, networks, and servers in fig. 3 are merely illustrative; there may be any number of each as required in practice. For example, the server may be a server cluster composed of a plurality of servers. The user can use the terminal to interact with the server through the network to obtain the detection result of the object's surface defects, among other things.
Next, the method for detecting surface defects of an object provided by the embodiment of the present application is described with reference to the electronic device shown in fig. 1, the application scenario of the electronic device shown in fig. 2, and the system architecture of the method for detecting surface defects of an object shown in fig. 3. Fig. 4 schematically illustrates a flow chart of an object surface defect detection method provided by an embodiment of the present application. As shown in fig. 4, the method for detecting surface defects of an object at least comprises the following steps:
s401, collecting the image of the object to be detected through the binocular camera module to obtain the characteristic information of the image.
Specifically, the binocular camera module can shoot with a binocular camera: the binocular camera captures left and right viewpoint images of the same target object, a disparity map is obtained using a stereo matching algorithm, and the characteristic-information map of the target object's image is then derived from it.
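For illustration only, the stereo-matching step above can be sketched as naive SAD (sum of absolute differences) block matching; the window size, search range, and synthetic data are assumptions for demonstration, and a real system would use an optimized matcher such as semi-global matching:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=8):
    """Naive SAD block matching: for each pixel in the left view, find the
    horizontal shift d that minimizes the sum of absolute differences
    between a window around the pixel in the left view and the same window
    shifted left by d in the right view. Returns a per-pixel disparity map."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_sad, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                sad = float(np.abs(patch - cand).sum())
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right view is the left view shifted by a uniform amount, the recovered disparity in the image interior equals that shift.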
S402, determining a deep learning model corresponding to the image through an image analysis module.
Specifically, since the key points to be acquired differ between object types, different deep learning models need to be selected for different types of objects. For example, an optimized SSD (Single Shot MultiBox Detector) algorithm, or alternatively a YOLO (You Only Look Once) algorithm, may be used for a mobile phone screen or glass-ceramic, and an improved Faster R-CNN (Faster Region-based Convolutional Neural Network) algorithm may be used for knitted fabric.
In addition, embodiments of the application can select the algorithm according to the type of defect the target object may carry, so that the defect type can be further determined: for example, whether a defect on a mobile phone screen is a crack or a scratch, or whether a defect on a knitted fabric surface is caused by the texture or by a stain.
And S403, inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module.
The deep learning module can comprise a plurality of deep learning models, and objects of different categories correspond to different deep learning models.
The deep learning model in the embodiment of the application can output the detection result against the pictures issued by the manufacturer. For example, if the target object in the image is an Apple mobile phone, the communication module can send the image acquired by the binocular camera module to the server; the server retrieves the pictures issued by the manufacturer of the Apple mobile phone and sends them to the deep learning model through the communication module.
Specifically, the YOLO algorithm model applies a single Convolutional Neural Network (CNN) to the entire image, divides the image into a grid, and predicts class probabilities and bounding boxes for each grid cell. For example, a 100x100 image may be divided into a grid such as 7x7; for each cell, the network then predicts bounding boxes and the probability of each class (glass-ceramic bottle, bottle cap, handle, etc.).
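As a hedged illustration of the grid division just described (generic YOLO-style responsibility assignment, not code from the application), the cell responsible for predicting a given box can be computed as:

```python
def yolo_cell_for_box(cx, cy, img_w, img_h, S=7):
    """Return the (row, col) of the SxS grid cell responsible for a box
    whose center is (cx, cy) in pixel coordinates, YOLO-style."""
    col = int(cx / img_w * S)
    row = int(cy / img_h * S)
    # Clamp for centers lying exactly on the right/bottom image edge.
    return min(row, S - 1), min(col, S - 1)
```

For a 100x100 image with a 7x7 grid, a box centered at (50, 50) falls in cell (3, 3).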
The Faster R-CNN algorithm model integrates the four basic detection steps (candidate region generation, feature extraction, classification, and position refinement) into one deep network framework; no computation is repeated, and everything runs on the GPU (Graphics Processing Unit), which can effectively raise the speed of detecting knitted fabrics.
The SSD algorithm model combines ideas from the Faster R-CNN and YOLO algorithms. It adopts a regression-based approach (similar to YOLO), directly regressing the category and position of an object in one network, so detection is fast. At the same time, it also uses a region-based concept (similar to Faster R-CNN), taking a number of candidate regions as ROIs (regions of interest) during detection. The backbone network of the SSD algorithm is a conventional image classification network, for example and without limitation VGG (Visual Geometry Group) or ResNet (Residual Network).
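SSD's region concept rests on a grid of default (anchor) boxes generated at several scales and aspect ratios. The sketch below illustrates that generic idea only; the scale and ratio values are arbitrary assumptions, not parameters from the application:

```python
import math

def ssd_default_boxes(S=4, scales=(0.2, 0.4), ratios=(1.0, 2.0, 0.5)):
    """Generate SSD-style default boxes on an SxS feature map: each cell
    gets one box per (scale, aspect-ratio) pair, centered on the cell,
    as (cx, cy, w, h) in normalized [0, 1] coordinates."""
    boxes = []
    for row in range(S):
        for col in range(S):
            cx, cy = (col + 0.5) / S, (row + 0.5) / S
            for s in scales:
                for r in ratios:
                    # Aspect ratio r stretches width and shrinks height
                    # (or vice versa) while keeping the box area s*s.
                    boxes.append((cx, cy, s * math.sqrt(r), s / math.sqrt(r)))
    return boxes
```

A 4x4 map with 2 scales and 3 ratios yields 4*4*2*3 = 96 default boxes, the first centered at (0.125, 0.125).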
And S404, outputting a detection result through the deep learning model.
In a specific example, when a user purchases a liquid crystal television, the user can wear AR glasses to detect its screen. After the binocular camera module of the AR glasses collects an image of the screen, the characteristic information of the image is obtained; the image analysis module determines that the deep learning model corresponding to the image is the SSD algorithm model; the characteristic information of the image is then input into the SSD algorithm model in the deep learning module, and the SSD algorithm model outputs the detection result for the liquid crystal television screen.
In the embodiment of the application, the image of the object to be detected can be collected through the binocular camera module to obtain the characteristic information of the image; the deep learning model corresponding to the image is determined through the image analysis module; the characteristic information of the image is input, through the binocular camera module, into that deep learning model in the deep learning module; and the detection result is output through the deep learning model. In this way, surface defects of different types of objects can be accurately identified by the different deep learning models in the deep learning module, so that the user is alerted to risks such as possible defects on the surface of the target object, can purchase a satisfactory product, and has an improved user experience.
In some possible implementations, fig. 5 schematically illustrates a flow chart of an object surface defect detection method provided in an embodiment of the present application. As shown in fig. 5, the method for detecting surface defects of an object at least comprises the following steps:
s501, collecting an image of an object to be detected, and determining key points in the image.
Wherein the image of the object to be detected may comprise two views.
Specifically, the binocular camera module searches for key points that appear in both the left view and the right view, obtaining a plurality of matching point pairs to which the triangulation principle can then be applied.
And S502, determining the characteristic information of the image based on the key points in the image.
Preferably, the embodiment of the application can determine the disparity of the image based on the abscissa values of the key points in the left view and the right view, and then determine the characteristic information of the image based on that disparity.
Specifically, by the triangulation principle, the difference between the abscissas at which a key point is imaged in the left and right views (the disparity) is inversely proportional to the distance from the key point to the imaging plane; the characteristic information is obtained from this relationship.
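The inverse-proportional relationship above is the standard stereo depth formula Z = f * B / d (focal length times baseline over disparity). A minimal sketch, with illustrative camera parameters that are not taken from the application:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d: the key point's distance to the imaging plane
    is inversely proportional to its disparity between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 0.1 m baseline, a 10 px disparity corresponds to a depth of 7 m; halving the disparity doubles the depth.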
And S503, determining a deep learning model corresponding to the image through the image analysis module.
Specifically, S503 is identical to S402, and is not described herein again.
And S504, inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module.
Specifically, S504 is identical to S403, and is not described here.
And S505, outputting a detection result through the deep learning model.
Specifically, S505 is identical to S404, and is not described herein again.
In some possible implementations, fig. 6 schematically illustrates a flow chart of an object surface defect detection method provided in an embodiment of the present application. As shown in fig. 6, the method for detecting surface defects of an object at least comprises the following steps:
s601, collecting the image of the object to be detected through the binocular camera module to obtain the characteristic information of the image.
Specifically, S601 is identical to S401, and is not described herein again.
S602, determining the category of the object to be detected based on the image of the object to be detected.
Possibly, the image analysis module may determine the category of the object to be detected according to pixel information in the image and edge information of the target object.
S603, determining a deep learning model corresponding to the image of the object to be detected based on the category of the object to be detected.
Specifically, the SSD algorithm model may be used when the image analysis module determines that the target object in the image is an electronic device such as a mobile phone or a tablet computer, and the Faster R-CNN algorithm model may be used when it determines that the target object is a daily-use item such as clothes, shoes, or a hat.
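The category-to-model mapping described above amounts to a lookup table. A minimal sketch, in which the category labels, model names, and fallback choice are all illustrative assumptions rather than details from the application:

```python
# Hypothetical registry mirroring the dispatch described in the text:
# SSD for electronics, Faster R-CNN for daily-use items.
MODEL_BY_CATEGORY = {
    "mobile_phone": "ssd",
    "tablet": "ssd",
    "clothing": "faster_rcnn",
    "shoes": "faster_rcnn",
    "hat": "faster_rcnn",
}

def select_model(category, default="yolo"):
    """Pick the deep learning model for a detected object category,
    falling back to a default model for unknown categories."""
    return MODEL_BY_CATEGORY.get(category, default)
```

Keeping the mapping in data rather than branching logic makes it easy for the deep learning module to add new object categories without touching the dispatch code.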
And S604, inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module.
Specifically, S604 is identical to S403, and is not described herein again.
And S605, outputting a detection result through the deep learning model.
Specifically, S605 is identical to S404, and is not described herein again.
In some possible embodiments, the detection result may include: the presence or absence of defects and/or position information of defects on the surface of the object to be inspected.
Further, the embodiment of the application can output whether the surface of the object to be detected has defects and/or position information of the defects through the deep learning model.
For example, when a user detects a sweater using the Faster R-CNN algorithm model, the model may output that there is a defect on the left side of the sweater's neckline.
In some possible implementations, fig. 7 schematically illustrates a flow chart of an object surface defect detection method provided in an embodiment of the present application. As shown in fig. 7, the method for detecting surface defects of an object at least comprises the following steps:
and S701, acquiring the image of the object to be detected through the binocular camera module to obtain the characteristic information of the image.
Specifically, S701 is identical to S401, and is not described herein again.
S702, determining a deep learning model corresponding to the image through an image analysis module.
Specifically, S702 is identical to S402, and is not described herein again.
And S703, inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module.
Specifically, S703 is identical to S403, and is not described herein again.
And S704, outputting a detection result through a deep learning model.
Specifically, S704 is identical to S404, and is not described herein again.
S705, feedback information sent by a user is received through the optical machine module, and the feedback information is sent to the deep learning model corresponding to the image of the object to be detected in the deep learning module.
And S706, controlling the updating of the data in the deep learning model through the control module.
Possibly, when the deep learning module receives error reporting information fed back by the user and sent by the feedback module, the data of the training set in the deep learning model can be automatically updated.
Possibly, after receiving the user's error report from the feedback module, the deep learning module stores it and updates the training-set data in the deep learning model on a preset period, for example every 15 minutes.
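The periodic batching just described can be sketched as a buffer that accumulates error reports and flushes them into the training set once per period. This is a hypothetical illustration: the class name, flush callback, and injectable clock are assumptions, not parts of the application:

```python
import time

class FeedbackBuffer:
    """Accumulates user error reports and flushes them to a training-set
    update callback once per period (the text suggests e.g. 15 minutes).
    The clock is injectable so the behavior is testable."""

    def __init__(self, flush_cb, period_s=900.0, now=time.monotonic):
        self.flush_cb = flush_cb
        self.period_s = period_s
        self.now = now
        self.pending = []
        self.last_flush = now()

    def report(self, item):
        """Store one error report; flush all pending reports if a full
        period has elapsed since the last flush."""
        self.pending.append(item)
        if self.now() - self.last_flush >= self.period_s:
            self.flush_cb(list(self.pending))
            self.pending.clear()
            self.last_flush = self.now()
```

Batching updates this way avoids retraining the model on every single user report while still bounding how stale the training set can get.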
And S707, training the data in the deep learning model through the deep learning model to optimize the deep learning model.
Through deep learning, the embodiment of the application learns from information about massive numbers of objects, trains on the relationships between the characteristics of the target object, and models them with a hierarchical, layered model. The deep learning model performs various pattern recognition tasks, including clustering, classification, and regression, and further identifies errors in the current model from the error information fed back by the user, thereby optimizing the deep learning model.
In addition, the embodiment of the application can also perform secondary identification on the error information fed back by the user.
Specifically, the communication module can upload the error information fed back by the user to the server side for secondary identification; the server analyzes it, and if the user's judgment is undisputed, the feedback can be added directly to the training set for continued training. If it is disputed, the error information from this feedback may not be used.
Fig. 8 is a schematic structural diagram of an object surface defect detecting apparatus according to an exemplary embodiment of the present application. The object surface defect detection device may be disposed in an electronic device such as a terminal device or a server, and executes the object surface defect detection method according to any of the embodiments described above. As shown in fig. 8, the object surface defect detecting apparatus includes:
the obtaining module 81 is used for collecting the image of the object to be detected through the binocular camera module to obtain the characteristic information of the image;
a determining module 82, configured to determine, by the image analysis module, a deep learning model corresponding to the image;
the input module 83 is used for inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module; wherein the deep learning module comprises a plurality of deep learning models;
and an output module 84, configured to output a detection result through the deep learning model.
In the embodiment of the application, an image of the object to be detected can be collected through the binocular camera module to obtain feature information of the image, the deep learning model corresponding to the image is determined through the image analysis module, the feature information of the image is input into that deep learning model in the deep learning module through the binocular camera module, and the detection result is output through the deep learning model. In this way, surface defects of different types of objects can be accurately identified through the different deep learning models in the deep learning module, so that risks such as defects possibly present on the surface of a target object are prompted to the user, helping the user purchase a satisfactory product and improving the user experience.
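The four-module flow above can be sketched minimally as follows. The `camera`, `analyzer`, and per-category model objects are illustrative stand-ins, not the disclosed implementations:

```python
class DefectDetector:
    """Minimal sketch of the pipeline: acquire image and features,
    determine the matching model, run it, return the result."""

    def __init__(self, camera, analyzer, models):
        self.camera = camera      # binocular camera module: () -> (image, features)
        self.analyzer = analyzer  # image analysis module: image -> category
        self.models = models      # deep learning module: {category: model}

    def detect(self):
        image, features = self.camera()   # collect image of the object
        category = self.analyzer(image)   # decide which deep learning model applies
        model = self.models[category]     # one model per object category
        return model(features)            # output the detection result
```

A usage sketch: `DefectDetector(lambda: ("img", [1, 2, 3]), lambda img: "glass", {"glass": lambda f: sum(f)}).detect()` runs the three stages in order on stub modules.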
In some possible embodiments, the obtaining module 81 includes:
the first determining unit is used for acquiring the image of the object to be detected and determining key points in the image;
a second determining unit configured to determine feature information of the image based on the key points in the image.
In some possible embodiments, the image comprises two views;
the second determining unit is specifically configured to:
determining the parallax of the image based on the abscissa values of the key points in the two views;
determining feature information of the image based on the disparity of the image.
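The disparity computation of the second determining unit can be sketched in Python. The function names and the focal-length/baseline calibration values are illustrative assumptions; the standard stereo relation depth = focal × baseline / disparity is used:

```python
def disparity_to_depth(x_left, x_right, focal_px=700.0, baseline_m=0.06):
    """Estimate depth of one keypoint from its abscissa (x) values in the
    left and right views of a calibrated binocular camera.

    focal_px and baseline_m are illustrative calibration values.
    """
    disparity = x_left - x_right  # in pixels; larger for nearer points
    if disparity <= 0:
        raise ValueError("keypoint must lie further right in the left view")
    return focal_px * baseline_m / disparity  # depth in meters

def keypoint_features(left_pts, right_pts):
    """Build a simple per-keypoint depth feature vector from matched
    (x, y) keypoints in the two views."""
    return [disparity_to_depth(lx, rx)
            for (lx, _), (rx, _) in zip(left_pts, right_pts)]
```

With a 700 px focal length and 6 cm baseline, a 10-pixel disparity corresponds to a depth of 4.2 m; such per-keypoint depths are one plausible form of the "feature information" the embodiment derives from parallax.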
In some possible embodiments, the determining module 82 includes:
the third determining unit is used for determining the category of the object to be detected based on the image of the object to be detected;
and the fourth determining unit is used for determining the deep learning model corresponding to the image of the object to be detected based on the category of the object to be detected.
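The category-to-model selection of the third and fourth determining units can be sketched as a registry lookup. The category names ("glass", "fabric") echo the examples elsewhere in the description, but the registry contents and stub predictors are assumptions of this sketch:

```python
# Hypothetical deep learning module: one model per object category.
# The lambdas are stand-ins for trained deep learning models.
DEEP_LEARNING_MODULE = {
    "glass":  lambda feats: {"defect": max(feats) > 3.0, "position": None},
    "fabric": lambda feats: {"defect": min(feats) < 0.5, "position": None},
}

def select_model(category):
    """Determine the deep learning model corresponding to the
    category of the object to be detected."""
    try:
        return DEEP_LEARNING_MODULE[category]
    except KeyError:
        raise KeyError(f"no deep learning model registered for category {category!r}")
```

The design point is simply that model selection is a function of the detected category, so new object types can be supported by registering additional models without changing the pipeline.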
In some possible embodiments, the input module 83 is specifically configured to: and inputting the characteristic information of the image into a deep learning model corresponding to the image of the object to be detected in the deep learning module.
In some possible embodiments, the detection result includes: whether the surface of the object to be detected has defects and/or position information of the defects;
the output module 84 is specifically configured to: and outputting whether the surface of the object to be detected has defects and/or position information of the defects through the deep learning model.
In some possible embodiments, the electronic device further comprises an opto-mechanical module;
after the output module 84, the apparatus further includes: and the display module is used for displaying the detection result to a user through the optical machine module.
In some possible embodiments, the electronic device further comprises a feedback module and a control module;
after the output module 84, the apparatus further includes:
the sending unit is used for receiving feedback information sent by the user through the optical-mechanical module and sending the feedback information to the deep learning model corresponding to the image of the object to be detected in the deep learning module;
the updating unit is used for controlling the updating of the data in the deep learning model through the control module;
and the optimization unit is used for training the data in the deep learning model through the deep learning model so as to optimize the deep learning model.
It should be noted that, when the object surface defect detecting apparatus provided in the foregoing embodiment executes the object surface defect detecting method, only the division of the functional modules is taken as an example, and in practical applications, the functions may be distributed to different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules, so as to complete all or part of the functions described above. In addition, the object surface defect detection apparatus provided in the above embodiments and the object surface defect detection method embodiments belong to the same concept, and details of implementation processes thereof are referred to in the method embodiments, and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic device 90 may include: at least one processor 901, at least one network interface 904, a user interface 903, a memory 905, and at least one communication bus 902.
Wherein a communication bus 902 is used to enable connective communication between these components.
The user interface 903 may include a Display screen (Display) and a Camera (Camera); optionally, the user interface 903 may also include a standard wired interface and a wireless interface.
The network interface 904 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The Memory 905 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 905 includes a non-transitory computer-readable medium. The memory 905 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 905 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described method embodiments, and the like; the data storage area may store the data referred to in the above respective method embodiments. The memory 905 may optionally be at least one storage device located remotely from the processor 901. As shown in fig. 9, the memory 905, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program for object surface defect detection.
In the electronic device 90 shown in fig. 9, the user interface 903 is mainly used as an interface for providing input for a user, and acquiring data input by the user; the processor 901 may be configured to invoke an application program for detecting the surface defect of the object stored in the memory 905, and specifically perform the following operations:
acquiring an image of an object to be detected through the binocular camera module to obtain characteristic information of the image;
determining a deep learning model corresponding to the image through the image analysis module;
inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module; wherein the deep learning module comprises a plurality of deep learning models;
and outputting a detection result through the deep learning model.
In a possible embodiment, when the processor 901 performs the acquiring of the image of the object to be detected by the binocular camera module to obtain the feature information of the image, specifically performs:
acquiring an image of the object to be detected, and determining key points in the image;
determining feature information of the image based on the keypoints in the image.
In a possible embodiment, the image comprises two views;
when determining the feature information of the image based on the key points in the image, the processor 901 specifically performs:
determining the parallax of the image based on the abscissa values of the key points in the two views;
determining feature information of the image based on the disparity of the image.
In a possible embodiment, when the processor 901 performs the determining, by the image analysis module, of the deep learning model corresponding to the image of the object to be detected, it specifically performs:
determining the category of the object to be detected based on the image of the object to be detected;
and determining a deep learning model corresponding to the image of the object to be detected based on the category of the object to be detected.
In a possible embodiment, when the processor 901 performs the inputting of the feature information of the image into a deep learning model in the deep learning module through the binocular camera module, it specifically performs: inputting the feature information of the image into the deep learning model corresponding to the image of the object to be detected in the deep learning module.
In one possible embodiment, the detection result includes: whether the surface of the object to be detected has defects and/or position information of the defects;
when the processor 901 executes the output of the detection result by the deep learning model, specifically: and outputting whether the surface of the object to be detected has defects and/or position information of the defects through the deep learning model.
In a possible embodiment, the electronic device further comprises an opto-mechanical module;
the processor 901, after executing the outputting of the detection result by the deep learning model, further executes: and the detection result is displayed to a user through the optical-mechanical module.
In one possible embodiment, the electronic device further comprises a feedback module and a control module;
the processor 901 performs the output of the detection result by the deep learning model, and then performs
Receiving feedback information sent by the user through the optical-mechanical module, and sending the feedback information to a deep learning model corresponding to the image of the object to be detected in the deep learning module;
controlling, by the control module, updating of data in the deep learning model;
training data in the deep learning model through the deep learning model to optimize the deep learning model.
Embodiments of the present application also provide a computer-readable storage medium having stored therein instructions, which when executed on a computer or a processor, cause the computer or the processor to perform one or more of the steps in the embodiments shown in fig. 4-7. The respective constituent modules of the above object surface defect detecting apparatus may be stored in the computer-readable storage medium if they are implemented in the form of software functional units and sold or used as independent products.
In the above embodiments, the implementation may be realized wholly or partially in software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in, or transmitted over, a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Versatile Disc (DVD)), or a semiconductor medium (e.g., Solid State Drive (SSD)).
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. And the aforementioned storage medium includes: various media capable of storing program codes, such as a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk. The technical features in the present examples and embodiments may be arbitrarily combined without conflict.
The above-described embodiments are merely preferred embodiments of the present application, and are not intended to limit the scope of the present application, and various modifications and improvements made to the technical solutions of the present application by those skilled in the art without departing from the design spirit of the present application should fall within the protection scope defined by the claims of the present application.
Claims (12)
1. A method of object surface defect detection for an electronic device including a binocular camera module, an image analysis module in communication with the binocular camera module, and a deep learning module in communication with the image analysis module, the method comprising:
acquiring an image of an object to be detected through the binocular camera module to obtain characteristic information of the image;
determining a deep learning model corresponding to the image through the image analysis module;
inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module; wherein the deep learning module comprises a plurality of deep learning models;
and outputting a detection result through the deep learning model.
2. The method of claim 1, wherein the acquiring the image of the object to be detected by the binocular camera module to obtain the characteristic information of the image comprises:
acquiring an image of the object to be detected, and determining key points in the image;
determining feature information of the image based on the keypoints in the image.
3. The method of claim 2, wherein the image comprises two views;
the determining feature information of the image based on the key points in the image comprises:
determining the parallax of the image based on the abscissa values of the key points in the two views;
determining feature information of the image based on the disparity of the image.
4. The method of claim 1, wherein the determining, by the image analysis module, of the deep learning model corresponding to the image of the object to be detected comprises:
determining the category of the object to be detected based on the image of the object to be detected;
and determining a deep learning model corresponding to the image of the object to be detected based on the category of the object to be detected.
5. The method of claim 4, wherein the inputting of the feature information of the image into a deep learning model in the deep learning module through the binocular camera module comprises: inputting the feature information of the image into the deep learning model corresponding to the image of the object to be detected in the deep learning module.
6. The method of claim 5, wherein the detection result comprises: whether the surface of the object to be detected has defects and/or position information of the defects;
the outputting of the detection result through the deep learning model includes: and outputting whether the surface of the object to be detected has defects and/or position information of the defects through the deep learning model.
7. The method of claim 6, wherein the electronic device further comprises an opto-mechanical module;
after the detecting result is output through the deep learning model, the method further comprises: and the detection result is displayed to a user through the optical-mechanical module.
8. The method of any one of claims 1-7, wherein the electronic device further comprises a feedback module and a control module;
after the detecting result is output through the deep learning model, the method further comprises:
receiving feedback information sent by the user through the optical-mechanical module, and sending the feedback information to a deep learning model corresponding to the image of the object to be detected in the deep learning module;
controlling, by the control module, updating of data in the deep learning model;
training data in the deep learning model through the deep learning model to optimize the deep learning model.
9. An object surface defect detection apparatus, the apparatus for use in an electronic device comprising a binocular camera module, an image analysis module in communication with the binocular camera module, and a deep learning module in communication with the image analysis module, the apparatus comprising:
the obtaining module is used for collecting the image of the object to be detected through the binocular camera module to obtain the characteristic information of the image;
the determining module is used for determining a deep learning model corresponding to the image through the image analysis module;
the input module is used for inputting the characteristic information of the image into a deep learning model in the deep learning module through the binocular camera module; wherein the deep learning module comprises a plurality of deep learning models;
and the output module is used for outputting the detection result through the deep learning model.
10. An electronic device, comprising: a binocular camera module, an image analysis module in communication with the binocular camera module, and a deep learning module in communication with the image analysis module,
the binocular camera module is used for collecting images of an object to be detected to obtain characteristic information of the images;
the image analysis module is used for determining a deep learning model corresponding to the image;
the deep learning module is used for inputting the characteristic information of the image sent by the binocular camera module into a deep learning model in the deep learning module; wherein the deep learning module comprises a plurality of deep learning models;
the deep learning module is also used for outputting a detection result.
11. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to perform the method steps according to any of claims 1-8.
12. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps according to any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011529483.6A CN112669271A (en) | 2020-12-22 | 2020-12-22 | Object surface defect detection method, related device and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011529483.6A CN112669271A (en) | 2020-12-22 | 2020-12-22 | Object surface defect detection method, related device and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112669271A true CN112669271A (en) | 2021-04-16 |
Family
ID=75407608
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011529483.6A Withdrawn CN112669271A (en) | 2020-12-22 | 2020-12-22 | Object surface defect detection method, related device and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112669271A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104794717A (en) * | 2015-04-27 | 2015-07-22 | 中国科学院光电技术研究所 | Binocular vision system based depth information comparison method |
CN107909107A (en) * | 2017-11-14 | 2018-04-13 | 深圳码隆科技有限公司 | Fiber check and measure method, apparatus and electronic equipment |
CN108154508A (en) * | 2018-01-09 | 2018-06-12 | 北京百度网讯科技有限公司 | Method, apparatus, storage medium and the terminal device of product defects detection positioning |
CN110148106A (en) * | 2019-01-18 | 2019-08-20 | 华晨宝马汽车有限公司 | A kind of system and method using deep learning model inspection body surface defect |
CN111583223A (en) * | 2020-05-07 | 2020-08-25 | 上海闻泰信息技术有限公司 | Defect detection method, defect detection device, computer equipment and computer readable storage medium |
- 2020-12-22: application CN202011529483.6A filed (CN); published as CN112669271A; status: withdrawn
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114398818A (en) * | 2021-06-02 | 2022-04-26 | 江苏盛邦纺织品有限公司 | Textile jacquard detection method and system based on deep learning |
CN114398818B (en) * | 2021-06-02 | 2024-05-24 | 中科维卡(苏州)自动化科技有限公司 | Textile jacquard detection method and system based on deep learning |
CN113283396A (en) * | 2021-06-29 | 2021-08-20 | 艾礼富电子(深圳)有限公司 | Target object class detection method and device, computer equipment and storage medium |
CN116168034A (en) * | 2023-04-25 | 2023-05-26 | 深圳思谋信息科技有限公司 | Method, device, equipment and storage medium for detecting defect of knitted fabric |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112669271A (en) | Object surface defect detection method, related device and computer storage medium | |
CN110674719B (en) | Target object matching method and device, electronic equipment and storage medium | |
EP3660703B1 (en) | Method, apparatus, and system for identifying device, storage medium, processor, and terminal | |
EP3550479A1 (en) | Augmented-reality-based offline interaction method and apparatus | |
CN110716645A (en) | Augmented reality data presentation method and device, electronic equipment and storage medium | |
CN109146943B (en) | Detection method, device and the electronic equipment of stationary object | |
CN111368934A (en) | Image recognition model training method, image recognition method and related device | |
US20120321193A1 (en) | Method, apparatus, and computer program product for image clustering | |
CN111597918A (en) | Training and detecting method and device of human face living body detection model and electronic equipment | |
CN107944414B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
KR20110088273A (en) | Mobile terminal and method for forming human network using mobile terminal | |
AU2019361220B2 (en) | Augmented reality system and method | |
CN104966086A (en) | Living body identification method and apparatus | |
CN113822427A (en) | Model training method, image matching device and storage medium | |
CN114677350A (en) | Connection point extraction method and device, computer equipment and storage medium | |
CN112783779B (en) | Method and device for generating test case, electronic equipment and storage medium | |
CN104969225B (en) | Automated graphics for visual search correct | |
CN112818733A (en) | Information processing method, device, storage medium and terminal | |
CN110168599A (en) | A kind of data processing method and terminal | |
KR20190045679A (en) | Method for Providing Augmented Reality Service and Apparatus Thereof | |
CN112163062B (en) | Data processing method and device, computer equipment and storage medium | |
CN111159168B (en) | Data processing method and device | |
CN113819913A (en) | Path planning method and device, computer equipment and storage medium | |
CN113742430A (en) | Method and system for determining number of triangle structures formed by nodes in graph data | |
CN111738282A (en) | Image recognition method based on artificial intelligence and related equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20210416 |