CN112907533A - Detection model training method, device, equipment and readable storage medium - Google Patents
- Publication number
- CN112907533A (application number CN202110185338.9A)
- Authority
- CN
- China
- Prior art keywords
- picture
- pixel point
- neural network
- training
- position information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Abstract
The invention provides a detection model training method, device, equipment and readable storage medium. The method comprises the following steps: acquiring a picture and the position information of each pixel point on the picture; and training a neural network based on the picture and the position information of each pixel point on the picture to obtain a detection model for automatic optical detection. Because the position information of each pixel point is added when the neural network is trained on the picture, the dimensionality of the picture data is increased, so that the finally trained detection model detects a picture by combining its picture features with the position information of each pixel point; detection errors caused by imaging distortion are thereby avoided, and the accuracy of automatic optical detection is improved.
Description
Technical Field
The invention relates to the technical field of optical detection, and in particular to a detection model training method, device, equipment and readable storage medium.
Background
At present, a picture is generally used directly as the input of a neural network in order to train the neural network into a detection model for automatic optical detection. During training, the neural network adjusts its neuron parameter values based on the picture features of the input pictures and on their training labels; pictures with the same training label are expected to have the same or similar picture features. However, because distortions such as geometric distortion and vignetting arise during illumination and imaging, the same target yields different picture features depending on its position in the field of view, i.e., pictures with the same training label end up with different picture features. As a result, the detection accuracy of the trained detection model is difficult to bring up to practical requirements.
For example, suppose a detection model is to be trained to detect whether the edge of a target is a straight line, and a target with a straight edge is imaged multiple times to obtain a plurality of pictures. Since all the pictures image the same target, their training labels are all "straight line". Owing to distortion, however, if the target is imaged at the edge of the field of view, its edge in the resulting picture 1 has some curvature, whereas if it is imaged in the middle of the field of view, its edge in the resulting picture 2 is a straight line. When pictures 1 and 2 are then used to train the neural network, the picture features it extracts from picture 1 differ greatly from those it extracts from picture 2 even though the two training labels are identical, so the neural network cannot adjust its neuron parameters to optimal values, and the detection accuracy of the resulting detection model is difficult to bring up to practical requirements.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a detection model training method, device, equipment and a readable storage medium.
In a first aspect, the present invention provides a detection model training method, including:
acquiring a picture and position information of each pixel point on the picture;
and training a neural network based on the picture and the position information of each pixel point on the picture to obtain a detection model for automatic optical detection.
Optionally, the step of obtaining the location information of each pixel point on the picture includes:
constructing a plane rectangular coordinate system by taking any pixel point on a picture as an origin to obtain coordinate information of each pixel point on the picture;
cutting the picture into a plurality of sub-pictures, and acquiring the minimum offset of each pixel point on the picture relative to the boundary of the sub-picture to which the pixel point belongs;
and taking the coordinate information of each pixel point on the picture and the minimum offset relative to the boundary of the sub-picture to which the pixel point belongs as the position information of each pixel point on the picture.
Optionally, the step of training the neural network based on the picture and the position information of each pixel point on the picture includes:
supplementing corresponding position information for each pixel point of the picture to obtain a multi-dimensional composite picture;
and inputting the composite picture into a neural network for training.
Optionally, the step of training the neural network based on the picture and the position information of each pixel point on the picture includes:
inputting the picture into a first neural network;
inputting the position information of each pixel point on the picture into a second neural network;
and inputting the output of the first neural network and the output of the second neural network into a third neural network for training.
Optionally, the neural network is a convolutional neural network.
In a second aspect, the present invention further provides a detection model training apparatus, including:
the acquisition module is used for acquiring a picture and position information of each pixel point on the picture;
and the training module is used for training the neural network based on the picture and the position information of each pixel point on the picture to obtain a detection model for automatic optical detection.
Optionally, the obtaining module is configured to:
constructing a plane rectangular coordinate system by taking any pixel point on a picture as an origin to obtain coordinate information of each pixel point on the picture;
cutting the picture into a plurality of sub-pictures, and acquiring the minimum offset of each pixel point on the picture relative to the boundary of the sub-picture to which the pixel point belongs;
and taking the coordinate information of each pixel point on the picture and the minimum offset relative to the boundary of the sub-picture to which the pixel point belongs as the position information of each pixel point on the picture.
Optionally, the training module is configured to:
supplementing corresponding position information for each pixel point of the picture to obtain a multi-dimensional composite picture;
and inputting the composite picture into a neural network for training.
In a third aspect, the present invention further provides a detection model training apparatus, which includes a processor, a memory, and a detection model training program stored on the memory and executable by the processor, wherein when the detection model training program is executed by the processor, the steps of the detection model training method as described above are implemented.
In a fourth aspect, the present invention further provides a readable storage medium, on which a detection model training program is stored, wherein the detection model training program, when executed by a processor, implements the steps of the detection model training method as described above.
In the invention, a picture and the position information of each pixel point on the picture are acquired, and a neural network is trained based on the picture and the position information of each pixel point on the picture to obtain a detection model for automatic optical detection. Because the position information of each pixel point is added when the neural network is trained on the picture, the dimensionality of the picture data is increased, so that the finally trained detection model detects a picture by combining its picture features with the position information of each pixel point; detection errors caused by imaging distortion can thus be avoided, and the accuracy of automatic optical detection is improved.
Drawings
FIG. 1 is a schematic diagram of the hardware structure of a detection model training apparatus according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an embodiment of the detection model training method of the present invention;
FIG. 3 is a schematic view of a scene in which a planar rectangular coordinate system is constructed with a pixel point on a picture as the origin in an embodiment;
FIG. 4 is a schematic view of a scene in which a picture is cropped in an embodiment;
FIG. 5 is a schematic diagram of the functional modules of an embodiment of the detection model training device of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In a first aspect, an embodiment of the present invention provides a detection model training apparatus.
Referring to fig. 1, fig. 1 is a schematic diagram of the hardware structure of a detection model training apparatus according to an embodiment of the present invention. In this embodiment, the detection model training apparatus may include a processor 1001 (e.g., a Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004 and a memory 1005. The communication bus 1002 realizes connection and communication among these components; the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); the network interface 1004 may optionally include a standard wired interface or a wireless interface (e.g., a Wi-Fi interface); the memory 1005 may be a high-speed Random Access Memory (RAM) or a non-volatile memory such as a magnetic disk memory, and may optionally be a storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration depicted in fig. 1 does not limit the present invention, which may include more or fewer components than shown, combine some components, or arrange the components differently.
With continued reference to FIG. 1, a memory 1005, which is one type of computer storage medium in FIG. 1, may include an operating system, a network communication module, a user interface module, and a detection model training program. The processor 1001 may call a detection model training program stored in the memory 1005, and execute the detection model training method provided by the embodiment of the present invention.
In a second aspect, an embodiment of the present invention provides a detection model training method.
Referring to fig. 2, fig. 2 is a schematic flow chart of an embodiment of the detection model training method of the present invention. As shown in fig. 2, in an embodiment, the method for training the detection model includes:
step S10, obtaining a picture and position information of each pixel point on the picture;
in this embodiment, the corresponding picture is obtained according to the function of the required detection model. For example, if a detection model for detecting whether the edge of the target is a straight line is to be trained, the target with the straight or non-straight edge is photographed to obtain a corresponding picture. And if the detection model for detecting the Mura defect needs to be trained, shooting the picture with the Mura defect or the picture without the Mura defect to obtain a corresponding picture. In addition to obtaining the picture, the position information of each pixel point on the picture also needs to be further obtained. The position information is used for representing the specific position of the pixel point on the picture.
Further, in an embodiment, the step of obtaining the location information of each pixel point on the picture includes:
step S101, a plane rectangular coordinate system is established by taking any pixel point on a picture as an origin, and coordinate information of each pixel point on the picture is obtained;
in this embodiment, a planar rectangular coordinate system is constructed with any one of the pixel points on the picture as an origin, and coordinate information of each pixel point on the picture is obtained in combination with the size of the picture. Referring to fig. 3, fig. 3 is a scene schematic diagram of constructing a rectangular plane coordinate system with a pixel point on a picture as an origin in an embodiment. As shown in fig. 3, a planar rectangular coordinate system is constructed with the pixel points at the upper left corner of the picture as the origin, the picture size is 4 × 4, the coordinate information of the pixel points located in the first row and the first column is (0, 0), the coordinate information of the pixel points located in the first row and the second column is (1, 0), the coordinate information of the pixel points located in the first row and the third column is (2, 0), and so on, the coordinate information of each pixel point on the picture can be obtained. It is easy to understand that fig. 3 is only a schematic illustration, and specifically, which pixel point on the picture is used as the origin to construct the planar rectangular coordinate system and how large the picture size is are all selected according to the actual situation.
Step S102, cutting the picture into a plurality of sub-pictures, and obtaining the minimum offset of each pixel point on the picture relative to the boundary of the sub-picture to which the pixel point belongs;
in this embodiment, the picture is cut into a plurality of sub-pictures, for example, the picture is cut into N sub-pictures, where N is a positive integer, and a specific value of N is set according to an actual need, which is not limited herein. Referring to fig. 4, fig. 4 is a schematic view of a scene for clipping a picture according to an embodiment. As shown in fig. 4, a picture is cut into 4 sub-pictures, taking sub-picture 1 as an example, the offsets of the pixel point in sub-picture 1 from the four boundaries of sub-picture 1 are L1, L2, L3 and L4, respectively, then the minimum value is selected from L1, L2, L3 and L4, for example, L1 is the minimum, and L1 is taken as the minimum offset of the pixel point from the boundary of the sub-picture to which the pixel point belongs. By analogy, the minimum offset of each pixel point on the picture relative to the boundary of the sub-picture to which the pixel point belongs can be obtained.
Step S103, using the coordinate information of each pixel point on the picture and the minimum offset relative to the boundary of the sub-picture to which the pixel point belongs as the position information of each pixel point on the picture.
In this embodiment, the coordinate information of each pixel point on the picture is obtained in step S101 and is marked as (X, Y), and the minimum offset of each pixel point on the picture with respect to the boundary of the sub-picture to which the pixel point belongs is obtained in step S102 and is marked as L. And taking the coordinate information of each pixel point on the picture and the minimum offset relative to the boundary of the sub-picture to which the pixel point belongs as the position information of each pixel point on the picture, namely marking the position information as (X, Y, L).
And step S20, training the neural network based on the picture and the position information of each pixel point on the picture to obtain a detection model for automatic optical detection.
In this embodiment, the neural network is trained based on the pictures and the position information of each pixel point on the pictures, each picture of course corresponding to a training label. For example, a picture obtained by photographing a target whose edge is a straight line carries the training label "straight line", while a picture of a non-straight target carries the label "non-straight line"; likewise, a picture with a Mura defect carries the label "Mura defect present" and a picture without one carries "Mura defect absent". The picture and the position information of each pixel point are taken as input; the neural network extracts features from this input, adjusts its internal neuron parameters in combination with the training labels, and training stops once a termination condition is met, the resulting neural network serving as the detection model for automatic optical detection. It is easy to understand that training a neural network requires a large number of samples, so the pictures referred to in this embodiment are multiple, i.e., the neural network is trained with a plurality of pictures and the position information of each pixel point on each picture as training samples.
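A single supervised training step of the kind described above might look like the following minimal sketch; PyTorch, the tiny linear network, the learning rate, and the example labels are all assumptions, since the embodiment prescribes no particular framework or architecture:

```python
import torch
import torch.nn as nn

# Toy stand-in for the neural network: 4-channel 8x8 composite inputs -> 2 classes
net = nn.Sequential(nn.Flatten(), nn.Linear(4 * 8 * 8, 2))
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(4, 4, 8, 8)     # 4 composite pictures (picture + position data)
labels = torch.tensor([0, 1, 0, 1])  # e.g. "straight line" / "non-straight line"

# One parameter adjustment: compare outputs with training labels, backpropagate
optimizer.zero_grad()
loss = criterion(net(inputs), labels)
loss.backward()
optimizer.step()
print(loss.item())
```

In practice this step is repeated over many samples until the termination condition is met.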
Take pictures obtained by imaging a target with a straight edge as an example. Owing to distortion, if the target is imaged at the edge of the field of view, its edge in the resulting picture 1 has some curvature; if it is imaged in the middle of the field of view, its edge in the resulting picture 2 is a straight line. The training labels of the two pictures are identical, so directly using pictures 1 and 2 as training samples in the conventional way interferes with the neural network: the training label of picture 1 is "straight line", yet the features extracted from picture 1 correspond to a non-straight line. To avoid this problem, in this embodiment the picture and the position information of each pixel point on the picture are used together as the training input, i.e., the dimensionality of the picture data is increased. The neural network then outputs its detection result by combining the picture features with the position information of each pixel point, and the situation in which the detection result fails to match reality because the picture is distorted by its imaging position can be avoided.
In the embodiment, a picture and position information of each pixel point on the picture are obtained; and training a neural network based on the picture and the position information of each pixel point on the picture to obtain a detection model for automatic optical detection. Through the embodiment, the position information of each pixel point on the picture is added when the neural network is trained through the picture, and the dimensionality of picture data is increased, so that the detection model obtained through final training can be combined with the picture characteristics and the position information of each pixel point on the picture to realize the detection of the picture, the situation that the detection error is caused by picture imaging distortion can be avoided, and the accuracy of optical automatic detection is improved.
Further, in one embodiment, step S20 includes:
supplementing corresponding position information for each pixel point of the picture to obtain a multi-dimensional composite picture; and inputting the composite picture into a neural network for training.
In this embodiment, the picture may be a black-and-white picture or a color picture, and the position information comprises (X, Y, L), so supplementing each pixel point with its position information is equivalent to adding three data dimensions to the picture. If the picture is black-and-white, the resulting multi-dimensional composite picture has 4 data dimensions; if it is a color picture, the composite picture has 6 data dimensions. The composite picture is input into the neural network for training, with the training label corresponding to the input picture as the expected output; throughout training, the neuron parameters in the neural network are adjusted so that the input and the expected output form a mapping. The training process itself is prior art and is not described here.
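The supplementation step amounts to channel-wise concatenation; the following sketch assumes `numpy` arrays in height-width-channel layout, which the embodiment does not mandate:

```python
import numpy as np

def composite_picture(picture, position):
    """Supplement each pixel point with its (X, Y, L) position information by
    appending the three position channels to the picture's own channels."""
    return np.concatenate([picture, position], axis=-1)

position = np.zeros((8, 8, 3))  # (X, Y, L) per pixel point
gray = np.zeros((8, 8, 1))      # black-and-white picture: 1 channel
rgb = np.zeros((8, 8, 3))       # color picture: 3 channels
print(composite_picture(gray, position).shape)  # (8, 8, 4)
print(composite_picture(rgb, position).shape)   # (8, 8, 6)
```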
Further, in one embodiment, step S20 includes:
inputting the picture into a first neural network; inputting the position information of each pixel point on the picture into a second neural network; and inputting the output of the first neural network and the output of the second neural network into a third neural network for training.
In this embodiment, the picture is input into a first neural network, and the position information of each pixel point on the picture is input into a second neural network; the outputs of these two networks are then input into a third neural network for training. The training label corresponding to the input picture serves as the expected output of the third neural network, and throughout training the neuron parameters of the first, second and third neural networks are adjusted so that the input and the expected output form a mapping.
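A sketch of this three-network arrangement is shown below; the PyTorch framework, the layer sizes, and the two-class output are illustrative assumptions, not details of the embodiment:

```python
import torch
import torch.nn as nn

class TwoBranchDetector(nn.Module):
    """First network for the picture, second for the per-pixel position
    information, third classifying their concatenated outputs."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.picture_net = nn.Sequential(          # first neural network
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.position_net = nn.Sequential(         # second neural network
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, num_classes)     # third neural network

    def forward(self, picture, position):
        feats = torch.cat([self.picture_net(picture),
                           self.position_net(position)], dim=1)
        return self.head(feats)

model = TwoBranchDetector()
# batch of 2: grayscale pictures (1 channel) and (X, Y, L) position maps (3 channels)
logits = model(torch.randn(2, 1, 8, 8), torch.randn(2, 3, 8, 8))
print(logits.shape)  # torch.Size([2, 2])
```

Because the three networks are composed into one module, backpropagation adjusts all of their parameters jointly.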
Further, in an embodiment, the neural network is a convolutional neural network.
In this embodiment, the neural network is preferably a convolutional neural network. Of course, other types of neural networks, such as a recurrent neural network, may be used as the neural network according to actual needs.
In a third aspect, an embodiment of the present invention further provides a detection model training apparatus.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating functional modules of an embodiment of the detection model training apparatus according to the present invention. As shown in fig. 5, in an embodiment, the detection model training apparatus includes:
the acquisition module 10 is configured to acquire a picture and position information of each pixel point on the picture;
and the training module 20 is configured to train the neural network based on the picture and the position information of each pixel point on the picture, so as to obtain a detection model for automatic optical detection.
Further, in an embodiment, the obtaining module 10 is configured to:
constructing a plane rectangular coordinate system by taking any pixel point on a picture as an origin to obtain coordinate information of each pixel point on the picture;
cutting the picture into a plurality of sub-pictures, and acquiring the minimum offset of each pixel point on the picture relative to the boundary of the sub-picture to which the pixel point belongs;
and taking the coordinate information of each pixel point on the picture and the minimum offset relative to the boundary of the sub-picture to which the pixel point belongs as the position information of each pixel point on the picture.
Further, in an embodiment, the training module 20 is configured to:
supplementing corresponding position information for each pixel point of the picture to obtain a multi-dimensional composite picture;
and inputting the composite picture into a neural network for training.
Further, in an embodiment, the training module 20 is configured to:
inputting the picture into a first neural network;
inputting the position information of each pixel point on the picture into a second neural network;
and inputting the output of the first neural network and the output of the second neural network into a third neural network for training.
Further, in an embodiment, the neural network is a convolutional neural network.
The function implementation of each module in the detection model training device corresponds to each step in the detection model training method embodiment, and the function and implementation process are not described in detail here.
In a fourth aspect, the embodiment of the present invention further provides a readable storage medium.
The readable storage medium of the present invention stores a detection model training program, wherein the detection model training program, when executed by a processor, implements the steps of the detection model training method as described above.
The method implemented when the detection model training program is executed may refer to various embodiments of the detection model training method of the present invention, and details thereof are not repeated herein.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for causing a terminal device to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A detection model training method is characterized by comprising the following steps:
acquiring a picture and position information of each pixel point on the picture;
and training a neural network based on the picture and the position information of each pixel point on the picture to obtain a detection model for automatic optical detection.
2. The detection model training method according to claim 1, wherein the step of obtaining the position information of each pixel point on the picture comprises:
constructing a plane rectangular coordinate system by taking any pixel point on a picture as an origin to obtain coordinate information of each pixel point on the picture;
cutting the picture into a plurality of sub-pictures, and acquiring the minimum offset of each pixel point on the picture relative to the boundary of the sub-picture to which the pixel point belongs;
and taking the coordinate information of each pixel point on the picture and the minimum offset relative to the boundary of the sub-picture to which the pixel point belongs as the position information of each pixel point on the picture.
3. The detection model training method according to claim 2, wherein the step of training the neural network based on the picture and the position information of each pixel point on the picture comprises:
supplementing corresponding position information for each pixel point of the picture to obtain a multi-dimensional composite picture;
and inputting the composite picture into a neural network for training.
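A minimal sketch of claim 3, under the assumption that the position information forms extra per-pixel channels: concatenating them with the picture yields the multi-dimensional composite picture that is fed to the neural network.

```python
import numpy as np

# grayscale picture, H x W x 1, and per-pixel position information with three
# illustrative channels (x, y, minimal offset) — the channel layout is an
# assumption, not specified by the claim
picture = np.random.rand(4, 6, 1)
position = np.random.rand(4, 6, 3)

# supplement each pixel with its position info -> multi-dimensional composite
composite = np.concatenate([picture, position], axis=-1)
print(composite.shape)  # (4, 6, 4) — a 4-channel input for the network
```

A convolutional network then simply takes 4 input channels instead of 1; no other change to the training loop is implied.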
4. The detection model training method according to claim 2, wherein the step of training the neural network based on the picture and the position information of each pixel point on the picture comprises:
inputting the picture into a first neural network;
inputting the position information of each pixel point on the picture into a second neural network;
and inputting the output of the first neural network and the output of the second neural network into a third neural network for training.
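The two-branch arrangement of claim 4 can be sketched with toy stand-in layers (plain NumPy, not the actual patented networks): the picture and the position information are processed by separate networks, and their outputs are fused by a third.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w):                 # stand-in for a trainable network layer
    return np.maximum(x @ w, 0)  # linear + ReLU

H, W = 4, 6
picture = rng.random((H * W, 1))    # flattened picture, one value per pixel
position = rng.random((H * W, 3))   # flattened per-pixel position info

w1 = rng.random((1, 8))   # first neural network: processes the picture
w2 = rng.random((3, 8))   # second neural network: processes position info
w3 = rng.random((16, 2))  # third neural network: fuses both outputs

fused = np.concatenate([dense(picture, w1), dense(position, w2)], axis=-1)
logits = dense(fused, w3)  # joint features for the detection head
print(logits.shape)        # (24, 2)
```

The design choice here is the same as in claim 3 but moves the fusion from the input to a feature level, letting each branch learn its own representation first.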
5. The detection model training method of any one of claims 1 to 4, wherein the neural network is a convolutional neural network.
6. A detection model training apparatus, comprising:
the acquisition module is used for acquiring a picture and position information of each pixel point on the picture;
and the training module is used for training the neural network based on the picture and the position information of each pixel point on the picture to obtain a detection model for automatic optical detection.
7. The detection model training apparatus according to claim 6, wherein the obtaining module is configured to:
constructing a plane rectangular coordinate system by taking any pixel point on a picture as an origin to obtain coordinate information of each pixel point on the picture;
cutting the picture into a plurality of sub-pictures, and acquiring the minimum offset of each pixel point on the picture relative to the boundary of the sub-picture to which the pixel point belongs;
and taking the coordinate information of each pixel point on the picture and the minimum offset relative to the boundary of the sub-picture to which the pixel point belongs as the position information of each pixel point on the picture.
8. The detection model training apparatus according to claim 7, wherein the training module is configured to:
supplementing corresponding position information for each pixel point of the picture to obtain a multi-dimensional composite picture;
and inputting the composite picture into a neural network for training.
9. A detection model training device, comprising a processor, a memory, and a detection model training program stored on the memory and executable by the processor, wherein the detection model training program, when executed by the processor, implements the steps of the detection model training method according to any one of claims 1 to 5.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a detection model training program, wherein the detection model training program, when executed by a processor, implements the steps of the detection model training method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110185338.9A CN112907533A (en) | 2021-02-10 | 2021-02-10 | Detection model training method, device, equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112907533A true CN112907533A (en) | 2021-06-04 |
Family
ID=76123666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110185338.9A Pending CN112907533A (en) | 2021-02-10 | 2021-02-10 | Detection model training method, device, equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112907533A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778867A (en) * | 2016-12-15 | 2017-05-31 | 北京旷视科技有限公司 | Object detection method and device, neural network training method and device |
CN108154508A (en) * | 2018-01-09 | 2018-06-12 | 北京百度网讯科技有限公司 | Method, apparatus, storage medium and the terminal device of product defects detection positioning |
CN109978004A (en) * | 2019-02-21 | 2019-07-05 | 平安科技(深圳)有限公司 | Image-recognizing method and relevant device |
CN111539484A (en) * | 2020-04-29 | 2020-08-14 | 北京市商汤科技开发有限公司 | Method and device for training neural network |
- 2021-02-10: application CN202110185338.9A filed; publication CN112907533A (en); status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110223226B (en) | Panoramic image splicing method and system | |
JP4468442B2 (en) | Imaging system performance measurement | |
Brauers et al. | Multispectral filter-wheel cameras: Geometric distortion model and compensation algorithms | |
US20090161982A1 (en) | Restoring images | |
CN110400278B (en) | Full-automatic correction method, device and equipment for image color and geometric distortion | |
KR20100020903A (en) | Image identifying method and imaging apparatus | |
US8687068B2 (en) | Pattern of color codes | |
CN111383254A (en) | Depth information acquisition method and system and terminal equipment | |
CN115239683A (en) | Detection method of circuit board, model training method and device and electronic equipment | |
CN114387450A (en) | Picture feature extraction method and device, storage medium and computer equipment | |
JP2018137636A (en) | Image processing device and image processing program | |
CN114549329A (en) | Image inpainting method, apparatus and medium | |
JP2021189527A (en) | Information processing device, information processing method, and program | |
CN112907533A (en) | Detection model training method, device, equipment and readable storage medium | |
CN116385567A (en) | Method, device and medium for obtaining color card ROI coordinate information | |
CN111401365B (en) | OCR image automatic generation method and device | |
CN111062984B (en) | Method, device, equipment and storage medium for measuring area of video image area | |
CN110853087B (en) | Parallax estimation method, device, storage medium and terminal | |
CN113048899A (en) | Thickness measuring method and system based on line structured light | |
CN115063813B (en) | Training method and training device of alignment model aiming at character distortion | |
US9449251B2 (en) | Image processing apparatus, image processing method, and medium | |
US11893706B2 (en) | Image correction device | |
CN109961083A (en) | For convolutional neural networks to be applied to the method and image procossing entity of image | |
CN113269728B (en) | Visual edge-tracking method, device, readable storage medium and program product | |
CN112053406B (en) | Imaging device parameter calibration method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210604 |