WO2022211064A1 - Information reading device - Google Patents
Information reading device
- Publication number
- WO2022211064A1 (PCT/JP2022/016728)
- Authority: WO (WIPO/PCT)
Classifications
- G06K7/1404: Methods for optical code recognition
- G06K7/1408: Methods for optical code recognition, the method being specifically adapted for the type of code
- G06K7/1417: 2D bar codes
- G06K7/146: Methods for optical code recognition, the method including quality enhancement steps
- G06K7/1465: Quality enhancement using several successive scans of the optical code
- G06K2007/10524: Hand-held scanners
- G06N3/0499: Feedforward networks
- G06N3/09: Supervised learning
- G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/20036: Morphological image processing
- G06T2207/20081: Training; Learning
- G06T2207/30204: Marker
Definitions
- the technology disclosed in this specification relates to technology for reading information recorded in a two-dimensional information code.
- Patent Document 1 discloses a reader system that reads the information in a two-dimensional code while adjusting the reading conditions (brightness, exposure time, presence/absence of a filter). If the reader system fails to read the information due to deterioration of the density of the two-dimensional code, the influence of external light, or the like, it reads the code using the reading conditions indicated by an extension bank instead of those indicated by a reference bank. An extension bank is generated by adjusting the reference bank according to predetermined adjustment rules.
- The above technology, however, only adjusts the reading conditions. Even with adjusted reading conditions, if the information needed to read the two-dimensional code (for example, information indicating the position of the two-dimensional code) cannot be detected from the image of the code, the information in the two-dimensional code cannot be read.
- This specification provides a technique for coping with such situations, in which the information in the two-dimensional code cannot be read.
- The information reading device disclosed in this specification comprises: a camera capable of capturing an image of an information code having a two-dimensional code area; a first code processing execution unit that executes first code processing including a detection process of detecting, from an image of a first information code captured by the camera, first relational information, which is information defining the two-dimensional code area of the first information code, a reading process of reading the information recorded in the first information code based on the detected first relational information, and an output process of outputting the result of the reading process; an adjustment unit that adjusts the parameters of a learning model for outputting, from an image of an information code captured by the camera, second relational information, which is an estimated value of the information defining the two-dimensional code area of the information code, based on teacher data including at least a plurality of successful cases of the reading process performed based on the first relational information; and a second code processing execution unit that executes second code processing including an acquisition process of acquiring the second relational information from an image of a second information code using the learning model with the adjusted parameters, the reading process based on the acquired second relational information, and an output process of outputting the result of the reading process.
- According to this configuration, the information reading device adjusts the parameters of the learning model, which outputs relational information on an information code from an image of the code, using (that is, based on) teacher data including at least a plurality of successful cases of the reading process performed with the first relational information detected by the detection process. After the parameters have been adjusted, the device executes the acquisition process using the learning model, acquires the second relational information from the second information code, and executes the reading process using the acquired second relational information. Thus, even in a situation where the detection process fails no matter how the reading conditions are adjusted, the second relational information can still be obtained from the second information code by the acquisition process using the learning model, and the device can cope with a situation in which the information in the two-dimensional code could not otherwise be read.
- The first relational information may be information indicating the position coordinates of the points at the four corners of the information code.
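As a concrete illustration of this kind of relational information, the hypothetical sketch below (not part of the disclosure) estimates each cell-centre coordinate from the four corner points by bilinear interpolation, assuming negligible lens distortion; a real reader would more likely apply a projective transform:

```python
def cell_centers(corners, n):
    # corners: four (x, y) points in order top-left, top-right,
    # bottom-right, bottom-left; n: number of cells per side.
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    centers = []
    for i in range(n):                 # cell row
        v = (i + 0.5) / n              # vertical fraction of the code area
        row = []
        for j in range(n):             # cell column
            u = (j + 0.5) / n          # horizontal fraction
            x = (1 - u) * (1 - v) * x0 + u * (1 - v) * x1 \
                + u * v * x2 + (1 - u) * v * x3
            y = (1 - u) * (1 - v) * y0 + u * (1 - v) * y1 \
                + u * v * y2 + (1 - u) * v * y3
            row.append((x, y))
        centers.append(row)
    return centers

# Axis-aligned 10x10 code area, 5 cells per side.
grid = cell_centers([(0, 0), (10, 0), (10, 10), (0, 10)], n=5)
```

Sampling the image at each returned centre would then give the black/white value of the corresponding cell.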
- The first relational information may be information indicating the black-and-white pattern of the plurality of cells forming the two-dimensional code area of the information code.
- Information indicating the position of each cell may be associated with a value indicating either black or white.
- The data indicating the black-and-white pattern of the plurality of cells is hereinafter referred to as the pattern data.
- In a comparative example in which the pattern data holds one value per pixel, the amount of pattern information increases as the number of pixels in the image increases.
- In the above configuration, one value is associated with one cell, so the amount of information in the pattern data does not depend on the number of pixels. An increase in the information amount of the pattern data can be suppressed compared with the above comparative example.
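The difference in information amount can be illustrated with back-of-the-envelope figures; the 25x25-cell code and 200x200-pixel, 8-bit capture below are assumed sizes for illustration, not values from the specification:

```python
# Hypothetical sizes for illustration only.
CELLS_PER_SIDE = 25          # cells per side of the code area
IMAGE_PIXELS_PER_SIDE = 200  # pixels per side of the captured image

# Comparative example: one 8-bit value per pixel grows with resolution.
per_pixel_bits = IMAGE_PIXELS_PER_SIDE ** 2 * 8

# Above configuration: one black/white bit per cell, fixed by the code size.
per_cell_bits = CELLS_PER_SIDE ** 2 * 1

# Doubling the capture resolution quadruples the per-pixel amount
# but leaves the per-cell pattern data unchanged.
per_pixel_bits_2x = (IMAGE_PIXELS_PER_SIDE * 2) ** 2 * 8
```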
- The system may comprise one or more code reading devices, each including the camera, the first code processing execution unit, and the second code processing execution unit, and a learning device that is separate from the one or more code reading devices and includes the adjustment unit.
- The one or more code reading devices acquire the adjusted parameters from the learning device, and the second code processing execution unit may execute the second code processing with reference to (that is, based on) the acquired adjusted parameters.
- Here, a comparative example is assumed in which a single device includes the camera, the first code processing execution unit, the second code processing execution unit, and the adjustment unit.
- In the comparative example, the single device not only reads the information code but also adjusts the parameters of the learning model. It may therefore need greater processing power than a conventional configuration in which no parameter adjustment is performed.
- In the above configuration, in contrast, the parameters of the learning model are adjusted by a learning device separate from the code reading device, so the parameters can be adjusted without increasing the processing power of the code reading device.
- The learning device may be installed on the Internet, and the system may further include an intermediary device that is connected to the Internet and mediates communication between the one or more code reading devices and the learning device.
- the code reader can acquire the parameters by reading the specific information code.
- The one or more code reading devices may include a first code reading device and a second code reading device different from the first code reading device.
- The learning device acquires the teacher data from the first code reading device and adjusts the parameters with reference to the acquired teacher data.
- The second code reading device acquires, from the learning device, the parameters adjusted using the teacher data from the first code reading device, and may execute the second code processing with reference to the acquired parameters.
- The second code reading device can thus make use of the successful cases of the first code reading device, a device different from the second code reading device.
- The code reading device may include a specific memory having a first area and a second area: the first area stores a program for executing the first code processing and the second code processing, and the second area stores a plurality of pieces of learning information, each of which includes the learning model and the adjusted parameters.
- The adjustment unit may be configured to start adjusting the parameters of the learning model after the number of successful cases included in the teacher data exceeds a predetermined number.
- The information reading device may further include a classification unit that classifies a target information code, for which the reading process in the first code processing has succeeded, into a specific pattern among a plurality of patterns relating to the type of deterioration of the information code, and a determination unit that determines, based on the classified specific pattern, whether or not to adopt the successful case of the target information code as the teacher data.
- In other words, the information reading device classifies the successful case of the target information code according to the deterioration pattern of the information code, and decides on that basis whether to adopt the successful case as the teacher data.
- "Deterioration" in this specification includes not only deterioration of the information code over time, but also variations of the information code caused by deterioration of the printing device that prints the information code, or by unstable support, during printing, of the medium on which the information code is printed.
- The classification unit may compare an image of the restored code, which is restored by the error correction executed in the reading process for the target information code, with a binarized image of the actual image of the target information code, thereby identify the locations of deterioration of the target information code, and classify the target information code into the specific pattern based on the locations of deterioration identified by comparing the restored code with the actual image.
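One way the restored-versus-binarized comparison could work is sketched below; the cell-level comparison and the fading/stain labels are illustrative assumptions, not the classification actually claimed:

```python
def locate_deterioration(restored, observed):
    """Compare the pattern restored by error correction with the binarized
    observed pattern; return per-cell mismatch locations and a coarse label.
    Both arguments are lists of rows of 0 (white) / 1 (black) cell values.
    (Illustrative sketch; the specification does not fix this procedure.)"""
    faded, stained = [], []
    for r, (row_r, row_o) in enumerate(zip(restored, observed)):
        for c, (cell_r, cell_o) in enumerate(zip(row_r, row_o)):
            if cell_r == 1 and cell_o == 0:
                faded.append((r, c))    # black cell read as white: void/fading
            elif cell_r == 0 and cell_o == 1:
                stained.append((r, c))  # white cell read as black: stain
    label = "fading/void" if len(faded) >= len(stained) else "stain"
    return faded, stained, label

restored = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
observed = [[1, 0, 0], [0, 1, 1], [1, 1, 0]]  # one faded, one stained cell
faded, stained, label = locate_deterioration(restored, observed)
```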
- The classification unit may classify the target information code into the specific pattern based on at least one of the contrast of the image of the target information code and the deformation of the target information code.
- With this configuration, the target information code can be classified based on at least one of the contrast in the actual image of the target information code and the actual deformation of the target information code.
- The information reading device may further include a classification unit that classifies the target information code, for which the reading process in the first code processing has succeeded, into a specific item among a plurality of items indicating internal factors of the information reading device, and a determination unit that determines, based on the classified specific item, whether or not to adopt the successful case of the target information code as the teacher data.
- The plurality of items may include at least one of: two or more items relating to the image processing that the information reading device performs on the image of the information code; two or more items relating to the imaging conditions under which the information reading device captures the information code; and two or more items relating to the processing time the information reading device takes to read the information recorded in the information code.
- The adjustment unit may be configured to start adjusting the parameters of the learning model even before the number of successful cases included in the teacher data exceeds the predetermined number, when the number of two or more successful cases among the plurality of successful cases exceeds a specific number smaller than the predetermined number, each of the two or more successful cases being a case of reading the information code.
- A first storage control unit stores the successful case of the target information code in the specific memory as the teacher data when it is determined that the case is to be adopted, and does not store it in the specific memory when it is determined, based on the classified specific pattern, that the case is not to be adopted. The adjustment unit adjusts the parameters with reference to the teacher data in the specific memory.
- The information reading device may further include an increase control unit that performs image processing on a code image, which is an image of the information code captured by the camera, to generate virtual cases of the reading process and increase the number of successful cases in the teacher data.
- The image processing may include a process of adjusting the contrast of the code image, a process of adding a predetermined image to the code image, a process of rotating the code image, and a process applied to each cell of the information code shown in the code image.
- With this configuration, the number of successful cases can be increased in a situation where the number of actual successful cases does not exceed the predetermined number, so the reliability of the parameters of the learning model can be ensured.
- The information reading device may further include a second memory that stores, for each of a plurality of information codes, case information indicating a case of the reading process executed on that information code. The increase control unit may select one or more types of the image processing from among a plurality of types, based on the tendencies of the reading-process cases indicated by the plurality of pieces of stored case information, and execute the selected image processing.
- Here, a comparative example is assumed in which image processing predetermined by an administrator or the like is executed to generate virtual cases.
- In the comparative example, the predetermined image processing may correspond to cases different from the actual reading-process cases. According to the above configuration, appropriate image processing can be performed in consideration of the tendencies of the actual reading-process cases.
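A minimal sketch of such an increase control unit is shown below, using the contrast-adjustment, image-addition (here a single "stain" pixel), and rotation processes named above; the seeded random selection of processes stands in for the trend-based selection described in the text:

```python
import random

def adjust_contrast(img, factor):
    """Scale pixel values about mid-gray; img is a list of rows of 0-255 ints."""
    return [[max(0, min(255, int(128 + (p - 128) * factor))) for p in row]
            for row in img]

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def add_blot(img, r, c):
    """Overlay a dark pixel, imitating a contamination-style added image."""
    out = [row[:] for row in img]
    out[r][c] = 0
    return out

def make_virtual_cases(img, n, seed=0):
    """Generate n virtual code images by applying a randomly chosen process.
    (Per the text, the choice of processes would follow the tendencies of
    stored reading-process cases; random choice is a stand-in here.)"""
    rng = random.Random(seed)
    ops = [
        lambda im: adjust_contrast(im, rng.uniform(0.5, 1.5)),
        rotate90,
        lambda im: add_blot(im, rng.randrange(len(im)),
                            rng.randrange(len(im[0]))),
    ]
    return [rng.choice(ops)(img) for _ in range(n)]

base = [[255, 0], [0, 255]]          # tiny stand-in code image
virtual = make_virtual_cases(base, 5)
```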
- The information reading device may further include a second storage control unit that stores, in the second memory, case information indicating the case of the reading process executed for the first information code.
- With this configuration, case information for generating virtual cases can be accumulated each time the reading process is executed.
- The second code processing may include a cutting process of cutting out the image of the second information code from the image captured by the camera with reference to the position of a pointer marker illuminated on the second information code.
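The cutting process can be sketched as a window crop around the pointer-marker position; detecting the illuminated marker itself is out of scope here, and the window size `half` is an assumed parameter:

```python
def cut_out(image, marker_rc, half=2):
    """Cut out a (2*half+1)-square window of `image` centred on the pointer
    marker position `marker_rc` (row, col), clamping at the image border.
    The marker position would come from detecting the illuminated pointer
    in the captured frame (not implemented in this sketch)."""
    r, c = marker_rc
    rows, cols = len(image), len(image[0])
    r0, r1 = max(0, r - half), min(rows, r + half + 1)
    c0, c1 = max(0, c - half), min(cols, c + half + 1)
    return [row[c0:c1] for row in image[r0:r1]]

# 6x6 stand-in frame with distinct pixel values.
frame = [[10 * r + c for c in range(6)] for r in range(6)]
patch = cut_out(frame, (3, 3), half=1)
```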
- A control method for the information reading device, a computer program for the information reading device, and a storage medium storing the computer program are also novel and useful.
- FIG. 1 is a conceptual diagram of an information reading system.
- FIG. 2 is a block diagram of the information reading system.
- FIG. 3 is a flowchart showing the processing of the code reading device.
- FIG. 4 is a flowchart showing the normal reading process.
- FIG. 5 is a flowchart showing the processing of the learning device.
- FIG. 6 is a flowchart showing the learning reading process.
- FIG. 7 is a flowchart showing the processing of the code reading device of the second embodiment.
- FIG. 8 is a flowchart showing the deterioration classification process.
- FIG. 9 is a flowchart showing the pattern determination process.
- FIG. 10 is a flowchart showing the processing of the code reading device of the third and fourth embodiments.
- FIG. 11 is a flowchart showing the processing of the learning device of the third and fourth embodiments.
- FIG. 12 is a diagram showing the learning target of the fifth embodiment.
- FIG. 13 is a flowchart showing the learning reading process of the fifth embodiment.
- FIG. 14 is a flowchart showing the processing of the learning device of the sixth embodiment.
- FIG. 15 is a block diagram of the information reading system of the seventh embodiment.
- FIG. 16 is a diagram showing classification by the thickening/thinning phenomenon.
- FIG. 17 is a diagram showing classification by the distortion phenomenon.
- FIG. 18 is a diagram showing classification by the pitch deviation phenomenon.
- FIG. 19 is a flowchart of the internal factor classification process of the ninth embodiment.
- FIG. 20 is a flowchart of the internal factor classification process of the tenth and eleventh embodiments.
- FIG. 21 is a flowchart showing the processing of the code reading device of the twelfth embodiment.
- FIG. 22 is a diagram showing specific example 1 of the cutting process.
- FIG. 23 is a diagram showing specific example 2 of the cutting process.
- FIG. 24 is a diagram showing specific example 3 of the cutting process.
- The information reading system 2 of this embodiment is a system for reading the information recorded in a two-dimensional code CD, which has a two-dimensional code area CR in which the information is recorded.
- The two-dimensional code CD is displayed on a specific medium (for example, metal, a substrate, resin, or a paper medium) in a factory, an outdoor work site, or the like.
- The code area CR is rectangular (a square or an oblong), as illustrated in FIG. 1, and black and white cells are arranged in it according to the information to be represented.
- The two-dimensional code CD may be used for a long time and may therefore deteriorate over time.
- For example, the black portions (black cells (dark cells) BR) of the two-dimensional code CD become lighter over time, and the contrast of the black portions with respect to the white portions (white cells (bright cells) WR) decreases.
- Further, due to aging, part of the two-dimensional code CD may be lost, or part of the two-dimensional code CD may be stained.
- The information reading system 2 includes two code reading devices 10, a learning device 200, and a printer 500. The devices 10, 200, and 500 are connected to a LAN 4 and can communicate via the LAN 4.
- The LAN 4 is a wired LAN or a wireless LAN.
- Although the information reading system 2 includes two code reading devices 10 in the illustrated example, this is merely an example; the system may include only one code reading device 10, or three or more code reading devices 10.
- the code reader 10 is a portable device for reading information recorded in the two-dimensional code CR. Note that the appearance of the code reading device 10 shown in FIG. 1 is merely an example, and the code reading device 10 may have the same appearance as a smart phone, for example.
- the code reading device 10 includes an operation unit 12, a display unit 14, a camera 20, a communication interface 22, and a control unit 30.
- Hereinafter, an interface is described as "I/F."
- the operation unit 12 has a plurality of keys.
- a user can input various instructions to the code reader 10 by operating the operation unit 12 .
- the display unit 14 is a display for displaying various information. Further, the display unit 14 may function as a touch panel (that is, the operation unit 12) capable of accepting user operations.
- Camera 20 includes a light source such as an LED light and a CCD image sensor.
- a communication I/F 22 is an I/F for executing communication via the LAN 4 . Communication I/F 22 is connected to LAN 4 .
- The control unit 30 includes a CPU 32 and a memory 34 (a non-transitory computer-readable recording medium).
- the CPU 32 executes various processes according to programs 40 stored in the memory 34 .
- the memory 34 also stores learning information 50 relating to machine learning using multi-layer neural networks.
- a multilayer neural network is a function composed of an input layer, an intermediate layer, and an output layer, and data input to the input layer is processed by the intermediate layer and output from the output layer.
- Multilayer neural networks are, for example, convolutional neural networks, fully-connected neural networks, and the like.
- machine learning is not limited to multilayer neural networks, and support vector machines, for example, may be used. Multilayer neural networks and the like are well-known techniques, and detailed description thereof is omitted here.
- the learning information 50 includes a learning model 52 and model parameters 54.
- the learning model 52 is a multilayer neural network model (that is, a formula).
- the model parameters 54 are parameters of the learning model 52 , specifically, values of various weights in the middle layers of the learning model 52 .
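A minimal stand-in for such a model is sketched below: a feedforward pass in which the weight matrices play the role of the model parameters (the weights in the intermediate layer). This is an illustration of the idea only, not the actual learning model 52:

```python
import math

def feedforward(x, w_hidden, w_out):
    """Minimal multilayer network: data entering the input layer is processed
    by the intermediate (hidden) layer and emitted from the output layer.
    The weight matrices stand in for model parameters such as 54/254."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w_row, x)))
              for w_row in w_hidden]
    return [sum(wi * hi for wi, hi in zip(w_row, hidden)) for w_row in w_out]

# Two inputs -> two hidden units -> one output (arbitrary example weights).
w_hidden = [[0.5, -0.3], [0.8, 0.1]]
w_out = [[1.0, -1.0]]
y = feedforward([1.0, 2.0], w_hidden, w_out)
```

Adjusting the model parameters means changing the values in `w_hidden` and `w_out` so the output better matches the teacher data.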
- the learning model 52 is installed, for example, from a server (not shown) provided by the vendor of the information reading system 2 . Note that, in a modified example, the learning model 52 may be stored in advance in the memory 34 at the shipping stage of the code reading device 10 .
- the model parameters 54 are generated by the learning device 200 and stored in the memory 34 .
- the learning device 200 is a device that adjusts the model parameters 254 of the learning model 252 .
- the learning device 200 is, for example, a server.
- Learning device 200 includes communication I/F 222 and control unit 230 .
- Communication I/F 222 is connected to LAN 4 .
- the control unit 230 includes a CPU 232 and a memory 234.
- The CPU 232 executes various processes according to the program 240 stored in the memory 234.
- the memory 234 further stores a plurality of teacher data 242 and learning information 250 .
- Learning information 250 includes learning model 52 similar to code reading device 10 , model parameters 254 similar to code reading device 10 , and initial parameters 256 .
- The initial parameters 256 are the initial values of the model parameters 254 (that is, the initial values of the various weights in the intermediate layers). The initial parameters 256 are predetermined by, for example, the vendor of the information reading system 2.
- the teacher data 242 is information used for adjusting the model parameters 254.
- For each of the plurality of teacher data 242, the model parameters 254 are adjusted so as to minimize the error between the output of the learning model 252 when the teacher data 242 is input and the correct value indicated by that teacher data 242.
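The adjustment can be illustrated with a one-parameter stand-in: gradient descent on the squared error between the model output and the correct value for each teacher-data pair. The linear model, learning rate, and epoch count are assumptions for illustration only:

```python
def adjust_parameter(teacher_data, w, lr=0.1, epochs=200):
    """Adjust a single weight w so that the model output w*x approaches the
    correct value y for each (input, correct value) pair in the teacher data,
    by descending the squared-error gradient. A one-parameter stand-in for
    tuning the model parameters 254 against the teacher data 242."""
    for _ in range(epochs):
        for x, y in teacher_data:
            error = w * x - y        # output minus correct value
            w -= lr * error * x      # gradient of (w*x - y)^2 / 2
    return w

teacher_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # consistent with w = 2
w = adjust_parameter(teacher_data, w=0.0)
```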
- In S2, when the CPU 32 receives an imaging instruction via the operation unit 12, it controls the camera 20 to capture an image of the two-dimensional code displayed on a specific medium (such as metal). The CPU 32 thereby acquires, from the camera 20, captured image data representing the captured image.
- In S4, the CPU 32 executes the normal reading process for reading the information recorded in the two-dimensional code from the captured image indicated by the acquired captured image data. Details of the normal reading process will be described later with reference to FIG.
- In S6, the CPU 32 determines whether or not the reading of information in the normal reading process has succeeded.
- If the CPU 32 determines that the reading succeeded (YES in S6), the process proceeds to S40.
- If the CPU 32 determines that the reading of information in the normal reading process failed (NO in S6), the process proceeds to S10.
- In S10, the CPU 32 determines whether or not the number of times the reading of information has failed in the normal reading process since the imaging instruction of S2 exceeds a predetermined threshold (for example, three). If the CPU 32 determines that the number of failures is equal to or less than the threshold (NO in S10), the process proceeds to S12.
- In S12, the CPU 32 changes the imaging conditions of the camera 20 (for example, the sensitivity, the exposure time, the presence/absence of the light source, or the intensity of the light source) and captures the two-dimensional code again. After completing S12, the CPU 32 returns to S4.
- the process proceeds to S20.
- the CPU 32 determines whether the model parameters 54 in the memory 34 have been updated.
- the model parameters 54 are updated by receiving the adjusted model parameters 254 from the learning device 200 and storing the adjusted model parameters 254 as the model parameters 54 in the memory 34 .
- when the CPU 32 determines that the model parameters 54 have been updated (YES in S20), the process proceeds to S24.
- the CPU 32 executes learning reading processing for reading information recorded in the two-dimensional code from the captured image indicated by the captured image data acquired from the camera 20.
- the learning reading process is a process that is different from the normal reading process of S4 and uses the learning information 50. Details of the learning reading process will be described later with reference to FIG. 6.
- the CPU 32 determines whether or not the reading of information in the learning reading process has succeeded.
- when the CPU 32 determines that the reading of information in the learning reading process has succeeded (YES in S26), the process proceeds to S40.
- the CPU 32 outputs either the reading result of the normal reading process of S4 or the reading result of the learning reading process of S26.
- the CPU 32 causes the display unit 14 to display an image showing the reading result.
- the CPU 32 transmits data indicating the reading result to an external device (for example, a PC or the like).
- in a situation where reading of the two-dimensional code has succeeded in either the normal reading process of S4 or the learning reading process of S26, the CPU 32 transmits the successful case of that process to the learning device 200 as teacher data 242.
- the CPU 32 determines whether or not an instruction to finish reading the two-dimensional code has been received via the operation unit 12. If the CPU 32 determines that it has received an instruction to finish reading the two-dimensional code (YES in S44), it ends the process of FIG. 3. On the other hand, when the CPU 32 determines that an instruction to finish reading the two-dimensional code has not been received (NO in S44), the CPU 32 returns to S2 and images the two-dimensional code again.
- when the CPU 32 determines that the model parameters 54 have never been updated (NO in S20), or determines that reading of information in the learning reading process has failed (NO in S26), the process proceeds to S30.
- the CPU 32 causes the display unit 14 to display a failure notification indicating that reading information from the two-dimensional code has failed.
- the process of FIG. 3 ends.
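The branching of FIG. 3 described above (S4 through S30) can be sketched as a retry loop. The helper callables below (`capture`, `normal_read`, `learned_read`, `model_updated`) are hypothetical stand-ins for illustration, not the device's actual firmware API:

```python
# Hypothetical sketch of the FIG. 3 reading flow; helper names are
# illustrative assumptions, not the actual implementation.
MAX_FAILURES = 3  # corresponds to the threshold checked in S10

def read_code(capture, normal_read, learned_read, model_updated):
    """Return a decoded result, or None when every strategy fails (S30)."""
    failures = 0
    image = capture()                      # S2
    while True:
        result = normal_read(image)        # S4
        if result is not None:             # S6: YES
            return result                  # S40
        failures += 1
        if failures <= MAX_FAILURES:       # S10: NO
            image = capture(change_conditions=True)  # S12: retry
            continue
        break                              # S10: YES -> S20
    if model_updated():                    # S20
        return learned_read(image)         # S24/S26 (None on failure)
    return None                            # S30: failure notification
```

A usage sketch: if `normal_read` succeeds on the third attempt, the loop returns that result without ever invoking the learning model.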
- the CPU 32 identifies symbol marks (also called finder patterns) arranged at three of the four corners of the two-dimensional code from the code image cut out in S50.
- the CPU 32 calculates position coordinates L1, L2, and L4 of points indicating three of the four corners of the two-dimensional code from the three specified symbol marks.
- the CPU 32 calculates the remaining position coordinate L3 from the calculated three position coordinates L1, L2, and L4.
- in this way, the position coordinates L1 to L4 of the points indicating the four corners of the two-dimensional code (hereinafter simply referred to as the four corner position coordinates L1 to L4) are calculated. In the drawings, these coordinates are simply indicated as coordinates L1 to L4 of the four corners.
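The remaining corner L3 in S52 can be estimated from the other three corners. Below is a minimal sketch, under the assumption (ours, not stated by the source) that the code region is approximately a parallelogram, so the corner opposite L1 is L2 + L4 - L1:

```python
# Sketch of estimating the remaining corner L3 from L1, L2 and L4 (S52),
# assuming the code region is roughly a parallelogram.
def fourth_corner(l1, l2, l4):
    """Corner diagonally opposite l1, given the two adjacent corners."""
    return (l2[0] + l4[0] - l1[0], l2[1] + l4[1] - l1[1])
```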
- the CPU 32 identifies the position coordinates of each cell forming the two-dimensional code based on the position coordinates L1 to L4 of the four corners calculated in S52.
- each cell of the two-dimensional code is determined to be either white or black.
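A minimal sketch of locating each cell (S54) and judging it white or black (S56): cell centres are interpolated bilinearly between the four corner coordinates, and a luminance threshold (128 here, an assumed value) decides black versus white. The module count and corner ordering are assumptions of this sketch:

```python
# Illustrative sketch of S54-S56: bilinear interpolation of cell centres
# between the four corners, then luminance thresholding.
def cell_centers(l1, l2, l3, l4, modules):
    """l1 top-left, l2 top-right, l3 bottom-right, l4 bottom-left."""
    centers = []
    for r in range(modules):
        for c in range(modules):
            u = (c + 0.5) / modules
            v = (r + 0.5) / modules
            x = (1-u)*(1-v)*l1[0] + u*(1-v)*l2[0] + (1-u)*v*l4[0] + u*v*l3[0]
            y = (1-u)*(1-v)*l1[1] + u*(1-v)*l2[1] + (1-u)*v*l4[1] + u*v*l3[1]
            centers.append((x, y))
    return centers

def is_black(luminance, threshold=128):
    """Judge a sampled luminance value as a black cell (assumed threshold)."""
    return luminance < threshold
```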
- decoding processing may include error correction processing.
- the error correction process is a process of correcting the position coordinates of each cell and restoring the original two-dimensional code from the black-and-white values of each cell when there is a difference between the original two-dimensional code and the two-dimensional code in the captured image due to, for example, dirt adhering to part of the two-dimensional code. If decoding of the two-dimensional code fails even though the error correction process has been executed multiple times in the decoding process, reading of the two-dimensional code by the process of FIG. 4 fails. In other cases, reading of the two-dimensional code by the process of FIG. 4 succeeds. When the process of S58 ends, the process of FIG. 4 ends.
- processing executed by the CPU 232 of the learning device 200 according to the program 240 will be described with reference to FIG. 5. The process shown in FIG. 5 is triggered by receiving the teacher data 242 from the code reader 10.
- teacher data 242 is received from one of the two code readers 10 .
- the teacher data 242 may be received from both of the two code reading devices 10 .
- the CPU 232 stores the teacher data 242 received from the code reading device 10 in the memory 234.
- the teacher data 242 in the present embodiment includes, as a successful case of the normal reading process in FIG. 4, the code image data representing the code image cut out in S50 and the position coordinates L1 to L4 of the four corners calculated in S52.
- the teacher data 242 may also include successful cases of the learning reading process of FIG. 6, which will be described later.
- the CPU 232 determines whether or not the number of teacher data 242 in the memory 234 is equal to or greater than the target number (e.g., 100). When the CPU 232 determines that the number of teacher data 242 is equal to or greater than the target number (YES in S62), the process proceeds to S64. On the other hand, when the CPU 232 determines that the number of teacher data 242 is less than the target number (NO in S62), it ends the processing of FIG. 5.
- the CPU 232 executes learning processing for adjusting the model parameters 254 of the learning model 252 using the plurality of teacher data 242 in the memory 234 (that is, referring to the teacher data 242).
- the learning model 252 is a model that receives code image data at the input layer and outputs, from the output layer, estimated values of the position coordinates of the four corners of the two-dimensional code (the position coordinates of the points indicating the four corners).
- the CPU 232 selects one piece of teacher data 242 from the plurality of teacher data 242 and inputs the code image data in the selected teacher data 242 to the input layer of the learning model 252.
- the estimated values of the position coordinates of the four corners of the two-dimensional code are output from the output layer of the learning model 252 .
- the CPU 232 performs an adjustment process for adjusting the model parameters 254 of the intermediate layer of the learning model 252 so that the difference between the position coordinates L1 to L4 of the four corners in the selected teacher data 242 and the estimated values output from the learning model 252 is minimized.
- the CPU 232 executes the adjustment processing for all of the plurality of teacher data 242 .
- the plurality of teacher data 242 used for the learning process are deleted from the memory 234 .
- the learning process is executed again. This causes the model parameters 254 to be iteratively adjusted.
- the frequency of execution of the learning process is not limited to the above example, and the learning process may be executed each time the teacher data 242 is received from the code reading device 10, for example.
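The per-datum adjustment loop of S64 can be illustrated with a toy one-parameter model; a real implementation would use a neural network, so this is only a structural sketch of "forward pass, measure error against the stored correct value, adjust the parameter":

```python
# Toy illustration of the S64 adjustment loop: a 1-D linear model stands
# in for the neural network; only the loop structure mirrors the text.
def train(teacher_data, w=0.0, lr=0.1, epochs=100):
    for _ in range(epochs):
        for x, target in teacher_data:      # one teacher datum at a time
            pred = w * x                    # forward pass
            grad = 2 * (pred - target) * x  # gradient of squared error
            w -= lr * grad                  # adjust the parameter
    return w

# Two "teacher data" consistent with target = 2 * x.
w = train([(1.0, 2.0), (2.0, 4.0)])
```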
- the CPU 232 executes verification processing for verifying the accuracy of the model parameters 254 updated in S64. Specifically, the CPU 232 inputs code image data representing a pseudo code image to the learning model 252 and obtains estimated values of the position coordinates of the four corners of the pseudo code image. Using the obtained estimated values, the CPU 232 executes the same processing as S54 to S58 in FIG. 4 to read the pseudo two-dimensional code indicated by the pseudo code image. The CPU 232 counts the number of error correction processes executed in reading the pseudo two-dimensional code. Then, when the count is equal to or less than a predetermined number (for example, two), the CPU 232 determines the verification result "OK", indicating that the accuracy of the model parameters 254 is good.
- the CPU 232 determines the verification result "NG" indicating that the accuracy of the model parameters 254 is not good when the number of counts is greater than the predetermined number.
- the verification process is not limited to verification based on the number of error correction processes; verification based on differences, or the like, may also be used.
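The OK/NG decision rule of S66 itself is simple; a sketch using the two-count limit given in the text:

```python
# Sketch of the S66 verification rule: parameters are judged "OK" when
# the number of error-correction runs needed to read the pseudo code is
# at most a predetermined count (two in the text).
def verify(error_correction_count, limit=2):
    return "OK" if error_correction_count <= limit else "NG"
```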
- the CPU 232 determines whether or not the verification result at S66 indicates "OK". When the CPU 232 determines that the verification result indicates "OK” (YES in S68), the process proceeds to S70. On the other hand, when determining that the verification result indicates "NG” (NO in S68), the CPU 232 skips the process of S70 and ends the process of FIG.
- the CPU 232 transmits the model parameters 254 adjusted at S64 to the code reader 10.
- the adjusted model parameters 254 are transmitted not only to the first code reading device 10, which is the transmission source of the teacher data 242, but also to the second code reading device 10, which is the one of the two code reading devices 10 that is not the transmission source.
- the second code reader 10 can execute the learning reading process (see FIG. 6) using a successful case of another code reader 10 .
- the adjusted model parameters 254 may not be transmitted to the second code reading device 10 .
- as described above, using teacher data 242 including a plurality of successful cases of the decoding process that uses the position coordinates L1 to L4 of the four corners of the two-dimensional code calculated in S52 of the normal reading process in FIG. 4, the learning device 200 adjusts the model parameters 254 of the learning model 252 that outputs estimated values of the position coordinates of the four corners of the two-dimensional code from an image of the two-dimensional code (S64 in FIG. 5).
- then, when the code reading device 10 has changed the imaging conditions but reading by the normal reading process still fails (NO in S10 after S12 in FIG. 3), the learning reading process is executed (S24).
- in the learning reading process, a process of referring to the learning model 52 and outputting the estimated values of the position coordinates of the four corners is executed (S82 in FIG. 6). For example, it is assumed that reading by the normal reading process fails because calculation of the position coordinates L1 to L4 of the four corners fails due to aged deterioration of the two-dimensional code. In this embodiment, even in such a case, the information in the two-dimensional code can be read by executing the learning reading process.
- a comparative example is assumed in which the information reading system 2 does not include the learning device 200 and the code reading device 10 executes the processing of FIG. 5.
- the code reader 10 not only reads the two-dimensional code, but also adjusts the model parameters 54 of the learning model 52 . Therefore, it may be necessary to increase the processing power of the CPU 32 of the code reading device 10 compared to conventional devices that did not adjust the model parameters 54 of the learning model 52 .
- the adjustment of the model parameters 254 of the learning model 252 is performed by the learning device 200 that is separate from the code reading device 10 . Therefore, the model parameters 254 can be adjusted without increasing the processing power of the CPU 32 of the code reader 10 .
- the configuration of the above comparative example may be employed in the modified example.
- the information reading system 2 is an example of an "information reading device".
- the two code readers 10 in FIG. 2, one code reader 10, and the other code reader 10 are examples of the "one or more code reading devices", the "first code reading device", and the "second code reading device", respectively.
- the camera 20 is an example of a "camera”.
- the learning device 200 is an example of a "learning device.”
- a two-dimensional code is an example of an “information code (first information code and second information code)”.
- the position coordinates L1 to L4 of the four corners of the two-dimensional code are an example of "first relational information”.
- the estimated values of the position coordinates of the four corners of the two-dimensional code are an example of "relationship information (and second relational information)".
- the teacher data 242, learning model 252, and model parameters 254 are examples of "teacher data," "learning model," and "parameter," respectively.
- the processing in FIG. 4 and S40 in FIG. 3 are an example of the “first code processing”.
- S52 and S58 in FIG. 4 and S40 in FIG. 3 are examples of the “detection process", the "reading process”, and the "output process", respectively.
- the processing in FIG. 6 and S40 in FIG. 3 are examples of the "second code processing”.
- S82 in FIG. 6 is an example of the "acquisition process".
- control unit 30 of the code reading device 10 is an example of the "first code processing execution unit” and the “second code processing execution unit”.
- Control unit 230 of learning device 200 is an example of an “adjustment unit”.
- the processing of the code reader 10 of this embodiment is the same as the processing of FIG. 3, which is the processing of the code reader 10 of the first embodiment, except that the processing of S100 to S102 is added and the processing of S104 is executed instead of S42.
- the same reference numerals are assigned to the same processing as in the first embodiment, and the same applies to each embodiment described later.
- the deterioration classification process is a process of classifying the two-dimensional code to be read according to the pattern of deterioration and storing the success case corresponding to the reading result of the two-dimensional code as the training data 242 in the training data table 60 .
- the deterioration pattern is a pattern based on the presence/absence of dirt adhesion, the presence/absence of defects, the location of deterioration, and the like. The deterioration classification process will be described later with reference to FIG. 8.
- the teacher data table 60 is stored in the memory 34 of the code reader 10. As shown in FIG. 7, the teacher data table 60 stores, for each of a plurality of deterioration patterns, a pattern number (for example, "p001"), a data group (for example, "TD1, TD2"), and an upper limit value (for example, "20") in association with each other.
- the pattern number is a number that identifies the corresponding deterioration pattern.
- the data group is one or more teacher data 242 classified into the corresponding deterioration pattern.
- the upper limit is the upper limit of the number of teacher data 242 that can be stored in the corresponding deterioration pattern.
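One possible in-memory layout for the teacher data table 60 of FIG. 7, together with the S114/S116 upper-limit check; the dict structure, default limit, and helper name are assumptions for illustration only:

```python
# Hypothetical model of the teacher data table 60: each deterioration
# pattern number maps to its stored teacher data and an upper limit.
table = {
    "p001": {"data": ["TD1", "TD2"], "limit": 20},
}

def try_store(table, pattern, datum):
    """Store `datum` under `pattern` unless the pattern's upper limit is
    already reached (S114); returns whether it was stored (S116)."""
    entry = table.setdefault(pattern, {"data": [], "limit": 20})
    if len(entry["data"]) >= entry["limit"]:
        return False  # upper limit reached: skip S116
    entry["data"].append(datum)
    return True
```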
- the CPU 32 determines whether or not the number of teaching data 242 stored in the teaching data table 60 is equal to or greater than the target number.
- the target number is the same as the target number in S62 of FIG. 5, for example.
- when the CPU 32 determines that the number of teacher data 242 is equal to or greater than the target number (YES in S102), the process proceeds to S104.
- when the CPU 32 determines that the number of teacher data 242 is less than the target number (NO in S102), it skips the processing of S104 and proceeds to S44.
- the CPU 32 transmits all of the teacher data 242 in the teacher data table 60 to the learning device 200. After completing S104, the CPU 32 proceeds to S44.
- the CPU 32 executes a pattern determination process that determines the deterioration pattern of the actual code by comparing the image of the restored code, which is the two-dimensional code restored in the error correction process, with the binarized image of the actual code, which is the two-dimensional code actually captured.
- the recovery code image is represented in black and white binary.
- the binarized image of the actual code is an image obtained by binarizing the actually captured image of the actual code. The pattern determination process will be described later with reference to FIG. 9. After completing the process of S112, the CPU 32 proceeds to S114.
- the CPU 32 determines whether or not the number of teacher data 242 stored in the target data group of the teacher data table 60 has reached the upper limit corresponding to that data group.
- the target data group is a data group associated with a pattern number identifying a pattern without deterioration when error correction processing is not executed (NO in S110).
- the target data group is the data group associated with the pattern number identifying the pattern determined by the pattern determination process when the error correction process is executed (YES in S110). If the CPU 32 determines that the number of data groups to be stored has not reached the upper limit (NO in S114), the process proceeds to S116. On the other hand, when the CPU 32 determines that the number of stored target data groups has reached the upper limit (YES in S114), the CPU 32 skips the process of S116 and ends the process of FIG.
- the CPU 32 stores the successful case corresponding to the reading result of S40 in FIG. 3 in the target data group of the teacher data table 60 as teacher data 242.
- the process of FIG. 8 ends.
- a comparative example is assumed in which all successful cases are stored as teacher data 242 in the teacher data table 60 without executing the determination of S114.
- the usage of memory 34 for storing teacher data table 60 can be reduced.
- the configuration of the comparative example may be employed in the modified example.
- the two-dimensional code is equally divided into nine areas in order to classify the locations of deterioration.
- Each of the nine regions is associated with a deterioration value, which is a value indicating the type of deterioration.
- the CPU 32 determines one of the nine areas as the target area. Note that the number of divided regions is not limited to 9, and may be, for example, 4, 12, or the like. Also, the division of the regions is not limited to equal division, and for example, the sizes of the regions may be different from each other.
- the CPU 32 determines whether there is a difference between the target area of the restored code image and the target area of the binarized image of the actual code.
- a difference between the target area of the restored code image and the target area of the binarized image of the actual code means that the target area of the imaged two-dimensional code is degraded.
- when the CPU 32 determines that there is a difference between the target areas of both codes (YES in S132), the process proceeds to S134.
- when the CPU 32 determines that there is no difference between the target areas of both codes (NO in S132), it skips the processing of S134 to S138 and proceeds to S140.
- the CPU 32 determines whether the difference between the target areas of both codes is a difference corresponding to black deterioration or a difference corresponding to white deterioration.
- black deterioration is deterioration in which white cells of a two-dimensional code turn black.
- Black degradation is, for example, contamination in which black ink adheres to the two-dimensional code.
- White deterioration is deterioration in which black cells of a two-dimensional code turn white.
- White deterioration is, for example, contamination caused by white ink adhering to the two-dimensional code, loss of black cells in the two-dimensional code, and the like.
- the CPU 32 determines whether or not there is an unselected area as the target area among the nine areas. If the CPU 32 determines that there is an unselected area as the target area (YES in S140), the process returns to S130. On the other hand, the CPU 32 ends the process of FIG. 9 when there is no unselected area as the target area, that is, when the processes of S132 to S138 have been executed for all of the nine areas (NO in S140).
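The region-by-region comparison of FIG. 9 can be sketched as follows. Images are assumed here to be nested lists of 0 (black) / 1 (white), and the grid size and tie-breaking rule are illustrative assumptions, not the actual implementation:

```python
# Illustrative sketch of the FIG. 9 pattern determination: split both the
# restored-code image and the binarised actual-code image into a grid and
# label each differing region as black deterioration (a white cell turned
# black) or white deterioration (a black cell turned white).
def classify_regions(restored, actual, grid=3):
    n = len(restored)
    step = n // grid
    labels = {}
    for gr in range(grid):
        for gc in range(grid):
            black = white = 0
            for r in range(gr * step, (gr + 1) * step):
                for c in range(gc * step, (gc + 1) * step):
                    if restored[r][c] == 1 and actual[r][c] == 0:
                        black += 1   # white cell turned black
                    elif restored[r][c] == 0 and actual[r][c] == 1:
                        white += 1   # black cell turned white
            if black or white:
                labels[(gr, gc)] = "black" if black >= white else "white"
    return labels
```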
- the code reading device 10 classifies the success cases of the two-dimensional code to be read according to the deterioration pattern of the two-dimensional code (S112 in FIG. 8).
- the code reading device 10 determines whether or not to store the successful case of the two-dimensional code to be read as teacher data 242 in the teacher data table 60 (S114). As a result, it is possible to prevent the successful cases serving as training data 242 from being biased toward cases of specific deterioration patterns.
- the presence or absence of adhesion of dirt, the presence or absence of defects, and the location of deterioration are used in the classification of deterioration patterns.
- contrast may be used in classifying deterioration patterns.
- deterioration patterns may be classified based on the value of the difference between the upper limit value and the lower limit value of the luminance value of the code image in the cutting process of S50 in FIG. 4 or S80 in FIG. 6. For example, when the value of the difference is within a first value range, the successful case of the two-dimensional code to be read is classified into the first pattern, and when the value of the difference is within a second value range different from the first value range, the successful case of the two-dimensional code to be read may be classified into the second pattern.
- the deterioration of the two-dimensional code may be classified by combining the classification based on the attachment or loss of dirt and the classification based on the contrast.
- modification of information codes may be used in classifying deterioration patterns.
- the deformation of the information code includes, for example, the thickening and thinning phenomenon in FIG. 16, the distortion phenomenon in FIG. 17, and the pitch deviation phenomenon in FIG. 18.
- the deformation of the information code occurs due to, for example, deterioration of the printing device that prints the two-dimensional code, or unstable support during printing of the medium on which the information code is printed.
- the thickening/thinning phenomenon includes a thickening phenomenon and a thinning phenomenon.
- Thickening is a phenomenon in which the actual width of a black cell is thicker than the ideal width.
- the thinning phenomenon is a phenomenon in which the actual width of the black cell is narrower than the ideal width.
- the fatness/thinness ratio, which indicates the extent of the fatness/thinness phenomenon, is calculated, for example, by analyzing the timing pattern of the two-dimensional code.
- the timing pattern is used to specify the position coordinates of the symbol mark.
- the timing pattern is a pattern in which white cells and black cells are alternately arranged.
- the fatness/thinness ratio in the horizontal direction is calculated, for example, as the ratio of the difference between the total width of black cells and the total width of white cells in the horizontal timing pattern to the total length of the horizontal timing pattern.
- the fatness/thinness ratio in the vertical direction is calculated in the same manner as the fatness/thinness ratio in the horizontal direction using the timing pattern in the vertical direction (that is, based on the timing pattern).
- the fatness/thinness ratio of the entire two-dimensional code is calculated, for example, as an average value of the fatness/thinness ratio in the horizontal direction and the fatness/thinness ratio in the vertical direction.
- alternatively, the larger of the horizontal fatness/thinness ratio and the vertical fatness/thinness ratio may be used as the overall fatness/thinness ratio, or only one of the two ratios may be used.
- the CPU 32 classifies successful cases of the two-dimensional code to be read based on the total fatness/thinness ratio, and stores them in the teacher data table 60 (see S116 in FIG. 8). For example, as shown in FIG. 16, successful examples of the two-dimensional code to be read are classified based on five value ranges of the total fatness/thinness ratio. Note that the five value ranges in FIG. 16 are just an example.
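The ratio described above can be sketched from timing-pattern run widths, with the overall value taken as the average of the horizontal and vertical ratios; the exact sign convention and normalisation are assumptions of this sketch:

```python
# Sketch of the fatness/thinness ratio: difference of black and white
# run-width totals along a timing pattern, relative to its total length.
def fat_thin_ratio(black_widths, white_widths):
    total = sum(black_widths) + sum(white_widths)
    return (sum(black_widths) - sum(white_widths)) / total

def overall_ratio(horizontal, vertical):
    """Overall ratio as the average of the two directional ratios."""
    return (horizontal + vertical) / 2
```

For an ideal code (equal black and white widths) the ratio is 0; thickened black cells push it positive, thinned ones negative.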
- in an ideal two-dimensional code, the ratio of the vertical length to the horizontal length of the two-dimensional code (hereinafter referred to as the "aspect ratio") is 1:1.
- the distortion phenomenon is a phenomenon in which the horizontal length of the two-dimensional code is distorted with respect to the vertical length, so that the aspect ratio deviates from 1:1.
- the aspect ratio is calculated, for example, as a ratio of the distance between the centers of vertically adjacent cells and the distance between the centers of horizontally adjacent cells.
- the CPU 32 classifies successful cases of the two-dimensional code to be read based on the aspect ratio, and stores them in the teacher data table 60 (see S116 in FIG. 8). For example, as shown in FIG. 17, successful examples of two-dimensional codes to be read are classified based on six patterns of aspect ratios. Note that the six patterns in FIG. 17 are only examples.
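The aspect ratio above can be sketched as the ratio of the mean vertical cell-centre pitch to the mean horizontal one (1.0 for an undistorted code); the averaging is an assumption of this sketch:

```python
# Sketch of the aspect-ratio measure for the distortion phenomenon:
# mean vertical centre-to-centre distance over mean horizontal one.
def aspect_ratio(vertical_pitches, horizontal_pitches):
    v = sum(vertical_pitches) / len(vertical_pitches)
    h = sum(horizontal_pitches) / len(horizontal_pitches)
    return v / h
```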
- the pitch deviation phenomenon is a phenomenon in which the pitch of adjacent cells deviates from the above-described ideal pitch, which is a constant interval.
- the pitch deviation ratio indicating the degree of pitch deviation is calculated, for example, as the ratio of the maximum amount of pitch deviation to the ideal pitch.
- the amount of pitch deviation is calculated as the absolute value of the difference between the ideal pitch and the distance between adjacent cell centers.
- the pitch deviation ratio may be, for example, the ratio of the average value of the amount of pitch deviation to the ideal pitch.
- the CPU 32 classifies successful cases of the two-dimensional code to be read based on the pitch deviation ratio, and stores them in the teacher data table 60 (see S116 in FIG. 8). For example, as shown in FIG. 18, successful examples of the two-dimensional code to be read are classified based on four ranges of the pitch deviation ratio. Note that the four value ranges in FIG. 18 are just an example.
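The pitch deviation ratio described above can be sketched as the maximum absolute deviation of the observed centre-to-centre distances from the ideal pitch, relative to the ideal pitch:

```python
# Sketch of the pitch-deviation ratio: max absolute deviation from the
# ideal pitch, normalised by the ideal pitch.
def pitch_deviation_ratio(ideal_pitch, observed_pitches):
    max_dev = max(abs(p - ideal_pitch) for p in observed_pitches)
    return max_dev / ideal_pitch
```

The variant mentioned in the text (average deviation instead of the maximum) would simply replace `max` with a mean.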
- the target number in S102 of FIG. 7 is an example of the "predetermined number”.
- the memory 34 of the code reader 10 is an example of the "first memory”.
- the control unit 30 of the code reading device 10 that executes the process of S100 in FIG. 7 is an example of the "classification unit” and the "judgment unit".
- the processing of the code reader 10 of this embodiment is the same as the processing of FIG. 3, which is the processing of the code reader 10 of the first embodiment, except that the processing of S200 is added.
- after the CPU 32 executes the normal reading process in S4, in S200 the CPU 32 transmits to the learning device 200 case information related to the cases of the processing executed in the normal reading process.
- the case information is stored in the case table 270 (see FIG. 11) in the memory 234 of the learning device 200.
- according to such a configuration, case information can be accumulated by storing it in the case table 270 each time the normal reading process is executed.
- the case table 270 is a table that stores, in association with each other, processing information indicating the content of the processing, cause information indicating the cause of the processing, and the number of occurrences, for the various processes executed in the normal reading process.
- the processing information indicates, for example, "symbol” indicating that at least one symbol mark has failed to be specified in the normal reading process, "error correction” indicating that error correction processing was executed in the normal reading process, and the like.
- the cause information is information indicating the cause of occurrence of the corresponding process, and indicates the type of deterioration such as "black deterioration" and "white deterioration” (see FIG. 9).
- the number of occurrences indicates the number of occurrences of the process indicated by the combination of the corresponding process information and the corresponding cause information.
- the number of occurrences is incremented each time a combination of corresponding processing information and corresponding cause information (that is, case information) is received from the code reader 10 .
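One way to model the case table 270 and its occurrence counter: keying by (processing information, cause information) mirrors the description, while the dict layout itself is an assumption for illustration:

```python
# Hypothetical model of the case table 270: increment the occurrence
# count each time matching case information arrives from the reader.
from collections import defaultdict

case_table = defaultdict(int)

def record_case(table, processing, cause):
    table[(processing, cause)] += 1

record_case(case_table, "error correction", "black deterioration")
record_case(case_table, "error correction", "black deterioration")
record_case(case_table, "symbol", "white deterioration")
```

Reading out the most frequent entry (e.g. with `max(case_table, key=case_table.get)`) then gives the deterioration tendency used in S210.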
- in the example case table 270, the number of occurrences corresponding to the cause information "black deterioration" is relatively large.
- the information in the case table 270 makes it possible to know the tendency of deterioration in the situation in which the code reader 10 is used.
- the processing of the learning device 200 of the present embodiment is the same as the processing of FIG. 5, which is the processing of the learning device 200 of the first embodiment, except that the processing of S205 is executed instead of S62 of FIG. 5 and the processing of S210 and S220 is added.
- S205 is the same as S62 in FIG. 5, except that the determination is made based on a sub-target number (for example, 50) that is smaller than the target number.
- when the determination of S205 is affirmative (YES in S205), the process proceeds to S210.
- the CPU 232 determines one or more processing processes to be executed at S220, which will be described later.
- the processing process is a process of processing the teacher data 242 in the memory 234 to generate new teacher data 242.
- Information in the case table 270 is used in the processing.
- specifically, the CPU 232 identifies, from the case table 270, the case information (that is, the processing information and cause information) stored in association with the highest number of occurrences, and determines the processing process to be executed in S220 according to the identified case information.
- the CPU 232 may also identify the case information stored in association with the next highest number of occurrences from the case table 270 and determine another processing process according to the identified case information. Also, for example, the CPU 232 may randomly specify case information from the case table 270.
- the CPU 232 may determine one processing process, or may determine two or more processing processes.
- the processing process is, for example, a process of executing image processing for turning part of one symbol mark in the code image indicated by the teacher data 242 white.
- the image processing is, for example, a process of adding a white image to part of the symbol mark, a process of reducing or enlarging a part of a plurality of cells forming the symbol mark, and the like.
- the image processing may target cells other than the cells forming the symbol mark.
- the CPU 232 may also execute various other image processing, including processing for adjusting the contrast of the code image indicated by the teacher data 242 and processing for rotating the code image. This increases the variation of the teacher data 242 after processing.
- the code image is rotated, for example, the values of the four corner position coordinates L1 to L4 indicated by the teacher data 242 are also converted into values rotated by the same rotation angle as the code image.
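Keeping the four corner coordinates consistent when a code image is rotated can be sketched as rotating each coordinate about the image centre by the same angle applied to the image; the centre-of-rotation choice is an assumption of this sketch:

```python
# Sketch of rotating a corner coordinate together with the code image:
# rotate (x, y) about centre (cx, cy) by the given angle in degrees.
import math

def rotate_point(x, y, cx, cy, degrees):
    rad = math.radians(degrees)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(rad) - dy * math.sin(rad),
            cy + dx * math.sin(rad) + dy * math.cos(rad))
```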
- note that the CPU 232 may execute, as the processing process, processing for adjusting the contrast of the code image, processing for rotating the code image, and the like, without executing image processing according to the cause information.
- the CPU 232 executes each of the one or more processing processes determined in S210.
- in doing so, the CPU 232 adjusts, for example, the position where the white image is added, the rotation angle for rotating the code image, and the like, and executes one processing process a plurality of times.
- the CPU 232 executes each of the one or more processing processes multiple times until the number of teacher data 242 in the memory 234 reaches a predetermined target number (for example, 100).
- the CPU 232 proceeds to S64.
- a comparative example that does not include the case table 270, for example one that executes processing using image processing predetermined by an administrator or the like of the learning device 200, is assumed.
- the processing may correspond to a case different from the case of the actual reading process.
- the case table 270 accumulates actual cases of normal reading processing.
- In this embodiment, the learning device 200 determines one or more processing processes from the case table 270 (S210 in FIG. 11). This allows appropriate processing to be performed in consideration of trends in actual cases of the reading process.
- Control unit 230 of learning device 200 that executes the process of S220 in FIG. 11 is an example of the “increase control unit”.
- the memory 234 of the learning device 200 is an example of the "second memory”.
- the control unit 230 that stores the case information transmitted in S200 of FIG. 10 in the memory 234 is an example of the "second storage control unit.”
- the two-dimensional code is displayed on various media (eg substrate, metal, paper media, etc.). Also, for example, the two-dimensional code is displayed on the medium by various display methods (eg, printing, cutting, etc.). Also, for example, the two-dimensional code is generated according to a specific standard among various standards (for example, size, encryption, etc.).
- Each of the plurality of case tables 270 corresponds to each of the plurality of types of two-dimensional code to be read.
- In this modification, the case information transmitted to the learning device 200 in S200 of FIG. 10 contains specific information about the two-dimensional code to be read.
- the specific information is information indicating a medium to be read, a display method, a standard, and the like.
- the specific information is input to the code reading device 10 by the user of the code reading device 10, for example.
- When the learning device 200 receives the case information including the specific information from the code reading device 10, the learning device 200 identifies, based on the specific information in the case information, the one case table 270 from among the plurality of case tables 270 in the memory 234 in which the received case information should be stored. Then, the learning device 200 stores the information in the received case information in the one identified case table 270.
- the code reading device 10 transmits the above specific information to the learning device 200 together with the teacher data 242.
- When the learning device 200 receives the teacher data 242 and the specific information from the code reading device 10, the learning device 200 stores, in S60 of FIG. 5, the teacher data 242 in the memory 234 in association with the specific information. When the learning device 200 determines that the number of teacher data 242 corresponding to the received specific information is equal to or greater than the sub-target value (YES in S205), the learning device 200 proceeds to the processing from S210. Specifically, the learning device 200 identifies, based on the specific information, the one case table 270 to be used in the learning process of S64 in FIG. 5. Then, the learning device 200 refers to the one identified case table 270 and executes the processes from S210 onward.
- case information is accumulated for each type of two-dimensional code to be read.
- a specific type of reading target can be read not only at the most recent date and time when the code reader 10 was used, but also at past dates and times.
- a particular type of read target may be read in an area different from the predetermined area in which the code reader 10 is used.
- According to this modification, the processing can be executed with reference to such cases.
- the learning model 252 of this embodiment is a model that inputs code image data to the input layer and outputs estimated values of the black and white patterns of the cells of the two-dimensional code from the output layer.
- The black and white pattern data of the cells of the two-dimensional code (hereinafter referred to as "pattern data") is generated based on, for example, the position coordinates of each cell specified in the normal reading process (see, for example, S54 in FIG. 4) and the black and white values of each cell determined in the binarization process of the same process.
- The position coordinates of each cell are defined as follows: the upper left cell is the origin (0, 0), and the upper right cell, the lower left cell, and the lower right cell are (N, 0), (0, N), and (N, N), respectively.
- N is the number of cells on one side of the two-dimensional code, which is 24, for example.
- the positional coordinates of each cell are assigned serial numbers in the order of the upper left cell, the upper right cell, the lower left cell, and the lower right cell.
- the pattern data is data in which each serial number of each cell is associated with the black and white values of the cell indicated by the serial number.
- As for the black and white values, for example, "white" is indicated by "1" and "black" is indicated by "0".
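The serial-number layout described above can be sketched as follows. This is a hypothetical helper (the function name and the use of a dict are illustrative choices), assuming the cell grid is given as rows of booleans with True meaning white, and that serial numbers run in raster order starting from the upper left cell, as the text describes:

```python
def make_pattern_data(cells):
    """Build pattern data from an N x N grid of cell values
    (True = white): each serial number, starting at 1 in raster
    order from the upper left cell, is associated with "1" for
    white or "0" for black."""
    pattern = {}
    serial = 1
    for row in cells:          # upper rows first
        for is_white in row:   # left to right within a row
            pattern[serial] = 1 if is_white else 0
            serial += 1
    return pattern
```

The resulting mapping of serial number to black/white value matches the description of the pattern data in the text.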
- the teacher data 242 of this embodiment includes code image data representing the code image cut out in S50 of the normal reading process in FIG. 4 and the pattern data described above.
- the learning device 200 refers to the teacher data 242 and executes learning processing for adjusting the model parameters 254 (see S64 in FIG. 5).
- In a modification, the teacher data 242 may include code image data representing a code image cut out in S80 of the learning reading process of FIG. 13, which will be described later in this embodiment, and pattern data generated based on the learning reading process of FIG. 13. In this case, the pattern data may be generated based on the position coordinates of each cell identified in S84 of FIG. 13 and the black and white values of each cell estimated in S300 of the same process.
- the learning reading process of this embodiment is the same as the process of FIG. 6 of the first embodiment except that the process of S300 is executed instead of the binarization process of S86.
- In S300, the CPU 32 inputs the code image data representing the code image cut out in S80 into the learning model 52 in the memory 34, and acquires from the learning model 52 an estimated value of the black and white pattern of the two-dimensional code represented by the code image.
- The CPU 32 refers to the position coordinates of each cell specified in S84 and the estimated value of the black and white pattern acquired in S300, and determines the black and white values of each cell of the two-dimensional code. The CPU 32 then executes code processing based on the determined black and white values of each cell.
- According to this embodiment, the learning model 52 is referenced to determine the black and white values of each cell of the two-dimensional code (S300 in FIG. 13). For example, it is assumed that reading by the normal reading process fails because most of the black and white portions of the two-dimensional code are discolored due to aging deterioration of the two-dimensional code.
- In such a case, due to the aging deterioration of the two-dimensional code, the black and white values of each cell determined by the binarization process differ greatly from the black and white values of each cell of the two-dimensional code before deterioration.
- the information in the two-dimensional code can be read by executing the learning reading process.
- Here, a comparative example is assumed in which the pattern data associates a value with each of the plurality of pixels representing the two-dimensional code.
- the information amount of the pattern data increases as the number of pixels increases.
- In contrast, in the pattern data of this embodiment, one value is associated with one cell.
- Therefore, even if the number of pixels increases, the amount of information in the pattern data does not increase.
- An increase in the information amount of the pattern data can thus be suppressed as compared with the above comparative example.
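The comparison above can be made concrete with rough arithmetic. In the sketch below, the per-cell count uses the example of N = 24 cells per side from the text, while the capture resolution of 10 pixels per cell side is a hypothetical figure introduced only for illustration:

```python
# Per-cell representation of this embodiment: one value per cell,
# independent of the captured image resolution.
cells_per_side = 24
per_cell_values = cells_per_side ** 2                  # 576 values

# Comparative example: one value per pixel, which grows with resolution.
pixels_per_cell_side = 10                              # hypothetical
per_pixel_values = (cells_per_side * pixels_per_cell_side) ** 2

print(per_cell_values, per_pixel_values)               # 576 57600
```

At that hypothetical resolution the per-pixel pattern data would carry one hundred times as many values as the per-cell pattern data, and the gap widens further as the resolution increases.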
- the configuration of the above comparative example may be employed in the modified example.
- In this embodiment, the black and white pattern of the cells of the two-dimensional code includes the black and white values of all of the cells.
- a two-dimensional code includes a group of cells arranged in a certain pattern such as a symbol mark (that is, a finder pattern), an alignment pattern, a timing pattern, or the like.
- In a modification, the black and white pattern of the cells of the two-dimensional code may include the black and white values of the plurality of cells excluding the group of cells arranged in the above fixed patterns, and need not include the black and white values of the group of cells arranged in the above fixed patterns.
- In this embodiment, serial numbers from 1 to 624 are attached in the black and white pattern of the cells. 624 is the same value as the number of cells of the two-dimensional code.
- the serial numbers in the black and white pattern of each cell may be numbered from 1 to M (where M is greater than 624).
- In this case, only 624 of the serial numbers from 1 to 31329 are used, and the remaining serial numbers need not be used. For example, all values corresponding to the remaining serial numbers may be set to "0".
- one learning model 252 can be shared in learning two-dimensional codes of multiple sizes.
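One way to realize this sharing is to give the pattern a fixed length M and zero out the unused serial numbers, as the text describes. The sketch below assumes the per-cell values have already been serialized into a flat list; the function name is hypothetical, and M = 31329 follows the example in the text:

```python
def to_fixed_length(pattern_values, m=31329):
    """Pad a per-cell black/white value vector with zeros up to a
    fixed length M so that two-dimensional codes of different sizes
    share one model input/output layout (M = 31329 and the padding
    value 0 follow the examples in the text)."""
    if len(pattern_values) > m:
        raise ValueError("code has more cells than the model supports")
    return list(pattern_values) + [0] * (m - len(pattern_values))
```

A code with 624 cells then occupies the first 624 slots of the fixed-length vector, and the remaining slots are all "0", so the same model layout accommodates larger codes up to M cells.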
- Pattern data is an example of "first relationship information”.
- the estimated value of pattern data is an example of "related information (and second related information).”
- the processing of the learning apparatus 200 of this embodiment is the same as the processing of FIG. 5 of the first embodiment, except that the processing of S400 is executed instead of the processing of S70 of FIG.
- all the model parameters 254 of the learning model 252 may be adjusted in the learning process of S64, or part of the model parameters 254 may be adjusted.
- When part of the model parameters 254 is adjusted, for example, the parameters on the input layer side of the multilayer neural network are fixed at predetermined values, and the parameters on the output layer side of the multilayer neural network are adjusted in the learning process.
- When the CPU 232 determines that the verification result indicates "OK" (YES in S68), it proceeds to S400.
- the CPU 232 generates a two-dimensional code that records the model parameters 254 adjusted at S64.
- the CPU 232 then transmits print data representing the generated two-dimensional code to the printer 500 . Thereby, the two-dimensional code is printed.
- the code reading device 10 reads the printed two-dimensional code. Thereby, the code reading device 10 acquires the model parameters 254 in the two-dimensional code and stores them in the memory 34 .
- the code reading device 10 can acquire the model parameters 254 by reading the two-dimensional code.
- the code reading device 10 can acquire the model parameters 254 by reading the printed two-dimensional code without executing communication with the learning device 200 .
- the information recorded in the two-dimensional code is not limited to the plaintext of the model parameters 254, and may be data obtained by reducing the plaintext model parameters 254, for example.
- the CPU 232 may divide the model parameter 254 into a plurality of pieces of data, and generate a two-dimensional code for recording each piece of divided data. According to such a configuration, even if the amount of information of the model parameters 254 exceeds the amount of information that can be stored in one two-dimensional code, a two-dimensional code that records the model parameters 254 can be created.
- the information recorded in the two-dimensional code is not limited to all of the model parameters 254, and may be part of the model parameters 254.
- For example, when the values of some of the model parameters 254 are fixed, only the parameters adjusted in the learning process of S64 may be recorded in the two-dimensional code.
- Also, among the model parameters 254 after adjustment, only the parameters changed from the model parameters 254 before adjustment may be recorded in the two-dimensional code.
- the model parameters 254 before adjustment may be, for example, the model parameters 254 at the shipping stage, or the model parameters 254 after adjustment in the previous learning process. Compared to a configuration in which all model parameters 254 are recorded, the amount of information recorded in the two-dimensional code can be reduced.
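The division of the model parameters 254 across a plurality of two-dimensional codes might be sketched as follows. This assumes the parameters are already serialized into bytes; the one-byte chunk-index header used for reassembly is a hypothetical framing choice, not something specified in the text:

```python
def split_for_codes(payload: bytes, capacity: int):
    """Split a serialized parameter payload into numbered chunks,
    each small enough to be recorded in one two-dimensional code
    (`capacity` is the per-code byte budget; a 1-byte chunk index
    is prepended as a hypothetical framing header)."""
    body = capacity - 1  # reserve 1 byte for the chunk index
    chunks = [payload[i:i + body] for i in range(0, len(payload), body)]
    return [bytes([idx]) + c for idx, c in enumerate(chunks)]

def join_chunks(chunks):
    """Reassemble the payload from chunks read in any order,
    sorting by the 1-byte index header."""
    return b"".join(c[1:] for c in sorted(chunks, key=lambda c: c[0]))
```

With such framing, the code reading device 10 could read the printed codes in any order and still reconstruct the full parameter payload, which matches the motivation given in the text for dividing the parameters.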
- a printer is an example of an "output device.”
- the learning device 200 may display the two-dimensional code in which the model parameters 254 are recorded on a display (not shown) that can communicate with the learning device 200 .
- the display is an example of the "output device”.
- the code reader 10 stores multiple pieces of learning information 50 in the memory 34 .
- The code reader 10 is used in a variety of situations. For example, a situation is assumed in which the code reader 10 is used not only in a factory but also outdoors. Further, for example, a situation is assumed in which the code reading device 10 reads not only a two-dimensional code displayed on a first medium (for example, a paper medium) but also a two-dimensional code displayed on a second medium (for example, metal) different from the first medium. According to the configuration of this embodiment, unique learning information 50 can be stored for each of such various situations. This makes it possible to deal with various situations.
- the memory 34 is divided into a plurality of storage areas including storage areas a1, b1 and b2.
- the program 40 is stored in the memory area a1, the first learning information 50 is stored in the memory area b1, and the second learning information 50 is stored in the memory area b2.
- a comparative example is assumed in which a plurality of pieces of learning information 50 are stored in one storage area. This comparative example requires a process of searching for learning information 50 to be updated from among a plurality of pieces of learning information 50 .
- In contrast, in this embodiment, there is no need to search for the learning information 50 to be updated from among the plurality of pieces of learning information 50; it is only necessary to access the storage area in which the learning information 50 to be updated is stored.
- Memory 34 is an example of a "specific memory.”
- the storage area a1 is an example of the "first area”.
- the storage areas b1 and b2 are an example of the "second area”.
- The intermediary device 700 is, for example, a server. The intermediary device 700 is connected to the LAN 4. This allows the intermediary device 700 to communicate with the code reader 10 via the LAN 4. The intermediary device 700 is also connected to the Internet 6. This allows the intermediary device 700 to communicate with the learning device 200 via the Internet 6.
- the code reading device 10 can directly communicate with the intermediary device 700, but cannot directly communicate with the learning device 200.
- the code reading device 10 transmits the teacher data 242 to the intermediary device 700 .
- intermediary device 700 transmits teacher data 242 to learning device 200 .
- the learning device 200 transmits the adjusted model parameters 254 to the mediation device 700 .
- the intermediary device 700 transmits the adjusted model parameters 254 to the code reading device 10 . That is, mediation device 700 mediates communication between code reading device 10 and learning device 200 .
- The intermediary device 700 can block the code reading device 10 from the Internet 6, preventing the code reader 10 from being directly accessed from the Internet 6.
- the intermediation device 700 is an example of an "intermediation device.”
- the code reading device 10 can perform image processing on the code image for the purpose of improving reading accuracy.
- Image processing is the process of transforming a code image using at least one filter.
- The filter is, for example, a smoothing filter, a black dilation filter, a black erosion filter, or the like.
- A smoothing filter is a filter that smoothes the luminance values of an image.
- The black dilation filter is a filter that dilates black blobs in an image.
- The black erosion filter is a filter that shrinks black blobs in an image.
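The three filter types can be sketched with plain NumPy neighborhood operations. Since black corresponds to a low luminance value, expanding black areas amounts to a neighborhood minimum and contracting them to a neighborhood maximum; this is an illustrative sketch (function names hypothetical), not the device's actual filter implementation:

```python
import numpy as np

def _neighborhood_reduce(img, reduce_fn, size=3):
    """Apply reduce_fn over each size x size neighborhood,
    replicating edge pixels at the border."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = reduce_fn(padded[y:y + size, x:x + size])
    return out

def smooth(img, size=3):
    # Smoothing filter: average of the neighborhood luminance values.
    return _neighborhood_reduce(img, np.mean, size)

def black_dilate(img, size=3):
    # Black dilation: dark (low-luminance) blobs grow -> neighborhood minimum.
    return _neighborhood_reduce(img, np.min, size)

def black_erode(img, size=3):
    # Black erosion: dark blobs shrink -> neighborhood maximum.
    return _neighborhood_reduce(img, np.max, size)
```

The `size` argument corresponds to the filter sizes (for example "3×3") that appear as classification items later in the text.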
- the internal factor classification process of this embodiment is a process of classifying the two-dimensional code to be read according to the internal factor, which is the mode of use of the filter of the code reading device 10 .
- the filter may be set by the user, or may be automatically selected by the code reading device 10 .
- the filters may be used multiple times and the sizes of the filters may vary.
- the process of FIG. 19 is the same as the process of FIG. 8 except that the processes of S510 and S512 are executed instead of S110 and S112 of FIG.
- the CPU 32 determines whether at least one filter has been used for the code image. When the CPU 32 determines that no filter has been used for the code image (NO in S510), it skips the processing of S512 described later and proceeds to S114.
- the CPU 32 classifies the code images based on the mode of use of the filter.
- The classification items are, for example, the number of times the filter is used, the type of filter, the size of the filter, and the like. For example, as shown in FIG. 19, when a smoothing filter of size "3×3" is used only once for a code image, the successful case corresponding to the code image is stored in the teacher data table 60 as teacher data TD1 corresponding to the items size "3×3", type "smoothing filter", and number of uses "once" (S116).
- On the other hand, when no filter is used for the code image, the success case corresponding to the code image is associated with the number of times of use "0 times" and stored in the teacher data table 60 (S116). Note that the items in FIG. 19 are merely an example; for example, when two types of filters are used, a combination thereof may also be adopted as an item.
- the internal factor classification process of the present embodiment is a process of classifying the two-dimensional code to be read according to internal factors, which are imaging conditions of the code reading device 10 .
- the imaging conditions include the exposure time, the distance from the object to be imaged, ON/OFF of lighting (for example, flash), and the like.
- the imaging conditions may be set by the user, or may be automatically selected by the code reader 10 .
- The distance to the object to be imaged may be calculated, for example, based on the focal length, or may be calculated by a stereo method with reference to an indication marker irradiated onto the object to be read.
- the process of FIG. 20 is the same as the process of FIG. 8 except that the process of S612 is executed instead of S110 and S112 of FIG.
- the CPU 32 classifies the code images based on the imaging conditions currently used for imaging the code images.
- the classification items are, for example, exposure time, distance, illumination, and the like.
- For example, when the exposure time is 0.5 ms or less, the distance is 100 mm or less, and the illumination is "ON", the success case corresponding to the code image is stored in the teacher data table 60 as teacher data TD1 corresponding to the items "0.5 ms or less", "100 mm or less", and "ON" (S116).
- the internal factor classification process of this embodiment is a process of classifying the two-dimensional code to be read according to the internal factor, which is the processing time of the reading process of the code reader 10 .
- the processing time is, for example, the time from the start to the end of the normal reading process (see FIG. 4) or the time from the start to the end of the learning reading process (see FIG. 6). Note that in other modifications, the processing time may be part of the normal reading process, for example, the time from the start to the end of the decoding process.
- the CPU 32 classifies the code images based on the processing time.
- The classification items include, for example, processing time "100 ms or less" and processing time "150 ms or less". For example, if the processing time is greater than 100 ms and 150 ms or less, the successful case corresponding to the code image is stored in the teacher data table 60 as teacher data TD3 corresponding to the item "150 ms or less" (S116). Note that the items in FIG. 20 are only examples.
- the reason for the long processing time is, for example, the deterioration of the two-dimensional code, and the length of the processing time and the degree of deterioration may have a correlation. Classification based on the processing time makes it possible to suppress the distribution of success cases from biasing toward cases with a specific degree of deterioration. Also in this embodiment, the same effect as in the second embodiment can be obtained.
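A minimal sketch of the processing-time classification follows. The bucket bounds mirror the example items "100 ms or less" and "150 ms or less" from the text; the label for times exceeding every bound is an assumption, and the function name is hypothetical:

```python
def classify_by_processing_time(ms, buckets=(100, 150)):
    """Return the label of the first bucket whose upper bound the
    measured reading-process time does not exceed; times above
    every bound fall into a hypothetical overflow bucket."""
    for bound in buckets:
        if ms <= bound:
            return f"{bound} ms or less"
    return f"more than {buckets[-1]} ms"
```

Because longer processing times may correlate with stronger deterioration, bucketing successful cases this way spreads the teacher data across degrees of deterioration, as the surrounding text explains.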
- An item with a high degree of influence is, for example, the pattern number indicating deterioration of the symbol mark in the pattern determination process (FIG. 9).
- a region in which data is recorded in the two-dimensional code can be restored by error correction even if it deteriorates.
- the symbol mark cannot be restored by error correction, and if the symbol mark deteriorates, the position of the two-dimensional code cannot be specified, and there is a high possibility that the reading of the two-dimensional code will fail.
- the item indicating deterioration of the symbol mark which is highly likely to fail in reading the two-dimensional code, is adopted as an item having a higher degree of influence than the other items.
- In this embodiment, when the total number of data groups of one or more items having a high degree of influence is equal to or greater than a predetermined number (YES in S700), the code reading device 10 transmits all of the teacher data 242 to the learning device 200 (S104).
- Successful cases corresponding to the data groups of one or more items with a high degree of influence are cases in which the two-dimensional code was read successfully, but which indicate that the possibility of reading failing due to deterioration of the symbol mark or the like is increasing. In such a situation, learning can be started quickly without waiting for the number of teacher data 242 to reach the target number.
- The determination in S700 of FIG. 21 is not limited to the classification based on deterioration of the information code, and may also be employed for the classification based on the contrast of the information code (modification of the second embodiment), the classification based on deformation of the information code (second embodiment (FIGS. 16 to 18)), and the classification based on internal factors of the code reader (ninth to eleventh embodiments (FIGS. 19 and 20)). Also, the determination of S700 of FIG. 21 may be employed for classification based on a combination of at least two of the above deterioration, contrast, deformation, and internal factors.
- the code reader 10 can irradiate an indication marker that indicates a two-dimensional code to be read.
- the indication marker has a predetermined shape such as a cross, circle, square, or the like.
- the CPU 32 identifies the position of the instruction marker from the captured image of the camera 20 .
- the CPU 32 extracts a predetermined shape of the pointing marker from the captured image and identifies the position of the predetermined shape as the position of the pointing marker.
- the CPU 32 may specify the center of the captured image as the position of the pointing marker.
- the predetermined number of pixels (N1, N2) is preset based on, for example, the size of the two-dimensional code to be read.
- N1 and N2 are positive integers, and N2 may be the same as or different from N1.
- a code image including a two-dimensional code can be easily cut out from the captured image based on the instruction marker.
- a two-dimensional code is composed of black and white patterns. Therefore, as shown in FIG. 23, the luminance of the pixels representing the two-dimensional code fluctuates greatly, while the luminance of the pixels representing the background other than the two-dimensional code hardly fluctuates.
- the CPU 32 analyzes the luminance of pixels along the horizontal direction from the position of the pointing marker, and estimates pixels whose luminance variation is equal to or greater than a predetermined value as the boundary line of the two-dimensional code in the horizontal direction.
- Similarly, the CPU 32 performs the same analysis in the vertical direction from the position of the pointing marker, and estimates pixels whose luminance variation is equal to or greater than a predetermined value as the boundary line of the two-dimensional code in the vertical direction. Then, the CPU 32 determines a line located a predetermined number of pixels N3 from the horizontal boundary line as a vertical line of the cutting range, and a line located a predetermined number of pixels N4 from the vertical boundary line as a horizontal line of the cutting range.
- the predetermined numbers of pixels N3 and N4 are set in advance based on, for example, the size of the two-dimensional code to be read. N3 and N4 are positive integers, and N3 may be the same as or different from N4.
- According to this configuration, the two-dimensional code can be included in the cutting range.
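The luminance-variation scan described above might be sketched as follows. The threshold value and the single-pixel step are assumptions, and the function name is hypothetical; the function walks from the marker position in one direction and reports the first pixel whose luminance changes strongly from its predecessor:

```python
import numpy as np

def find_boundary(img, marker_xy, step, threshold):
    """Walk from the pointing-marker position in the direction
    step=(dx, dy) and return the (x, y) of the first pixel whose
    luminance differs from the previous pixel by at least
    `threshold`; return None if no such pixel is found."""
    h, w = img.shape
    x, y = marker_xy
    dx, dy = step
    prev = img[y, x]
    x, y = x + dx, y + dy
    while 0 <= x < w and 0 <= y < h:
        if abs(int(img[y, x]) - int(prev)) >= threshold:
            return (x, y)  # estimated boundary of the two-dimensional code
        prev = img[y, x]
        x, y = x + dx, y + dy
    return None
```

Calling this with steps (1, 0), (-1, 0), (0, 1), and (0, -1) would give the horizontal and vertical boundary lines from which the cutting range is then offset by N3 and N4 pixels, as the text describes.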
- the CPU 32 calculates the position coordinates of the four corners of each of at least some of the code image data in the plurality of pieces of teacher data 242 stored in S60 of FIG. 5, as in S52 of FIG. Then, the CPU 32 sets a range including all four corners calculated for at least part of the code image data in the plurality of pieces of teacher data 242 as a cut range.
- the cutting range can be set even with the code reading device 10 that does not use the indication marker.
- In a modification, the code reader 10 may be, for example, a stationary code reader fixed to a production line in a factory, a POS register, or the like.
- the "information code” is not limited to a two-dimensional code, and may be, for example, a barcode, a multi-stage barcode, or the like.
- The objects to be learned by the learning device 200 are not limited to the position coordinates of the four corners of the two-dimensional code (see the first embodiment) and the black and white patterns of the cells of the two-dimensional code (see the fifth embodiment).
- For example, the learning target may be code position information indicating the position of the code image within the captured image. For example, a situation is assumed in which the black portions of the two-dimensional code become faint, making it difficult to find the two-dimensional code in the captured image. By learning the code position information, the two-dimensional code can be found in the captured image even in the above situation, and the code image can be cut out in the cutting process.
- the code position information is an example of "relationship information (first relational information and second relational information)".
- the learning target may be the range to be cut in the cut processing.
- the object to be learned may be the position coordinates of the four corners of each of the three symbol marks.
- the learning object may be the position of the timing pattern of the two-dimensional code.
- the learning object may be information (position, size, etc.) indicating a predetermined pattern (eg, symbol mark, timing pattern, etc.) in the two-dimensional code.
- The information about the range to be cut, the position coordinates of the four corners of the symbol marks, or the predetermined pattern in the two-dimensional code is an example of the "relational information (the first relational information and the second relational information)".
- The relational information (the first relational information and the second relational information) is not limited to one type of information (for example, the position coordinates of the four corners of the two-dimensional code), and may be plural types of information (for example, the position coordinates of the four corners of the symbol mark and the position coordinates of the four corners of the two-dimensional code).
- the teacher data 242 may include not only successful cases of normal reading processing but also successful cases of learning reading processing.
- the training data 242 may include successful cases of the normal reading process and may not include successful cases of the learning reading process.
- the "teaching data" should at least include a plurality of successful cases of the reading process using the first relational information.
- the "classification unit” and “determination unit” are not limited to the control unit 30 of the code reading device 10, but may be the control unit 230 of the learning device 200, for example. In this case, the processing in FIG. 8 may be executed by the control unit 230 of the learning device 200.
- the teacher data table 60 may be stored not only in the memory 34 of the code reading device 10 but also in the memory 234 of the learning device 200, for example.
- the memory 234 is an example of the "first memory”.
Description
- a detection process of detecting, from an image of a first information code as the information code captured by the camera, first relational information that is information defining a two-dimensional code region of the first information code;
- a reading process of reading, based on the detected first relational information, information recorded in the first information code; and
- an output process of outputting a result of the reading process;
a first code processing execution unit that executes first code processing including the above processes;
- an adjustment unit that adjusts parameters of a learning model, which outputs second relational information that is an estimated value of the information defining the two-dimensional code region of the information code from an image of the information code captured by the camera, based on teacher data including at least a plurality of successful cases of the reading process read based on the first relational information;
- an acquisition process of inputting an image of a second information code as the information code captured by the camera into the learning model and acquiring the second relational information from the learning model;
- a reading process of reading, based on the acquired second relational information, information recorded in the second information code; and
- an output process of outputting a reading result of the reading process;
a second code processing execution unit that performs second code processing including the above processes;
An information reading device comprising the above units.
(Information reading system; FIGS. 1 and 2)
The information reading system 2 of this embodiment is a system for reading information recorded in a two-dimensional code CD, which has a two-dimensional code region CR in which the information is recorded. For example, the two-dimensional code CD is displayed on a specific medium (for example, metal, a substrate, resin, a paper medium, etc.) in a factory, an outdoor work site, or the like. As illustrated in FIG. 1, the code region CR is, for example, rectangular (square or oblong), and black cells (dark cells) BC and white cells (light cells) WC are mapped in it in accordance with a specific rule and the encoded information (the information to be displayed).
The two-dimensional code CD may also be used over a long period of time. For this reason, the two-dimensional code CD may deteriorate with age. For example, over time, the black portions (the portions of the black cells (dark cells) BC) of part or all of the two-dimensional code CD become faint, and the contrast of the black portions with respect to the white portions (the portions of the white cells (light cells) WC) of the two-dimensional code CD decreases. Also, for example, over time, part of the two-dimensional code CD may be lost, and dirt may adhere to part of the two-dimensional code CD.
The code reading device 10 is a portable device for reading the information recorded in the two-dimensional code CD. Note that the appearance of the code reading device 10 shown in FIG. 1 is merely an example; for example, the code reading device 10 may have an appearance similar to that of a smartphone.
…and a memory 34 (which functions as a computer-readable recording medium). The CPU 32 executes various processes in accordance with the program 40 stored in the memory 34. The memory 34 further stores learning information 50 relating to machine learning using a multilayer neural network. Here, a multilayer neural network is a function composed of an input layer, intermediate layers, and an output layer; data input to the input layer is processed by the intermediate layers and output from the output layer. The multilayer neural network is, for example, a convolutional neural network, a fully connected neural network, or the like. In machine learning, not only a multilayer neural network but also, for example, a support vector machine may be used. Multilayer neural networks and the like are known techniques, and a detailed description is omitted here.
The processing executed by the CPU 32 of the code reading device 10 in accordance with the program 40 will be described with reference to FIG. 3. The processing of FIG. 3 is started when the code reading device 10 receives, via the operation unit 12, an instruction to start reading a two-dimensional code.
The normal reading process in S4 of FIG. 3 will be described with reference to FIG. 4. In S50, the CPU 32 executes a cutting process of cutting out, with a predetermined number of pixels, a code image, which is an image showing the two-dimensional code, from the captured image. Specific examples of the cutting process will be described later with reference to FIGS. 22 to 24.
The processing executed by the CPU 232 of the learning device 200 in accordance with the program 240 will be described with reference to FIG. 5. The processing shown in FIG. 5 is started when the teacher data 242 is received from the code reading device 10. Here, the teacher data 242 is received from one of the two code reading devices 10. In a modification, the teacher data 242 may be received from both of the two code reading devices 10.
The learning reading process in S24 of FIG. 3 will be described with reference to FIG. 6. S80 is the same as S50 of FIG. 4. In S82, the CPU 32 of the code reading device 10 inputs the code image data representing the code image cut out in S80 into the learning model 52 in the memory 34, and acquires from the learning model 52 estimated values of the position coordinates of the four corners of the code image. S84 to S88 are the same as S54 to S58 of FIG. 4, except that the estimated values acquired in S82 are used. When S88 ends, the processing of FIG. 6 ends.
本実施例の構成によれば、学習装置200は、図4の通常読取処理のS52によって算出された二次元コードの四隅の位置座標L1~L4を利用したデコード処理の複数の成功事例を含む教師データ242を参照して、二次元コードの画像から二次元コードの四隅の位置座標の推定値を出力する学習モデル252のモデルパラメータ254を調整する(図5のS64)。そして、コード読取装置10は、撮像条件を変更するものの、通常読取処理における読取が失敗する場合(図3のS12の後にS10でNO)に、学習読取処理を実行する(S24)。学習読取処理では、通常読取処理のS52による算出に代えて、学習モデル52を参照して四隅の位置座標の推定値を出力する処理が実行される(図6のS82)。例えば、二次元コードの経年劣化により二次元コードの四隅の位置座標L1~L4の算出が失敗することに起因して、通常読取処理による読み取りが失敗することが想定される。本実施例では、二次元コードの経年劣化により二次元コードの四隅の位置座標L1~L4の算出が失敗する場合であっても、学習読取処理を実行することにより、二次元コード内の情報を読み取ることができる。
The information reading system 2 is an example of an "information reading device". The two code readers 10 of FIG. 2, one of them, and the other of them are examples of "one or more code readers", a "first code reader", and a "second code reader", respectively. The camera 20 is an example of a "camera". The learning device 200 is an example of a "learning device". The two-dimensional code is an example of an "information code (a first information code and a second information code)". The four-corner position coordinates L1 to L4 of the two-dimensional code are an example of "first relationship information". The estimates of the four-corner position coordinates are an example of "relationship information (and second relationship information)". The training data 242, the learning model 252, and the model parameters 254 are examples of "training data", a "learning model", and "parameters", respectively. The processing of FIG. 4 together with S40 of FIG. 3 is an example of "first code processing". S52 and S58 of FIG. 4 and S40 of FIG. 3 are examples of a "detection process", a "reading process", and an "output process", respectively. The processing of FIG. 6 together with S40 of FIG. 3 is an example of "second code processing". S82 of FIG. 6 is an example of an "acquisition process".
The present embodiment is the same as the first embodiment except that part of the processing of the code reader 10 differs.
The processing of the code reader 10 of the present embodiment is the same as the processing of FIG. 3 of the first embodiment except that S100 to S102 are added and S104 is executed in place of S42. Processes identical to those of the first embodiment are given the same reference signs; the same applies to the embodiments described later.
With reference to FIG. 8, the deterioration classification process of S100 of FIG. 7 will be described. In S110, the CPU 32 determines whether the error correction process was executed in the normal reading process of S4 or the learning-based reading process of S24 of FIG. 7. If the CPU 32 determines that the error correction process was executed (YES in S110), it proceeds to S112. On the other hand, if the CPU 32 determines that the error correction process was not executed (NO in S110), it skips S112 and proceeds to S114.
To ensure the reliability of the model parameters 254, more success cases than a target number (e.g., 100) are required. However, if success cases were adopted as training data 242 unconditionally, then by the time the number of items of training data 242 exceeds the target number, the distribution of success cases could be biased toward cases of a particular deterioration pattern (e.g., black-level fading in a particular region). According to the configuration of the present embodiment, the code reader 10 classifies success cases of the two-dimensional code to be read by deterioration pattern (S112 of FIG. 8). Then, based on the classification result, the code reader 10 determines whether to store the success case as training data 242 in the training data table 60 (S114). This suppresses the success cases used as training data 242 from being biased toward cases of a particular deterioration pattern.
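The classify-then-decide step above (S112/S114) can be sketched as a simple admission policy. The concrete rule below, capping each deterioration pattern at an equal share of the target number, is a hypothetical policy for illustration; the patent only requires that the decision be based on the classification result:

```python
from collections import Counter

def should_store(pattern, stored_patterns, target_total=100, n_patterns=4):
    """Decide whether to keep a new success case as training data so that
    no single deterioration pattern dominates the table. Hypothetical
    policy: cap each pattern at an equal share of the target number."""
    cap = target_total // n_patterns
    return stored_patterns[pattern] < cap

stored = Counter()  # plays the role of the training data table 60
kept = 0
# Simulate a stream heavily biased toward pattern 0 (e.g. black fading).
for pattern in [0] * 80 + [1] * 30 + [2] * 20 + [3] * 10:
    if should_store(pattern, stored):
        stored[pattern] += 1
        kept += 1
```

Even though 80 of the 140 incoming cases share pattern 0, at most 25 of them are stored, so the table stays balanced across patterns.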
The thickening/thinning phenomenon includes a thickening phenomenon and a thinning phenomenon. The thickening phenomenon is a phenomenon in which the actual width of a black cell is greater than its ideal width. Conversely, the thinning phenomenon is a phenomenon in which the actual width of a black cell is smaller than its ideal width.
In general, the ratio of the vertical length to the horizontal length of a two-dimensional code (hereinafter, the "aspect ratio") is 1:1. The distortion phenomenon is a phenomenon in which the horizontal length of the two-dimensional code is distorted relative to its vertical length, so that the aspect ratio deviates from 1:1. The aspect ratio is calculated, for example, as the ratio of the center-to-center distance between vertically adjacent cells to the center-to-center distance between horizontally adjacent cells.
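The aspect-ratio calculation from cell-center distances can be sketched as follows (a minimal example; in practice the distances would be averaged over many cell pairs, which the text leaves open):

```python
def aspect_ratio(centers):
    """Aspect ratio of a 2D code estimated as the ratio of the
    center-to-center distance of vertically adjacent cells to that of
    horizontally adjacent cells. centers[r][c] is an (x, y) tuple."""
    (x0, y0) = centers[0][0]
    (x1, y1) = centers[1][0]   # vertically adjacent cell
    (x2, y2) = centers[0][1]   # horizontally adjacent cell
    v = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    h = ((x2 - x0) ** 2 + (y2 - y0) ** 2) ** 0.5
    return v / h

# Undistorted grid: 10-pixel pitch both ways, so the ratio is 1.0.
grid = [[(c * 10.0, r * 10.0) for c in range(3)] for r in range(3)]
# Horizontally stretched grid: 12-pixel horizontal pitch, ratio below 1.0.
stretched = [[(c * 12.0, r * 10.0) for c in range(3)] for r in range(3)]
```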
In general, the cells constituting a two-dimensional code are arranged at a constant interval (i.e., pitch) in the vertical and horizontal directions. The pitch deviation phenomenon is a phenomenon in which the pitch between adjacent cells deviates from the ideal pitch, i.e., the above constant interval. The pitch deviation ratio, which indicates the degree of pitch deviation, is calculated, for example, as the ratio of the maximum amount of pitch deviation to the ideal pitch. The amount of pitch deviation is calculated as the absolute value of the difference between the ideal pitch and the center-to-center distance between adjacent cells. In a modification, the pitch deviation ratio may instead be, for example, the ratio of the average amount of pitch deviation to the ideal pitch.
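The pitch-deviation ratio as defined above can be computed directly from a row of cell-center coordinates (shown here in one dimension for brevity):

```python
def pitch_deviation_ratio(center_xs, ideal_pitch):
    """Pitch-deviation ratio: the maximum absolute difference between the
    ideal pitch and each adjacent center-to-center distance, divided by
    the ideal pitch. (The modification in the text would use the mean of
    the deviations instead of the maximum.)"""
    deviations = [abs(ideal_pitch - (b - a))
                  for a, b in zip(center_xs, center_xs[1:])]
    return max(deviations) / ideal_pitch

# Ideal pitch 10; one gap is 13 pixels, so the max deviation is 3
# and the ratio is 0.3.
xs = [0.0, 10.0, 23.0, 33.0]
```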
The target number of S102 of FIG. 7 is an example of a "predetermined number". The memory 34 of the code reader 10 is an example of a "first memory". The control unit 30 of the code reader 10 that executes the processing of S100 of FIG. 7 is an example of a "classification unit" and a "determination unit".
The present embodiment is the same as the first embodiment except that part of the processing of the code reader 10 and part of the processing of the learning device 200 differ.
The processing of the code reader 10 of the present embodiment is the same as the processing of FIG. 3 of the first embodiment except that S200 is added.
The processing of the learning device 200 of the present embodiment is the same as the processing of FIG. 5 of the first embodiment except that S205 is executed in place of S62 of FIG. 5 and that S210 and S220 are added.
To ensure the reliability of the model parameters 254, more success cases than a target number (e.g., 100) are required. According to the configuration of the present embodiment, in a situation where the number of success cases does not exceed the target number, a processing operation can be executed to generate virtual success cases, i.e., virtual training data 242 (S220 of FIG. 11). This increases the number of success cases, i.e., the number of items of training data 242, so that the reliability of the model parameters 254 can be ensured even when the number of real success cases does not exceed the target number. Furthermore, in a situation where the number of real success cases has not reached the target number, comparatively reliable model parameters 254 can be generated promptly.
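A minimal sketch of generating virtual cases from one real code image (S220). The concrete operations here (contrast scaling, rotation by a multiple of 90 degrees, additive noise) are illustrative stand-ins chosen from the kinds of image processing the embodiments mention; the patent does not fix a particular parameterization:

```python
import numpy as np

def augment(code_img, rng):
    """Generate one virtual training case from a real code image
    (values in [0, 1])."""
    img = code_img.astype(np.float64)
    img = np.clip(0.5 + rng.uniform(0.6, 1.0) * (img - 0.5), 0.0, 1.0)  # contrast
    img = np.rot90(img, k=rng.integers(0, 4))                           # rotation
    img = np.clip(img + rng.normal(0.0, 0.02, img.shape), 0.0, 1.0)     # noise
    return img

rng = np.random.default_rng(1)
# Toy 24x24 checkerboard standing in for a real code image.
real = (np.indices((24, 24)).sum(axis=0) % 2).astype(np.float64)
virtual = [augment(real, rng) for _ in range(5)]  # 5 virtual cases from 1 real case
```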
The control unit 230 of the learning device 200 that executes the processing of S220 of FIG. 11 is an example of an "increase control unit". The memory 234 of the learning device 200 is an example of a "second memory". The control unit 230 that stores, in the memory 234, the case information transmitted in S200 of FIG. 10 is an example of a "second storage control unit".
(Processing of the code reader 10 and the learning device 200; FIGS. 10 and 11)
The present embodiment is the same as the third embodiment except that a plurality of case tables 270 (see FIG. 11) are used, and that the content of the case information transmitted in S200 of FIG. 10 and the content of the training data 242 transmitted in S42 partly differ.
The present embodiment is the same as the first embodiment except that the content of the training data 242 and the values output by the learning model 252 differ, and that part of the learning-based reading process differs.
The learning model 252 of the present embodiment is a model that receives code image data at its input layer and outputs, from its output layer, an estimate of the black/white pattern of the cells of the two-dimensional code. The data of the black/white pattern of the cells (hereinafter, "pattern data") is generated, for example, based on the position coordinates of each cell identified in the normal reading process (see, e.g., S54 of FIG. 4) and the black/white value of each cell determined in the binarization process of that same process. For example, with the upper-left cell as the origin (0, 0), the position coordinates of the upper-right, lower-left, and lower-right cells are defined as (N, 0), (0, N), and (N, N), respectively, where "N" is the number of cells along one side of the two-dimensional code, e.g., 24. Serial numbers are then assigned to the cell position coordinates in the order of the upper-left, upper-right, lower-left, and lower-right cells. The pattern data is data that associates, with each serial number, the black/white value of the cell indicated by that serial number. Here, for example, the value "1" represents "white" and "0" represents "black". When the error correction process is executed in S58 of the normal reading process of FIG. 4, the black/white values in the pattern data may be the values restored by the error correction process.
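The serial-numbering scheme above (upper-left first, then row by row toward the lower-right, with 1 = white and 0 = black) can be sketched as:

```python
def make_pattern_data(cell_values):
    """Build pattern data from an N x N grid of binarized cell values
    (1 = white, 0 = black): serial numbers are assigned row by row from
    the upper-left cell, and each serial number is associated with the
    black/white value of its cell."""
    pattern = {}
    serial = 0
    for row in cell_values:
        for value in row:
            pattern[serial] = value
            serial += 1
    return pattern

# Toy 2 x 2 code: upper-left white, upper-right black, lower row white.
pattern = make_pattern_data([[1, 0], [1, 1]])
```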
The learning-based reading process of the present embodiment is the same as the processing of FIG. 6 of the first embodiment except that S300 is executed in place of the binarization process of S86.
The pattern data is an example of "first relationship information". The estimate of the pattern data is an example of "relationship information (and second relationship information)".
The present embodiment is the same as the first embodiment except that part of the processing of the learning device 200 differs.
The processing of the learning device 200 of the present embodiment is the same as the processing of FIG. 5 of the first embodiment except that S400 is executed in place of S70 of FIG. 5. In the present embodiment, in the learning process of S64, all of the model parameters 254 of the learning model 252 may be adjusted, or only part of them. When only part of the model parameters 254 is adjusted, for example, the parameters on the input-layer side of the multilayer neural network may be fixed at predetermined values while the parameters on the output-layer side are adjusted in the learning process.
The printer is an example of an "output device". In a modification, the learning device 200 may display a two-dimensional code in which the model parameters 254 are recorded on a display (not shown) capable of communicating with the learning device 200. In this modification, that display is an example of the "output device".
In the present embodiment, the code reader 10 stores a plurality of items of learning information 50 in the memory 34. The code reader 10 is used in various situations. For example, a situation is assumed in which the code reader 10 is used not only inside a factory but also outdoors. Also, for example, a situation is assumed in which the code reader 10 reads not only a two-dimensional code displayed on a first medium (e.g., a paper medium) but also a two-dimensional code displayed on a second medium different from the first medium (e.g., metal). According to the configuration of the present embodiment, learning information 50 specific to each of these various situations can be stored, making it possible to handle each such situation.
The memory 34 is an example of a "particular memory". The storage area a1 is an example of a "first area". The storage areas b1 and b2 are examples of a "second area".
(Information reading system 2; FIG. 15)
The present embodiment is the same as the first embodiment except that the information reading system 2 includes an intermediation device 700 and that the learning device 200 is installed on the Internet 6.
(Internal-factor classification process; FIG. 19)
The present embodiment is the same as the second embodiment except that, in place of the deterioration classification process of S100 of FIG. 7, an internal-factor classification process is executed that classifies according to internal factors, such as the internal processing of the code reader 10 and the settings of the code reader 10.
(Internal-factor classification process; FIG. 20)
The internal-factor classification process of the present embodiment classifies the two-dimensional code to be read according to internal factors that are the imaging conditions of the code reader 10. The imaging conditions include the exposure time, the distance to the imaging target, and whether illumination (e.g., a flash) is ON or OFF. The imaging conditions may be set by the user or selected automatically by the code reader 10. The distance to the imaging target may, for example, be calculated based on the focal length, or calculated by a stereo method with reference to a pointer marker projected onto the reading target.
(Internal-factor classification process; FIG. 20)
The internal-factor classification process of the present embodiment classifies the two-dimensional code to be read according to an internal factor that is the processing time of the reading process of the code reader 10. The processing time is, for example, the time from start to end of the normal reading process (see FIG. 4) or the time from start to end of the learning-based reading process (see FIG. 6). In another modification, the processing time may be that of part of the normal reading process, e.g., the time from start to end of the decoding process.
(Processing of the code reader 10; FIG. 21)
The present embodiment is the same as the second embodiment except that the determination of S700 is added. If the CPU 32 determines that the number of items of training data 242 is smaller than the target number (NO in S102), it proceeds to S700. In S700, the CPU 32 determines whether the total count of the data groups of one or more high-impact items, among the plurality of items in the training data table 60, is equal to or greater than a predetermined number. The predetermined number is smaller than the target number of S102. The degree of impact is set according to the likelihood that reading of the two-dimensional code will fail. A high-impact item is, for example, a pattern number indicating deterioration of a symbol mark in the pattern determination process (FIG. 9). The data-recording region of a two-dimensional code can be restored by error correction even if it deteriorates. By contrast, a symbol mark cannot be restored by error correction; if the symbol mark deteriorates, the position of the two-dimensional code cannot be identified, and reading of the code is highly likely to fail. In the present embodiment, the item indicating symbol-mark deterioration, for which reading failure is highly likely, is therefore adopted as an item with a higher degree of impact than the other items.
Specific example 1 of the cropping process executed in S50 of FIG. 4 and S80 of FIGS. 6 and 13 will be described. The code reader 10 can project a pointer marker that indicates the two-dimensional code to be read. The pointer marker has a predetermined shape such as a cross, a circle, or a square. In specific example 1, the CPU 32 identifies the position of the pointer marker in the image captured by the camera 20. For example, the CPU 32 extracts the predetermined shape of the pointer marker from the captured image and identifies the position of that shape as the position of the pointer marker. In a modification, when the light source of the pointer marker is located on the center line of the angle of view of the camera 20, the CPU 32 may identify the center of the captured image as the position of the pointer marker.
Another specific example 2 of the cropping process will be described. A two-dimensional code consists of a black-and-white pattern. Therefore, as shown in FIG. 23, the luminance of pixels representing the two-dimensional code varies greatly, whereas the luminance of pixels representing the background other than the code varies little. In this specific example, the CPU 32 analyzes pixel luminance along the horizontal direction from the position of the pointer marker, and estimates a pixel whose luminance variation is equal to or greater than a predetermined value as a boundary of the two-dimensional code in the horizontal direction. Similarly, the CPU 32 performs the same analysis along the vertical direction from the position of the pointer marker, and estimates a pixel whose luminance variation is equal to or greater than the predetermined value as a boundary of the two-dimensional code in the vertical direction. The CPU 32 then determines a line a predetermined number of pixels N3 from the horizontal boundary as the vertical line of the cropping range, and a line a predetermined number of pixels N4 from the vertical boundary as the horizontal line of the cropping range. The predetermined numbers of pixels N3 and N4 are set in advance based on, for example, the size of the two-dimensional code to be read. N3 and N4 are positive integers, and N3 may be the same as or different from N4. By making the cropping range wider, by the numbers of pixels N3 and N4, than the boundaries estimated by the above analysis, the two-dimensional code can be kept within the cropping range even when age-related deterioration has faded the cells at its outline.
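The luminance analysis of specific example 2 can be sketched in one dimension (a single horizontal scan line). The concrete rule below, taking the outermost pixels whose adjacent-luminance difference reaches the threshold as the boundaries, is one plausible reading of "luminance variation equal to or greater than a predetermined value"; the patent leaves the exact variation measure open:

```python
def crop_range(row_luma, marker_x, threshold, margin):
    """Estimate the code boundaries on one scan line: the outermost
    pixels whose adjacent-luminance variation reaches the threshold are
    taken as boundaries, then the cropping range is widened by `margin`
    pixels (N3/N4 in the text) so a faded outline still falls inside."""
    diffs = [abs(b - a) for a, b in zip(row_luma, row_luma[1:])]
    edges = [i for i, d in enumerate(diffs) if d >= threshold]
    left = min((i for i in edges if i <= marker_x), default=marker_x)
    right = max((i + 1 for i in edges if i >= marker_x), default=marker_x)
    return max(0, left - margin), min(len(row_luma) - 1, right + margin)

# Flat background (luma 200) with a black/white code between x=5 and x=9.
row = [200] * 5 + [0, 255, 0, 255, 0] + [200] * 5
lo, hi = crop_range(row, marker_x=7, threshold=50, margin=2)
```

The same function applied to a column of luminance values gives the vertical lines of the cropping range.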
Another specific example 3 of the cropping process will be described. In this specific example, the CPU 32 calculates, for at least some of the code image data in the plurality of items of training data 242 stored in S60 of FIG. 5, the four-corner position coordinates in the same manner as in S52 of FIG. 4. The CPU 32 then sets, as the cropping range, a range that includes all the four corners calculated for the at least some code image data.
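Specific example 3 amounts to taking the bounding box of all the calculated corner coordinates; a minimal sketch:

```python
def crop_from_corners(corner_sets):
    """Set the cropping range to the bounding box that contains every
    four-corner coordinate calculated for the stored code images.
    corner_sets is a list of four-(x, y)-tuple lists, one per image."""
    xs = [x for corners in corner_sets for (x, y) in corners]
    ys = [y for corners in corner_sets for (x, y) in corners]
    return (min(xs), min(ys)), (max(xs), max(ys))

# Four-corner coordinates from two stored training-data code images.
cases = [
    [(10, 12), (50, 11), (9, 52), (51, 50)],
    [(14, 8), (55, 10), (12, 48), (56, 49)],
]
top_left, bottom_right = crop_from_corners(cases)
```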
4: LAN
6: Internet
10: code reader
12: operation unit
14: display unit
20: camera
22: communication I/F
30: control unit
32: CPU
34: memory
40: program
50: learning information
52: learning model
54: model parameters
60: training data table
200: learning device
222: communication I/F
230: control unit
232: CPU
234: memory
240: program
242: training data
250: learning information
252: learning model
254: model parameters
256: initial parameters
270: case table
500: printer
700: intermediation device
L1 to L4: coordinates (position coordinates)
a1, b1, b2: storage areas
Claims (20)
- An information reading device comprising: a camera capable of capturing an image of a two-dimensional information code; a first code processing execution unit that executes first code processing including a detection process of detecting, from an image of a first information code serving as the information code captured by the camera, first relationship information, which is information defining a two-dimensional code region of the first information code, a reading process of reading information recorded in the first information code based on the detected first relationship information, and an output process of outputting a result of the reading process; an adjustment unit that adjusts, based on training data including at least a plurality of success cases of the reading process in which reading was performed based on the first relationship information, parameters of a learning model that outputs, from an image of the information code captured by the camera, second relationship information, which is an estimate of the information defining the two-dimensional code region of the information code; and a second code processing execution unit that performs second code processing including an acquisition process of inputting an image of a second information code serving as the information code captured by the camera into the learning model and acquiring the second relationship information from the learning model, a reading process of reading information recorded in the second information code based on the acquired second relationship information, and an output process of outputting a reading result of the reading process.
- The information reading device according to claim 1, wherein the first relationship information is information indicating the position coordinates of the four corner points of the information code.
- The information reading device according to claim 1, wherein the first relationship information is information indicating the black/white pattern of a plurality of cells constituting the two-dimensional code region of the information code.
- The information reading device according to claim 3, wherein, in the first relationship information, for each of the plurality of cells, information indicating the position of the cell is associated with a value indicating black or white.
- The information reading device according to any one of claims 1 to 4, comprising: one or more code readers each comprising the camera, the first code processing execution unit, and the second code processing execution unit; and a learning device, separate from the one or more code readers, comprising the adjustment unit, wherein the one or more code readers acquire the adjusted parameters from the learning device, and the second code processing execution unit executes the second code processing based on the acquired adjusted parameters.
- The information reading device according to claim 5, wherein the learning device is built on the Internet, the information reading device further comprising an intermediation device that is connected to the Internet and intermediates communication between the one or more code readers and the learning device.
- The information reading device according to claim 5 or 6, further comprising an output device capable of outputting a particular information code in which the adjusted parameters are recorded, wherein each of the one or more code readers acquires all or part of the adjusted parameters from the learning device by reading the adjusted parameters from the particular information code output by the output device.
- The information reading device according to any one of claims 5 to 7, wherein the one or more code readers include a first code reader and a second code reader different from the first code reader, the learning device acquires the training data from the first code reader and adjusts the parameters based on the acquired training data, and the second code reader acquires, from the learning device, the parameters adjusted based on the training data of the first code reader and executes the second code processing based on the acquired parameters.
- The information reading device according to any one of claims 1 to 8, further comprising a particular memory, wherein the particular memory includes a first area that stores a program for executing the first code processing and the second code processing, and a second area different from the first area, the second area is an area for storing a plurality of items of learning information, and each of the plurality of items of learning information includes the learning model and the adjusted parameters.
- The information reading device according to any one of claims 1 to 9, wherein the adjustment unit is configured to start adjusting the parameters of the learning model after the number of the plurality of success cases included in the training data exceeds a predetermined number.
- The information reading device according to claim 10, further comprising: a classification unit that classifies a target information code for which the reading process succeeded in the first code processing into a particular pattern among a plurality of patterns relating to types of deterioration of information codes; and a determination unit that determines, based on the classified particular pattern, whether to adopt the success case of the target information code as the training data.
- The information reading device according to claim 11, wherein the classification unit compares an image of a restored code restored by error correction executed within the reading process for the target information code with a binarized image of the actual image of the target information code to identify a deteriorated portion of the target information code, and classifies the target information code into the particular pattern based on the deteriorated portion identified by the comparison between the restored code and the actual image.
- The information reading device according to claim 11 or 12, wherein the classification unit classifies the target information code into the particular pattern based on at least one of the contrast of the image of the target information code and deformation of the target information code.
- The information reading device according to claim 11, further comprising: a classification unit that classifies a target information code for which the reading process succeeded in the first code processing into a particular item among a plurality of items indicating internal factors of the information reading device; and a determination unit that determines, based on the classified particular item, whether to adopt the success case of the target information code as the training data, wherein the plurality of items include at least one of two or more items relating to image processing that the information reading device executed on the image of the information code, two or more items relating to imaging conditions under which the information reading device captures an image of the information code, and two or more items relating to the processing time for the information reading device to read the information recorded in the information code.
- The information reading device according to any one of claims 10 to 14, wherein the adjustment unit is configured to start adjusting the parameters of the learning model, even before the number of the plurality of success cases included in the training data exceeds the predetermined number, when the number of two or more success cases among the plurality of success cases exceeds a particular number smaller than the predetermined number, each of the two or more success cases being a case indicating that, although reading of the information code succeeds, the likelihood that reading of the information code will fail is increasing.
- The information reading device according to any one of claims 9 to 15, further comprising: a first memory; and a first storage control unit that, when it is determined based on the classified particular pattern that the success case of the target information code is to be adopted as the training data, stores the success case of the target information code in the first memory as the training data, the success case of the target information code not being stored in the particular memory when it is determined based on the classified particular pattern that the success case is not to be adopted as the training data, wherein the adjustment unit adjusts the parameters according to the training data in the first memory.
- The information reading device according to any one of claims 1 to 16, further comprising an increase control unit that executes image processing on a code image, which is an image of an information code captured by the camera, to generate virtual cases of the reading process and increase the number of success cases in the training data.
- The information reading device according to claim 17, wherein the image processing includes at least one of a process of adjusting the contrast of the code image, a process of adding a predetermined image to the code image, a process of rotating the code image, and a process of modifying each cell of the information code represented by the code image.
- The information reading device according to claim 18, further comprising a second memory that stores, for each of a plurality of information codes, case information indicating a case of the reading process executed on that information code, wherein the increase control unit selects one or more types of the image processing from among a plurality of types of the image processing based on a tendency of the cases of the reading process indicated by the plurality of items of case information stored in the second memory, and executes the selected one or more types of image processing.
- The information reading device according to claim 19, further comprising a second storage control unit that, when the reading process using the first relationship information is executed, stores in the second memory the case information indicating the case of the reading process executed on the first information code.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280025014.1A CN117083616A (zh) | 2021-03-31 | 2022-03-31 | 信息读取装置 |
EP22781274.0A EP4318304A1 (en) | 2021-03-31 | 2022-03-31 | Information reading device |
US18/284,977 US20240176970A1 (en) | 2021-03-31 | 2022-03-31 | Information reader |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-061763 | 2021-03-31 | ||
JP2021061763 | 2021-03-31 | ||
JP2022-010885 | 2022-01-27 | ||
JP2022010885A JP2022158916A (ja) | 2021-03-31 | 2022-01-27 | 情報読取装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022211064A1 true WO2022211064A1 (ja) | 2022-10-06 |
Family
ID=83459591
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/016728 WO2022211064A1 (ja) | 2021-03-31 | 2022-03-31 | 情報読取装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240176970A1 (ja) |
EP (1) | EP4318304A1 (ja) |
WO (1) | WO2022211064A1 (ja) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019067278A (ja) * | 2017-10-04 | 2019-04-25 | ファナック株式会社 | 識別コード読取装置及び機械学習装置 |
CN109815764A (zh) * | 2019-01-16 | 2019-05-28 | 王诗会 | 图像内部机读信息的读取方法及系统 |
CN110009615A (zh) * | 2019-03-31 | 2019-07-12 | 深圳大学 | 图像角点的检测方法及检测装置 |
WO2020081435A1 (en) * | 2018-10-15 | 2020-04-23 | Gauss Surgical, Inc. | Methods and systems for processing an image |
JP2020098108A (ja) * | 2018-12-17 | 2020-06-25 | 株式会社大林組 | 表面の不具合検査方法 |
JP2021047797A (ja) * | 2019-09-20 | 2021-03-25 | トッパン・フォームズ株式会社 | 機械学習装置、機械学習方法、及びプログラム |
WO2021152819A1 (ja) * | 2020-01-31 | 2021-08-05 | 株式会社オプティム | コンピュータシステム、情報コード読取方法及びプログラム |
- 2022-03-31 WO: PCT/JP2022/016728 (WO2022211064A1), active, Application Filing
- 2022-03-31 EP: EP22781274.0A (EP4318304A1), not active, Withdrawn
- 2022-03-31 US: US18/284,977 (US20240176970A1), active, Pending
Also Published As
Publication number | Publication date |
---|---|
EP4318304A1 (en) | 2024-02-07 |
US20240176970A1 (en) | 2024-05-30 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22781274; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | WIPO information: entry into national phase | Ref document number: 202280025014.1; Country of ref document: CN |
| WWE | WIPO information: entry into national phase | Ref document number: 18284977; Country of ref document: US |
| WWE | WIPO information: entry into national phase | Ref document number: 2022781274; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2022781274; Country of ref document: EP; Effective date: 20231031 |
| NENP | Non-entry into the national phase | Ref country code: DE |