CN114282559A - Optical code positioning method and device and image sensor chip - Google Patents

Optical code positioning method and device and image sensor chip

Info

Publication number
CN114282559A
CN114282559A (application number CN202111548837.6A)
Authority
CN
China
Prior art keywords
boundary
optical code
current
pixel
black
Prior art date
Legal status
Granted
Application number
CN202111548837.6A
Other languages
Chinese (zh)
Other versions
CN114282559B (en)
Inventor
宋昊泽
刘洋
马成
乔羽
Current Assignee
Hangzhou Changguang Chenxin Microelectronics Co ltd
Changchun Changguangchenxin Optoelectronics Technology Co ltd
Original Assignee
Hangzhou Changguang Chenxin Microelectronics Co ltd
Changchun Changguangchenxin Optoelectronics Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Changguang Chenxin Microelectronics Co ltd, Changchun Changguangchenxin Optoelectronics Technology Co ltd filed Critical Hangzhou Changguang Chenxin Microelectronics Co ltd
Priority to CN202111548837.6A priority Critical patent/CN114282559B/en
Publication of CN114282559A publication Critical patent/CN114282559A/en
Application granted granted Critical
Publication of CN114282559B publication Critical patent/CN114282559B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The optical code positioning method provided by the invention comprises: acquiring the image data of the current line and a global optical code coordinate information list, where a global optical code coordinate consists of four values describing the starting and ending row numbers and column numbers of an optical code in the image; determining the boundary feature of each pixel; compressing the boundary features to obtain a short boundary information list; clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line; and fusing the optical code abscissa information with the global optical code coordinate information list to obtain and output an updated global optical code coordinate information list. The coordinate information of multiple optical codes present in the whole image is output using only the memory overhead of one line of image data; the calculation process is simple, and the coordinate information of several optical codes can be output.

Description

Optical code positioning method and device and image sensor chip
Technical Field
The present invention relates to the field of computers, and in particular, to an optical code positioning method and apparatus, an image sensor chip, and a computer-readable storage medium.
Background
Optical code recognition systems are ubiquitous in daily life: express delivery, payment, and automatic driving all rely on them, and the optical code positioning function is an integral part of such a system. The cost of the camera system is an important consideration for couriers and industrial pipelines. Therefore, if the position information of the optical code can be acquired on the image sensor chip itself, part of the software calculation overhead of the optical code recognition system can be saved, which reduces cost and makes the product more competitive.
Conventional optical code positioning algorithms process each frame of the image as a whole: by recognizing the characteristic information of the optical code, they characterize its four corners to locate its current position. These are software methods that do not consider the scarcity of memory in hardware, so they are difficult to migrate into hardware; the memory they consume is measured in whole frames. Chinese patent CN113228051A proposes an algorithm for identifying optical codes on an image sensor that reads three lines of data and a reduced copy of the original image into memory each time. It first performs two convolution operations on the three rows of data to obtain the high-frequency information of the image, then binarizes the image to distinguish high-frequency and low-frequency areas. A dilation algorithm is applied to the binarized image to obtain a complete optical code area, followed by erosion, which converges boundaries that were dilated too far; finally the result is mapped onto the reduced copy of the original image. This yields a reduced binary image in which the optical code area is white and the background is black. The final binary image can display the position of the optical code but cannot provide positioning information; it has no positioning function, which is why that patent calls its method only an identification algorithm, and it still consumes dozens of lines of memory space.
In summary, such methods consume a large amount of memory and the calculation is complex, involving operations such as convolution; each processing pass must traverse the same row of the image from left to right multiple times, so the number of calculations and the computational cost are large.
Disclosure of Invention
The invention provides an optical code positioning method and apparatus, an image sensor chip, and a computer-readable storage medium that use only about one line of extra memory overhead, so memory consumption is low. The algorithm traverses each line of the image only once, and its calculation operations are mostly simple condition judgments, so the amount of calculation is small. The final output directly gives the positioning information of each optical code, namely four parameters describing each code's starting and ending rows and columns in the image, thereby providing accurate positioning and facilitating the next stage of the optical code recognition system.
In one aspect of the present invention, an optical code positioning method is provided. The method is applied to an image sensor chip, whose operation logic runs line by line according to a preset rule; for each line of image data, the method includes:
acquiring the image data of the current row and a global optical code coordinate information list, wherein a global optical code coordinate consists of four values describing the starting and ending row numbers and column numbers of an optical code in the image;
determining boundary features of each pixel;
compressing the boundary features to obtain a short boundary information list;
clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line;
and fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
As an alternative, the determining the boundary characteristic of each pixel includes:
sequentially traversing the pixels of the current row along a first preset direction to determine a pixel value corresponding to each pixel;
and classifying each current pixel according to its current pixel value until all pixels in the current row are classified, wherein the classes are black pixel, white pixel, and other pixel.
As an optional solution, the classifying the current pixel according to the current pixel value includes:
determining the current pixel with the current pixel value larger than a first threshold value as a black pixel;
determining a current pixel of which the current pixel value is smaller than a second threshold value as a white pixel;
determining a current pixel whose value lies between the second threshold and the first threshold as an other pixel, wherein the first threshold is greater than the second threshold.
As an optional scheme, the method further comprises the following steps:
when the current pixel is a black pixel, skipping subsequent adjacent black pixels; when the next adjacent pixel is not a black pixel, searching the pixels within a certain range, jumping to the target black pixel and resetting the logic when a target black pixel is found, recording the current pixel position as a black-white boundary when a white pixel is found, and recording the current pixel position as a black-other boundary when neither a black pixel nor a white pixel exists within the range; or
when the current pixel is a white pixel, skipping subsequent adjacent white pixels; when the next adjacent pixel is not a white pixel, searching the pixels within a certain range, jumping to the target white pixel and resetting the logic when a target white pixel is found, recording the current pixel position as a white-black boundary when a black pixel is found, and recording the current pixel position as a white-other boundary when neither a black pixel nor a white pixel exists within the range; or
when the current pixel is confirmed to be an other pixel, skipping it directly.
As an optional scheme, the compressing the boundary features to obtain a short boundary information list includes:
determining a current boundary category;
when the current boundary type is a white-black boundary: adding a white-black boundary to the boundary list and updating the index list when the current boundary list is empty; updating the index list by replacing its last value with the current index value when the current list is not empty and the previous boundary is a black-white boundary; and otherwise, when the current list is not empty, adding a white-black boundary and updating the index list; or
when the current boundary type is a black-white boundary: adding the black-white boundary to the boundary list and updating the index list if the boundary list is not empty and the previous boundary is the white-black boundary nearest to the current boundary; or
when the current boundary type is a black-other boundary: adding a truncation symbol to the boundary list and updating the index list at the same time if the current boundary list is not empty and the previous boundary is a white-black boundary; or
when the current boundary type is a white-other boundary: adding a truncation symbol to the boundary list and updating the index list when the boundary list is not empty and the previous boundary is a black-white boundary.
As an optional solution, the clustering the optical code of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line includes:
traversing the boundary information list;
if the current boundary is a white-black boundary and the current coordinate information list is in an empty state, adding new coordinate information;
if the current boundary is a truncation symbol, skipping the current position;
and if the current boundary is a black-white boundary, updating the initial coordinate of the current optical code.
As an optional scheme, the fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list includes:
extending downward a certain length from the starting position of each global optical code, the length being defined as the search length, and fusing the row optical codes that fall within the fusion search range;
when the column coordinates of the row optical code and the global optical code are determined to have intersection, fusing the row optical code into the global code;
when the column coordinates of the row optical code and the global optical code are determined to be not intersected, a global optical code is newly generated;
and after the optical codes of each row are fused, performing self-fusion on the global optical codes again: when the starting positions of two global optical codes intersect, fusing the two into one global optical code.
In another aspect of the present invention, there is provided an optical code positioning apparatus applied to an image sensor chip, whose operation logic runs line by line according to a preset rule; for each line of image data, the apparatus includes:
an acquisition module, configured to acquire the image data of the current row and a global optical code coordinate information list, wherein a global optical code coordinate consists of four values describing the starting and ending row numbers and column numbers of an optical code in the image;
a determining module for determining boundary features of each pixel;
the compression module is used for compressing the boundary characteristics to obtain a short boundary information list;
the clustering module is used for clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line;
and the fusion module is used for fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
In still another aspect of the present invention, there is provided an image sensor chip, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
In a further aspect of the invention, there is provided a non-transitory computer-readable storage medium having stored thereon computer program instructions, wherein the program, when executed by a processor, implements the method described above.
The method comprises: acquiring the image data of the current line and a global optical code coordinate information list, where a global optical code coordinate consists of four values describing the starting and ending row numbers and column numbers of an optical code in the image; determining the boundary feature of each pixel; compressing the boundary features to obtain a short boundary information list; clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line; and fusing the optical code abscissa information with the global optical code coordinate information list to obtain and output an updated global optical code coordinate information list. The coordinate information of multiple optical codes in the whole image is output using only the memory overhead of one line of image data. The clustering method, which relies mainly on condition judgment, clusters the pixels within each row and then fuses the clusters of different rows to obtain the coordinate information of the multiple optical codes.
Drawings
FIG. 1 is a schematic flow chart of an optical code positioning method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of an optical code location method provided in an embodiment of the invention;
FIG. 3 is a flow chart of the boundary feature determination in an optical code positioning method provided in an embodiment of the present invention;
FIG. 4 is a flow chart illustrating an optical code positioning method provided in an embodiment of the present invention;
FIG. 5 is a flow chart of a method for optical code location provided in an embodiment of the present invention;
FIG. 6 is a schematic flow chart of an optical code positioning method provided in an embodiment of the present invention;
FIG. 7 is a flow chart illustrating an optical code positioning method provided in an embodiment of the present invention;
fig. 8 is a block diagram of an optical code positioning apparatus provided in an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, an optical code positioning method according to an embodiment of the present invention is applied to an image sensor chip, whose operation logic runs line by line according to a preset rule; for each line of image data, the method includes:
s101, acquiring image data of a current row and a global optical code coordinate information list, wherein the global optical code coordinate refers to four information of a starting row number and a starting column number of an optical code in an image.
And S102, determining the boundary characteristic of each pixel.
S103, compressing the boundary features to obtain a short boundary information list.
And S104, clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line.
And S105, fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
The embodiment of the invention provides an optical code positioning method that uses only about one line of extra memory overhead, so memory consumption is low. The algorithm traverses each line of the image only once, and its calculation operations are mostly simple condition judgments, so the amount of calculation is small. The final output directly gives the positioning information of each optical code, namely four parameters describing each code's starting and ending rows and columns in the image, thereby providing accurate positioning and facilitating the next stage of the optical code recognition system.
Specifically, in S102, the determining the boundary characteristic of each pixel includes:
the method comprises the steps of sequentially traversing and determining pixel values corresponding to pixels of a current row along a first preset direction, classifying the current pixels according to the current pixel values until classification of all pixel points in the current row is completed, wherein the classification comprises black pixels, white pixels or other pixels, the first preset direction can be from left to right or from right to left, the first preset direction can be flexibly selected according to needs and is not limited, and it is required to explain that subsequent other direction indications are performed by taking the direction as a reference after the direction is determined.
Specifically, in S102, classifying the current pixel according to the current pixel value includes:
determining a current pixel whose value is larger than a first threshold as a black pixel, determining a current pixel whose value is smaller than a second threshold as a white pixel, and determining a current pixel whose value lies between the second threshold and the first threshold as an other pixel, wherein the first threshold is greater than the second threshold.
In one possible implementation, the boundary category may be determined as follows:
when the current pixel is a black pixel, skipping subsequent adjacent black pixels; when the next adjacent pixel is not a black pixel, searching the pixels within a certain range, jumping to the target black pixel and resetting the logic when a target black pixel is found, recording the current pixel position as a black-white boundary when a white pixel is found, and recording the current pixel position as a black-other boundary when neither a black pixel nor a white pixel exists within the range; or
when the current pixel is a white pixel, skipping subsequent adjacent white pixels; when the next adjacent pixel is not a white pixel, searching the pixels within a certain range, jumping to the target white pixel and resetting the logic when a target white pixel is found, recording the current pixel position as a white-black boundary when a black pixel is found, and recording the current pixel position as a white-other boundary when neither a black pixel nor a white pixel exists within the range; or
when the current pixel is confirmed to be an other pixel, skipping it directly.
Specifically, in S103, the compressing the boundary features to obtain a short boundary information list includes:
determining a current boundary category;
when the current boundary type is a white-black boundary: adding a white-black boundary to the boundary list and updating the index list when the current boundary list is empty; updating the index list by replacing its last value with the current index value when the current list is not empty and the previous boundary is a black-white boundary; and otherwise, when the current list is not empty, adding a white-black boundary and updating the index list; or
when the current boundary type is a black-white boundary: adding the black-white boundary to the boundary list and updating the index list if the boundary list is not empty and the previous boundary is the white-black boundary nearest to the current boundary; or
when the current boundary type is a black-other boundary: adding a truncation symbol to the boundary list and updating the index list at the same time if the current boundary list is not empty and the previous boundary is a white-black boundary; or
when the current boundary type is a white-other boundary: adding a truncation symbol to the boundary list and updating the index list when the boundary list is not empty and the previous boundary is a black-white boundary.
Specifically, the clustering, based on the short boundary information list, the optical codes of the current line to obtain optical code abscissa information of the current line includes:
traversing the boundary information list;
if the current boundary is a white-black boundary and the current coordinate information list is in an empty state, adding new coordinate information;
if the current boundary is a truncation symbol, skipping the current position;
and if the current boundary is a black-white boundary, updating the initial coordinate of the current optical code.
As an optional scheme, the fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list includes:
extending downward a certain length from the starting position of each global optical code, the length being defined as the search length, and fusing the row optical codes that fall within the fusion search range;
when the column coordinates of the row optical code and the global optical code are determined to have intersection, fusing the row optical code into the global code;
when the column coordinates of the row optical code and the global optical code are determined to be not intersected, a global optical code is newly generated;
and after the optical codes of each row are fused, performing self-fusion on the global optical codes again: when the starting positions of two global optical codes intersect, fusing the two into one global optical code.
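The fusion step can be sketched as follows. This is a hedged reconstruction: the record layout (start/end row and column per global code), the `search_len` value, and the interpretation of the self-fusion intersection test as column-span overlap are assumptions, not taken from the patent text.

```python
def intervals_overlap(a, b):
    """True when two [start, end] column intervals intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def fuse_row_into_global(global_codes, row_codes, row_idx, search_len=8):
    """Fuse the row optical codes of one line into the global code list.

    global_codes: list of dicts {'r0','r1','c0','c1'} (start/end row and column).
    row_codes: list of (c0, c1) column intervals found on the current row.
    search_len: hypothetical search length below each global code.
    """
    for c0, c1 in row_codes:
        merged = False
        for g in global_codes:
            # fuse only when the row lies within the search range below the code
            # and the column coordinates intersect
            if row_idx - g['r1'] <= search_len and \
                    intervals_overlap((c0, c1), (g['c0'], g['c1'])):
                g['c0'] = min(g['c0'], c0)
                g['c1'] = max(g['c1'], c1)
                g['r1'] = row_idx
                merged = True
                break
        if not merged:   # no intersection: a new global optical code is created
            global_codes.append({'r0': row_idx, 'r1': row_idx, 'c0': c0, 'c1': c1})
    # self-fusion pass: merge global codes whose column spans intersect
    i = 0
    while i < len(global_codes):
        j = i + 1
        while j < len(global_codes):
            a, b = global_codes[i], global_codes[j]
            if intervals_overlap((a['c0'], a['c1']), (b['c0'], b['c1'])):
                a['c0'] = min(a['c0'], b['c0']); a['c1'] = max(a['c1'], b['c1'])
                a['r0'] = min(a['r0'], b['r0']); a['r1'] = max(a['r1'], b['r1'])
                del global_codes[j]
            else:
                j += 1
        i += 1
    return global_codes
```

Calling this once per line, with the clustered row codes of that line, maintains the global coordinate list with only per-line state.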
To better understand the optical code positioning method provided by the present invention, it is described in detail below with reference to an embodiment. The method may be executed on an image sensor chip; the operation logic runs row by row from top to bottom, and each row is processed from left to right. As shown in fig. 1, the processing of each line takes a line of image data and the global optical code coordinate information list as input. A global optical code coordinate consists of four values describing the starting and ending row numbers and column numbers of the optical code in the image.
The processing of each line is mainly divided into four steps: first, determining the boundary feature of each pixel; second, compressing the boundary features to obtain a short boundary information list; third, performing conditional clustering on the optical codes of the current line based on the boundary information list to obtain the optical code abscissa information of the current line; and fourth, fusing the optical code coordinate information of the current line with the global optical code coordinate information. The final output is a line of data and the updated global optical code coordinate information list.
First the boundary features are sought. We define three pixel categories: black, white, and other. A pixel is black when its value is above the first threshold, white when its value is below the second threshold, and other for the rest, consistent with the thresholds defined above. Boundaries are defined in four categories: black-white, white-black, black-other, and white-other. A row of pixels is read in and traversed from left to right to determine the boundary features in sequence. A certain disturbance may exist during this determination, as follows:
referring to fig. 2, the position indicated by the arrow is observed, and if the human eye distinguishes that the position is a white-black boundary, but due to problems of an imaging system and the like, a certain fuzzy boundary exists. The fuzzy boundary is an undesirable boundary and belongs to a boundary where errors need to be removed, so the method provided by the invention can effectively avoid the influence of the fuzzy boundary. Because the optical code positioning algorithm executed on the image sensor is simple, high-frequency boundary information of the optical code needs to be utilized, the fuzzy boundary can generate great influence, the phenomenon is observed through a direct graph and a compression graph of hundreds of chips, and the method is provided for solving the problem.
Referring to fig. 3, an embodiment method for finding boundary features may include:
the current pixel is first classified, black, white or otherwise, according to its value.
If the current pixel is a black pixel, subsequent black pixels are skipped. From the next non-black pixel, pixels to the right are searched within a certain range: if a black pixel is found, the scan jumps to it and the logic is reset. If a white pixel is found, the current pixel position is recorded as a black-white boundary. If neither a black nor a white pixel exists within the range, a black-other boundary is recorded.
If the current pixel is a white pixel, subsequent white pixels are skipped. From the next non-white pixel, pixels to the right are searched within a certain range: if a white pixel is found, the scan jumps to it and the logic is reset. If a black pixel is found, the current pixel position is recorded as a white-black boundary. If neither a black nor a white pixel exists within the range, a white-other boundary is recorded.
If the current pixel is an other pixel, it is skipped directly.
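The steps above can be sketched as a single left-to-right scan. This is a hedged reconstruction: the function name, the `search_range` value, and the exact index recorded at each boundary are assumptions rather than details fixed by the patent.

```python
def find_boundaries(classes, search_range=3):
    """Scan one row of pixel classes and record boundary categories.

    classes: list of 'black' / 'white' / 'other' per pixel.
    Returns a list of (index, kind) with kind in
    {'black-white', 'white-black', 'black-other', 'white-other'}.
    """
    boundaries = []
    i, n = 0, len(classes)
    while i < n:
        cur = classes[i]
        if cur == 'other':
            i += 1                      # other pixels are skipped directly
            continue
        j = i + 1
        while j < n and classes[j] == cur:
            j += 1                      # skip the run of same-colored pixels
        if j >= n:
            break
        opposite = 'white' if cur == 'black' else 'black'
        found, k = None, j
        while k < n and k - j < search_range:
            if classes[k] == cur:       # blurred gap: jump back into the run
                found = 'same'
                break
            if classes[k] == opposite:  # genuine color transition
                found = 'opposite'
                break
            k += 1
        if found == 'same':
            i = k                       # logic reset: continue from the found pixel
        elif found == 'opposite':
            boundaries.append((k, cur + '-' + opposite))
            i = k
        else:                           # neither color found within the range
            boundaries.append((j, cur + '-other'))
            i = j
    return boundaries
```

The blurred-boundary case of fig. 2 corresponds to the `found == 'same'` branch: a short run of other pixels between two black runs is jumped over without emitting any boundary.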
In summary, the boundary category of each pixel position is well defined (a skipped pixel has no category), and the computational operations are of low complexity, all being threshold judgments.
It should be noted that the method can determine the boundary category and compress the boundary-category information list at the same time, which further saves storage.
As shown in fig. 4, one implementation of compressing the boundary information may include:
Firstly, the current boundary category is determined; the several boundary categories are handled with different operations, specifically as follows:
the first possibility is that if the boundary is currently white and black, then the following decision is entered:
if the current boundary list is empty, the updated boundary list adds a white-black boundary and updates the list of indexes, where the boundary list is a list storing boundary information, and the initial empty of each row can be indicated by a special symbol in hardware. The index list represents a list of indexes, initialization is performed at the beginning of each row to be empty, and the indexes refer to abscissa positions corresponding to each boundary, specifically to serial numbers of the columns. The index list and the boundary list are in one-to-one correspondence. If the current list is not empty and the previous border is a black and white border, the index list is updated, replacing the last value of the index list with the current index value. Next to the third possibility, if the current list is not empty, a white-black border is added to the border list and the index list is updated.
In the second case, the current boundary is a black-white boundary; if the boundary list is not empty at this point and the previous boundary is the boundary nearest to the current one, the black-white boundary is appended to the boundary list and the index list is updated.
In the third case, the current boundary is a black-other boundary; if the current boundary list is not empty and the previous boundary is a white-black boundary, a truncation symbol is appended to the boundary list and the index list is updated at the same time.
In the fourth case, the current boundary is a white-other boundary; if the boundary list is not empty and the previous boundary is a black-white boundary, a truncation symbol is likewise appended to the boundary list and the index list is updated.
This completes the boundary information compression. Subsequent processing therefore does not need to traverse from the beginning of the row to its end; it only needs to traverse the boundary information list, saving a great deal of computation. The computation consists almost entirely of conditional checks, which are easy to implement and computationally light.
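A literal reading of these compression rules can be sketched as follows. The translated source is ambiguous in places, so the branch conditions (in particular the one for appending a black-white boundary, assumed here to require a preceding white-black boundary), the `"|"` truncation symbol, and the label strings are assumptions rather than the patent's exact logic.

```python
TRUNC = "|"   # assumed truncation symbol

def compress(boundaries):
    """Fold a row's (column, boundary_type) pairs into a short boundary
    list plus a parallel index list, following fig. 4's four cases."""
    blist, ilist = [], []
    for idx, kind in boundaries:
        prev = blist[-1] if blist else None
        if kind == "white-black":
            if prev == "black-white":
                ilist[-1] = idx          # replace the last index value
            else:                        # empty list, or after a truncation
                blist.append(kind)
                ilist.append(idx)
        elif kind == "black-white":
            if prev == "white-black":    # assumed condition (see above)
                blist.append(kind)
                ilist.append(idx)
        elif kind == "black-other":
            if prev == "white-black":
                blist.append(TRUNC)      # cut the region short
                ilist.append(idx)
        elif kind == "white-other":
            if prev == "black-white":
                blist.append(TRUNC)
                ilist.append(idx)
    return blist, ilist
```

Subsequent stages then walk the short `blist`/`ilist` pair instead of the full row, which is where the storage saving comes from.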
Referring to fig. 5, conditional clustering is performed on the boundary information list, and the clustering result is the optical codes of the current row. Conditional clustering involves two key factors. The first is the black-white connected region, i.e. a sequence of consecutive boundary pairs of the form white-black, black-white, white-black, black-white. The second is the distance between two black-white connected regions: if two regions are too far apart, they are treated as two separate optical codes. One implementation of the clustering operation proceeds as follows:
The boundary information list is traversed. In the first case, the current boundary is a white-black boundary. If the current coordinate information list is empty, new coordinate information is added; the coordinate information list records the position of each optical code, i.e. the column numbers of its start and end, and each start/end pair corresponds to one optical code. If the distance between this boundary and the previous black-white boundary is small, it belongs to the same optical code, and the column number of the start position of the existing optical code is updated. If the distance to the previous black-white boundary is large, it belongs to a different optical code, and a new optical code start position is added. In the second case, the current boundary is a truncation symbol, and the current position is skipped. In the third case, the current boundary is a black-white boundary, and the start coordinates of the current optical code are updated.
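Under stated assumptions, the conditional clustering can be sketched as below. The distance threshold `max_gap` is an illustrative value; and where the translation says the "start coordinates" are updated at a black-white boundary, the sketch records it as the end column of the current code, since a black-white boundary closes a black run.

```python
def cluster_row(blist, ilist, max_gap=10):
    """Cluster a compressed boundary list into per-row optical codes.
    Returns [start_col, end_col] pairs, one per optical code found."""
    codes = []
    last_bw = None                            # previous black-white column
    for kind, idx in zip(blist, ilist):
        if kind == "white-black":
            if not codes:
                codes.append([idx, idx])      # first code in the row
            elif last_bw is not None and idx - last_bw <= max_gap:
                pass                          # close enough: same code
            else:
                codes.append([idx, idx])      # far away: a new code starts
        elif kind == "black-white":
            if codes:
                codes[-1][1] = idx            # extend the current code
            last_bw = idx
        # truncation symbols are skipped
    return codes
```

Two black runs separated by a small white gap collapse into one code, while a wide gap starts a second code, matching the two clustering factors above.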
With reference to fig. 6, all operations required within a row are now complete. Since an optical code is a two-dimensional pattern, the invention fuses the optical codes of different rows sequentially from top to bottom to obtain the coordinate information of the several optical codes that may exist in one image.
First, fusion takes place within a certain range, shown as the fusion search area in the figure; that is, the objects to be fused must be positionally related. From the start position of each global optical code, a certain length downward, defined as the search length, delimits the fusion search range within which row optical codes can be fused. A global optical code here refers to optical code coordinate information in the image; each has a row start position and a column start position. A row optical code is the optical code derived for a single row; its coordinate information contains only the start and end abscissa positions. If an optical code is found within the search area, it is fused and the search area is updated accordingly.
Referring to fig. 7, the fusion operation processes a single image from top to bottom and is divided into two stages. In the first stage, if the column coordinates of a row optical code and a global optical code intersect, the row optical code is merged into the global code; if not, a new global optical code is created. In the second stage, after each row's optical codes have been fused, the global optical codes are fused among themselves: if the start positions of two global optical codes intersect, they are merged into a single global optical code.
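The two fusion stages can be sketched as follows; the list layout `[row_start, col_start, col_end]` for a global code and the `search_len` value are assumptions made for illustration.

```python
def spans_overlap(a, b):
    """True when two [start, end] column spans intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def fuse_row(global_codes, row_codes, row, search_len=20):
    """Stage 1: merge each row optical code into a global code whose column
    span intersects it and whose start row is within search_len; otherwise
    open a new global code. Stage 2: self-fuse global codes that now touch."""
    for start, end in row_codes:
        for g in global_codes:
            if row - g[0] <= search_len and spans_overlap([g[1], g[2]], [start, end]):
                g[1] = min(g[1], start)       # widen the column span
                g[2] = max(g[2], end)
                break
        else:
            global_codes.append([row, start, end])
    merged = []                               # stage 2: self-fusion
    for g in sorted(global_codes, key=lambda c: c[1]):
        if merged and spans_overlap([merged[-1][1], merged[-1][2]], [g[1], g[2]]):
            merged[-1][0] = min(merged[-1][0], g[0])
            merged[-1][1] = min(merged[-1][1], g[1])
            merged[-1][2] = max(merged[-1][2], g[2])
        else:
            merged.append(g)
    return merged
```

Processing the image row by row and carrying the returned list forward yields, at the last row, the coordinate information of every optical code in the frame.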
The optical code positioning method provided by the embodiment of the invention exploits the fact that an optical code consists almost entirely of black content on a white background. The invention summarizes the color regularity of its imaging, namely the black-white connected region: a black-white connected region of an optical code starts with a white-black boundary and ends with a black-white boundary, the white-black and black-white boundaries alternate, and no other boundary type appears in between. The method defines a boundary information feature for each pixel and uses this feature to partition black-white connected regions more conveniently, with simple computation. It also addresses the blurred boundaries that exist in optical codes: a transition area always exists between white and black, and such blurred areas have an obvious influence on the result; the method effectively avoids the misjudgment of boundary types caused by blurred boundaries. By summarizing the regularity of the valid boundary information within a row, a boundary information compression method is proposed that compresses the boundary information of every pixel in a row into a short boundary information list, greatly reducing storage and computation. The conditional clustering proposed by the method is governed mainly by two factors. The first is the boundary type: clustering starts from a white-black boundary, white-black and black-white boundaries are then grouped in pairs, and each group completes one cluster. The second factor is distance: if two black-white connected regions are close to each other, they form one optical code; if they are far apart, they are two optical codes.
The method comprises determining a row optical code, defining an inter-row search range, and fusing the row optical code with the global optical code. A row optical code is a piece of one-dimensional information consisting of the start and end abscissas of the optical code. During fusion, row optical codes are first fused into the global codes, and then the global codes are fused among themselves; this order of operations effectively avoids duplication.
Compared with the prior art, which can only detect an optical code on an optical sensor and cannot locate it, the invention explicitly provides positioning information; this is an essential difference. The invention locates the optical code using boundary information, in contrast to the high-frequency information used by prior inventions: high-frequency information requires at least 5 rows of memory and complex convolution operations, whereas this method needs less than 1 row of extra memory and reduces computation to threshold comparisons, greatly lowering both the computational load and the memory footprint.
Compared with the prior art, which searches for the optical code by traversing an entire row of high-frequency features, this method compresses the boundary feature information at the same time as it determines the boundary features; finding the optical code then only requires traversing the boundary information list, again greatly reducing computation and memory.
Compared with the prior art, which obtains the region containing the optical code by dilation and erosion, the invention, by identifying and summarizing the boundary regularity of the optical code in the image, can segment the region containing the optical code using only threshold comparisons.
Referring to fig. 8, an embodiment of the present invention further provides an optical code positioning apparatus 800, applied to an image sensor chip, whose operation logic processes the image row by row according to a preset rule. For each row of image data, the apparatus includes:
an obtaining module 801, configured to obtain the image data of the current row and a global optical code coordinate information list, where a global optical code coordinate refers to four items of information of an optical code in the image, including the starting row number and the starting column number;
a determining module 802, configured to determine the boundary features of each pixel;
a compressing module 803, configured to compress the boundary features to obtain a short boundary information list;
a clustering module 804, configured to perform clustering processing on the optical codes of the current line based on the short boundary information list to obtain optical code abscissa information of the current line;
and a fusion module 805, configured to fuse the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list, and output the updated global optical code coordinate information list.
The embodiment of the invention also provides an image sensor chip, which comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium having computer instructions stored thereon; when the stored program is executed by a processor, it implements the optical code positioning method of any of the embodiments of the present invention. When applied to an image sensor chip, the operation logic processes the image row by row according to a preset rule, and for each row of image data the method includes:
acquiring image data of a current row and a global optical code coordinate information list, wherein the global optical code coordinate refers to four information of a starting row number and a starting column number of an optical code in an image;
determining boundary features of each pixel;
compressing the boundary features to obtain a short boundary information list;
clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line;
and fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
The method comprises: obtaining the image data of the current row and the global optical code coordinate information list, where a global optical code coordinate refers to four items of information of an optical code in the image, including the starting row number and the starting column number; determining the boundary feature of each pixel; compressing the boundary features into a short boundary information list; clustering the optical codes of the current row based on that list to obtain the optical code abscissa information of the current row; and fusing this abscissa information with the global optical code coordinate information list to obtain and output an updated global optical code coordinate information list. In this way, the coordinate information of several optical codes in a whole image is produced with the memory overhead of only one row of image data. The clustering method, based mainly on conditional checks, clusters the pixels within each row and then fuses the clusters of different rows to obtain the coordinate information of the several optical codes.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the optical code positioning method described above.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An optical code positioning method, applied to an image sensor chip, wherein operation logic operates line by line according to a preset rule, the method comprising the following steps:
acquiring image data of a current row and a global optical code coordinate information list, wherein the global optical code coordinate refers to four information of a starting row number and a starting column number of an optical code in an image;
determining boundary features of each pixel;
compressing the boundary features to obtain a short boundary information list;
clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line;
and fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
2. The optical code location method of claim 1, wherein said determining boundary features for each pixel comprises:
sequentially traversing the pixels of the current row along a first preset direction to determine a pixel value corresponding to each pixel;
and classifying the current pixel according to the current pixel value until all pixel points in the current row have been classified, wherein the classification is black pixel, white pixel, or other pixel.
3. The optical code location method of claim 2, wherein classifying a current pixel according to the current pixel value comprises:
determining the current pixel with the current pixel value larger than a first threshold value as a black pixel;
determining a current pixel of which the current pixel value is smaller than a second threshold value as a white pixel;
determining a current pixel whose pixel value lies between the first threshold and the second threshold as an other pixel, wherein the first threshold is greater than the second threshold.
4. The optical code positioning method of claim 3, further comprising:
when the current pixel type is the black pixel state, skipping the adjacent black pixels that follow; when the next adjacent pixel is not a black pixel, searching for pixels within a certain range, and when a target black pixel is found, jumping to the target black pixel and performing a logic reset; when a white pixel is found, recording the current pixel position as a black-white boundary; and when neither a black pixel nor a white pixel exists within the certain range, recording the current pixel position as a black-other boundary type; or
when the current pixel type is the white pixel state, skipping the adjacent white pixels that follow; when the next adjacent pixel is not a white pixel, searching for pixels within a certain range, and when a target white pixel is found, jumping to the target white pixel and performing a logic reset; when a black pixel is found, recording the current pixel position as a white-black boundary; and when neither a black pixel nor a white pixel exists within the certain range, recording the current pixel position as a white-other boundary type; or
when the current pixel is confirmed to be an other pixel, skipping it directly.
5. The method of claim 4, wherein compressing the boundary features to obtain a short list of boundary information comprises:
determining a current boundary category;
when the current boundary type is a white-black boundary: when the current boundary list is empty, adding a white-black boundary to the boundary list and updating the index list; when the current list is not empty and the previous boundary is a black-white boundary, updating the index list by replacing its last value with the current index value; and when the current list is otherwise not empty, adding a white-black boundary and updating the index list; or
when the current boundary type is a black-white boundary, if the boundary list is not empty and the previous boundary is the boundary nearest to the current boundary, adding the black-white boundary to the boundary list and updating the index list; or
when the current boundary type is a black-other boundary, if the current boundary list is not empty and the previous boundary is a white-black boundary, adding a truncation symbol to the boundary list and simultaneously updating the index list; or
when the current boundary type is a white-other boundary, if the boundary list is not empty and the previous boundary is a black-white boundary, adding a truncation symbol to the boundary list and updating the index list.
6. The method according to claim 5, wherein the clustering the optical codes of the current row based on the short boundary information list to obtain the optical code abscissa information of the current row comprises:
traversing the boundary information list;
if the current boundary is a white-black boundary and the current coordinate information list is in an empty state, adding new coordinate information;
if the current boundary is a truncation symbol, skipping the current position;
and if the current boundary is a black-white boundary, updating the initial coordinate of the current optical code.
7. The optical code positioning method according to claim 5, wherein the fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list comprises:
extending downward by a certain length from the start position of each global optical code, the length being defined as a search length, and fusing line optical codes within the fusion search range;
when the column coordinates of the row optical code and the global optical code are determined to have intersection, fusing the row optical code into the global code;
when the column coordinates of the row optical code and the global optical code are determined to be not intersected, a global optical code is newly generated;
and after each row of optical codes are fused, the global optical codes are self-fused again, and when the initial positions of the two global optical codes have an intersection, the two global optical codes are fused into a global optical code.
8. An optical code positioning device, applied to an image sensor chip, wherein operation logic operates line by line according to a preset rule, the device comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring image data of a current line and a global optical code coordinate information list, and the global optical code coordinate refers to four information of a starting line number and a starting column number of an optical code in an image;
a determining module for determining boundary features of each pixel;
the compression module is used for compressing the boundary characteristics to obtain a short boundary information list;
the clustering module is used for clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line;
and the fusion module is used for fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
9. An image sensor chip, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 7.
10. A non-transitory computer readable storage medium having computer instructions stored thereon, characterized in that the instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
CN202111548837.6A 2021-12-17 2021-12-17 Optical code positioning method and device and image sensor chip Active CN114282559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111548837.6A CN114282559B (en) 2021-12-17 2021-12-17 Optical code positioning method and device and image sensor chip

Publications (2)

Publication Number Publication Date
CN114282559A true CN114282559A (en) 2022-04-05
CN114282559B CN114282559B (en) 2024-09-20

Family

ID=80872842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111548837.6A Active CN114282559B (en) 2021-12-17 2021-12-17 Optical code positioning method and device and image sensor chip

Country Status (1)

Country Link
CN (1) CN114282559B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101061487A (en) * 2004-08-31 2007-10-24 讯宝科技公司 System and method for aiming an optical code scanning device
CN104933387A (en) * 2015-06-24 2015-09-23 上海快仓智能科技有限公司 Rapid positioning and identifying method based on two-dimensional code decoding
US20190265722A1 (en) * 2018-02-23 2019-08-29 Crown Equipment Corporation Systems and methods for optical target based indoor vehicle navigation
CN113705268A (en) * 2021-08-30 2021-11-26 山东大学 Two-dimensional code positioning method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117891368A (en) * 2024-03-18 2024-04-16 成都融见软件科技有限公司 Code positioning method, electronic equipment and storage medium
CN117891368B (en) * 2024-03-18 2024-05-14 成都融见软件科技有限公司 Code positioning method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114282559B (en) 2024-09-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Office Buildings 1 and 5, Phase I, Optoelectronic Information Industry Park, No. 7691 Ziyou Road, Changchun Economic and Technological Development Zone, Jilin Province, 130000

Applicant after: Changchun Changguang Chenxin Microelectronics Co.,Ltd.

Applicant after: Hangzhou Changguang Chenxin Microelectronics Co.,Ltd.

Address before: No. 588, Yingkou Road, Jingkai District, Changchun City, Jilin Province, 130033

Applicant before: Changchun Changguangchenxin Optoelectronics Technology Co.,Ltd.

Applicant before: Hangzhou Changguang Chenxin Microelectronics Co.,Ltd.

GR01 Patent grant