Disclosure of Invention
The invention provides an optical code positioning method and device, an image sensor chip, and a computer-readable storage medium. The method uses only about one line of extra memory, so memory consumption is low. The algorithm traverses each line of the image only once, and its operations are mostly simple conditional judgments, so the computational load is small. The final output directly gives the positioning information of each optical code, namely four parameters: the starting and ending row numbers and the starting and ending column numbers of each optical code in the image, thereby providing accurate positioning and facilitating the subsequent operations of the optical code recognition system.
In one aspect of the present invention, an optical code positioning method is provided, where the method is applied to an image sensor chip, the operation logic runs line by line according to a preset rule, and for each line of image data the method includes:
acquiring image data of a current row and a global optical code coordinate information list, wherein the global optical code coordinates refer to four values: the starting and ending row numbers and the starting and ending column numbers of an optical code in the image;
determining boundary features of each pixel;
compressing the boundary features to obtain a short boundary information list;
clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line;
and fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
As an alternative, the determining the boundary characteristic of each pixel includes:
sequentially traversing the pixels of the current row along a first preset direction to determine a pixel value corresponding to each pixel;
and classifying the current pixels according to the current pixel values until the classification of all pixel points in the current row is completed, wherein the classification comprises black pixels, white pixels or other pixels.
As an optional solution, the classifying the current pixel according to the current pixel value includes:
determining a current pixel whose pixel value is greater than a first threshold as a white pixel;
determining a current pixel whose pixel value is less than a second threshold as a black pixel;
and determining a current pixel whose pixel value lies between the first threshold and the second threshold as an other pixel, wherein the first threshold is greater than the second threshold.
As an optional scheme, the method further comprises the following steps:
when the current pixel is a black pixel, skipping the next adjacent pixel if it is also a black pixel; if the next adjacent pixel is not a black pixel, searching the pixels within a certain range, jumping to a target black pixel and performing a logic reset when the target black pixel is found, recording the current pixel position as a black-white boundary when a white pixel is found, and recording the current pixel position as a black-other boundary when neither a black pixel nor a white pixel exists within the range; or
when the current pixel is a white pixel, skipping the next adjacent pixel if it is also a white pixel; if the next adjacent pixel is not a white pixel, searching the pixels within a certain range, jumping to a target white pixel and performing a logic reset when the target white pixel is found, recording the current pixel position as a white-black boundary when a black pixel is found, and recording the current pixel position as a white-other boundary when neither a black pixel nor a white pixel exists within the range; or
when the current pixel is determined to be an other pixel, skipping it directly.
As an optional scheme, the compressing the boundary features to obtain a short boundary information list includes:
determining a current boundary category;
when the current boundary type is a white-black boundary: adding a white-black boundary to the boundary list and updating the index list when the current boundary list is empty; updating the index list by replacing its last value with the current index value when the current list is not empty and the previous boundary is a black-white boundary; and otherwise, when the current list is not empty, adding a white-black boundary and updating the index list; or
when the current boundary type is a black-white boundary, adding the black-white boundary to the boundary list and updating the index list if the boundary list is not empty and the previous boundary is the white-black boundary nearest to the current boundary; or
when the current boundary type is a black-other boundary, adding a truncation symbol to the boundary list and updating the index list if the current boundary list is not empty and the previous boundary is a white-black boundary; or
and when the current boundary type is a white-other boundary, adding a truncation symbol to the boundary list and updating the index list when the boundary list is not empty and the previous boundary is a black-white boundary.
As an optional solution, the clustering the optical code of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line includes:
traversing the boundary information list;
if the current boundary is a white-black boundary and the current coordinate information list is in an empty state, adding new coordinate information;
if the current boundary is a truncation symbol, skipping the current position;
and if the current boundary is a black-white boundary, updating the end coordinate of the current optical code.
As an optional scheme, the fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list includes:
extending downward a certain length from the starting position of each global optical code, the length being defined as a search length, wherein line optical codes within the resulting fusion search range can be fused;
fusing a line optical code into a global optical code when it is determined that the column coordinates of the line optical code and the global optical code intersect;
generating a new global optical code when it is determined that the column coordinates of the line optical code and the global optical code do not intersect;
and after the optical codes of each row are fused, self-fusing the global optical codes again, wherein two global optical codes are fused into one global optical code when their starting positions intersect.
In another aspect of the present invention, there is provided an optical code positioning apparatus applied to an image sensor chip, where the operation logic runs line by line according to a preset rule, and for each line of image data the apparatus includes:
an acquisition module, configured to acquire image data of a current line and a global optical code coordinate information list, wherein the global optical code coordinates refer to four values: the starting and ending row numbers and the starting and ending column numbers of an optical code in the image;
a determining module for determining boundary features of each pixel;
the compression module is used for compressing the boundary characteristics to obtain a short boundary information list;
the clustering module is used for clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line;
and the fusion module is used for fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
In still another aspect of the present invention, there is provided an image sensor chip, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
In a further aspect of the invention, there is provided a non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described above.
The method acquires the image data of the current line and the global optical code coordinate information list, where the global optical code coordinates refer to four values: the starting and ending row numbers and the starting and ending column numbers of an optical code in the image; determines the boundary features of each pixel; compresses the boundary features to obtain a short boundary information list; clusters the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line; and fuses the optical code abscissa information with the global optical code coordinate information list to obtain and output an updated global optical code coordinate information list. Using only the memory overhead of one line of image data, the method outputs the coordinate information of multiple optical codes in the whole image: a clustering method based mainly on conditional judgments clusters the pixels of each row, and the clusters of different rows are then fused to obtain the coordinate information of the optical codes.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, an optical code positioning method according to an embodiment of the present invention is applied to an image sensor chip, where the operation logic runs line by line according to a preset rule, and for each line of image data the method includes:
s101, acquiring image data of a current row and a global optical code coordinate information list, wherein the global optical code coordinate refers to four information of a starting row number and a starting column number of an optical code in an image.
And S102, determining the boundary characteristic of each pixel.
S103, compressing the boundary features to obtain a short boundary information list.
And S104, clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line.
And S105, fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
The embodiment of the invention provides an optical code positioning method that uses only about one line of extra memory, so memory consumption is low. The algorithm traverses each line of the image only once, and its operations are mostly simple conditional judgments, so the computational load is small. The final output directly gives the positioning information of each optical code, namely four parameters: the starting and ending row numbers and the starting and ending column numbers of each optical code in the image, thereby providing accurate positioning and facilitating the subsequent operations of the optical code recognition system.
Specifically, in S102, the determining the boundary characteristic of each pixel includes:
the method comprises the steps of sequentially traversing and determining pixel values corresponding to pixels of a current row along a first preset direction, classifying the current pixels according to the current pixel values until classification of all pixel points in the current row is completed, wherein the classification comprises black pixels, white pixels or other pixels, the first preset direction can be from left to right or from right to left, the first preset direction can be flexibly selected according to needs and is not limited, and it is required to explain that subsequent other direction indications are performed by taking the direction as a reference after the direction is determined.
Specifically, in S102, classifying the current pixel according to the current pixel value includes:
a current pixel whose pixel value is greater than a first threshold is determined as a white pixel; a current pixel whose pixel value is less than a second threshold is determined as a black pixel; and a current pixel whose pixel value lies between the two thresholds is determined as an other pixel, where the first threshold is greater than the second threshold.
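As a minimal illustration, the three-way classification can be sketched in Python. The concrete threshold values, the function name, and the convention that bright 8-bit gray values are white are assumptions of this sketch, not values fixed by the invention.

```python
# Minimal sketch of the three-way pixel classification described above.
# THRESH_WHITE / THRESH_BLACK are illustrative assumed values only.
THRESH_WHITE = 180  # first threshold: gray values above this are white
THRESH_BLACK = 80   # second threshold: gray values below this are black

def classify_pixel(value):
    """Return 'white', 'black', or 'other' for one 8-bit gray value."""
    if value > THRESH_WHITE:
        return "white"
    if value < THRESH_BLACK:
        return "black"
    return "other"  # transition band between the two thresholds
```

Because the first threshold is greater than the second, every pixel value falls into exactly one of the three classes.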
In one possible implementation, the classification may be determined as follows:
when the current pixel is a black pixel, skipping the next adjacent pixel if it is also a black pixel; if the next adjacent pixel is not a black pixel, searching the pixels within a certain range, jumping to a target black pixel and performing a logic reset when the target black pixel is found, recording the current pixel position as a black-white boundary when a white pixel is found, and recording the current pixel position as a black-other boundary when neither a black pixel nor a white pixel exists within the range; or
when the current pixel is a white pixel, skipping the next adjacent pixel if it is also a white pixel; if the next adjacent pixel is not a white pixel, searching the pixels within a certain range, jumping to a target white pixel and performing a logic reset when the target white pixel is found, recording the current pixel position as a white-black boundary when a black pixel is found, and recording the current pixel position as a white-other boundary when neither a black pixel nor a white pixel exists within the range; or
when the current pixel is determined to be an other pixel, skipping it directly.
Specifically, in S103, the compressing the boundary features to obtain a short boundary information list includes:
determining a current boundary category;
when the current boundary type is a white-black boundary: adding a white-black boundary to the boundary list and updating the index list when the current boundary list is empty; updating the index list by replacing its last value with the current index value when the current list is not empty and the previous boundary is a black-white boundary; and otherwise, when the current list is not empty, adding a white-black boundary and updating the index list; or
when the current boundary type is a black-white boundary, adding the black-white boundary to the boundary list and updating the index list if the boundary list is not empty and the previous boundary is the white-black boundary nearest to the current boundary; or
when the current boundary type is a black-other boundary, adding a truncation symbol to the boundary list and updating the index list if the current boundary list is not empty and the previous boundary is a white-black boundary; or
and when the current boundary type is a white-other boundary, adding a truncation symbol to the boundary list and updating the index list when the boundary list is not empty and the previous boundary is a black-white boundary.
Specifically, in S104, the clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line includes:
traversing the boundary information list;
if the current boundary is a white-black boundary and the current coordinate information list is in an empty state, adding new coordinate information;
if the current boundary is a truncation symbol, skipping the current position;
and if the current boundary is a black-white boundary, updating the end coordinate of the current optical code.
Specifically, in S105, the fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list includes:
extending downward a certain length from the starting position of each global optical code, the length being defined as a search length, wherein line optical codes within the resulting fusion search range can be fused;
fusing a line optical code into a global optical code when it is determined that the column coordinates of the line optical code and the global optical code intersect;
generating a new global optical code when it is determined that the column coordinates of the line optical code and the global optical code do not intersect;
and after the optical codes of each row are fused, self-fusing the global optical codes again, wherein two global optical codes are fused into one global optical code when their starting positions intersect.
To better understand the optical code positioning method provided by the present invention, a detailed description is given below with reference to an embodiment. The method may be executed on an image sensor chip; after the operation logic of one row finishes, execution proceeds row by row from top to bottom, and each row is processed from left to right. As shown in fig. 1, the input of the per-line processing logic is one line of image data and the global optical code coordinate information list. The global optical code coordinates refer to four values: the starting and ending row numbers and the starting and ending column numbers of an optical code in the image.
The processing for each line is mainly divided into four steps: the method comprises the steps of firstly determining the boundary characteristics of each pixel, secondly compressing the boundary characteristics to obtain a short boundary information list, thirdly performing conditional clustering on the optical codes of the current line on the basis of the boundary information list to obtain the optical code abscissa information of the current line, and fourthly fusing the optical code coordinate information of the current line and the global optical code coordinate information. And finally outputting a line of data and an updated global optical code coordinate information list.
First, the boundary features are sought. We define three pixel classes: black, white, and other. A pixel whose value is above a certain threshold is white, one whose value is below a certain threshold is black, and the rest are other. Boundaries are defined in four categories: white-black, black-white, black-other, and white-other. A row of pixels is read in and traversed from left to right to determine the boundary features in sequence. Certain disturbances exist in this determination, as follows:
referring to fig. 2, the position indicated by the arrow is observed, and if the human eye distinguishes that the position is a white-black boundary, but due to problems of an imaging system and the like, a certain fuzzy boundary exists. The fuzzy boundary is an undesirable boundary and belongs to a boundary where errors need to be removed, so the method provided by the invention can effectively avoid the influence of the fuzzy boundary. Because the optical code positioning algorithm executed on the image sensor is simple, high-frequency boundary information of the optical code needs to be utilized, the fuzzy boundary can generate great influence, the phenomenon is observed through a direct graph and a compression graph of hundreds of chips, and the method is provided for solving the problem.
Referring to fig. 3, an embodiment method for finding boundary features may include:
the current pixel is first classified, black, white or otherwise, according to its value.
If the current pixel type is a black pixel, the next or black pixel is skipped. The next non-black pixel starts to find the pixel to the right within a certain range, jumps to the black pixel if the black pixel is found, and then is logically reset. If a white pixel is found, the current pixel position is recorded as a black and white border. If there are neither black nor white pixels within a certain range, then the other boundary category is recorded as black.
If the current pixel is a white pixel, the next or white pixel is skipped. The next non-white pixel starts to find the pixel to the right within a certain range, jumps to the white pixel if found, and then is logically reset. If a black pixel is found, the current pixel position is recorded as a white-black border. If there are neither black nor white pixels within a certain range, then the other boundary category is recorded as white.
If the current pixel is other, then skip directly.
In summary, the boundary type of each pixel position is clear (skipped positions have no type), and the computational complexity is low, since the operations are all threshold judgments.
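As one possible reading of the scan just described, the following Python sketch turns a row already mapped to 'black' / 'white' / 'other' labels into boundary events. The labels 'WB' (white-black), 'BW' (black-white), 'B?' (black-other) and 'W?' (white-other), the `search_range` used to jump over blurred pixels, and the convention of recording a boundary at the first pixel after a run are assumptions of this sketch, not details fixed by the invention.

```python
def boundary_features(classes, search_range=3):
    """Scan one row of pixel classes and emit (position, boundary_type) pairs.

    classes: list of 'black' / 'white' / 'other' labels for one row.
    search_range: assumed look-ahead used to skip blurred boundaries.
    """
    boundaries = []
    i, n = 0, len(classes)
    while i < n:
        cur = classes[i]
        if cur == "other":
            i += 1
            continue
        # skip the run of pixels with the same class
        j = i + 1
        while j < n and classes[j] == cur:
            j += 1
        if j >= n:
            break
        opposite = "white" if cur == "black" else "black"
        found = None
        # look ahead within search_range for a same-class or opposite pixel
        for k in range(j, min(j + search_range, n)):
            if classes[k] == cur:
                found = ("same", k)  # blurred gap: jump over it (logic reset)
                break
            if classes[k] == opposite:
                found = ("opp", k)
                break
        if found is None:
            # neither black nor white nearby: black-other / white-other
            boundaries.append((j, "B?" if cur == "black" else "W?"))
            i = j
        elif found[0] == "same":
            i = found[1]  # continue inside the same run after the blur
        else:
            # clean transition: black-white or white-black boundary
            boundaries.append((j, "BW" if cur == "black" else "WB"))
            i = found[1]
    return boundaries
```

On a clean row such as white, white, black, black, white this yields a white-black boundary at index 2 and a black-white boundary at index 4, while an isolated blurred pixel between two white runs produces no boundary at all.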
It should be noted that the method determines the boundary types and compresses the boundary-type information list at the same time, which further saves storage.
As shown in fig. 4, one implementation of compressing the boundary information may include:
First, the current boundary category is determined; different operations are performed depending on the category, as follows:
the first possibility is that if the boundary is currently white and black, then the following decision is entered:
if the current boundary list is empty, the updated boundary list adds a white-black boundary and updates the list of indexes, where the boundary list is a list storing boundary information, and the initial empty of each row can be indicated by a special symbol in hardware. The index list represents a list of indexes, initialization is performed at the beginning of each row to be empty, and the indexes refer to abscissa positions corresponding to each boundary, specifically to serial numbers of the columns. The index list and the boundary list are in one-to-one correspondence. If the current list is not empty and the previous border is a black and white border, the index list is updated, replacing the last value of the index list with the current index value. Next to the third possibility, if the current list is not empty, a white-black border is added to the border list and the index list is updated.
The second case is a black-white boundary: if the boundary list is not empty and the previous boundary is the white-black boundary nearest to the current one, a black-white boundary is added to the boundary list and the index list is updated.
The third case is a black-other boundary: if the current boundary list is not empty and the previous boundary is a white-black boundary, a truncation symbol is added to the boundary list and the index list is updated at the same time.
The fourth case is a white-other boundary: if the boundary list is not empty and the previous boundary is a black-white boundary, a truncation symbol is added to the boundary list and the index list is updated.
This completes the boundary information compression. Subsequent processing therefore does not need to traverse from the beginning of the line to its end, but only the boundary information list, saving much computation. The computation is almost entirely conditional judgment, easy to implement and small in amount.
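Under one literal reading of the four cases above, the compression step can be sketched as follows. The 'WB'/'BW'/'B?'/'W?' labels, the '|' truncation symbol, and the handling of the sub-case that replaces the last index are assumptions of this sketch; the original text leaves some of these conditions ambiguous.

```python
TRUNC = "|"  # assumed representation of the truncation symbol

def compress_boundaries(features):
    """Compress per-pixel (index, type) boundary features into a short
    boundary list plus a parallel index list, per the four cases above."""
    blist, ilist = [], []
    for idx, btype in features:
        prev = blist[-1] if blist else None  # previous recorded boundary
        if btype == "WB":                    # white-black boundary
            if prev is None:
                blist.append("WB"); ilist.append(idx)
            elif prev == "BW":
                ilist[-1] = idx              # replace last index with current
            else:
                blist.append("WB"); ilist.append(idx)
        elif btype == "BW":                  # black-white boundary
            if prev == "WB":
                blist.append("BW"); ilist.append(idx)
        elif btype == "B?":                  # black-other boundary
            if prev == "WB":
                blist.append(TRUNC); ilist.append(idx)
        elif btype == "W?":                  # white-other boundary
            if prev == "BW":
                blist.append(TRUNC); ilist.append(idx)
    return blist, ilist
```

For example, a row producing a white-black boundary, a black-white boundary, and then a white-other boundary compresses to the three entries WB, BW, and the truncation symbol.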
Referring to fig. 5, conditional clustering is performed on the boundary information list, and the clustering result is the optical codes of the current line. The conditional clustering mainly involves two key factors: one is the black-white connected region, i.e., a continuous sequence of boundary pairs white-black, black-white, white-black, black-white; the other is the distance between two black-white connected regions, which are considered two optical codes if they are too far apart. The flow of one implementation of the clustering operation is as follows:
and traversing the boundary information list. A first possibility is to add new coordinate information if the current boundary is a white-black boundary, if the current coordinate information list, which refers to the position of the optical code, i.e. the serial number of the column of the start position, is empty, the serial numbers of the start and end of each group and the optical code are in one-to-one correspondence. If the distance between the black and white boundary and the previous black and white boundary is smaller, the optical code is the same, and the column number of the initial position of the existing optical code is updated. If the distance between the black and white border and the previous black and white border is relatively large, it indicates that the optical code is different, and a new start position of the optical code needs to be added. A second possibility is that the current position is skipped if the current boundary is a truncated symbol. A third possibility is that the current is a black and white border. If so, the start coordinates of the current optical code are updated.
With the method shown in fig. 6, all operations required in one row are now complete. Since the optical code is a two-dimensional image, the invention fuses the optical codes of different rows sequentially from top to bottom to obtain the coordinate information of the several optical codes that may exist in one image.
First, fusion takes place within a certain range, the fused search area in the figure; that is, the fused objects need to be positionally related. From the starting position of each global optical code we extend downward a certain length, defined as the search length, and line optical codes within the fusion search range can be merged. A global optical code here refers to optical code coordinate information in the image, each code having a row start position and a column start position. A line optical code is the optical code derived for one line; its coordinate information contains only the start and end positions on the abscissa. If an optical code is found in the search area, it is fused and the search area is further updated.
Referring to fig. 7, the fusion operation processes one image from top to bottom and is divided into two stages. In the first stage, if the column coordinates of a line optical code and a global optical code intersect, the line optical code is fused into the global code; otherwise a new global optical code is generated. In the second stage, after the optical codes of each row have been fused, the global optical codes are self-fused again: if the starting positions of two global optical codes intersect, they are fused into one global optical code.
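The two-stage fusion can be sketched as below. The dictionary layout for a global code, the `search_len` parameter, and the closed-interval overlap test are assumptions of this illustration rather than details fixed by the invention.

```python
def fuse(global_codes, line_codes, row, search_len=8):
    """Fuse one row's line optical codes into the global code list.

    global_codes: list of dicts {'row0','row1','col0','col1'} (start/end rows
    and columns). line_codes: [start_col, end_col] pairs for the current row.
    search_len: assumed downward search length from each global code.
    """
    def overlap(a0, a1, b0, b1):
        return a0 <= b1 and b0 <= a1  # closed intervals intersect

    # Stage 1: merge each line code into an in-range, overlapping global code.
    for c0, c1 in line_codes:
        for g in global_codes:
            in_range = row - g['row1'] <= search_len
            if in_range and overlap(g['col0'], g['col1'], c0, c1):
                g['col0'] = min(g['col0'], c0)
                g['col1'] = max(g['col1'], c1)
                g['row1'] = row
                break
        else:  # no intersection found: generate a new global code
            global_codes.append({'row0': row, 'row1': row,
                                 'col0': c0, 'col1': c1})
    # Stage 2: self-fuse global codes whose extents intersect.
    merged = []
    for g in sorted(global_codes, key=lambda g: (g['col0'], g['row0'])):
        for m in merged:
            if overlap(m['col0'], m['col1'], g['col0'], g['col1']) and \
               overlap(m['row0'], m['row1'], g['row0'], g['row1']):
                m['col0'] = min(m['col0'], g['col0'])
                m['col1'] = max(m['col1'], g['col1'])
                m['row0'] = min(m['row0'], g['row0'])
                m['row1'] = max(m['row1'], g['row1'])
                break
        else:
            merged.append(dict(g))
    return merged
```

Called once per row with the clustered line codes, this grows each global code downward while codes in non-overlapping columns remain separate.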
The optical code positioning method provided by the embodiment of the invention targets optical codes that consist almost entirely of black content on a white background. The invention observes and summarizes the color regularity of their imaging, namely the black-white connected regions: a black-white connected region of an optical code starts from a white-black boundary and ends with a black-white boundary, the two alternating, with no other boundary types appearing in between. The method defines a boundary information feature for each pixel and uses this feature to partition the black-white connected regions more conveniently, with simple computation. The method also addresses blurred boundaries in optical codes: a transition area always exists between white and black, and such blurred areas noticeably affect the result; the method effectively avoids the erroneous boundary-type judgments that blurred boundaries would otherwise cause. By summarizing the rules of the effective boundary information in a line, a boundary-information compression method is provided that compresses the boundary information of every pixel in a line into a short boundary information list, greatly reducing storage and computation. The conditional clustering method is governed mainly by two factors. The first is the boundary type: clustering starts from a white-black boundary, and white-black and black-white boundaries are then paired, one pair forming a group. The second is distance: if two black-white connected regions are close to each other, they belong to one optical code; if they are far apart, they are two optical codes.
The method determines a line optical code, defines an inter-line search range, and fuses the line optical code with the global optical codes. A line optical code is a piece of one-dimensional information consisting of the start and end of the optical code abscissa. During fusion, line optical codes are first fused into the global codes, and the global codes are then fused with each other; this order of operations effectively avoids duplication.
Compared with the prior art, which can only recognize an optical code on the optical sensor but cannot locate it, the invention explicitly outputs positioning information, which is an essential difference. The invention locates the optical code using boundary information, whereas prior inventions use high-frequency information. High-frequency information requires at least 5 lines of memory and complex convolution operations, while the present method needs less than 1 line of extra memory and reduces computation to threshold judgments, greatly reducing both computation and memory.
Compared with the prior art, which searches for the optical code by traversing a whole line of high-frequency features, the present method compresses the boundary information while defining the boundary features, so that finding the optical code only requires traversing the short boundary information list, greatly reducing the calculation amount and the memory.
Compared with the prior art, which obtains the region where the optical code is located by dilation and erosion, the invention, by finding and summarizing the boundary rules of the optical code in the image, can divide the region where the optical code is located by threshold judgment alone.
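The conditional clustering over a short boundary information list could look like the following sketch; the pairing rule and the distance threshold `DIST_THR` are illustrative assumptions, not values taken from the invention.

```python
DIST_THR = 8  # assumed: regions closer than this belong to one optical code

def cluster_line(short_list):
    """short_list: [(column, kind)] with kind 1 = white-to-black and
    kind 2 = black-to-white. Returns (start_col, end_col) per optical code."""
    # Factor one: boundary type. A cluster starts at a white-to-black
    # boundary, and the following black-to-white boundary closes one
    # black-white connected region.
    regions, start = [], None
    for col, kind in short_list:
        if kind == 1:
            start = col
        elif kind == 2 and start is not None:
            regions.append((start, col))
            start = None
    # Factor two: distance. Near regions are merged into one optical
    # code; a far region begins a new one.
    codes = []
    for s, e in regions:
        if codes and s - codes[-1][1] <= DIST_THR:
            codes[-1][1] = e
        else:
            codes.append([s, e])
    return [tuple(c) for c in codes]
```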
Referring to fig. 8, an optical code positioning apparatus 800 is further provided in an embodiment of the present invention, where the optical code positioning apparatus is applied to an image sensor chip, and an operation logic is to operate line by line according to a preset rule, and for each line of image data, the apparatus includes:
an obtaining module 801, configured to obtain image data of a current row and a global optical code coordinate information list, where the global optical code coordinate refers to four items of information, including the starting row number and the starting column number, of an optical code in the image;
a determining module 802, configured to determine boundary characteristics of each pixel;
a compressing module 803, configured to compress the boundary features to obtain a short boundary information list;
a clustering module 804, configured to perform clustering processing on the optical codes of the current line based on the short boundary information list to obtain optical code abscissa information of the current line;
and a fusion module 805, configured to fuse the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list, and output the updated global optical code coordinate information list.
The embodiment of the invention also provides an image sensor chip, which comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
In an embodiment of the present invention, there is further provided a non-transitory computer readable storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the optical code positioning method according to any embodiment of the present invention. When the method is applied to an image sensor chip, the operation logic is to operate line by line according to a preset rule, and for each line of image data, the method includes:
acquiring image data of a current row and a global optical code coordinate information list, wherein the global optical code coordinate refers to four items of information, including the starting row number and the starting column number, of an optical code in the image;
determining boundary features of each pixel;
compressing the boundary features to obtain a short boundary information list;
clustering the optical codes of the current line based on the short boundary information list to obtain the optical code abscissa information of the current line;
and fusing the optical code abscissa information and the global optical code coordinate information list to obtain an updated global optical code coordinate information list and outputting the updated global optical code coordinate information list.
The method comprises: obtaining image data of a current line and a global optical code coordinate information list, wherein the global optical code coordinate refers to four items of information, including the starting line number and the starting column number, of an optical code in the image; determining boundary characteristics of each pixel; compressing the boundary characteristics to obtain a short boundary information list; clustering the optical codes of the current line based on the short boundary information list to obtain optical code abscissa information of the current line; and fusing the optical code abscissa information with the global optical code coordinate information list to obtain and output an updated global optical code coordinate information list. In this way, the coordinate information of a plurality of optical codes in the whole image can be output using only about one line of image data of memory overhead. The clustering method, mainly based on condition judgment, clusters the pixels in each row and then fuses the clusters of different rows to obtain the coordinate information of the plurality of optical codes.
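Putting the steps together, the line-by-line pipeline might be sketched end to end as below. A single binary threshold is used here for brevity, and all names, thresholds, and tolerances are illustrative assumptions rather than the claimed implementation.

```python
def process_image(img, thr=128, dist=8, col_gap=4):
    """img: list of rows of grayscale pixel values.
    Returns global optical codes as [start_row, end_row, start_col, end_col]."""
    global_codes = []
    for row_idx, row in enumerate(img):
        if not row:
            continue
        # Step 1-3: boundary features compressed into a short boundary
        # information list in a single pass over the line.
        short = []
        prev = row[0] >= thr  # True = white pixel
        for col in range(1, len(row)):
            cur = row[col] >= thr
            if prev and not cur:
                short.append((col, 1))      # white-to-black boundary
            elif cur and not prev:
                short.append((col, 2))      # black-to-white boundary
            prev = cur
        # Step 4: conditional clustering — pair alternating boundaries
        # into regions and merge regions that are close together.
        line_codes, start = [], None
        for col, kind in short:
            if kind == 1:
                start = col
            elif start is not None:
                if line_codes and start - line_codes[-1][1] <= dist:
                    line_codes[-1][1] = col
                else:
                    line_codes.append([start, col])
                start = None
        # Step 5: fuse the line codes into the global coordinate list.
        for s, e in line_codes:
            for g in global_codes:
                if g[1] == row_idx - 1 and s <= g[3] + col_gap and e >= g[2] - col_gap:
                    g[1], g[2], g[3] = row_idx, min(g[2], s), max(g[3], e)
                    break
            else:
                global_codes.append([row_idx, row_idx, s, e])
    return global_codes
```

Only the short boundary list, the per-line codes, and the global code list are kept between lines, which is how the roughly one-line memory overhead described above is achieved.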
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the optical code positioning method according to the above.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.