CN115690768A - Image recognition method, system and related equipment - Google Patents

Image recognition method, system and related equipment

Info

Publication number
CN115690768A
CN115690768A
Authority
CN
China
Prior art keywords
image
recognized
target pixel
target
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211347712.1A
Other languages
Chinese (zh)
Inventor
刘力茂
王能才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CMB Yunchuang Information Technology Co Ltd
Original Assignee
CMB Yunchuang Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by CMB Yunchuang Information Technology Co Ltd filed Critical CMB Yunchuang Information Technology Co Ltd
Priority to CN202211347712.1A priority Critical patent/CN115690768A/en
Publication of CN115690768A publication Critical patent/CN115690768A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose an image recognition method, an image recognition system, and related devices. The method includes: for a plurality of target pixel points constituting the element in an element image, determining the element type to which the corresponding position of each point in the image to be recognized belongs; if a corresponding position belongs to the non-transparent point type, determining that the target pixel point is successfully recognized at that position of the image to be recognized; and if a preset number of target pixel points are successfully recognized, determining that the element recorded in the element image is a target element appearing in the image to be recognized. By taking the distribution of the non-transparent points of the element recorded in the element image as a reference and checking whether the corresponding positions of the target pixel points in the image to be recognized are also non-transparent points, the element content in the image to be recognized can be recognized effectively, improving the subsequent use of that content. In addition, the method does not need to consider the specific colors appearing in the image to be recognized, so it imposes little restriction on recognizing elements of different colors in the image to be recognized.

Description

Image recognition method, system and related equipment
Technical Field
The embodiments of the present application relate to the field of image technology, and in particular, to an image recognition method, an image recognition system, and related devices.
Background
In daily applications, needs such as the following are often encountered: an identifier, such as a certificate number, must be recognized from an image; or, before requesting data from a website, a user must enter the verification code displayed in a verification image.
Conventionally, although manually recognizing image content can help obtain the required feature elements in an image, such as letters or Chinese characters, this approach is time-consuming and prone to recognition errors when a large batch of images must be processed.
In this regard, there is a need to provide an efficient solution.
Disclosure of Invention
The embodiment of the application provides an image identification method, an image identification system and related equipment, which are used for improving the identification effect of image elements.
A first aspect of an embodiment of the present application provides an image recognition method, including:
acquiring an image to be recognized and a plurality of element images, wherein at least one element recorded in the plurality of element images appears in the image to be recognized;
for each element image of the plurality of element images, determining, for a plurality of target pixel points constituting the element in the element image, the element type to which the corresponding position of each target pixel point in the image to be recognized belongs; the target pixel points are pixel points that appear as non-transparent points in the element image, and the element types include a non-transparent point type and a transparent point type;
if the corresponding position belongs to the non-transparent point type, determining that the target pixel point is successfully recognized at the corresponding position of the image to be recognized;
and if a preset number of target pixel points are successfully recognized, determining that the element recorded in the element image is a target element appearing in the image to be recognized.
Optionally, the colors presented by the non-transparent points include non-white colors, and the colors presented by the transparent points include white. It may be noted that, in the image to be recognized and the element image in the embodiments of the present application, any pixel point belonging to the white background is regarded as a transparent point.
The image recognition method according to the first aspect of the present application may be implemented by using the content according to the second aspect of the present application.
A second aspect of the embodiments of the present application provides an image recognition system, including:
an acquisition unit, configured to acquire an image to be recognized and a plurality of element images, where at least one element recorded in the plurality of element images appears in the image to be recognized;
a processing unit, configured to, for each element image of the plurality of element images, determine, for a plurality of target pixel points constituting the element in the element image, the element type to which the corresponding position of each target pixel point in the image to be recognized belongs; the target pixel points are pixel points that appear as non-transparent points in the element image, and the element types include a non-transparent point type and a transparent point type;
the processing unit is further configured to determine that the target pixel point is successfully recognized at the corresponding position of the image to be recognized if that position belongs to the non-transparent point type;
the processing unit is further configured to determine that the element recorded in the element image is a target element appearing in the image to be recognized if a preset number of target pixel points are successfully recognized.
A third aspect of the embodiments of the present application provides an electronic device, including:
the system comprises a central processing unit, a memory and an input/output interface;
the memory is a transient storage memory or a persistent storage memory;
the central processing unit is configured to communicate with the memory and execute the instructions in the memory to perform the method described in the first aspect of the embodiments of the present application or any specific implementation manner of the first aspect.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, including instructions that, when executed on a computer, cause the computer to perform a method as described in the first aspect of embodiments of the present application or any specific implementation manner of the first aspect.
A fifth aspect of embodiments of the present application provides a computer program product containing instructions or a computer program, which when run on a computer causes the computer to perform the method as described in the first aspect of embodiments of the present application or any specific implementation manner of the first aspect.
According to the technical scheme, the embodiment of the application has at least the following advantages:
the non-transparent point distribution condition of the elements recorded in the element image is taken as a reference basis to judge whether the corresponding positions of the target pixel points in the image to be recognized are also non-transparent points, which is beneficial to comparing and investigating whether the elements are target elements appearing in the image to be recognized, so that the element content in the image to be recognized is conveniently and effectively recognized and extracted, and the subsequent use effect of the element content is improved. In addition, the shape characteristics of the elements are fully applied, and if the pixel points are non-transparent points, the specific colors of the elements in the image to be recognized do not need to be considered, so that the recognition limit degree of the elements with different colors in the image to be recognized is small.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments described in the present application, and those skilled in the art can obtain other drawings based on these drawings.
FIG. 1 is a flowchart illustrating an image recognition method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an image to be recognized according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an elemental image according to an embodiment of the present application;
FIG. 4 is another schematic flow chart illustrating an image recognition method according to an embodiment of the present application;
FIG. 5 is another schematic flow chart illustrating an image recognition method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an embodiment of an image recognition system;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, the described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without making creative efforts fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims and drawings of the present application, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the following description, references to "one embodiment" or "one specific example" and the like describe a subset of all possible embodiments; it is understood that "one embodiment" or "one specific example" may refer to the same subset or to different subsets of all possible embodiments, and they may be combined with each other where no conflict arises. In the following description, the terms "a plurality of" and "multiple" refer to at least two. Where a value is said to have reached a threshold (if any), in some embodiments this may include the case where the former is greater than the latter.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Referring to fig. 1 to 4, a first aspect of the present application provides a specific example of an image recognition method, where the embodiment includes steps 11 to 14:
11. Acquire an image to be recognized and a plurality of element images.
Taking a verification code recognition scenario as an example, for an image to be recognized as shown in fig. 2, a plurality of element images as shown in fig. 3 may be obtained. The plurality of element images may specifically include images that respectively record the digits 0 to 9 and the letter elements A to Z, and may certainly also include images recording the letters a to z, Chinese characters, punctuation marks, and other feature elements. In addition, at least one element recorded in the plurality of element images appears in the image to be recognized.
In some specific examples, the storage paths of the image to be recognized and of the plurality of element images may be read respectively to obtain the image to be recognized and the plurality of element images, together with the pixel point information corresponding to each image, such as the two-dimensional color array information described below. The order in which the image to be recognized and the element images are acquired is not limited.
12. For a plurality of target pixel points constituting the element in the element image, determine the element type to which the corresponding position of each point in the image to be recognized belongs.
For each element image of the plurality of element images: because each element image records an element that can serve as a comparison reference, the target pixel points constituting that element can be used as an index, and for the plurality of target pixel points constituting the element in the element image, the element type to which the corresponding position of each point in the image to be recognized belongs is determined, so as to check whether the element also appears in the image to be recognized. A target pixel point is a pixel point that appears as a non-transparent point in the element image, and the element types include a non-transparent point type and a transparent point type. In the embodiments of the present application, a non-transparent point mainly refers to a colored point that is easily seen by the naked eye and is non-white (e.g., red or purple); correspondingly, a transparent point mainly refers to a point (also called a pixel point) that is hard to see by the naked eye and is white. Specifically, the element types at the corresponding positions of the target pixel points in the image to be recognized may be determined column by column; alternatively, they may be determined row by row.
In practical applications, an application program loads a picture (also called an image) into memory as a two-dimensional array of color points (also called a two-dimensional color array). The members of the array are color values, i.e., the color representation values of the picture's constituent elements at the different pixel points; by default, the color values may be converted into decimal values, which can be set according to the actual situation and is not limited here. In some embodiments, the specific operation of step 12 may include steps 121 to 123:
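For illustration, a minimal sketch of this loading step is given below, assuming Pillow is used and that each (R, G, B) triple is packed into a single decimal value; the helper name and the packing scheme are assumptions, not the implementation described in this application.

from PIL import Image

def load_color_array(path: str) -> list[list[int]]:
    """Read an image file and return its two-dimensional array of decimal color values."""
    img = Image.open(path).convert("RGB")
    width, height = img.size
    pixels = img.load()
    array = []
    for row in range(height):
        array.append([
            # Pack (R, G, B) into one decimal value. Under this scheme pure white
            # packs to 16777215; the description cites 16777265, so the exact
            # encoding used by the application may differ.
            (pixels[col, row][0] << 16) | (pixels[col, row][1] << 8) | pixels[col, row][2]
            for col in range(width)
        ])
    return array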
121. and traversing color values corresponding to all pixel points in the element graph array of the element image, and distinguishing target pixel points according to the color values.
As shown in fig. 4, taking "u" element (or called character) as an example, an element map array formed by the element image including "u" element can be read, in which a pixel having a value (color value) of 16777265 can be correspondingly represented as a white point, i.e., a transparent point, and a pixel having a color value other than 16777265 can be correspondingly represented as a non-white point, i.e., a non-transparent point; therefore, the points which can be used as the target pixel points are distinguished, namely, the non-transparent points are distinguished. It may be mentioned that the non-transparent points in the image to be recognized may comprise points constituting target elements and/or points constituting interference lines.
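A minimal sketch of step 121 follows, assuming the white (transparent) color value cited in this description, 16777265, and the load_color_array helper sketched above; the function name is an assumption.

WHITE_VALUE = 16777265  # value the description treats as a white, i.e. transparent, point

def target_pixel_points(element_array: list[list[int]]) -> list[tuple[int, int]]:
    """Return (row, col) coordinates of the non-transparent points in the element map array."""
    points = []
    for row, colors in enumerate(element_array):
        for col, value in enumerate(colors):
            if value != WHITE_VALUE:  # non-white, hence non-transparent, hence a target pixel point
                points.append((row, col))
    return points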
122. Determine the target position of each target pixel point in the local array according to the position of that target pixel point's color value in the element map array.
The local array is contained in the array to be recognized and has the same size as the element map array; the array to be recognized represents the color distribution of the pixel points of the image to be recognized. Specifically, for an image to be recognized containing the element content "ut1e", its two-dimensional color array, i.e., the array to be recognized, can be read in the same manner. Then, taking the size of the element map array shown in fig. 4 as the bounding range (assume 5*4; the actual array has more rows and columns), a local array (also of size 5*4) is delimited within the array to be recognized, so as to find the target position corresponding to each target pixel point in the local array.
As a possible implementation, before step 122, the method of the present application may further include: processing the element size of the image to be recognized to a target element size, where the target element size is the size of the element recorded in the element image. By processing the element size, the number of rows of the array to be recognized is adjusted to be consistent with the number of rows of the element map array, so that the color values of the local array and the element map array can be checked against each other quickly, and whether the image to be recognized contains the element recorded in the element image can be determined with low latency.
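A minimal sketch of this optional resizing step, assuming Pillow and assuming the element height equals the image height (as in verification-code images such as the one in fig. 2); the function name and that assumption are illustrative only.

from PIL import Image

def match_element_height(image_path: str, target_height: int) -> Image.Image:
    """Scale the image to be recognized so its height matches the element image height."""
    img = Image.open(image_path).convert("RGB")
    width, height = img.size
    scale = target_height / height
    # Keep the aspect ratio so the character shapes are preserved.
    return img.resize((max(1, round(width * scale)), target_height))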
Thus, taking the coordinate (2,1) described below as an example, the specific operation of step 122 may include: obtaining the coordinate of the target pixel point's value in the element map array; and determining the position specified by that coordinate in the local array as the target position, where the array to be recognized and the element map array have consistent element sizes and the same number of rows.
123. Determine the element type of each target pixel point in the local array according to the color value represented at the corresponding target position in the local array; a target pixel point whose color value at the target position does not match white is a non-transparent point.
As described above, the element map array can be treated as a coordinate system, so that each target pixel point (color value not 16777265) in each column can be assigned coordinate information. For the example target pixel point at the second row, first column (2,1) with color value 10592931, it is determined whether the color value of the target position with the same coordinate (2,1) in the local array is 16777265. If it is 16777265, the point can be determined to be a transparent point; otherwise, it can be determined to be a non-transparent point. In other words, the member values (color values) of the two-dimensional color arrays of the image to be recognized and the element image are compared; if the color values at corresponding positions in both arrays represent non-transparent points, it can be determined that the current point of the "u" element is successfully recognized in the image to be recognized. Accordingly, if the other points of the "u" element are checked in the same way and a preset number of them are also successfully recognized as appearing in the image to be recognized, it can be determined that the image to be recognized (specifically, the region where the local array is located) contains the target element "u".
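A minimal sketch of steps 122 and 123 combined: the non-transparent points of the element map array are compared with the corresponding positions of a local array delimited within the array to be recognized. The helper names and the column-offset way of delimiting the local array are assumptions for illustration.

def count_matched_points(element_array: list[list[int]],
                         recog_array: list[list[int]],
                         col_offset: int) -> int:
    """Count element points whose corresponding position in the image is also non-transparent."""
    matched = 0
    for row, col in target_pixel_points(element_array):
        # Target position: same row, column shifted by where the local array starts.
        if recog_array[row][col + col_offset] != WHITE_VALUE:
            matched += 1
    return matched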
13. Determine that the target pixel point is successfully recognized at the corresponding position of the image to be recognized.
If step 12 determines that the corresponding position of the target pixel point in the image to be recognized belongs to the non-transparent point type, it is determined that the target pixel point is successfully recognized at that position. In this way, whether the element is recorded in the image to be recognized can be checked reliably and reasonably, which improves the recognition of verification codes in images.
14. Determine the target element appearing in the image to be recognized.
Among the plurality of target pixel points, if a preset number of them are successfully recognized, i.e., the operation of step 13 succeeds for that many points, it is determined that the element recorded in the element image is a target element appearing in the image to be recognized. For example, for the second image to be recognized in fig. 2, which records "ut1e", it can be recognized according to the operations of steps 11 to 14 above that "u" is a target element appearing in the image to be recognized; similarly, the elements t, 1, and e can also be recognized by the operations of this embodiment.
In practical applications, the elements of an image to be recognized may be very close to each other or even joined together. To handle this scenario of connected characters, a recognition range can be set. Therefore, in some specific examples, the preset number may be the number of non-transparent points within a preset width of the image to be recognized, where the preset width is smaller than the image width of the image to be recognized. The preset width may specifically be the region width of the element image corresponding to 60% of the total number of columns of the element map array, so that as long as the pixel points within this 60% width are successfully recognized, it is already successfully recognized which letter or digit the target element in the image to be recognized is.
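A minimal sketch of this preset-number check for connected characters, assuming the 60% column-width recognition range from the example above and the helpers sketched earlier; the interpretation that every target point inside the preset width must match is one reading of the description, not fixed by it.

def element_recognized(element_array: list[list[int]],
                       recog_array: list[list[int]],
                       col_offset: int,
                       width_ratio: float = 0.6) -> bool:
    """Return True if the element's points within the preset width all match the image."""
    total_cols = len(element_array[0])
    preset_width = int(total_cols * width_ratio)
    points = [(r, c) for r, c in target_pixel_points(element_array) if c < preset_width]
    matched = sum(
        1 for r, c in points
        if recog_array[r][c + col_offset] != WHITE_VALUE
    )
    # The preset number here is the number of non-transparent points within the preset width.
    return len(points) > 0 and matched == len(points)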
In summary, in the embodiments of the present application, the distribution of the non-transparent points of the element recorded in the element image is used as a reference, and whether the corresponding positions of the target pixel points in the image to be recognized are also non-transparent points is checked. This makes it possible to compare and determine whether the element is a target element appearing in the image to be recognized, so that the element content in the image to be recognized can be recognized and extracted effectively, improving the subsequent use of that content. In addition, functionally, considering that the shapes of most elements to be recognized in daily life are fixed (for certificate numbers and verification codes, for example, the character shapes of the elements are fixed within the same scenario, and the character colors have little influence on the recognition result), the shape characteristics of the elements are fully exploited: only whether a pixel point is a non-transparent point matters, and the specific colors of the elements in the image to be recognized need not be considered, so the method can be widely applied to recognizing elements of different colors.
Based on the above description, some specific examples of possible implementations are provided below; in practical applications, the implementation contents of these examples can be combined as needed according to the corresponding functional principles and application logic.
In some specific examples, after step 14, the method of the present application may further include the operations of:
if the target element represented by the local array is successfully determined, the region designated by the local array in the image to be recognized is taken as a recognized region; the size of the recognized region is determined according to the size of the element image containing the target element, for example, the region size corresponding to the element map array (assumed to be of size 5*4), which may be called the region width. Marking a recognized region indicates that the elements it contains have already been successfully recognized, so only the other, unrecognized regions need to be processed in the same way (e.g., the operations of steps 11 to 14) to determine which target elements they contain, such as t, 1, and e; this avoids repeated consumption of computing resources and a prolonged response time.
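A minimal sketch of recording recognized regions as column intervals of the array to be recognized; representing a region by its starting column and width is an assumption made for illustration.

recognized_regions: list[tuple[int, int]] = []  # (start_col, width) pairs

def mark_recognized(col_offset: int, element_width: int) -> None:
    """Record the columns covered by a local array whose target element has been determined."""
    recognized_regions.append((col_offset, element_width))

def in_recognized_region(col: int) -> bool:
    """Check whether a column of the image to be recognized already lies in a recognized region."""
    return any(start <= col < start + width for start, width in recognized_regions)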
On this basis, in order to avoid occupying system resources and to speed up the recognition of the elements in the image to be recognized, in some specific examples, after step 11 and before step 12, the method of the present application may further include the following operations:
for each of the target pixel points, determining whether the target pixel point falls within a recognized region, i.e., traversing the points and checking whether the current point (pixel point) is in a recognized region, where a recognized region is an image region of the image to be recognized in which target pixels have already been successfully recognized;
if not, determining the element type at the corresponding position of the target pixel point in the image to be recognized; if so, returning to determine whether another target pixel point falls within a recognized region; and if all the target pixel points are located within recognized regions, determining the target elements appearing in the regions of the image to be recognized other than the recognized regions (see the sketch after this paragraph).
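A minimal sketch of this skip check performed between steps 11 and 12, reusing in_recognized_region from the previous sketch; the generator-based control flow is an illustrative reading of the description, not a fixed design.

def points_to_check(element_array: list[list[int]], col_offset: int):
    """Yield only the target pixel points whose image columns are not yet recognized."""
    for row, col in target_pixel_points(element_array):
        if not in_recognized_region(col + col_offset):
            yield row, col  # still needs to be matched against the image
        # otherwise: skip this point, since its region has already been recognized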
In some specific examples, before step 12, the method of the present application may further include the following operation (setting the recognition order between similar elements):
determining whether similar elements (such as any two of the characters i, j, and l) exist among the plurality of element images, where similar elements are two elements whose non-transparent points differ in their distribution positions by less than a preset difference; and setting the recognition order among the similar elements according to the number of their non-transparent points, so that the similar elements are checked in that order to determine whether each is the target element. Specifically, the elements may be sorted by their number of color points from largest to smallest, with the character "j", which has the most color points, placed first, and the character "i", which has the fewest, placed last. The recognition order of the similar elements may be set before or during step 11. Adjusting the recognition order among similar elements prepares the operations of step 12 and later to read and apply the pixel point information (color values) of each element in the proper sequence, preventing a disordered order from affecting the recognition accuracy of the elements in the image to be recognized, for example mistakenly recognizing a "j" element as "i".
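A minimal sketch of ordering similar elements (e.g., "i", "j", "l") by their number of non-transparent points, largest first, so that "j" is tried before "i"; treating the element images as a dict from element name to element map array is an illustrative assumption.

def order_similar_elements(element_arrays: dict[str, list[list[int]]]) -> list[str]:
    """Return element names sorted so elements with more non-transparent points come first."""
    return sorted(
        element_arrays,
        key=lambda name: len(target_pixel_points(element_arrays[name])),
        reverse=True,
    )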
In summary, as shown in fig. 5, preparatory work may be carried out first, including setting the read paths of resources such as the image to be recognized and the element images, and the recognition range for connected characters, so that the two-dimensional color array contents of the image to be recognized and of the element images can be loaded and read successfully afterwards. Then, traversing the array to be recognized from the upper left corner to the lower right corner, it is judged row by row whether the current target pixel points of a certain character (such as the pixel point with coordinate (2,1) constituting the character "u") fall within a recognized region. If so, that row is skipped, and the traversal continues by judging whether the target pixel points of the next row lie in a recognized region. If not, it is judged whether the current target pixel point is a non-transparent point at the corresponding position of the local array of the image to be recognized, and if it is, the target pixel point is counted as successfully recognized at the corresponding position of the image to be recognized. This continues until a preset number of target pixel points have been successfully recognized, at which point the element recorded in the element image, such as the character "u", can be determined to be a target element appearing in the image to be recognized. Of course, for the region covered by the current local array, if the element array content (color values) of the character "u" still does not yield a target element after the traversal, the target element contained there must be some element other than "u", so the element images containing other characters (such as "t" or the digit "1") are processed in the same way until the target element actually contained in that region is identified. Once the target element represented by a local array has been successfully determined, the region designated by the local array in the image to be recognized, bounded by the size of the element map array, is taken as a recognized region, and the other local arrays of the array to be recognized are processed in the same way until all color values of the array to be recognized have been traversed, so that target pixels are successfully recognized for all regions of the image to be recognized. A sketch of this overall flow follows.
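A minimal end-to-end sketch tying the previous sketches together, sliding the local array across the image column by column; the sliding-window strategy, the stop conditions, and the dict-based element set are illustrative assumptions rather than the flow fixed by fig. 5.

def recognize(recog_array: list[list[int]],
              element_arrays: dict[str, list[list[int]]]) -> list[tuple[int, str]]:
    """Return (start_col, element_name) pairs for target elements found in the image."""
    results = []
    image_width = len(recog_array[0])
    for name in order_similar_elements(element_arrays):
        element = element_arrays[name]
        element_width = len(element[0])
        for offset in range(image_width - element_width + 1):
            if in_recognized_region(offset):
                continue  # skip regions whose elements were already recognized
            if element_recognized(element, recog_array, offset):
                results.append((offset, name))
                mark_recognized(offset, element_width)  # do not re-check this region
    return sorted(results)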
Thus, technically, the method makes full use of the fact that element shapes are usually fixed: starting from the color point values, it recognizes image content efficiently without considering which non-white color a pixel point happens to be assigned. Consequently, the preprocessing operations required by conventional methods, such as graying, binarization, or noise reduction of the image to be recognized, are not needed, which simplifies the operation flow and shortens the processing time. In addition, during recognition, regions that have already been recognized are not recognized again, which speeds up the response; and the recognition range set for this special scenario increases the fault tolerance.
Referring to fig. 6, a second aspect of the present application provides a specific example of an image recognition system, which includes:
an acquiring unit 601, configured to acquire an image to be recognized and a plurality of element images, where at least one element recorded in the plurality of element images appears in the image to be recognized;
a processing unit 602, configured to, for each element image of the plurality of element images, determine, for a plurality of target pixel points constituting the element in the element image, the element type to which the corresponding position of each target pixel point in the image to be recognized belongs; the target pixel points are pixel points that appear as non-transparent points in the element image, and the element types include a non-transparent point type and a transparent point type;
the processing unit 602 is further configured to determine that the target pixel point is successfully recognized at the corresponding position of the image to be recognized if that position belongs to the non-transparent point type;
the processing unit 602 is further configured to determine that the element recorded in the element image is a target element appearing in the image to be recognized if a preset number of the target pixel points are all successfully recognized.
Optionally, the processing unit 602 is specifically configured to:
traverse the color values corresponding to all pixel points in the element map array of the element image, and distinguish the target pixel points according to the color values;
determine the target position of each target pixel point in the local array according to the position of that target pixel point's color value in the element map array, where the local array is contained in the array to be recognized and has the same size as the element map array, and the array to be recognized represents the color distribution of the pixel points of the image to be recognized;
and determine the element type of each target pixel point in the local array according to the color value represented at the corresponding target position in the local array, where a target pixel point whose color value does not match white is a non-transparent point.
Optionally, the processing unit 602 is further configured to: process the element size of the image to be recognized to a target element size, where the target element size is the size of the element recorded in the element image;
the processing unit 602 is specifically configured to: obtain the coordinate of the target pixel point's value in the element map array;
and determine the position specified by that coordinate in the local array as the target position, where the array to be recognized and the element map array have consistent element sizes and the same number of rows.
Optionally, the processing unit 602 is further configured to: if the target elements represented by the local arrays are successfully determined, the corresponding regions of the local arrays in the image to be recognized are used as recognized regions, and the sizes of the recognized regions are determined according to the sizes of the element images containing the target elements.
Optionally, the processing unit 602 is further configured to:
for each of the target pixel points, determining whether the target pixel point falls within a recognized region, where a recognized region is an image region of the image to be recognized in which target pixels have already been successfully recognized;
if not, judging the element type of the target pixel point in the corresponding position of the image to be recognized;
if yes, returning to judge whether another target pixel point is correspondingly distributed in the identified region;
and if the target pixel points are all located in the identified regions, determining target elements appearing in regions except the identified regions in the image to be identified.
Optionally, the processing unit 602 is further configured to:
determining whether similar elements exist among the plurality of element images, where similar elements are two elements whose non-transparent points differ in their distribution positions by less than a preset difference;
and setting the recognition order among the similar elements according to the number of their non-transparent points, so that the similar elements are checked in that order to determine whether each is the target element.
Optionally, the preset number is the number of the non-transparent points within a preset width of the image to be recognized, and the preset width is smaller than the image width of the image to be recognized.
In this embodiment of the application, operations performed by each unit of the image recognition system are similar to those described in the foregoing first aspect or any specific method embodiment of the first aspect, and details are not repeated here.
Referring to fig. 7, an electronic device 700 according to an embodiment of the present disclosure may include one or more Central Processing Units (CPUs) 701 and a memory 705, where the memory 705 stores one or more applications or data.
The memory 705 may be a volatile memory or a persistent memory. The program stored in the memory 705 may include one or more modules, each of which may include a series of instruction operations for the electronic device. Furthermore, the central processing unit 701 may be configured to communicate with the memory 705 and execute, on the electronic device 700, the series of instruction operations in the memory 705.
The electronic device 700 may also include one or more power supplies 702, one or more wired or wireless network interfaces 703, one or more input/output interfaces 704, and/or one or more operating systems, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc.
The central processing unit 701 may perform the operations performed by any of the foregoing first aspect or any of the specific method embodiments of the first aspect, which are not described in detail herein.
A computer-readable storage medium is provided, comprising instructions which, when executed on a computer, cause the computer to perform the method as described in the first aspect or any of the specific implementations of the first aspect.
The present application provides a computer program product comprising instructions or a computer program which, when run on a computer, cause the computer to perform the method as described in the first aspect or any of the specific implementations of the first aspect.
It should be understood that, in the various embodiments of the present application, the sequence number of each step does not mean the execution sequence, and the execution sequence of each step should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system or apparatus, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.

Claims (10)

1. An image recognition method, comprising:
acquiring an image to be recognized and a plurality of element images, wherein at least one element recorded in the plurality of element images appears in the image to be recognized;
for each element image in the plurality of element images, determining, for a plurality of target pixel points constituting the element in the element image, the element type to which the corresponding position of each target pixel point in the image to be recognized belongs; wherein the target pixel points are pixel points that appear as non-transparent points in the element image, and the element types comprise a non-transparent point type and a transparent point type;
if the corresponding position belongs to the non-transparent point type, determining that the target pixel point is successfully recognized at the corresponding position of the image to be recognized;
and if a preset number of target pixel points are successfully recognized, determining that the element recorded in the element image is a target element appearing in the image to be recognized.
2. The image recognition method of claim 1, wherein the determining, for the plurality of target pixel points constituting the element in the element image, the element types to which the corresponding positions in the image to be recognized respectively belong comprises:
traversing color values corresponding to all pixel points in an element map array of the element image, and distinguishing the target pixel points according to the color values;
determining the target position of each target pixel point in a local array according to the position of that target pixel point's color value in the element map array; wherein the local array is contained in an array to be recognized and has the same size as the element map array, and the array to be recognized represents the color distribution of the pixel points of the image to be recognized;
and determining the element type of each target pixel point in the local array according to the color value represented at the corresponding target position in the local array, wherein a target pixel point whose color value does not match white is a non-transparent point.
3. The image recognition method of claim 2, wherein before determining the target position of each target pixel point in the local array, the method further comprises:
processing the element size of the image to be recognized to a target element size, wherein the target element size is the size of the elements recorded in the element image;
determining the corresponding target position of each target pixel point in the array to be identified, including:
obtaining the coordinates of the pixel values of the target pixel points in the element diagram array;
and determining the position of the coordinate correspondingly specified in the local array as the target position, wherein the array to be recognized and the element diagram array have the same array row number due to the consistent element size.
4. The image recognition method of claim 2, wherein, after the element recorded in the element image is determined to be a target element appearing in the image to be recognized, the method further comprises:
if the target elements represented by the local arrays are successfully determined, the regions correspondingly referred to by the local arrays in the image to be recognized are used as recognized regions, and the sizes of the recognized regions are determined according to the sizes of the element images containing the target elements.
5. The image recognition method according to claim 1 or 2, wherein after acquiring the image to be recognized and the plurality of element images, and before determining, for the plurality of target pixel points constituting the element in the element image, the element types to which the corresponding positions in the image to be recognized belong, the method further comprises:
for each of the target pixel points, determining whether the target pixel point falls within a recognized region, wherein a recognized region is an image region of the image to be recognized in which target pixels have already been successfully recognized;
if not, judging the element type of the target pixel point in the corresponding position of the image to be identified;
if yes, returning to judge whether another target pixel point is correspondingly distributed in the identified region;
and if the target pixel points are all located in the identified regions, determining target elements appearing in regions except the identified regions in the image to be identified.
6. The image recognition method according to claim 1, wherein before determining, for the plurality of target pixel points constituting the element in the element image, the element types to which the corresponding positions in the image to be recognized belong, the method further comprises:
judging whether similar elements exist in the multiple element images, wherein the similar elements refer to the fact that the arrangement difference of the non-transparent points of the two elements on the distribution positions is smaller than a preset difference;
and setting the identified sequence among the similar elements according to the number of the non-transparent points of the similar elements, so that the similar elements are respectively judged whether to be the target element according to the sequence.
7. The image recognition method according to claim 1, wherein the preset number is the number of non-transparent dots within a preset width of the image to be recognized, and the preset width is smaller than the image width of the image to be recognized.
8. An image recognition system, comprising:
an acquisition unit configured to acquire an image to be recognized and a plurality of elemental images in which at least one element described in the plurality of elemental images appears in the image to be recognized;
the processing unit is used for judging a plurality of target pixel points of the elements in the element images for each element image in the plurality of element images, and the element types of the elements belong to the corresponding positions of the images to be identified respectively; the target pixel point refers to a pixel point appearing as a non-transparent point in the element image, and the element types include a non-transparent point type and a transparent point type;
the processing unit is further configured to determine that the target pixel point is successfully identified in the corresponding position of the image to be identified if the target pixel point belongs to the non-transparent point type;
the processing unit is further configured to determine that the element recorded in the element image is a target element appearing in the image to be recognized if a preset number of target pixel points are successfully recognized.
9. An electronic device, comprising:
the system comprises a central processing unit, a memory and an input/output interface;
the memory is a transient memory or a persistent memory;
the central processor is configured to communicate with the memory and execute the operations of the instructions in the memory to perform the method of any of claims 1 to 7.
10. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 7.
CN202211347712.1A 2022-10-31 2022-10-31 Image recognition method, system and related equipment Pending CN115690768A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211347712.1A CN115690768A (en) 2022-10-31 2022-10-31 Image recognition method, system and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211347712.1A CN115690768A (en) 2022-10-31 2022-10-31 Image recognition method, system and related equipment

Publications (1)

Publication Number Publication Date
CN115690768A true CN115690768A (en) 2023-02-03

Family

ID=85046828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211347712.1A Pending CN115690768A (en) 2022-10-31 2022-10-31 Image recognition method, system and related equipment

Country Status (1)

Country Link
CN (1) CN115690768A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination