CN103745236A - Texture image identification method and texture image identification device - Google Patents

Texture image identification method and texture image identification device

Info

Publication number
CN103745236A
CN103745236A CN201310714283.1A
Authority
CN
China
Prior art keywords
texture image
module
feature vector
projection
carried out
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310714283.1A
Other languages
Chinese (zh)
Inventor
戴琼海
尹春霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201310714283.1A priority Critical patent/CN103745236A/en
Publication of CN103745236A publication Critical patent/CN103745236A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a texture image identification method comprising the following steps: preprocessing a texture image and extracting a character region; converting the character region to a new coordinate system and computing a projection histogram; applying a Fourier transform to the projection histogram to obtain the feature vector of the texture image; and performing the above steps on both the texture image to be identified and a standard texture image to obtain their respective feature vectors, computing the similarity between the two feature vectors, and judging from a similarity threshold whether the texture image to be identified is the standard texture image. The invention also discloses a texture image identification device. The identification method and device identify dynamic objects well, resist interference strongly, have a low missed-identification rate and an especially low false-identification rate, are computationally simple with good real-time performance, and can satisfy real-time target detection tasks in dynamic scenes.

Description

Texture image recognition method and texture image recognition device
Technical field
The invention belongs to the fields of computer vision and pattern recognition, and in particular relates to a texture image recognition method and a texture image recognition device, which can be extended to target recognition tasks with pronounced textural characteristics, such as character recognition.
Background art
Detecting and recognizing textured objects is an important part of target detection. When both the camera and the target are static, there is no interference, rotation or blur, and the detection task is comparatively easy; common methods include frame differencing and corner matching. When there is relative motion between the camera and the target and the motion cannot be estimated, the captured images may suffer from target disappearance, target rotation, affine distortion, image blur and other interference. For example, motion platforms such as unmanned vehicles, unmanned aerial vehicles and robots often need to detect road signs, patterns and other marks, and applications such as intelligent transportation need to detect and recognize road markings, license plate numbers and guide boards. Conventional methods such as corner detection, SIFT (Scale-Invariant Feature Transform) feature matching, ORB (Oriented FAST and Rotated BRIEF) descriptors, Hu moments and Zernike moments perform poorly in highly dynamic symbol recognition and can produce false identifications and missed identifications.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the prior art, namely false identification and missed identification.
To this end, one object of the present invention is to propose a texture image recognition method.
Another object of the present invention is to propose a texture image recognition device.
To achieve these objects, a texture image recognition method according to an embodiment of one aspect of the invention comprises: S1, preprocessing the texture image and extracting a character region; S2, converting the character region to a new coordinate system and computing a projection histogram; S3, applying a Fourier transform to the projection histogram to obtain a texture image feature vector; S4, performing steps S1 to S3 on both the texture image to be identified and a standard texture image, obtaining the feature vector of each, computing the similarity between the two, and judging from a similarity threshold whether the texture image to be identified is the standard texture image.
The texture image recognition method according to the embodiment of the present invention has the following beneficial effects: (1) good recognition of dynamic objects and strong resistance to interference; (2) a low missed-identification rate and an especially low false-identification rate; (3) simple computation and good real-time performance, satisfying real-time target detection tasks in dynamic scenes.
In addition, the texture image recognition method according to the embodiment of the present invention may have the following additional technical features:
In one embodiment of the invention, step S1 specifically comprises: S11, deblurring the texture image and then binarizing it; S12, applying morphological processing to the binarized texture image and extracting the character region.
In one embodiment of the invention, step S2 specifically comprises: S21, computing the centroid of the character region; S22, taking the centroid as the origin and transforming the character region from a planar Cartesian coordinate system to a polar coordinate system to obtain a polar-coordinate image; S23, accumulating projection histograms in the polar-coordinate image along both the radial and angular directions.
In one embodiment of the invention, in step S23 the projection histogram is accumulated only along the angular direction.
In one embodiment of the invention, step S3 specifically comprises: applying a Fourier transform to the projection histogram and taking the magnitude spectrum to obtain a rotation-invariant feature vector, which serves as the feature vector of the image under detection.
A texture image recognition device according to an embodiment of another aspect of the invention comprises: a preprocessing module for preprocessing the texture image and extracting a character region; a projection histogram processing module, connected to the preprocessing module, for converting the character region to a new coordinate system and computing a projection histogram; a feature vector acquisition module, connected to the projection histogram processing module, for applying a Fourier transform to the projection histogram to obtain a texture image feature vector; and a similarity judgment module, connected to the feature vector acquisition module, for computing the similarity between the feature vectors of the texture image to be identified and of the standard texture image and judging from a similarity threshold whether the two are the same texture image.
The texture image recognition device according to the embodiment of the present invention has the following beneficial effects: (1) good recognition of dynamic objects and strong resistance to interference; (2) a low missed-identification rate and an especially low false-identification rate; (3) simple computation and good real-time performance, satisfying real-time target detection tasks in dynamic scenes.
In addition, the texture image recognition device according to the embodiment of the present invention may have the following additional technical features:
In one embodiment of the invention, the preprocessing module specifically comprises: a deblurring and binarization module for deblurring the texture image and then binarizing it; and a morphological processing module, connected to the deblurring and binarization module, for applying morphological processing to the binarized texture image and extracting the character region.
In one embodiment of the invention, the projection histogram processing module specifically comprises: a centroid computing module for computing the centroid of the character region; a coordinate transformation module, connected to the centroid computing module, for taking the centroid as the origin and transforming the character region from a planar Cartesian coordinate system to a polar coordinate system to obtain a polar-coordinate image; and a projection statistics module, connected to the coordinate transformation module, for accumulating projection histograms in the polar-coordinate image along both the radial and angular directions.
In one embodiment of the invention, the projection statistics module accumulates the projection histogram only along the angular direction.
In one embodiment of the invention, the feature vector acquisition module applies a Fourier transform to the projection histogram and takes the magnitude spectrum to obtain a rotation-invariant feature vector, which serves as the feature vector of the image under detection.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from the following description, or will be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of the texture image recognition method according to the embodiment of the present invention.
Fig. 2 is a detailed flowchart of step S1 in Fig. 1.
Fig. 3 is a detailed flowchart of step S2 in Fig. 1.
Fig. 4 is a schematic diagram of three standard texture images.
Fig. 5 is a schematic diagram of a texture image to be detected.
Fig. 6 shows the preprocessing result of the texture image to be detected shown in Fig. 5.
Fig. 7 is the polar-angle projection histogram of the character region in the texture image to be detected shown in Fig. 5.
Fig. 8 shows the detection result for the texture image to be detected shown in Fig. 5.
Fig. 9 is a structural block diagram of the texture image recognition device according to the embodiment of the present invention.
Fig. 10 is a detailed structural block diagram of the preprocessing module in Fig. 9.
Fig. 11 is a detailed structural block diagram of the projection histogram processing module in Fig. 9.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and are not to be construed as limiting the present invention.
As shown in Fig. 1, the texture image recognition method according to the embodiment of the present invention may comprise the following steps.
S1. Preprocess the texture image and extract the character region. As shown in Fig. 2, this specifically comprises steps S11 and S12.
S11. Deblur the texture image and then binarize it. The binarization may use region-wise (local) binarization, whose basic idea is as follows: for each pixel of the image, compute the mean gray level t of the pixels in its neighborhood; if the pixel is brighter than this mean it is treated as a background point, otherwise it is kept as a foreground point. To suppress noise, a tolerance margin is applied to the threshold; for example, with a foreground/background threshold t and a tolerance of 10 gray levels, all points in the range [t-10, t] are also treated as foreground. After binarization the foreground and background are separated.
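The following is a minimal sketch of the region-wise binarization described above, assuming a grayscale uint8 image; the neighborhood size and the 10-gray-level tolerance are illustrative values, not parameters fixed by the embodiment.

```python
# Illustrative sketch of step S11's region-wise (local) binarization.
# Assumptions: grayscale uint8 input; the window size of 15 is an arbitrary example.
import numpy as np
from scipy.ndimage import uniform_filter

def local_binarize(gray, window=15, tolerance=10):
    """Return a mask with foreground (character) pixels set to 1."""
    gray = gray.astype(np.float32)
    t = uniform_filter(gray, size=window)        # neighborhood mean gray level t
    strict_foreground = gray < t - tolerance     # clearly darker than the local mean
    tolerance_band = (gray >= t - tolerance) & (gray <= t)  # [t-10, t] also kept as foreground
    # Pixels brighter than t are background; the union below is equivalent to gray <= t,
    # written out to mirror the tolerance rule in the description.
    return (strict_foreground | tolerance_band).astype(np.uint8)
```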
S12. Apply morphological processing to the binarized texture image and extract the character region. For example, apply dilation to the binarized texture image (this mainly concerns the foreground; the background is blank and may be left unprocessed) so that the scattered parts of the characters in the image merge into a single connected component. Then, using prior knowledge such as the spacing between characters and the aspect ratio of the symbol, apply morphological filtering to the resulting connected components, remove non-character regions, and take the remaining regions as character regions. In this embodiment the extracted character region is only tens of pixels wide and high, which greatly reduces the time later spent on feature computation and target identification. It should be noted that, besides dilation, the morphological processing may also use erosion or other operations, chosen flexibly according to the state of the target texture.
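A possible realization of this step with OpenCV is sketched below; the structuring-element size and the area and aspect-ratio limits used for the morphological filtering are assumptions made for illustration, not values given in the embodiment.

```python
# Illustrative sketch of step S12: dilate the binary mask so the character parts
# merge into one connected component, then filter components by prior knowledge.
import cv2
import numpy as np

def extract_character_region(binary):
    """binary: uint8 mask with foreground = 1. Returns a cropped character region, or None."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    dilated = cv2.dilate(binary, kernel, iterations=2)          # merge scattered strokes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(dilated, connectivity=8)
    for i in range(1, n):                                       # label 0 is the background
        x, y, w, h, area = stats[i]
        aspect = w / float(h)
        # Morphological filtering with prior knowledge: drop components that are too
        # small or whose aspect ratio cannot correspond to a character group.
        if area > 50 and 0.2 < aspect < 5.0:
            return binary[y:y + h, x:x + w]
    return None
```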
S2. Convert the character region to a new coordinate system and compute the projection histogram. As shown in Fig. 3, this specifically comprises steps S21, S22 and S23.
S21. Compute the centroid of the character region. For example, compute the centroid of the character region by the conventional XXX method and denote it O(x_0, y_0).
S22. Take the centroid as the origin and transform the character region from the planar Cartesian coordinate system to the polar coordinate system, obtaining a polar-coordinate image. Specifically, with the point O as the coordinate origin, the image is transformed from Cartesian coordinates to polar coordinates, in which each point is described by two parameters, the polar radius and the polar angle: (ρ, θ).
S23. In the polar-coordinate image, accumulate projection histograms along both the radial and angular directions, i.e. along the ρ and θ directions. In the Cartesian coordinate system this is equivalent to accumulating projections along rays at different angles and onto annuli of different radii. Because the projection along the radial direction (the annular statistic in Cartesian coordinates) contributes little to target identification, preferably only the polar-angle projection histogram is taken as the projection histogram of the target image, denoted n_θ(θ). For the convenience of the subsequent Fourier transform, 256 angular bins are used, each of size 360°/256, and the projection is accumulated along these 256 angles in the polar coordinate system. Fig. 7 shows the polar-angle projection histogram of the target image in the polar coordinate system.
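Steps S21 to S23 can be sketched as follows; the implementation below computes the centroid directly from the foreground pixels and uses the 256 angular bins mentioned above. It is an illustration of the described procedure, not the embodiment's reference code.

```python
# Illustrative sketch of steps S21-S23: centroid, polar mapping about the centroid,
# and the 256-bin polar-angle projection histogram n_theta(theta).
import numpy as np

def polar_angle_histogram(region, bins=256):
    """region: uint8 mask of the character region (foreground = 1)."""
    ys, xs = np.nonzero(region)
    x0, y0 = xs.mean(), ys.mean()            # centroid O(x0, y0), step S21
    theta = np.arctan2(ys - y0, xs - x0)     # polar angle of each foreground pixel, step S22
    # Step S23: count foreground pixels in each of the 256 angular bins (360 deg / 256 each);
    # the radial projection is omitted, as preferred in the description.
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi))
    return hist.astype(np.float64)
```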
It should be noted that step S2 embodies the first benefit of the polar-coordinate transform: when the projection histogram is accumulated in the Cartesian coordinate system, part of the data can be missed because of coordinate discretization, whereas in polar coordinates every point of the image is mapped into the polar plot, so all information is retained. The second benefit of the polar-coordinate transform performed in this step is embodied in the subsequent step S3.
S3. Apply a Fourier transform to the projection histogram to obtain the texture image feature vector. Specifically, apply a Fourier transform to the projection histogram and take the magnitude spectrum to obtain a rotation-invariant feature vector, which serves as the feature vector of the image under detection.
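A minimal sketch of this step follows: the magnitude spectrum of the angular histogram is taken as the feature vector. The normalization at the end is an assumption, added so the vector can be compared directly by the similarity measure of step S4, which uses normalized vectors.

```python
# Illustrative sketch of step S3: rotation-invariant feature from the FFT magnitude.
import numpy as np

def rotation_invariant_feature(hist):
    spectrum = np.fft.fft(hist)
    magnitude = np.abs(spectrum)   # the phase carries the rotation; the magnitude is invariant
    norm = np.linalg.norm(magnitude)
    return magnitude / norm if norm > 0 else magnitude
```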
It should be noted that step S3 embodies the second important role of the polar-coordinate transform of step S2: a rotation in the planar Cartesian coordinate system becomes a translation in the polar coordinate system, i.e. f_o(ρ, θ) = f(ρ, θ − θ_0), where f_o is the original image before rotation and f is the new image obtained by rotating f_o counterclockwise by the angle θ_0. Under the Fourier transform, a translation in the original domain corresponds to a multiplication by a phase factor in the frequency domain: if the Fourier transform of f(θ) is F(ω), then after the image is rotated by θ_0 the transform of f(θ − θ_0) is e^(−jωθ_0)·F(ω). Hence a translation of the signal in polar coordinates only affects the phase spectrum of the frequency-domain characteristic and has no effect on the magnitude spectrum.
S4. Perform steps S1 to S3 on both the texture image to be identified and the standard texture image, obtain their respective feature vectors, compute the similarity between the two, and judge from a similarity threshold whether the texture image to be identified is the standard texture image.
For example: first, for the three given standard texture images, compute the invariant feature vectors of the three standard target symbols following steps S1-S3, normalize them, and denote them V_1, V_2, V_3. Because the three target symbols are prior knowledge, this is generally done offline in advance. Then acquire the texture image to be identified online, process it in real time according to steps S1-S3, and compute the normalized feature vector V of the foreground in the image. Next compute the similarities between V and V_1, V_2, V_3: D_1 = D(V, V_1) = 1 − d(V, V_1), where d(V, V_1) denotes the Euclidean distance between V and V_1; D_2 = D(V, V_2); D_3 = D(V, V_3). Set the similarity threshold threshold = 0.9; if D_i = max{D_1, D_2, D_3} (i = 1, 2, 3) and D_i > threshold, the texture image to be identified is judged to be the i-th target symbol.
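The decision rule of this example can be sketched as below, using the similarity D = 1 − Euclidean distance and the 0.9 threshold quoted above; returning 0 for "no match" is an assumption made for the sketch.

```python
# Illustrative sketch of step S4: match the query feature vector V against the
# normalized standard vectors V1, V2, V3 and apply the similarity threshold.
import numpy as np

def identify(v, standards, threshold=0.9):
    """Return the 1-based index of the matched standard symbol, or 0 if none matches."""
    similarities = [1.0 - float(np.linalg.norm(v - vk)) for vk in standards]
    best = int(np.argmax(similarities))
    return best + 1 if similarities[best] > threshold else 0
```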
The texture object recognition method according to the embodiment of the present invention has at least the following beneficial effects:
(1) Good recognition of dynamic objects and strong resistance to interference.
(2) A low missed-identification rate and an especially low false-identification rate.
(3) Simple computation and good real-time performance, satisfying real-time target detection tasks in dynamic scenes.
To help those skilled in the art better understand the present invention, the applicant describes an application example. In the experiment a small quadrotor unmanned aerial vehicle was used, with a camera mounted at the front of the aircraft. The captured images are 752 × 480 pixels, 256-level grayscale, and the frame rate is controllable; the number of images acquired per second is decided by the processing time per frame and the frequency of communication between the aircraft and the ground station. Three standard texture images, shown in Fig. 4, are given in advance. For these three standard texture images, the corresponding feature vectors V_1, V_2, V_3 are obtained by computing the centroid, performing the polar-coordinate transform, and accumulating the polar-angle projection histogram. Because of shaking and other dynamic factors during the flight of the unmanned aerial vehicle, the captured images exhibit ghosting, blur, rotation, affine distortion, illumination changes, noise and other factors that affect identification, as shown in Fig. 5; existing texture object recognition methods easily produce missed and false identifications on such images. The texture image to be detected (Fig. 5) is preprocessed to obtain Fig. 6. A polar-coordinate transform and a polar-angle projection applied to Fig. 6 yield the histogram shown in Fig. 7, from which the feature vector V of the texture image to be detected is obtained. By computing the similarities between V and V_1, V_2, V_3 and comparing them with the similarity threshold, this texture image to be detected is judged to be consistent with the first standard texture image, as shown in Fig. 8. The applicant repeated this experiment many times, and the identification accuracy reached 100%.
It should be noted that, if there is concern about false identification in the target detection result, false targets can additionally be rejected with SIFT-based target matching. This operation is applied only to the small region of the already detected target symbol, so it is fast and can bring the false detection rate close to 0.
As shown in Fig. 9, the texture image recognition device according to the embodiment of the present invention may comprise: a preprocessing module 100, a projection histogram processing module 200, a feature vector acquisition module 300 and a similarity judgment module 400. The preprocessing module 100 preprocesses the texture image and extracts a character region. The projection histogram processing module 200 is connected to the preprocessing module 100, converts the character region to a new coordinate system, and computes a projection histogram. The feature vector acquisition module 300 is connected to the projection histogram processing module 200 and applies a Fourier transform to the projection histogram to obtain a texture image feature vector. The similarity judgment module 400 is connected to the feature vector acquisition module 300, computes the similarity between the feature vectors of the texture image to be identified and of the standard texture image, and judges from a similarity threshold whether the two are the same texture image.
The texture object recognition device according to the embodiment of the present invention has at least the following beneficial effects:
(1) Good recognition of dynamic objects and strong resistance to interference.
(2) A low missed-identification rate and an especially low false-identification rate.
(3) Simple computation and good real-time performance, satisfying real-time target detection tasks in dynamic scenes.
In one embodiment of the invention, as shown in Fig. 10, the preprocessing module 100 specifically comprises a deblurring and binarization module 110 and a morphological processing module 120. The deblurring and binarization module 110 deblurs the texture image and then binarizes it. The morphological processing module 120 is connected to the deblurring and binarization module 110, applies morphological processing to the binarized texture image, and extracts the character region.
In one embodiment of the invention, as shown in Fig. 11, the projection histogram processing module 200 specifically comprises a centroid computing module 210, a coordinate transformation module 220 and a projection statistics module 230. The centroid computing module 210 computes the centroid of the character region. The coordinate transformation module 220 is connected to the centroid computing module 210; taking the centroid as the origin, it transforms the character region from the planar Cartesian coordinate system to the polar coordinate system and obtains a polar-coordinate image. The projection statistics module 230 is connected to the coordinate transformation module 220 and accumulates projection histograms in the polar-coordinate image along both the radial and angular directions. Preferably, the projection statistics module 230 accumulates the projection histogram only along the angular direction.
In one embodiment of the invention, the feature vector acquisition module 300 applies a Fourier transform to the projection histogram and takes the magnitude spectrum to obtain a rotation-invariant feature vector, which serves as the feature vector of the image under detection.
In the description of the present invention, it is to be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial" and "circumferential", are based on the orientations or positional relationships shown in the drawings, are used only for the convenience of describing the present invention and simplifying the description, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; therefore they are not to be construed as limiting the present invention.
In addition, the terms "first" and "second" are used only for descriptive purposes and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus a feature defined with "first" or "second" may expressly or implicitly include one or more of that feature. In the description of the present invention, "a plurality of" means two or more, unless expressly and specifically defined otherwise.
In the present invention, unless expressly specified and defined otherwise, terms such as "mounted", "connected", "coupled" and "fixed" are to be understood broadly; for example, a connection may be a fixed connection, a detachable connection or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium; and it may be an internal communication of two elements or an interaction between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the particular circumstances.
In the present invention, unless expressly specified and defined otherwise, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium. Moreover, a first feature being "on", "over" or "above" a second feature may mean that the first feature is directly above or obliquely above the second feature, or merely that the first feature is at a higher level than the second feature; a first feature being "under", "below" or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or merely that the first feature is at a lower level than the second feature.
Any process or method described in the flowcharts or otherwise described herein may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions regarded as implementing the logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device).
For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transport a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be appreciated that portions of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, a plurality of steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented with any one or a combination of the following techniques known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the methods of the above embodiments may be completed by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one of, or a combination of, the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
In the description of this specification, a description referring to the terms "an embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples described in this specification.
Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (10)

1. A texture image recognition method, characterized by comprising:
S1. preprocessing the texture image and extracting a character region;
S2. converting the character region to a new coordinate system and computing a projection histogram;
S3. applying a Fourier transform to the projection histogram to obtain a texture image feature vector;
S4. performing steps S1 to S3 on both the texture image to be identified and a standard texture image, obtaining the feature vector of the texture image to be identified and the feature vector of the standard texture image, computing the similarity between the two, and judging from a similarity threshold whether the texture image to be identified is the standard texture image.
2. The texture image recognition method according to claim 1, characterized in that step S1 specifically comprises:
S11. deblurring the texture image and then binarizing it;
S12. applying morphological processing to the binarized texture image and extracting the character region.
3. The texture image recognition method according to claim 1 or 2, characterized in that step S2 specifically comprises:
S21. computing the centroid of the character region;
S22. taking the centroid as the origin and transforming the character region from a planar Cartesian coordinate system to a polar coordinate system to obtain a polar-coordinate image;
S23. accumulating projection histograms in the polar-coordinate image along both the radial and angular directions.
4. The texture image recognition method according to any one of claims 1-3, characterized in that in step S23 the projection histogram is accumulated only along the angular direction.
5. The texture image recognition method according to any one of claims 1-4, characterized in that step S3 specifically comprises:
applying a Fourier transform to the projection histogram and taking the magnitude spectrum to obtain a rotation-invariant feature vector, which serves as the feature vector of the image under detection.
6. A texture image recognition device, characterized by comprising:
a preprocessing module for preprocessing the texture image and extracting a character region;
a projection histogram processing module, connected to the preprocessing module, for converting the character region to a new coordinate system and computing a projection histogram;
a feature vector acquisition module, connected to the projection histogram processing module, for applying a Fourier transform to the projection histogram to obtain a texture image feature vector;
a similarity judgment module, connected to the feature vector acquisition module, for computing the similarity between the feature vectors of the texture image to be identified and of the standard texture image and judging from a similarity threshold whether the two are the same texture image.
7. The texture image recognition device according to claim 6, characterized in that the preprocessing module specifically comprises:
a deblurring and binarization module for deblurring the texture image and then binarizing it;
a morphological processing module, connected to the deblurring and binarization module, for applying morphological processing to the binarized texture image and extracting the character region.
8. The texture image recognition device according to claim 6 or 7, characterized in that the projection histogram processing module specifically comprises:
a centroid computing module for computing the centroid of the character region;
a coordinate transformation module, connected to the centroid computing module, for taking the centroid as the origin and transforming the character region from a planar Cartesian coordinate system to a polar coordinate system to obtain a polar-coordinate image;
a projection statistics module, connected to the coordinate transformation module, for accumulating projection histograms in the polar-coordinate image along both the radial and angular directions.
9. The texture image recognition device according to any one of claims 6-8, characterized in that the projection statistics module accumulates the projection histogram only along the angular direction.
10. The texture image recognition device according to any one of claims 6-9, characterized in that the feature vector acquisition module applies a Fourier transform to the projection histogram and takes the magnitude spectrum to obtain a rotation-invariant feature vector, which serves as the feature vector of the image under detection.
CN201310714283.1A 2013-12-20 2013-12-20 Texture image identification method and texture image identification device Pending CN103745236A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310714283.1A CN103745236A (en) 2013-12-20 2013-12-20 Texture image identification method and texture image identification device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310714283.1A CN103745236A (en) 2013-12-20 2013-12-20 Texture image identification method and texture image identification device

Publications (1)

Publication Number Publication Date
CN103745236A true CN103745236A (en) 2014-04-23

Family

ID=50502253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310714283.1A Pending CN103745236A (en) 2013-12-20 2013-12-20 Texture image identification method and texture image identification device

Country Status (1)

Country Link
CN (1) CN103745236A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850851A (en) * 2015-04-22 2015-08-19 福州大学 ORB feature point matching method with scale invariance
CN105046191A (en) * 2015-05-13 2015-11-11 信阳师范学院 Texture image identifying method
CN109272006A (en) * 2017-07-18 2019-01-25 北京柯斯元科技有限公司 Anti-counterfeit sign with random texture pattern decision-making system and determination method
CN110880010A (en) * 2019-07-05 2020-03-13 电子科技大学 Visual SLAM closed loop detection algorithm based on convolutional neural network
CN110412547A (en) * 2019-07-24 2019-11-05 中国电子科技集团公司第三十六研究所 The echo signal identifying system of equipment and ground installation is carried based on rotor wing unmanned aerial vehicle
CN110412547B (en) * 2019-07-24 2021-02-26 中国电子科技集团公司第三十六研究所 Target signal identification system based on rotor unmanned aerial vehicle carries equipment and ground equipment
CN112767350A (en) * 2021-01-19 2021-05-07 深圳麦科田生物医疗技术股份有限公司 Method, device, equipment and storage medium for predicting maximum interval of thromboelastogram
CN112767350B (en) * 2021-01-19 2024-04-26 深圳麦科田生物医疗技术股份有限公司 Method, device, equipment and storage medium for predicting maximum section of thromboelastography


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140423

RJ01 Rejection of invention patent application after publication