US20190384999A1 - System and method for searching an image within another image - Google Patents
System and method for searching an image within another image
- Publication number
- US20190384999A1 (U.S. application Ser. No. 16/011,609)
- Authority
- US
- United States
- Prior art keywords
- image
- template
- target
- images
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06K9/3258
- G06V20/63—Scene text, e.g. street names
- G06T5/002
- G06T5/70—Denoising; Smoothing
- G06T7/12—Edge-based segmentation
- G06T7/13—Edge detection
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/52—Scale-space analysis, e.g. wavelet analysis
- G06V30/248—Character recognition involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
- G06T2207/10024—Color image
- G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
- G06V10/473—Contour-based spatial representations, e.g. vector-coding, using gradient analysis
- G06V2201/09—Recognition of logos
Definitions
- the present disclosure is generally related to image searching, and more particularly related to a method for searching an image within another image.
- Template matching is a technique to recognize content in an image.
- the template matching techniques include feature-point-based template matching, which extracts features from an input image and a model image. The features are matched between the model image and the input image with a K-nearest-neighbor search. Thereafter, a homography transformation is estimated from the matched features, which may be further refined.
- however, the feature-point-based template matching technique works well only when the images contain a sufficient number of distinctive feature points. When they do not, it fails to produce a valid homography, resulting in ambiguous matches.
- the template matching techniques also include a technique that searches for an input image by sliding a window of a model image in a pixel-by-pixel manner and then computing a degree of similarity between the input image and the model image, where the similarity is given by correlation or normalized cross-correlation.
- however, pixel-by-pixel template matching is time-consuming and computationally expensive.
- searching for the input image at an arbitrary orientation in the model image makes the template matching technique far more computationally expensive.
- a method for searching an image within another image includes producing a plurality of template edge images, having one or more image scales, based on determination of edge gradients of a template image in one or more directions.
- the template image indicates an image to be searched.
- the method further includes producing a plurality of target edge images, having one or more image scales, based on determination of edge gradients of a target image in the one or more directions.
- the target image indicates another image within which the image needs to be searched.
- the method includes producing images comprising correlation coefficient values for each of the one or more directions by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images.
- the method further includes identifying at least one local peak from each of the images comprising the correlation coefficient values. Further, the method includes determining spatial locations along with the correlation coefficients corresponding to the at least one local peak. Thereafter, the method includes identifying presence of the template image in the target image based upon an intersection of the spatial locations.
- a system for searching an image within another image includes a processor and a memory.
- the processor is configured to produce a plurality of template edge images, having one or more image scales, based on determination of edge gradients of a template image in one or more directions.
- the template image indicates an image to be searched.
- the processor is further configured to produce a plurality of target edge images, having one or more image scales, based on determination of edge gradients of a target image in the one or more directions.
- the target image indicates another image within which the image needs to be searched.
- the processor is configured to produce images comprising correlation coefficient values for each of the one or more directions by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images.
- the processor is configured to identify at least one local peak from each of the images comprising the correlation coefficient values. Further, the processor is configured to determine spatial locations along with the correlation coefficients corresponding to the at least one local peak. Thereafter, the processor is configured to identify presence of the template image in the target image based upon an intersection of the spatial locations.
- a non-transient computer-readable medium comprising instructions for causing a programmable processor to search an image within another image by producing a plurality of template edge images, having one or more image scales, based on determination of edge gradients of a template image in one or more directions.
- the template image indicates an image to be searched.
- a plurality of target edge images, having one or more image scales, are produced based on determination of edge gradients of a target image in the one or more directions.
- the target image indicates another image within which the image needs to be searched.
- images comprising correlation coefficient values are produced for each of the one or more directions by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images.
- At least one local peak is identified from each of the images comprising the correlation coefficient values. Further, spatial locations along with the correlation coefficients corresponding to the at least one local peak are determined. Thereafter, a presence of the template image in the target image is identified based upon an intersection of the spatial locations.
- FIG. 1 illustrates a network connection diagram 100 of a system 102 for searching an image within another image, according to embodiments of the present disclosure.
- FIG. 2 illustrates a flowchart 200 showing a method for identifying presence of a template image in a target image, according to embodiments of the present disclosure.
- FIG. 3A illustrates an example of a template image 302a that is to be searched, according to embodiments of the present disclosure.
- FIG. 3B illustrates an example of a target image 302b within which the template image 302a needs to be searched, according to embodiments of the present disclosure.
- FIG. 3C illustrates an example of a template image having backgrounds removed from the template image 302a illustrated in FIG. 3A, according to embodiments of the present disclosure.
- FIG. 3D illustrates an example of a plurality of template edge images 302d produced in one or more directions, according to embodiments of the present disclosure.
- FIG. 3E illustrates an example of template edge images 302e produced for template images present at different scales, according to embodiments of the present disclosure.
- FIG. 3F illustrates an example of determining at least one local peak 304f from an image 302f comprising correlation coefficient values, according to embodiments of the present disclosure.
- FIG. 4 illustrates a flowchart 400 showing a method for searching an image within another image, according to embodiments of the present disclosure.
- FIG. 1 illustrates a network connection diagram 100 of a system 102 for searching an image within another image, in accordance with an embodiment of the present disclosure.
- the network connection diagram 100 further illustrates a communication network 104 connected to the system 102 and a computing device 106 .
- the communication network 104 may be implemented using at least one communication technique selected from Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), Long term evolution (LTE), Wireless local area network (WLAN), Infrared (IR) communication, Public Switched Telephone Network (PSTN), Radio waves, and any other wired and/or wireless communication technique known in the art.
- the computing device 106 may be used by a user to provide a template image and a target image to the system 102 .
- the template image may indicate an image to be searched.
- the target image may indicate another image within which the image needs to be searched.
- the template image and the target image may be present at one or more image scales.
- the computing device 106 may include suitable hardware that may be capable of reading the one or more storage mediums (e.g., CD, DVD, or Hard Disk). Such storage mediums may include the template image and the target image.
- the computing device 106 may be realized through a variety of computing devices, such as a desktop, a computer server, a laptop, a personal digital assistant (PDA), or a tablet computer.
- the system 102 may further comprise interface(s) 108 , a processor 110 , and a memory 112 .
- the interface(s) 108 may be used to interact with or program the system 102 to search an image within another image.
- the interface(s) 108 may either be a Command Line Interface (CLI) or a Graphical User Interface (GUI).
- the processor 110 may execute computer program instructions stored in the memory 112 .
- the processor 110 may also be configured to decode and execute any instructions received from one or more other electronic devices or one or more remote servers.
- the processor 110 may also be configured to process an image received from the computing device 106 .
- the processor 110 may include one or more general purpose processors (e.g., INTEL microprocessors) and/or one or more special purpose processors (e.g., digital signal processors or Xilinx System On Chip (SOC) Field Programmable Gate Array (FPGA) processor).
- the processor 110 may be configured to execute one or more computer-readable program instructions, such as program instructions to carry out any of the functions described in this description.
- the memory 112 may include a computer readable medium.
- a computer readable medium may include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which may be integrated in whole or in part with a processor, such as the processor 110. Alternatively, the entire computer readable medium may be present remotely from the processor 110 and coupled to the processor 110 by a connection mechanism and/or a network cable. In addition to the memory 112, there may be additional memories that may be coupled with the processor 110.
- the system 102 may receive a template image and a target image from a user via the computing device 106 .
- the system 102 may retrieve the template image from a video stream.
- the template image may indicate an image to be searched.
- the template image may be transparent.
- the template image may be a logo or a text.
- a template image 302a is illustrated in FIG. 3A.
- the target image may indicate another image within which the image needs to be searched.
- the target image may be a video sequence.
- a target image 302b is illustrated in FIG. 3B.
- the template image and the target image may be processed, at step 202 .
- the processing may include removing backgrounds from the template image. In some cases, colored backgrounds may be removed from the template image. For example, as shown in FIG. 3C, the background is removed from the template image 302a illustrated in FIG. 3A. Further, the processing may include eliminating noise from the template image and the target image using low-pass filters. The processing may further include scaling the template image and the target image at different scales to create scaled template images and scaled target images of multiple resolutions.
- Scaling of the images could refer to upscaling or downscaling of the image, and could be performed using any sampling algorithm, such as nearest-neighbour interpolation, edge-directed interpolation, bilinear and bicubic sampling, sinc and Lanczos sampling, box sampling, mipmap sampling, Fourier transformation, vectorization, or deep convolutional neural networks. It should be noted that the aspect ratio may remain the same during the scaling of the template image and the target image.
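The scaling step above can be sketched in Python with nearest-neighbour interpolation, one of the listed sampling algorithms. The helper names (`scale_image`, `build_pyramid`) and the example scale factors are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def scale_image(img, factor):
    """Nearest-neighbour rescale of a 2-D grayscale image, applying
    the same factor to both axes so the aspect ratio is preserved."""
    h, w = img.shape
    new_h = max(1, int(round(h * factor)))
    new_w = max(1, int(round(w * factor)))
    rows = np.minimum((np.arange(new_h) / factor).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / factor).astype(int), w - 1)
    return img[np.ix_(rows, cols)]

def build_pyramid(img, factors=(1.0, 0.5, 0.25)):
    """Scaled copies of an image at several resolutions."""
    return {f: scale_image(img, f) for f in factors}

template = np.arange(64, dtype=float).reshape(8, 8)
pyramid = build_pyramid(template)
print({f: im.shape for f, im in pyramid.items()})
# {1.0: (8, 8), 0.5: (4, 4), 0.25: (2, 2)}
```

Any of the other sampling algorithms named above could be substituted without changing the rest of the pipeline, since each downstream step operates on the scaled images independently.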
- a plurality of template edge images may be produced, at step 204 .
- the plurality of template edge images, having one or more image scales, may be produced based on determination of edge gradients of the template image in the one or more directions.
- the edge gradients of the template image may be determined using gradient operators applied in the one or more directions.
- a gradient operator (g) applied on the template image, i.e., a two-dimensional function f(x, y), could be represented by the equation g(f) = ∇f = (∂f/∂x, ∂f/∂y), whose components give the rates of change of intensity along the respective directions.
- edge gradients of the template image could be determined in the one or more directions.
- absolute values of the determined edge gradients for each of the template images may be stored as 2D images in the memory 112 .
- template edge images 302d are produced in directions such as 0, 45, 90, or 135 degrees.
- the template edge images may be represented as TemEdge0, TemEdge45, TemEdge90, and TemEdge135.
- FIG. 3E shows template edge images 302e produced for the scaled template images that are present at different scales.
- a plurality of target edge images may be produced, at step 206 .
- the plurality of target edge images having one or more image scales, may be produced based on determination of edge gradients of the target image in the one or more directions.
- the edge gradients of the target image may be determined using gradient operators applied in the one or more directions. It should be noted that absolute values of the determined edge gradients for each of the target images may be stored as 2D images in the memory 112 .
- target edge images are produced in the directions such as 0, 45, 90, or 135 degrees.
- the target edge images may be represented as TgtEdge0, TgtEdge45, TgtEdge90, and TgtEdge135.
- the target edge images may be scaled to create scaled target edge images in the one or more directions.
- images comprising correlation coefficient values for each of the one or more directions may be produced, at step 208 .
- the images comprising the correlation coefficient values may be produced by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images.
- the correlation may be indicative of a Normalized-Cross-Correlation (NCC).
- an image 302f containing correlation coefficient values is produced by computing correlation coefficients between the template edge images and the target edge images in the one or more directions such as 0, 45, 90, or 135 degrees.
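Step 208 can be sketched as a direct normalized cross-correlation. This brute-force loop is for illustration only (a practical implementation would use an FFT-based or library routine), and the name `ncc_image` is an assumption:

```python
import numpy as np

def ncc_image(target, template):
    """Image of correlation-coefficient values: the normalized
    cross-correlation of `template` at every valid position of
    `target` (a direct, unoptimised sketch of step 208)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    out = np.zeros((target.shape[0] - th + 1, target.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = target[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = t_norm * np.sqrt((wz * wz).sum())
            if denom > 0:
                out[i, j] = (wz * t).sum() / denom
    return out

# The template pasted at row 3, column 4 of an otherwise empty target
# produces a correlation coefficient of 1.0 at exactly that location.
template = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.zeros((8, 9))
target[3:5, 4:6] = template
corr = ncc_image(target, template)
peak = np.unravel_index(corr.argmax(), corr.shape)
print(tuple(map(int, peak)), round(float(corr.max()), 6))  # (3, 4) 1.0
```

In the disclosed method this computation is applied per direction (between TemEdge and TgtEdge images) and per scale, yielding one coefficient image for each direction.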
- at least one local peak may be identified from each of the images comprising the correlation coefficient values, at step 210 .
- a point (x, y) may be defined as a local peak for a 2D function f: R^2 → R if f(x, y) > f(u, v) for every other point (u, v) within a circle of radius r centered at (x, y).
- The circle centered at (x, y) can be considered as the area under consideration.
- at least one local peak 304f may be identified from the image 302f comprising the correlation coefficient values, as illustrated in FIG. 3F.
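The local-peak definition above can be sketched directly: a point is kept when its coefficient strictly exceeds every other value inside a window of the given radius, and the highest-ranked peaks are returned. The function name, the default radius, and the top-five default are illustrative assumptions:

```python
import numpy as np

def local_peaks(corr, radius=2, top_k=5):
    """(row, col, value) triples for points of `corr` whose value
    strictly exceeds every other value within `radius`, ranked by
    value and truncated to the top_k peaks (steps 210 and 212)."""
    h, w = corr.shape
    peaks = []
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            nb = corr[i0:i1, j0:j1]
            # strict maximum: the value occurs exactly once in the window
            if corr[i, j] == nb.max() and (nb == corr[i, j]).sum() == 1:
                peaks.append((i, j, float(corr[i, j])))
    peaks.sort(key=lambda p: -p[2])
    return peaks[:top_k]

corr = np.zeros((7, 7))
corr[2, 3] = 0.9
corr[5, 1] = 0.7
print(local_peaks(corr))  # [(2, 3, 0.9), (5, 1, 0.7)]
```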
- spatial locations along with the correlation coefficients corresponding to the at least one local peak may be determined, at step 212 .
- the spatial locations along with the correlation coefficients corresponding to the top five peaks having the highest values may be determined for each of the directions, such as 0, 45, 90, and 135 degrees.
- the spatial locations could be ranked based on the values of the corresponding correlation coefficients. It should be noted that the correlation coefficient values may range from −1 to 1. In an embodiment, the spatial locations along with the values of the correlation coefficients may be stored in the memory 112.
- an intersection of the spatial locations may be determined, at step 214 .
- the intersection of the spatial locations at different scales may be determined for each of the one or more directions. For example, when the directions are 0, 45, 90, and 135 degrees, the intersection S_Match(scale_n) is given as S_0 ∩ S_45 ∩ S_90 ∩ S_135, where S_0, S_45, S_90, and S_135 are the sets of spatial locations determined for the respective directions.
- a presence of the template image in the target image may be identified, at step 216. For example, if S_Match(scale_n) is found to be empty, then the template image is not present in the target image at scale_n. Otherwise, the template image is present in the target image, i.e., the template image is expected to match at one or more spatial locations as specified within S_Match(scale_n).
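Steps 214 and 216 reduce to a set intersection over the per-direction peak locations. The sketch below uses exact coordinate equality; a practical implementation might tolerate a few pixels of jitter between directions, and the names used here are assumptions:

```python
def s_match(locations_by_direction):
    """Intersection S_0 ∩ S_45 ∩ S_90 ∩ S_135 of the spatial locations
    determined per direction; an empty result means the template is
    not present in the target image at this scale."""
    sets = [set(locations) for locations in locations_by_direction.values()]
    return set.intersection(*sets)

peaks = {
    0:   [(3, 4), (7, 1)],
    45:  [(3, 4), (2, 9)],
    90:  [(3, 4)],
    135: [(3, 4), (6, 6)],
}
match = s_match(peaks)
print(match)        # {(3, 4)}
print(bool(match))  # True -> the template is present at this scale
```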
- the presence of the template image in the target image may be validated, at step 218 .
- the validation may be performed based on statistical inference of the correlation coefficients exceeding a predefined threshold.
- the statistical inference may correspond to at least one of a maximum, mean, and a median of the correlation coefficients.
- the validation may be performed based on color match between the template image and the target image at the spatial locations. It will be apparent to one skilled in the art that the above-mentioned validation techniques have been provided only for illustration purposes. In an embodiment, the validation of the presence of the template image in the target image may be performed by some other technique as well, without departing from the scope of the disclosure.
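The statistical validation of step 218 can be sketched as follows; the threshold value and the function name `validate_match` are assumptions, since the disclosure states only that a statistic of the coefficients must exceed a predefined threshold:

```python
import statistics

def validate_match(coefficients, threshold=0.8, stat="median"):
    """True when the chosen statistic (max, mean, or median) of the
    correlation coefficients at a candidate location exceeds the
    predefined threshold."""
    value = {
        "max": max,
        "mean": statistics.fmean,
        "median": statistics.median,
    }[stat](coefficients)
    return value > threshold

print(validate_match([0.91, 0.86, 0.95]))              # True
print(validate_match([0.91, 0.42, 0.35]))              # False
print(validate_match([0.50, 0.90, 0.95], stat="max"))  # True
```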
- FIG. 4 illustrates a flowchart 400 of a method for searching an image within another image, according to an embodiment.
- the flow chart of FIG. 4 shows the method steps executed according to one or more embodiments of the present disclosure.
- each block may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the drawings.
- two blocks shown in succession in FIG. 4 may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- any process descriptions or blocks in flow charts should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the example embodiments in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
- the process descriptions or blocks in flow charts should be understood as representing decisions made by a hardware structure such as a state machine.
- the flowchart 400 starts at step 402 and proceeds to step 412.
- a plurality of template edge images may be produced based on determination of edge gradients of a template image in one or more directions.
- the template edge images may be present at one or more image scales.
- the template image may indicate an image to be searched.
- the plurality of template edge images may be produced by the processor 110.
- a plurality of target edge images may be produced based on determination of edge gradients of a target image in the one or more directions.
- the target edge images may have one or more image scales.
- the target image may indicate another image within which the image needs to be searched.
- the plurality of target edge images may be produced by the processor 110 .
- images comprising correlation coefficient values for each of the one or more directions may be produced.
- the images comprising the correlation coefficient values may be produced by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images.
- the images comprising the correlation coefficient values may be produced by the processor 110 .
- at step 408, at least one local peak may be identified from each of the images comprising the correlation coefficient values.
- the at least one local peak may be identified by the processor 110 .
- spatial locations along with the correlation coefficients corresponding to the at least one local peak may be determined.
- the spatial locations along with the correlation coefficients may be determined by the processor 110 .
- a presence of the template image in the target image may be identified based upon an intersection of the spatial locations.
- the presence of the template image in the target image may be identified by the processor 110 .
- the disclosed embodiments encompass numerous advantages.
- Various embodiments of a method for searching an image within another image may be disclosed.
- the method may include processing a template image by removing backgrounds near boundaries in order to get a maximum bounding box containing a structure of the template image.
- the method may include producing a plurality of template edge images and a plurality of target edge images based on determination of edge gradients of the template image and a target image in one or more directions respectively.
- images comprising correlation coefficient values may be produced for each of the one or more directions by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images.
- spatial locations along with the correlation coefficients corresponding to at least one local peak may be determined, where the at least one local peak may be identified from each of the images comprising the correlation coefficient values. Thereafter, based upon an intersection of the spatial locations, a presence of the template image in the target image may be identified.
- the logic of the example embodiment(s) can be implemented in hardware, software, firmware, or a combination thereof.
- the logic is implemented in software or firmware that is stored in a memory and that is executed by a suitable instruction execution system. If implemented in hardware, as in an alternative embodiment, the logic can be implemented with any or a combination of the following technologies, which are all well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
- the scope of the present disclosure includes embodying the functionality of the example embodiments disclosed herein in logic embodied in hardware or software-configured mediums.
- Embodiments of the present disclosure may be provided as a computer program product, which may include a computer-readable medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process.
- the computer-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).
- embodiments of the present disclosure may also be downloaded as one or more computer program products, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Description
- The subject matter disclosed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also correspond to implementations of the claimed technology.
- Therefore, there may be a need for an improved system and method for template matching in an image or a video that is cost-effective, robust, and efficient, and that reduces computation time.
- In one aspect of the present disclosure, a method for searching an image within another image is provided. The method includes producing a plurality of template edge images, having one or more image scales, based on determination of edge gradients of a template image in one or more directions. The template image indicates an image to be searched. The method further includes producing a plurality of target edge images, having one or more image scales, based on determination of edge gradients of a target image in the one or more directions. The target image indicates another image within which the image needs to be searched. Further, the method includes producing images comprising correlation coefficient values for each of the one or more directions by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images. The method further includes identifying at least one local peak from each of the images comprising the correlation coefficient values. Further, the method includes determining spatial locations along with the correlation coefficients corresponding to the at least one local peak. Thereafter, the method includes identifying presence of the template image in the target image based upon an intersection of the spatial locations.
- In another aspect of the present disclosure, a system for searching an image within another image is provided. The system includes a processor and a memory. The processor is configured to produce a plurality of template edge images, having one or more image scales, based on determination of edge gradients of a template image in one or more directions. The template image indicates an image to be searched. The processor is further configured to produce a plurality of target edge images, having one or more image scales, based on determination of edge gradients of a target image in the one or more directions. The target image indicates another image within which the image needs to be searched. Further, the processor is configured to produce images comprising correlation coefficient values for each of the one or more directions by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images. Further, the processor is configured to identify at least one local peak from each of the images comprising the correlation coefficient values. Further, the processor is configured to determine spatial locations along with the correlation coefficients corresponding to the at least one local peak. Thereafter, the processor is configured to identify presence of the template image in the target image based upon an intersection of the spatial locations.
- In one aspect of the present disclosure, a non-transient computer-readable medium is provided, comprising instructions for causing a programmable processor to search an image within another image by producing a plurality of template edge images, having one or more image scales, based on determination of edge gradients of a template image in one or more directions. The template image indicates an image to be searched. A plurality of target edge images, having one or more image scales, are produced based on determination of edge gradients of a target image in the one or more directions. The target image indicates another image within which the image needs to be searched. Further, images comprising correlation coefficient values are produced for each of the one or more directions by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images. At least one local peak is identified from each of the images comprising the correlation coefficient values. Further, spatial locations along with the correlation coefficients corresponding to the at least one local peak are determined. Thereafter, a presence of the template image in the target image is identified based upon an intersection of the spatial locations.
- Other features and aspects of this disclosure will be apparent from the following description and the accompanying drawings.
- The accompanying drawings illustrate various embodiments of systems, methods, and various other aspects of the disclosure. Any person with ordinary skill in the art will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. It may be that in some examples one element may be designed as multiple elements or that multiple elements may be designed as one element. In some examples, an element shown as an internal component of one element may be implemented as an external component in another, and vice versa. Furthermore, elements may not be drawn to scale. Non-limiting and non-exhaustive embodiments are described with reference to the following drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating principles.
-
FIG. 1 illustrates a network connection diagram 100 of a system 102 for searching an image within another image, according to embodiments of the present disclosure; -
FIG. 2 illustrates a flowchart 200 showing a method for identifying the presence of a template image in a target image, according to embodiments of the present disclosure; -
FIG. 3A illustrates an example of a template image 302a that is to be searched, according to embodiments of the present disclosure; -
FIG. 3B illustrates an example of a target image 302b within which the template image 302a needs to be searched, according to embodiments of the present disclosure; -
FIG. 3C illustrates an example of a template image having backgrounds removed from the template image 302a illustrated in FIG. 3A, according to embodiments of the present disclosure; -
FIG. 3D illustrates an example of a plurality of template edge images 302d produced in one or more directions, according to embodiments of the present disclosure; -
FIG. 3E illustrates an example of template edge images 302e produced for template images present at different scales, according to embodiments of the present disclosure; -
FIG. 3F illustrates an example of determining at least one local peak 304f from an image 302f comprising correlation coefficient values, according to embodiments of the present disclosure; and -
FIG. 4 illustrates a flowchart 400 showing a method for searching an image within another image, according to embodiments of the present disclosure. - Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words “comprising,” “having,” “containing,” and “including,” and other forms thereof, are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
- It must also be noted that, as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the preferred systems and methods are now described.
- Embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings in which like numerals represent like elements throughout the several figures, and in which example embodiments are shown. Embodiments of the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The examples set forth herein are non-limiting examples and are merely examples among other possible examples.
- It is an object of the current disclosure to provide a system and a method for searching an image within another image.
FIG. 1 illustrates a network connection diagram 100 of a system 102 for searching an image within another image, in accordance with an embodiment of the present disclosure. The network connection diagram 100 further illustrates a communication network 104 connected to the system 102 and a computing device 106. - The
communication network 104 may be implemented using at least one communication technique selected from Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE), Wireless Local Area Network (WLAN), Infrared (IR) communication, Public Switched Telephone Network (PSTN), radio waves, and any other wired and/or wireless communication technique known in the art. - The
computing device 106 may be used by a user to provide a template image and a target image to the system 102. The template image may indicate an image to be searched. The target image may indicate another image within which the image needs to be searched. The template image and the target image may be present at one or more image scales. In an embodiment, the computing device 106 may include suitable hardware capable of reading one or more storage mediums (e.g., a CD, DVD, or hard disk). Such storage mediums may include the template image and the target image. The computing device 106 may be realized through a variety of computing devices, such as a desktop, a computer server, a laptop, a personal digital assistant (PDA), or a tablet computer. - The
system 102 may further comprise interface(s) 108, a processor 110, and a memory 112. The interface(s) 108 may be used to interact with or program the system 102 to search an image within another image. The interface(s) 108 may either be a Command Line Interface (CLI) or a Graphical User Interface (GUI). - The
processor 110 may execute computer program instructions stored in the memory 112. The processor 110 may also be configured to decode and execute any instructions received from one or more other electronic devices or one or more remote servers. In an embodiment, the processor 110 may also be configured to process an image received from the computing device 106. The processor 110 may include one or more general-purpose processors (e.g., INTEL microprocessors) and/or one or more special-purpose processors (e.g., digital signal processors or a Xilinx System on Chip (SoC) Field Programmable Gate Array (FPGA) processor). The processor 110 may be configured to execute one or more computer-readable program instructions, such as program instructions to carry out any of the functions described in this description. - The
memory 112 may include a computer-readable medium. A computer-readable medium may include volatile and/or non-volatile storage components, such as optical, magnetic, organic, or other memory or disc storage, which may be integrated in whole or in part with a processor, such as the processor 110. Alternatively, the entire computer-readable medium may be present remotely from the processor 110 and coupled to the processor 110 by a connection mechanism and/or a network cable. In addition to the memory 112, there may be additional memories that may be coupled with the processor 110. - The method for searching an image within another image may now be explained with reference to
FIG. 1 and FIG. 2. One skilled in the art will appreciate that, for this and other processes and methods disclosed herein, the functions performed in the processes and methods may be implemented in differing order. Furthermore, the outlined steps and operations are only provided as examples, and some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the essence of the disclosed embodiments. - At first, the
system 102 may receive a template image and a target image from a user via the computing device 106. In another embodiment, the system 102 may retrieve the template image from a video stream. The template image may indicate an image to be searched. In one case, the template image may be transparent. In another case, the template image may be a logo or text. For example, a template image 302a is illustrated in FIG. 3A. On the other hand, the target image may indicate another image within which the image needs to be searched. In one case, the target image may be a video sequence. For example, a target image 302b is illustrated in FIG. 3B. - Successively, the template image and the target image may be processed, at
step 202. The processing may include removing backgrounds from the template image. In a case, colored backgrounds may be removed from the template image. For example, as shown in FIG. 3C, the background is removed from the template image 302a illustrated in FIG. 3A. Further, the processing may include eliminating noise from the template image and the target image using low-pass filters. The processing may further include scaling the template image and the target image at different scales to create scaled template images and scaled target images of multiple resolutions. Scaling of the images could refer to upscaling or downscaling of the image, and could be performed using any sampling algorithm, such as nearest-neighbour interpolation, edge-directed interpolation, bilinear and bicubic sampling, sinc and Lanczos sampling, box sampling, mipmap sampling, Fourier transformation, vectorization, and deep convolutional neural networks. It should be noted that the aspect ratio may remain the same during the scaling of the template image and the target image. - Successively, a plurality of template edge images may be produced, at
step 204. The plurality of template edge images, having one or more image scales, may be produced based on determination of edge gradients of the template image in one or more directions. The edge gradients of the template image may be determined using gradient operators applied in the one or more directions. - A gradient operator (g) applied to the template image, i.e., a two-dimensional function f(x, y), could be represented using the below-mentioned equation.
- g(x, y) = ∇f(x, y) = (∂f/∂x, ∂f/∂y)
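- As a concrete, purely illustrative reading of the directional gradient operators, the edge image in each of the directions 0, 45, 90, and 135 degrees can be sketched with simple finite differences; the offset kernels below are an assumption of ours, since the description does not fix a particular operator, and the absolute values are kept to match the note that absolute gradient values are stored as 2D images:

```python
import numpy as np

# Finite-difference offsets (row, col) for the directions 0, 45, 90,
# and 135 degrees. These kernels are illustrative assumptions; the
# description only calls for "gradient operators applied in the one
# or more directions".
_OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def edge_image(img, direction):
    """Absolute edge-gradient image of a 2D array in one direction."""
    dy, dx = _OFFSETS[direction]
    shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    return np.abs(shifted - img)

def edge_images(img, directions=(0, 45, 90, 135)):
    """One edge image per direction, e.g. TemEdge0 through TemEdge135."""
    return {d: edge_image(img, d) for d in directions}
```

For a vertical step edge, the 0-degree image responds strongly while the 90-degree image stays flat, which is what lets the later per-direction intersection discriminate structures.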
- Similarly, edge gradients of the template image could be determined in the one or more directions. It should be noted that absolute values of the determined edge gradients for each of the template images may be stored as 2D images in the
memory 112. For example, as shown in FIG. 3D, template edge images 302d are produced in directions such as 0, 45, 90, or 135 degrees. The template edge images may be represented as TemEdge0, TemEdge45, TemEdge90, and TemEdge135. Further, FIG. 3E shows template edge images 302e produced for the scaled template images that are present at different scales. - Successively, a plurality of target edge images may be produced, at
step 206. The plurality of target edge images, having one or more image scales, may be produced based on determination of edge gradients of the target image in the one or more directions. The edge gradients of the target image may be determined using gradient operators applied in the one or more directions. It should be noted that absolute values of the determined edge gradients for each of the target images may be stored as 2D images in the memory 112. For example, target edge images are produced in directions such as 0, 45, 90, or 135 degrees. The target edge images may be represented as TgtEdge0, TgtEdge45, TgtEdge90, and TgtEdge135. In an embodiment, the target edge images may be scaled to create scaled target edge images in the one or more directions. - Successively, images comprising correlation coefficient values for each of the one or more directions may be produced, at
step 208. The images comprising the correlation coefficient values may be produced by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images. In one case, the correlation may be indicative of a Normalized Cross-Correlation (NCC). For example, as shown in FIG. 3F, an image 302f containing correlation coefficient values is produced by computing correlation coefficients between the template edge images and the target edge images in the one or more directions such as 0, 45, 90, or 135 degrees. Successively, at least one local peak may be identified from each of the images comprising the correlation coefficient values, at step 210. - In one embodiment, a point (x, y) may be defined as a local peak for a 2D function f: R² → R if f(x, y) > f(u, v) ∀ (u, v) ∈ {(a, b) | (a − x)² + (b − y)² < R²} − {(x, y)}. In the above-mentioned definition, R may correspond to the radius of a circle centered at (x, y). The circle centered at (x, y) can be considered the area under consideration. Utilizing the above-described relation, at least one
local peak 304f may be identified from the image 302f comprising the correlation coefficient values, as illustrated in FIG. 3F. Thereafter, spatial locations along with the correlation coefficients corresponding to the at least one local peak may be determined, at step 212. In an example, the spatial locations along with the correlation coefficients corresponding to the top five peaks having the highest values may be determined for each of the directions such as 0, 45, 90, and 135 degrees. The spatial locations may be represented as S0 = {(x₁⁰, y₁⁰), (x₂⁰, y₂⁰), . . . , (x₅⁰, y₅⁰)}, S45 = {(x₁⁴⁵, y₁⁴⁵), (x₂⁴⁵, y₂⁴⁵), . . . , (x₅⁴⁵, y₅⁴⁵)}, and so on, where the subscript denotes the peak rank and the superscript denotes the direction. In one case, the spatial locations could be ranked based on the values of the corresponding correlation coefficients. It should be noted that the correlation coefficient values may range from −1 to 1. In an embodiment, the spatial locations along with the values of the correlation coefficients may be stored in the memory 112. - Successively, an intersection of the spatial locations may be determined, at
step 214. In one case, the intersection of the spatial locations across the one or more directions may be determined at each of the different scales. For example, when the directions are 0, 45, 90, and 135 degrees, the intersection S_Match(scale_n) is given as S0 ∩ S45 ∩ S90 ∩ S135. Thereafter, based upon the intersection of the spatial locations, a presence of the template image in the target image may be identified, at step 216. For example, if S_Match(scale_n) is found to be empty, then the template image is not present in the target image at scale_n. Otherwise, the template image is present in the target image, i.e., the template image is expected to match at one or more spatial locations as specified within S_Match(scale_n). - Successively, the presence of the template image in the target image may be validated, at
step 218. In one case, the validation may be performed based on statistical inference of the correlation coefficients exceeding a predefined threshold. The statistical inference may correspond to at least one of a maximum, a mean, and a median of the correlation coefficients. In another embodiment, the validation may be performed based on a color match between the template image and the target image at the spatial locations. It will be apparent to one skilled in the art that the above-mentioned validation techniques have been provided only for illustration purposes. In an embodiment, the validation of the presence of the template image in the target image may be performed by some other technique as well, without departing from the scope of the disclosure.
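- As one possible reading of the statistical check at step 218, a candidate match could be validated from the correlation coefficients gathered at its spatial location; the threshold of 0.8 and the default choice of the median below are illustrative assumptions, not values given in the description:

```python
import statistics

def validate_match(coeffs, threshold=0.8, stat="median"):
    """Validate a candidate match from the correlation coefficients
    gathered at its spatial location (e.g. across directions and
    scales). The statistic may be the maximum, mean, or median, as
    mentioned for step 218; the threshold is an assumed value."""
    value = {
        "max": max(coeffs),
        "mean": statistics.mean(coeffs),
        "median": statistics.median(coeffs),
    }[stat]
    return value > threshold
```

Using the median rather than the maximum makes the check robust to a single spuriously high coefficient in one direction.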
-
FIG. 4 illustrates a flowchart 400 of a method for searching an image within another image, according to an embodiment. The flowchart of FIG. 4 shows the method steps executed according to one or more embodiments of the present disclosure. In this regard, each block may represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings; for example, two blocks shown in succession in FIG. 4 may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Alternate implementations in which functions are executed out of the order shown or discussed are included within the scope of the example embodiments. In addition, the process descriptions or blocks in flow charts may be understood as representing decisions made by a hardware structure such as a state machine. The flowchart 400 starts at step 402 and proceeds to step 412. - At
step 402, a plurality of template edge images may be produced based on determination of edge gradients of a template image in one or more directions. The template edge images may be present at one or more image scales. The template image may indicate an image to be searched. In one embodiment, the plurality of template edge images may be produced by the processor 110. - At
step 404, a plurality of target edge images may be produced based on determination of edge gradients of a target image in the one or more directions. The target edge images may have one or more image scales. The target image may indicate another image within which the image needs to be searched. In one embodiment, the plurality of target edge images may be produced by the processor 110. - At
step 406, images comprising correlation coefficient values for each of the one or more directions may be produced. The images comprising the correlation coefficient values may be produced by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images. In one embodiment, the images comprising the correlation coefficient values may be produced by the processor 110. - At
step 408, at least one local peak may be identified from each of the images comprising the correlation coefficient values. In one embodiment, the at least one local peak may be identified by the processor 110. - At
step 410, spatial locations along with the correlation coefficients corresponding to the at least one local peak may be determined. In one embodiment, the spatial locations along with the correlation coefficients may be determined by the processor 110. - At
step 412, a presence of the template image in the target image may be identified based upon an intersection of the spatial locations. In one embodiment, the presence of the template image in the target image may be identified by the processor 110. - The disclosed embodiments encompass numerous advantages. Various embodiments of a method for searching an image within another image may be disclosed. The method may include processing a template image by removing backgrounds near boundaries in order to get a maximum bounding box containing a structure of the template image. Further, the method may include producing a plurality of template edge images and a plurality of target edge images based on determination of edge gradients of the template image and a target image in one or more directions respectively. Further, images comprising correlation coefficient values may be produced for each of the one or more directions by computing correlation coefficients between the plurality of template edge images and the plurality of target edge images. Further, spatial locations along with the correlation coefficients corresponding to at least one local peak may be determined, where the at least one local peak may be identified from each of the images comprising the correlation coefficient values. Thereafter, based upon an intersection of the spatial locations, a presence of the template image in the target image may be identified.
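- For a single scale, the identification step reduces to a set intersection of the per-direction peak locations, S_Match = S0 ∩ S45 ∩ S90 ∩ S135. A sketch using exact coordinates follows (in practice a small spatial tolerance might be needed when peaks land on neighbouring pixels; the function name is ours):

```python
def match_locations(peaks_by_direction):
    """S_Match for one scale: the spatial locations that occur as local
    peaks in every direction. An empty result means the template image
    is taken to be absent from the target image at this scale."""
    sets = [set(locations) for locations in peaks_by_direction.values()]
    return set.intersection(*sets) if sets else set()
```

Requiring a location to peak in every direction is what filters out spurious single-direction responses before the validation step.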
- The logic of the example embodiment(s) can be implemented in hardware, software, firmware, or a combination thereof. In example embodiments, the logic is implemented in software or firmware that is stored in a memory and that is executed by a suitable instruction execution system. If implemented in hardware, as in an alternative embodiment, the logic can be implemented with any or a combination of the following technologies, which are all well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc. In addition, the scope of the present disclosure includes embodying the functionality of the example embodiments disclosed herein in logic embodied in hardware or software-configured mediums.
- Embodiments of the present disclosure may be provided as a computer program product, which may include a computer-readable medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The computer-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks; semiconductor memories, such as ROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), and flash memory; magnetic or optical cards; or other types of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware). Moreover, embodiments of the present disclosure may also be downloaded as one or more computer program products, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
- It will be appreciated that variants of the above disclosed, and other features and functions or alternatives thereof, may be combined into many other different systems or applications. Presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art that are also intended to be encompassed by the following claims.
Claims (13)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/011,609 US10664717B2 (en) | 2018-06-18 | 2018-06-18 | System and method for searching an image within another image |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190384999A1 true US20190384999A1 (en) | 2019-12-19 |
US10664717B2 US10664717B2 (en) | 2020-05-26 |
Family
ID=68840021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/011,609 Active 2038-11-09 US10664717B2 (en) | 2018-06-18 | 2018-06-18 | System and method for searching an image within another image |
Country Status (1)
Country | Link |
---|---|
US (1) | US10664717B2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113869441A (en) * | 2021-10-10 | 2021-12-31 | 青岛星科瑞升信息科技有限公司 | Multi-scale target positioning method based on template matching |
WO2022205614A1 (en) * | 2021-04-01 | 2022-10-06 | 广东拓斯达科技股份有限公司 | Template matching method and apparatus, computer device, and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7167583B1 (en) * | 2000-06-28 | 2007-01-23 | Landrex Technologies Co., Ltd. | Image processing system for use with inspection systems |
US6445832B1 (en) * | 2000-10-10 | 2002-09-03 | Lockheed Martin Corporation | Balanced template tracker for tracking an object image sequence |
US7630560B2 (en) * | 2002-04-10 | 2009-12-08 | National Instruments Corporation | Increasing accuracy of discrete curve transform estimates for curve matching in four or more dimensions |
US8162219B2 (en) * | 2008-01-09 | 2012-04-24 | Jadak Llc | System and method for logo identification and verification |
JP5271031B2 (en) * | 2008-08-09 | 2013-08-21 | 株式会社キーエンス | Image data compression method, pattern model positioning method in image processing, image processing apparatus, image processing program, and computer-readable recording medium |
JP2010097438A (en) * | 2008-10-16 | 2010-04-30 | Keyence Corp | Outline information extraction method using image processing, creation method for pattern model in image processing, positioning method for pattern model in image processing, image processor, image processing program and computer-readable recording medium |
Also Published As
Publication number | Publication date |
---|---|
US10664717B2 (en) | 2020-05-26 |