EP2467808A1 - Device and method for identification of objects using morphological coding - Google Patents
- Publication number
- EP2467808A1 (application EP10809632A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- value
- elements
- image
- repeating
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B1/00—Manually or mechanically operated educational appliances using elements forming, or bearing, symbols, signs, pictures, or the like which are arranged or adapted to be arranged in one or more particular ways
- G09B1/32—Manually or mechanically operated educational appliances using elements forming, or bearing, symbols, signs, pictures, or the like which are arranged or adapted to be arranged in one or more particular ways comprising elements to be used without a special support
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
- G06V10/225—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
Definitions
- This application relates to computer vision, and particularly to identifying an object in an image captured in uncontrolled lighting conditions, where the identification is based on repeating units of morphological code that appear on the object.
- Identification of objects in an image captured by a digital imaging device may rely on extracting features of the object and using pattern recognition of those features to identify the object.
- Pattern recognition and feature extraction may be subject to inaccuracies caused by, among other things, lighting conditions, changes in object orientation, relative distance of the object from the imager and occlusion.
- Typical identification processes may therefore entail rotation, translation and scale invariant features, and may call for complicated pattern recognition algorithms. Adding an object to the recognition task may therefore require adjusting the recognition algorithm to compensate for lighting conditions.
- An embodiment of the invention may include a method of identifying an object by capturing an image of the object, where in the image is detected a repeating reference form and a set of elements that are at a pre-defined distance from and orientation to the repeating reference form.
- A value may be derived from the set of elements, and that value may be compared to a stored value that is associated with the object.
- The derived and detected value of the elements in the image may confirm the object in the image as matching the object having the associated value stored in the memory.
- Instances of the repeating units may appear on a perimeter of an object or elsewhere, and the repetition of the units may compensate for occlusion of some of the units or a failure of the imager to clearly capture one or more of the units or elements.
- A derived value of an element may be determined on the basis of its length relative to the alignment bar, such as a length along a perimeter of the alignment bar, or by a perpendicular height of an element to the alignment bar, such as a distance of an end of the element from its base where it is connected to the alignment bar.
- An element representing a "0" may be an absence of an element along a spot on an inner edge of an alignment bar.
- A "1" may represent the presence of an element along a position on an inner edge of an alignment bar. Values may also be derived from other heights, thicknesses, shapes or sizes of elements relative to each other or to the alignment bar.
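The presence/absence encoding above can be sketched as a small decoder. This is a minimal sketch assuming a most-significant-bit-first reading order; the function name and bit order are illustrative, not taken from the patent:

```python
def decode_elements(presence_flags):
    """Derive a value from a unit's element positions.

    Each flag is True when an element is present at that spot on the
    inner edge of the alignment bar (a "1") and False when the spot
    is empty (a "0").  The first flag is treated as the most
    significant bit (an assumption for illustration).
    """
    value = 0
    for present in presence_flags:
        value = (value << 1) | int(present)
    return value

# Five spots read as 1, 0, 1, 1, 1 -> binary 10111 -> 23
print(decode_elements([True, False, True, True, True]))
```

The same decoder works for any number of element positions, which matches the patent's note that longer bit sequences enlarge the space of distinguishable codes.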
- The elements may be connected to and may follow a curve of the alignment bar or reference form, and may be of a same or designated color or groups of colors relative to the reference form.
- A value may be derived from more than one set of elements detected in the image, and the values may be compared for purposes of confirming the accuracy of the detected image and computation of the value.
- If the values of two sets of elements detected in the image are not equal, a value may be derived from a third set of elements and compared to the other two values. If derivations for two out of three, or some other proportion, of the sets yield the same value, such value may be assumed to be the value appearing in the image. Further derivations of sets of elements may be undertaken to add robustness to the identification.
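The two-out-of-three agreement described above might be sketched as a simple majority vote over the values decoded from repeated element sets; the threshold of two agreeing sets and the function name are assumptions for illustration:

```python
from collections import Counter

def resolve_value(decoded_values):
    """Agree on a value decoded from several repeated element sets.

    Accepts the value that at least two sets agree on (two-out-of-three
    or better); returns None when no such majority exists, signalling
    that further sets should be decoded for robustness.
    """
    if not decoded_values:
        return None
    value, count = Counter(decoded_values).most_common(1)[0]
    return value if count >= 2 else None

print(resolve_value([23, 23, 7]))  # two of three sets agree -> 23
print(resolve_value([23, 7]))      # no agreement yet -> None
```

Because the units repeat around the object, additional sets can keep being fed to the vote until a majority emerges.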
- A unit may be detected on the basis of it including a reference form such as an alignment bar and elements connected to, or at a known orientation to, the reference form.
- A processor may issue a signal upon the confirmation of identification of an object in the image.
- An imager may capture and detect a unit and a set of elements at a distance of up to 5 meters.
- Some embodiments of the invention may include a system of an object having appearing thereon repeating reference forms and a set of elements at a pre-defined orientation and distance from the repeating reference form, and a processor to calculate a value associated with the elements and to compare the value to a value stored in a memory.
- Some embodiments may include a medium to store a set of instructions that may be executed by a processor, where, upon such execution, the processor translates a numeric value into a set of non-numeric binary shapes and creates an insert item to a print file that includes the shapes oriented to a known position relative to a reference form.
- The item may be added to a print file in repeating units around, for example, a perimeter of an image of the object that is to be printed.
- The repeating unit may appear in the printed image at a known position relative to the image of the printed object.
- The repeating units may be configured or added to the print file to conform to a shape of the printed object.
- For example, if the object being printed is a round face, the repeating units may form a round frame around the printed face.
- The repeating units may be added to the print file so that they appear at least at a minimum distance from the object being printed, such that the elements do not overlap or touch the printed image.
- The instructions may associate the printed object with the value that was translated into the elements.
- For example, a processor may generate a set of shapes on an alignment bar that have a value of 23, and add repeating units of the shape around a printed round face.
- The printed round face may be associated in a memory with the value 23 so that, upon a later detection of shapes that yield a value of 23, the processor may signal a detection of the round face.
- Fig. 1 is a schematic diagram of a system including an imaging device, a processor, and an object to be identified in accordance with an embodiment of the invention.
- Figs. 2A and 2B are examples of objects that may be identified in accordance with an embodiment of the invention.
- Fig. 3 is a flow diagram of a method in accordance with an embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
- The term 'morphological' may, in addition to its regular meaning, imply that a first shape emerges from a second shape.
- A shape in an image that may indicate a bit of information may be attached to, continue, emerge or morph from an alignment bar, such that the detection of the shape representing the bit may be associated with or related to the detection of the shape or presence of the alignment bar.
- Fig. 1 is a schematic diagram of a system including an imaging device, a processor, and an object to be identified in accordance with an embodiment of the invention.
- A system 100 may include for example a screen or display 101 that may be connected to or associated with a processor 106, and an imager 102 that may capture an image of an object 104, and relay or transmit digital information about the image to processor 106.
- Object 104 may include or have printed or placed thereon various shapes 108 that may be arranged for example in a known or pre-defined pattern, on an area of object 104, such as a rim or perimeter 110 of object 104 or on other known areas of object 104, such that such shapes 108 are visible to imager 102.
- Such shapes may be or include monochromatic morphological shapes.
- On a middle or other part of object 104 there may be affixed, attached or printed a mark, such as a shape, number, letter, drawing or other figure 112 to be identified.
- A list of patterns of shapes 108 may be associated with one or more objects 104 or figures 112 that may be attached to object 104, and such lists and associations may be stored in a memory 114 that may be connected to processor 106.
- Display 101 may show or display figure 112, such as a letter A, to, for example, a user.
- The user may then find an object that matches or otherwise corresponds to the displayed figure 112, such as a card or other object that may have a letter A printed on or attached to it.
- The user may raise or otherwise expose the object 104 to imager 102, which may capture an image of the object 104 and of the shapes 108 that may be printed on, attached to or otherwise make up a part of object 104.
- The image may be transmitted to processor 106, which may find or discern the pattern of the monochromatic morphological shapes 108 on the object 104 in the image.
- Processor 106 may search memory 114 for a numerical or other value of the identified pattern of shapes 108, and may determine that the particular pattern of shapes 108 that appeared on the object 104 in the image is associated with a value that may have been assigned to figure 112 that was shown on display 101. Upon a detected match between the displayed figure 112 and the object 104 that was exposed to the imager, processor 106 may issue a signal to, for example, an audio, visual or other indicator or calculator, such as a bell or speaker 116, to indicate that the figure on object 104 that was captured in the image corresponds to the figure shown on display 101. For example, a letter A may be associated with a value 23, and such value and association may be stored in memory 114.
- A card having a letter A as a figure 112 may include a shape 108 pattern that is also associated in memory 114 with a value 23.
- Processor 106, which signaled display 101 to show an A to a user, may detect a match between the value associated with the displayed figure 112 and the value indicated by the pattern of shape 108.
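The matching flow above, in which a displayed figure and a captured shape pattern both resolve to the same stored value, might look like the following sketch; the registry contents and function names are hypothetical:

```python
# Hypothetical registries: both the displayed figure and the shape
# pattern printed on a card map to the same stored value (e.g. 23),
# mirroring the letter-A example in the text.
FIGURE_VALUES = {"A": 23}
PATTERN_VALUES = {"10111": 23}

def is_match(displayed_figure, detected_pattern):
    """True when the value of the displayed figure equals the value
    decoded from the pattern captured by the imager."""
    fig = FIGURE_VALUES.get(displayed_figure)
    pat = PATTERN_VALUES.get(detected_pattern)
    return fig is not None and fig == pat

print(is_match("A", "10111"))  # the card matches the displayed letter
```

On a match the processor would then drive the bell or speaker 116; on a mismatch it could stay silent or prompt the user again.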
- A pattern of shape 108 may be identified even if the image is captured in an uncontrolled environment, such as for example against a non-uniform background such as a colored wall or curtain, and under uncontrolled lighting and shadow conditions, such as direct or indirect light, sunlight, indoor ambient light or other non-uniform lighting conditions.
- A pattern of shapes 108 may be identified even if an orientation of object 104 relative to imager 102 is other than perpendicular to the light entering or reflecting back to imager 102.
- Object 104 may be held at an acute or other angle to imager 102, may be rotated, partially occluded by for example interfering objects such as a hand of a user, or otherwise incompletely imaged, and imager 102 may still be able to detect and identify at least one or a part of the repetitive pattern of shapes 108.
- A pattern of shapes 108 may be identified at various distances of object 104 from imager 102, such as for example from 10 cm up to 5 meters or more.
- The distance from imager 102 at which a pattern of shapes 108 of object 104 may be identified may depend on, for example, a resolution of imager 102, and may allow the determination of the distance of the object 104 from the imager 102.
- Processor 106 may determine a size of a first shape 108 and a size of a second shape 108 in the image on the basis of the number of pixels in the captured image that include the first shape 108 and the second shape.
- Imager 102 may be or include a suitable digital imager such as a CCD, CMOS or other video or still digital image capture device capable of capturing and transmitting color or intensity image data.
- A low-resolution camera such as those typically included in a web-cam or network-capable camera configuration, having a resolution of QVGA or VGA, may be suitable for an embodiment of the invention.
- Processor 106 may be or include an embedded processor or a DSP or a Pentium® IV or higher processor or other comparable processor typically used in a home computing configuration.
- Memory 114 may be or may be included in any suitable data storage device such as a hard drive, flash, or other electronic data storage on which may be stored for example a data base, array, tree or other data storage structure.
- Display 101 and bell or speaker 116 may be or be included in a single device such as for example a display and sound system that may indicate to a user that a match or other action is correct, incorrect or otherwise responsive to a question, challenge or other signal posed to the user.
- Processor 106 may be an embedded processor which is housed inside an object such as a mobile phone, toy, doll or toy housing.
- Imager 102 and bell/speaker 116 may be or be included inside the doll or toy or toy housing.
- Figs. 2 A and 2B are examples of objects that may be identified in accordance with an embodiment of the invention, and expanded views of the repeating codes and alignment bars on a perimeter of such objects.
- Objects 200 and 201 may be or include a flat, spherical, cubical or other shaped object that may be suitable to be moved, raised or otherwise maneuvered by for example a user or some other system to be brought into an area to be imaged by imager 102.
- Objects 200 and 201 may be a card or disc that may be or include cardboard, plastic or other semi-rigid material.
- Objects 200 and 201 may include a ball, toy, manufactured device or other item as long as the monochromatic morphological shape pattern can be printed, attached, stamped on or stuck to it.
- Attached or imprinted on, for example, an outside perimeter of object 200 may be a series of shape codes 202 that may create a pattern of, for example, repeating or repetitive monochromatic binary codes.
- Code 202 may consist of a reference form such as an alignment bar 204 or alignment curve (shown by horizontal lines for visualization purposes in Fig. 2) and data bits 206 (shown by diagonal lines for visualization purposes in Fig. 2).
- Alignment bar 205 may be curved or may assume some other shape or geometrical form.
- Code 202 may be binary and may be read from right to left, or otherwise in accordance with the position of alignment bar 204. A dark bit may be read as "1" while a light bit may be read as "0".
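The dark/light reading rule can be illustrated with a simple threshold over grayscale samples taken along the bar, followed by a right-to-left read; the threshold value and helper names are assumptions, not taken from the patent:

```python
def bits_from_samples(gray_samples, threshold=128):
    """Threshold grayscale samples taken along the alignment bar:
    a dark sample reads as 1, a light sample as 0.  The threshold
    value is an illustrative assumption."""
    return [1 if s < threshold else 0 for s in gray_samples]

def value_right_to_left(bits):
    """Read the bits right to left: the first entry is the rightmost,
    least significant bit."""
    return sum(bit << i for i, bit in enumerate(bits))

bits = bits_from_samples([30, 210, 25])  # dark, light, dark
print(bits)                     # [1, 0, 1]
print(value_right_to_left(bits))  # 1 + 0 + 4 = 5
```

A fixed threshold is the simplest choice; under the uncontrolled lighting the patent anticipates, an adaptive threshold relative to the alignment bar's own intensity would be more robust.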
- Alignment bar 204 and data bits 206 may be colored with any color which may provide contrast to the background color of object 200.
- The proportions of the code 202 elements are depicted in the expanded views of codes and alignment bars in Figs. 2A and 2B. Other proportions and dimensions are possible. In some embodiments, a maximum distance from which this code is recognized by an imager may depend on the code 202 units being detected by even a single pixel in the image.
- Code 202 may be depicted along a curved plane of bar 205 as long as the proportions of the sides of bits 206 of code 202 are maintained relative to each other and to bar 205.
- Shapes 202 may form any geometrical shape (e.g., lines, circles etc.) or may take on other configurations.
- The pattern of code 202 may be marked on object 200 on one, some or all sides exposed to the imager.
- Object 200 may include any tactile object to which code 202 may be affixed or on which code 202 may be shown.
- Sequences of monochromatic morphological shape codes 202 may include at least two binary bits, though the variations of codes, and hence the number of items or figures 112 associated with a distinct pattern of monochromatic morphological shape code 202, may increase with an increased number of bits 206 included in the monochromatic morphological shapes 202.
- The size of each monochromatic morphological shape 202 may be determined by, for example, the intended distance between object 200 and imager 102 as well as by the resolution of imager 102.
- Localization of the monochromatic morphological shapes 202 around a perimeter, rim, edge or other pre-defined area of object 200 may avoid the effects of occlusion of a portion of code 202, and may provide more robust detection and identification of code 202 once the object 200 is detected in an image. Similarly, repetition of the pattern may allow identification of the pattern even when the object is partially occluded or when an orientation of object 200 in an image partially blocks its exposure to the imager 102.
- A particular monochromatic morphological shape code 202, such as for example binary code 10101, may be pre-defined and stored in memory 114, where the pattern may be associated with a value that may be further associated with an object or figure.
- A monochromatic code may designate a binary code in a pattern of n bits, in which case the number of possible combinations may be defined as 2^n.
- An additional feature may be used to increase the number of possible codes, such as the color of the monochromatic code. Polychromatic combinations or codes may be possible, such that if r different colors are used, the formula becomes r · 2^n.
- Even more features may be used to increase the number of possible codes, such as the color of the background of the monochromatic code. If r different colors are used and k different background colors are used, then the formula becomes r · k · 2^n.
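Under a multiplicative reading of the formulas above (the printed formulas did not survive extraction, so this reconstruction is an assumption), the size of the code space might be computed as:

```python
def code_space(n_bits, n_colors=1, n_backgrounds=1):
    """Number of distinct codes for n binary bits, optionally
    multiplied by the number of code colors and background colors.
    The multiplicative form is a reconstruction of the elided
    formulas, not taken verbatim from the patent."""
    return n_backgrounds * n_colors * (2 ** n_bits)

print(code_space(5))         # 32 plain binary codes
print(code_space(5, 3))      # 96 with three possible code colors
print(code_space(5, 3, 2))   # 192 with two background colors as well
```

Each added feature multiplies the number of distinguishable objects without lengthening the bit sequence, at the cost of requiring color discrimination by the imager.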
- A two-sided binary coding scheme may be used, whereby for the same width an increase in the height of the code may be used, so long as such increase in height is distinguishable by the imager.
- Detection and identification of a bar or code in a captured image may entail some or all of the following processes.
- Identification of captured edges that form closed contours, where such identification may be performed using a connected component algorithm such as a regular labeling operation. Such identification may detect the presence of edge contours which comply with the pre-defined ratios of the alignment bar, code or object to one or more of each other.
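The labeling operation mentioned above can be sketched naively on a binary grid; a production system would use an optimized two-pass or union-find labeling routine, and the 4-connectivity choice here is an assumption:

```python
def label_components(grid):
    """4-connected component labeling of a binary grid (nested lists),
    a naive flood-fill stand-in for the 'regular labeling operation'
    mentioned in the text."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                next_label += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not labels[y][x]:
                        labels[y][x] = next_label
                        # spread the label to the four neighbors
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, next_label

grid = [[1, 1, 0],
        [0, 0, 0],
        [0, 1, 1]]
labels, count = label_components(grid)
print(count)  # 2 separate regions
```

Once regions are labeled, each candidate region's proportions can be checked against the pre-defined alignment-bar ratios.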
- A length of the alignment bar may be used to measure the absolute distance of the object containing the code from the imager, where for example the real-world size of the printed code is known a priori.
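The distance measurement described above is consistent with a simple pinhole-camera relation; the following sketch assumes the focal length is known in pixel units, and the parameter names are illustrative:

```python
def distance_to_object(real_bar_length_mm, bar_length_px, focal_length_px):
    """Pinhole-camera estimate of the object's distance from the imager,
    assuming the printed (real-world) length of the alignment bar is
    known a priori: distance = focal_length * real_size / image_size."""
    return focal_length_px * real_bar_length_mm / bar_length_px

# A 50 mm bar imaged across 100 pixels by a camera with an 800 px
# focal length is roughly 400 mm away.
print(distance_to_object(50, 100, 800))
```

The focal length in pixels would come from a one-time calibration of the imager; low-resolution webcams of the kind the patent mentions can still yield usable coarse distance estimates this way.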
- The shape, alignment bars and codes may be repetitive, so it is likely that more than one sequence of the pattern appears in the image.
- A probability function for the pattern may be used to verify the object classification by taking into account not only the presence of a code or pattern and the number of times it appears, but also the number of adjacent sequences or other spatially based relations between such repetitions, such as distance or dispersion.
- The pattern with the highest probability may be chosen as representing the pattern associated with the correct code and the associated object.
- This procedure may be applied in consecutive frames of an image, and the confidence of classification may be increased if the same pattern is selected in a few or consecutive frames.
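The consecutive-frame confidence check might be sketched as a small voter over recent frames; the window size, agreement threshold and class name are illustrative assumptions:

```python
from collections import Counter, deque

class FrameVoter:
    """Accumulates the pattern selected in each frame and reports a
    classification only once the same pattern has been chosen in a
    minimum number of recent frames."""

    def __init__(self, window=5, min_agree=3):
        self.history = deque(maxlen=window)
        self.min_agree = min_agree

    def update(self, pattern):
        self.history.append(pattern)
        best, count = Counter(self.history).most_common(1)[0]
        return best if count >= self.min_agree else None

voter = FrameVoter(window=5, min_agree=3)
print(voter.update(23))  # None - only one frame so far
print(voter.update(23))  # None
print(voter.update(23))  # 23 - three recent agreements
```

The bounded window lets the classification recover quickly when a different coded object replaces the first in front of the imager.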
- Shape patterns may be attached to objects such as empty cards with no figure on them, and the user may adhere an image or draw an image on the blank cards.
- A pattern associated in memory with "My Pet" may be attached to a picture of a user's pet, and the blank card may be recognized by the system as being associated with the pet.
- Patterns of shapes may be camouflaged or even non-discernible to the human eye, as they may be implanted within a picture or item that is observed by the imager in a way which is integrated into the printed figure itself.
- A letter, number or figure may be drawn to include a shaped pattern that may be identified by the imager but hidden from obvious discernment by a user.
- Shape patterns within a picture or on an item may be printed using an ink that is reflective outside the visible light spectrum, such as in IR or near IR, and the imager that captures the image may be an IR camera.
- A micro-code version of shapes and codes may be affixed to an object to create a more camouflaged version of coded objects.
- Detection of a shape that includes a code may include a subtraction process to isolate objects in a second image that were not included in a first or prior image, on the assumption that the object that includes the shape was introduced into the view of the imager subsequent to the capture of the first image. Such subtraction may reduce the number of attempts or calculations required to detect the object in a series of images.
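The subtraction process can be illustrated with a naive frame difference over nested lists of grayscale pixels; a real implementation would operate on image arrays, and the threshold here is an assumption:

```python
def changed_mask(prev_frame, curr_frame, threshold=20):
    """Subtract two grayscale frames (nested lists of pixel values)
    and mark pixels whose absolute difference exceeds a threshold,
    isolating regions introduced after the first frame was captured."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_p, row_c)]
        for row_p, row_c in zip(prev_frame, curr_frame)
    ]

before = [[10, 10], [10, 10]]
after = [[10, 200], [10, 10]]  # a bright object entered the top-right
print(changed_mask(before, after))  # [[0, 1], [0, 0]]
```

Restricting the code search to the marked region is what reduces the per-frame detection work the passage describes.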
- The recognition of the coded objects may be used as part of an educational curriculum for children or students.
- The recognition of coded objects may be associated with answers to questions; for example, upon an instruction to "Hold up an animal card!", the student holds up an animal card coded with the monochromatic shapes that are associated with cards showing pictures or names of animals.
- The system may detect the codes, recognize the code number and respond with "That's a bear!" This process may be expanded to a classroom equipped with a computer and one or more imagers where the computer recognizes different objects or cards for a full class experience with multiple students, one or more of whom may hold up a correct or incorrect card or object.
- Recognition of the objects having shapes and codes may be used as part of an interaction with a customer. For example, a customer waiting in a restaurant who wants to get the attention of a waiter may hold up a coded image or other coded object, and a camera in the restaurant may detect the coded image and send the location of the table along with the code number, which can be associated with "Please bring me the check" or other message.
- Detection or identification of a particular code may elicit a computerized or automated response, by for example a mobile phone whose camera may detect a presence of a shape or code and display on the phone's screen an animation sequence related to the object associated with the code, or dial a number related to the object.
- Coded patches, images or even shirts can be used to track the location of people or animals on a farm. Different locations in the surroundings may have cameras connected to a computer which recognizes the people or animals when they cross a field of view of a camera.
- Coded objects or cards can be used to navigate to different websites by showing the coded objects or cards to a webcam associated with a computer; for example, showing a coded car from the movie "Transformers™" may send the browser in a computer to the relevant Transformers™ website, or showing a coded image of a Barbie™ doll may direct the browser to the Barbie™ website.
- Fig. 3 is a flow diagram of a method of identifying an object in accordance with an embodiment of the invention; such method may include, as in block 300, detecting in an image a repeating reference form and a set of elements, where the set of elements is at a pre-defined distance from and orientation to the repeating reference form.
- A reference form may be a curved or straight alignment bar, and the elements may jut out from the alignment bar at, for example, a perpendicular angle.
- A value may be derived from the image of the elements.
- The derived value may be compared to a value stored in a memory, where the stored value is associated with the object.
- An image of the object may be captured at a distance of up to 5 meters from the imager.
- The detecting may include detecting elements that are connected to the alignment bar, and deriving a value may include deriving a first value from a first set of elements and a second value from a second set of elements, and comparing the values to determine if they are the same. If the values are not the same, in some embodiments a third value may be derived from a third set of elements and a comparison of the third value to each of the first and second values may be made. A value that is derived from two or more of the sets may be assumed to be the true value.
- The elements may be recognized as elements only if they are connected to, or at a defined distance or orientation from, the alignment bar or reference form.
- A signal may be issued indicating such match to the user.
- The nature or meaning of an element may be derived from a size or length of the element or from a distance of the element from the reference or alignment bar.
- A single element of the set of elements may represent either a zero or a 1.
- The elements may have different colors and may be differentiated by such colors.
- A series of instructions in the form of, for example, software may be stored in an electronic storage medium and executed by a processor to translate a numeric value into a set of non-numeric binary shapes such as those that may extend from an alignment bar.
- The processor may orient the shapes at a known position relative to the alignment bar, such as connecting at a perpendicular angle to the bar, and add repeating units of alignment bars and the set of shapes to a print file that includes an image around which the repeating units are to appear.
- The repeating units may appear around or on a side or elsewhere on the printed image.
- The shape of the printed repeating units may conform to an outline of the printed image and may be printed at a given or minimum distance from an edge of the printed image.
- The instructions may store the value represented by the elements along with an association of such value with the printed object.
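The translation of a numeric value into a set of non-numeric binary shapes for a print file might be sketched as follows; the most-significant-bit-first ordering and function names are assumptions, not taken from the patent:

```python
def encode_value(value, n_bits):
    """Translate a numeric value into the presence/absence pattern of
    non-numeric binary shapes to be attached to an alignment bar in a
    print file (True = shape present, False = empty spot).
    Most-significant bit first, an illustrative convention."""
    return [bool((value >> (n_bits - 1 - i)) & 1) for i in range(n_bits)]

def repeat_units(pattern, copies):
    """Repeat the encoded unit, e.g. around the perimeter of the
    printed image, for robustness against occlusion."""
    return [pattern] * copies

print(encode_value(23, 5))                        # [True, False, True, True, True]
print(len(repeat_units(encode_value(23, 5), 8)))  # 8
```

A print-file generator would then render each True flag as a bar-attached shape and place the repeated units at the minimum standoff distance from the printed figure, as the passage describes.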
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
A method and system for detecting in an image a repeating unit, where the unit includes a reference form and a set of elements at a known distance from and orientation to the reference form, deriving a value from the elements included in the unit, and comparing the derived value to a known value. The elements may indicate binary values that may be used in the derivation of the value, and the value may be associated with an object. The comparison of the value derived from the elements in the image with the stored value that is associated with the object may be used in identifying or confirming the identification of the object in the image.
Description
DEVICE AND METHOD FOR IDENTIFICATION OF OBJECTS USING
MORPHOLOGICAL CODING
FIELD OF THE INVENTION
This application relates to computer vision, and particularly to identifying an object in an image captured in uncontrolled lighting conditions, where the identification is based on repeating units of morphological code that appear on the object.
BACKGROUND OF THE INVENTION
Identification of objects in an image captured by a digital imaging device may rely on extracting features of the object and use pattern recognition of those features to identify the object. Pattern recognition and feature extraction may be subject to inaccuracies caused by, among other things, lighting conditions, changes in object orientation, relative distance of the object from the imager and occlusion. Typical identification processes may therefore entail rotation, translation and scale invariant features, and may call for complicated pattern recognition algorithms. Adding an object to the recognition task may therefore require adjusting a recognition algorithm compensation for lighting condition.
SUMMARY OF THE INVENTION
An embodiment of the invention may include a method of identifying an object by capturing an image of the object, where in the image is detected a repeating reference form and a set of elements that are at a pre-defined distance from and orientation to the repeating reference form. A value may be derived from the set of elements, and that value may be compared to a stored value that is associated with the object. The derived and detected value of the elements in the image may confirm the object in the image as matching the object having the associated value stored in the memory. Instance of the repeating units may appear on a perimeter of an object or elsewhere, and the repetition of the units may compensate for occlusion of some of the units or a failure of the imager to clearly capture one or more of the units or elements. In some embodiments, a derived value of an element may be determined on the basis of its length relative to the alignment bar, such as a length along a perimeter
of the alignment bar, or by a perpendicular height of an element to the alignment bar, such as a distance of an end of the element from its base as is connected to the alignment bar. For example, an element representing a "0" may be an absence of an element along a spot on an inner edge of an alignment bar. A "1" may represent the presence of an element along a position on an inner edge of an alignment bar. Values may also be derived from other heights, thicknesses, shapes or sizes of elements relative to each other or to the alignment bar.
In some embodiments, the elements may be connected to and may follow a curve of the alignment bar or reference form, and may be of a same or designated color or groups of colors relative to the reference form.
In some embodiments, a value may be derived from more than one set of elements detected in the image, and the values may be compared for purposes of confirming the accuracy of the detected image and computation of the value.
In some embodiments, if the values of two sets of elements detected in the image are not equal, a value may be derived from a third set of elements and compared to the other two values. If derivations for two out of three, or some other proportion, of elements yield the same value, such value may be assumed to be the value appearing in the image. Further derivations of sets of elements may be undertaken to add robustness to the identification.
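The two-out-of-three confirmation described above can be sketched as a simple majority vote over the values derived from several detected sets of elements. The function name and the agreement threshold below are illustrative assumptions, not part of the disclosure:

```python
from collections import Counter

def confirm_value(derived_values, min_agreement=2):
    """Return the value that at least `min_agreement` independently
    derived readings agree on, or None if no such consensus exists."""
    counts = Counter(derived_values)
    value, count = counts.most_common(1)[0]
    return value if count >= min_agreement else None

# Two of three readings agree, so 23 is accepted despite one misread.
print(confirm_value([23, 17, 23]))  # -> 23
# No two readings agree; further sets of elements would be examined.
print(confirm_value([23, 17, 9]))   # -> None
```

Further derivations could simply extend the input list, which is how repetition of the units adds robustness to the identification.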
In some embodiments, a unit may be detected on the basis of its including a reference form, such as an alignment bar, and elements connected to, or at a known orientation to, the reference form.
In some embodiments, a processor may issue a signal upon the confirmation of identification of an object in the image. In some embodiments, an imager may capture and detect a unit and a set of elements at a distance of up to 5 meters.
Some embodiments of the invention may include a system including an object having appearing thereon repeating reference forms and a set of elements at a pre-defined orientation and distance from the repeating reference forms, and a processor to calculate a value associated with the elements and to compare the value to a value stored in a memory.
Some embodiments may include a medium to store a set of instructions that may be executed by a processor, where, upon such execution, the processor translates a numeric value into a set of non-numeric binary shapes and creates an item to be inserted into a print file that includes the shapes oriented at a known position relative to a reference form. The item may be added to a print file in repeating units around, for example, a perimeter of an image of the object that is to be printed. The repeating unit may appear in the printed image at a known position relative to the image of the printed object. In some embodiments, the repeating units may be configured or added to the print file to conform to a shape of the printed object. For example, if the object being printed is a round face, the repeating units may form a round frame around the printed face. The repeating units may be added to the print file so that they appear at least at a minimum distance from the object being printed, such that the elements do not overlap or touch the printed image.
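Placing repeating units around a round printed object, at a minimum distance from it, amounts to computing evenly spaced positions and rotations on a circle slightly larger than the object. A minimal geometric sketch (function name, units and margin value are illustrative assumptions):

```python
import math

def frame_positions(radius, n_units, margin=10.0):
    """Place n_units repeating units evenly around a circular frame at
    radius + margin from the printed object's centre, returning an
    (x, y, rotation_degrees) tuple per unit.  The margin keeps the
    coded units from overlapping or touching the printed image."""
    r = radius + margin
    positions = []
    for i in range(n_units):
        theta = 2 * math.pi * i / n_units
        # Rotating each unit by its angular position lets the frame of
        # units conform to the round shape of the printed object.
        positions.append((r * math.cos(theta), r * math.sin(theta),
                          math.degrees(theta)))
    return positions

# Four units around a round face of radius 100, kept 10 units away:
for x, y, rot in frame_positions(100.0, 4):
    print(round(x, 3), round(y, 3), round(rot, 1))
```

A print-file generator would then stamp one copy of the alignment bar and its elements at each returned position and rotation.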
In some embodiments, the instructions may associate the printed object with the value that was translated into the elements. For example, a processor may generate a set of shapes on an alignment bar that have a value of 23, and add repeating units of the shape around a printed round face. The printed round face may be associated in a memory with the value 23 so that, upon a later detection of shapes that yield a value of 23, the processor may signal a detection of the round face.
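The translation of a numeric value such as 23 into a set of binary shapes can be sketched as follows, where True marks the presence of an element on the alignment bar and False its absence. The bit ordering and the exclusion of all-zeros and all-ones codes follow the description elsewhere in this document; the function name is an illustrative assumption:

```python
def value_to_elements(value, n_bits=5):
    """Translate a numeric value into a list of binary 'elements':
    True marks the presence of a shape on the alignment bar, False its
    absence.  All-zeros and all-ones codes are treated as forbidden."""
    if not 0 < value < (1 << n_bits) - 1:
        raise ValueError("value outside the usable code range")
    # Most-significant bit first; a reader scanning from the bar's
    # reference end would recover the bits in the same order.
    return [bool((value >> i) & 1) for i in range(n_bits - 1, -1, -1)]

# 23 in five bits is 10111:
print(value_to_elements(23))  # -> [True, False, True, True, True]
```

The printed round face could then be stored in memory against the value 23, so a later detection of this element pattern signals detection of the face.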
BRIEF DESCRIPTION OF THE DRAWINGS
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with features and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanied drawings in which:
Fig. 1 is a schematic diagram of a system including an imaging device, a processor, and an object to be identified in accordance with an embodiment of the invention;
Figs. 2A and 2B are examples of objects that may be identified in accordance with an embodiment of the invention; and
Fig. 3 is a flow diagram of a method in accordance with an embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the following description, various embodiments of the invention will be described. For purposes of explanation, specific examples are set forth in order to provide a thorough understanding of at least one embodiment of the invention. However, it will also be apparent to one skilled in the art that other embodiments of the invention are not limited to the examples described herein. Furthermore, well-known features may be omitted or simplified in order not to obscure embodiments of the invention described herein.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification, discussions utilizing terms such as "selecting," "evaluating," "processing," "computing," "calculating," "associating," "determining," "designating," "allocating" or the like, refer to the actions and/or processes of a computer, computer processor or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
The processes and functions presented herein are not inherently related to any particular computer, network or other apparatus. Embodiments of the invention described herein are not described with reference to any particular programming language, machine code, etc. It will be appreciated that a variety of programming languages, network systems, protocols or hardware configurations may be used to implement the teachings of the embodiments of the invention as described herein. In some embodiments, one or more methods of embodiments of the invention may be stored as instructions on an article such as a memory device, where such instructions upon execution by a processor result in a method of an embodiment of the invention.
As used in this application, the term 'morphological' may, in addition to its regular meaning, imply that a first shape emerges from a second shape. For example, a shape in an image that may indicate a bit of information may be attached to, continue, emerge or morph from an alignment bar, such that the detection of the shape
representing the bit may be associated with or related to the detection of the shape or presence of the alignment bar.
Fig. 1 is a schematic diagram of a system including an imaging device, a processor, and an object to be identified in accordance with an embodiment of the invention. In some embodiments, a system 100 may include for example a screen or display 101 that may be connected to or associated with a processor 106, and an imager 102 that may capture an image of an object 104, and relay or transmit digital information about the image to processor 106. Object 104 may include or have printed or placed thereon various shapes 108 that may be arranged for example in a known or pre-defined pattern, on an area of object 104, such as a rim or perimeter 110 of object 104 or on other known areas of object 104, such that such shapes 108 are visible to imager 102. In some embodiments, such shapes may be or include monochromatic morphological shapes. In some embodiments, on a middle or other part of object 104, there may be affixed, attached or printed a mark, such as a shape, number, letter, drawing or other figure 112 to be identified. A list of patterns of shapes 108 may be associated with one or more objects 104 or figures 112 that may be attached to object 104, and such lists and associations may be stored in a memory 114 that may be connected to processor 106.
In operation, display 101 may show or display figure 112, such as a letter A, to, for example, a user. The user may then find an object that matches or otherwise corresponds to the displayed figure 112, such as a card or other object that may have a letter A printed on or attached to it. The user may raise or otherwise expose the object 104 to imager 102, which may capture an image of the object 104 and of the shapes 108 that may be printed on, attached to or otherwise make up a part of object 104. The image may be transmitted to processor 106, which may find or discern the pattern of the monochromatic morphological shapes 108 on the object 104 in the image. Processor 106 may search memory 114 for a numerical or other value of the identified pattern of shapes 108, and may determine that the particular pattern of shapes 108 that appeared on the object 104 in the image is associated with a value that may have been assigned to figure 112 that was shown on display 101. Upon a detected match between the displayed figure 112 and the object 104 that was exposed to the imager, processor 106 may issue a signal to, for example, an audio, visual or other indicator or calculator, such as a bell or speaker 116, to indicate that the figure on object 104
that was captured in the image corresponds to the figure shown on display 101. For example, a letter A may be associated with a value 23, as such value and association may be stored in memory 114. A card having a letter A as a figure 112 may include a shape 108 pattern that is also associated in memory 114 with a value 23. Processor 106, which signaled display 101 to show an A to a user, may detect a match between the value associated with the displayed figure 112 and the value indicated by the pattern of shape 108.
In some embodiments, a pattern of shape 108 may be identified even if the image is captured in an uncontrolled environment, such as for example against a nonuniform background such as a colored wall or curtain, and under uncontrolled lighting and shadow conditions, such as direct or indirect light, sunlight, indoor ambient light or other non-uniform lighting conditions.
In some embodiments, a pattern of shapes 108 may be identified even if an orientation of object 104 relative to imager 102 is other than perpendicular to the light entering or reflecting back to imager 102. For example, object 104 may be held at an acute or other angle to imager 102, may be rotated, partially occluded by for example interfering objects such as a hand of a user, or otherwise incompletely imaged, and imager 102 may still be able to detect and identify at least one or a part of the repetitive pattern of shapes 108.
In some embodiments, a pattern of shapes 108 may be identified at various distances of object 104 from imager 102, such as for example from 10 cm to up to 5 meters or more. In some embodiments, the distance from imager 102 at which a pattern of shapes 108 of object 104 may be identified may be dependent on for example a resolution of imager 102 and may allow the determination of the distance of the object 104 from the imager 102. For example, processor 106 may determine a size of a first shape 108 and a size of a second shape 108 in the image on the basis of the number of pixels in the captured image that include the first shape 108 and the second shape.
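The determination of object distance from the apparent pixel size of a shape of known printed size can be sketched with the standard pinhole camera model. The focal length in pixels below is an assumed calibration value, not something disclosed in this document:

```python
def estimate_distance(real_bar_mm, bar_px, focal_px):
    """Pinhole-model range estimate: an alignment bar of known printed
    size (real_bar_mm) that spans bar_px pixels in the image lies at
    approximately real_bar_mm * focal_px / bar_px from the imager."""
    return real_bar_mm * focal_px / bar_px

# A 50 mm bar imaged at 25 px by a camera whose focal length is 600 px:
print(estimate_distance(50.0, 25.0, 600.0))  # -> 1200.0 mm, i.e. 1.2 m
```

The same relation explains the resolution dependence noted above: halving the imager resolution halves bar_px at a given range, doubling the minimum bar size needed for detection.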
In some embodiments, imager 102 may be or include a suitable digital imager such as a CCD, CMOS or other video or still digital image capture device capable of capturing and transmitting color or intensity image data. In some embodiments, a low resolution camera, such as those typically included in a web-cam or network-capable
camera configuration, having a resolution of QVGA or VGA may be suitable for an embodiment of the invention.
In some embodiments, processor 106 may be or include an embedded processor or a DSP or a Pentium® IV or higher processor or other comparable processor typically used in a home computing configuration. Memory 114 may be or may be included in any suitable data storage device such as a hard drive, flash, or other electronic data storage on which may be stored for example a data base, array, tree or other data storage structure. In some embodiments, display 101 and bell or speaker 116 may be or be included in a single device such as for example a display and sound system that may indicate to a user that a match or other action is correct, incorrect or otherwise responsive to a question, challenge or other signal posed to the user.
In some embodiments, processor 106 may be an embedded processor which is housed inside an object such as a mobile phone, toy, doll or toy housing. In some embodiments, imager 102 and bell/speaker 116 may be or be included inside the doll or toy or toy housing.
Figs. 2A and 2B are examples of objects that may be identified in accordance with an embodiment of the invention, and expanded views of the repeating codes and alignment bars on a perimeter of such objects. In some embodiments, objects 200 and
201 may be or include a flat, spherical, cubical or other shaped object that may be suitable to be moved, raised or otherwise maneuvered by for example a user or some other system to be brought into an area to be imaged by imager 102. In some embodiments, objects 200 and 201 may be a card or disc that may be or include cardboard, plastic or other semi-rigid material. Objects 200 and 201 may include a ball, toy, manufactured device or other item as long as the monochromatic morphological shape pattern can be printed, attached, stamped on or stuck to it.
Attached or imprinted on for example an outside perimeter of object 200 may be a series of shape code 202 that may create a pattern of for example repeating or repetitive monochromatic binary codes. Monochromatic morphological shape codes
202 may consist of a reference form such as an alignment bar 204 or alignment curve (shown by horizontal lines for visualization purposes in Fig. 2) and data bits 206 (shown by diagonal lines for visualization purposes in Fig. 2). As shown in Fig. 2B, alignment bar 205 may be curved or may assume some other shape or geometrical
form. In some embodiments, code 202 may be binary and may be read from right to left, or otherwise in accordance with the position of alignment bar 204. A dark bit may be read as "1" while a light bit may be read as "0". Alignment bar 204 and data bits 206 may be colored with any color which may provide contrast to the background color of object 200. The proportions of the code 202 elements are depicted in the expanded views of codes and alignment bars in Figs. 2A and 2B. Other proportions and dimensions are possible. In some embodiments, a maximum distance from which this code is recognized by an imager may depend on the code 202 units being detected by even a single pixel in the image.
In some embodiments, code 202 may be depicted along a curved plane of bar 205, as long as the proportions of the sides of bits 206 of code 202 are maintained relative to each other and to bar 205.
Shapes 202 may form any geometrical shape (e.g., lines, circles etc.) or may take on other configurations. The pattern of code 202 may be marked on object 200 on one, some or all sides exposed to the imager. Object 200 may include any tactile object to which code 202 may be affixed or on which code 202 may be shown.
Sequences of monochromatic morphological shape codes 202 may include at least two binary bits, though the number of code variations, and hence the number of items or figures 112 that may be associated with a distinct pattern of monochromatic morphological shape code 202, may increase with an increased number of bits 206 included in the monochromatic morphological shapes 202. The size of each monochromatic morphological shape 202 may be determined by, for example, the intended distance between object 200 and imager 102 as well as by the resolution of imager 102.
Localization of the monochromatic morphological shapes 202 around a perimeter, rim, edge or other pre-defined area of object 200 may avoid the effects of occlusion of a portion of code 202, and may provide more robust detection and identification of code 202 once the object 200 is detected in an image. Similarly, repetition of the pattern may allow identification of the pattern even when the object is partially occluded or when an orientation of object 200 in an image partially blocks its exposure to the imager 102.
A particular monochromatic morphological shape code 202, such as for example binary code 10101, may be pre-defined and stored in memory 114, where the pattern may be associated with a value that may be further associated with an object
or figure. In some embodiments, a monochromatic code may designate a binary code in a pattern, in which case the number of combinations may be defined as
2^n - 2
where n is the number of bits in the code and a code of all zeros or all ones is forbidden. For example, if five bits are used, there may be 2^5 - 2 = 30 combinations. In some embodiments, an additional feature may be used to increase the number of possible codes, such as the color of the monochromatic code. Polychromatic combinations or codes may be possible, such that if r different colors are used, the formula becomes:
(2^n - 2) * r
In the given example, if we use 6 different colors, then 180 combinations may be possible.
In some embodiments, even more features may be used to increase the number of possible codes such as the color of the background of the monochromatic code. If r different colors are used and k different background colors are used, then the formula becomes:
(2^n - 2) * r * k
In the given example, if 6 different background colors are used as well, then 1080 combinations may be possible.
To further enlarge the number of possible codes without using more colors, a two-sided binary coding scheme may be used, whereby, for the same width, an increase in the height of the code may be used, so long as such increase in height is distinguishable by the imager. Such varying heights may increase the possible combinations exponentially, allowing, for example, 2^5 * 2^5 - 4 = 2^10 - 4 = 1020 combinations.
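The capacity formulas above can be collected into one small sketch. The function name and keyword parameters are illustrative assumptions; the arithmetic follows the formulas in the text (all-zeros and all-ones codes forbidden, capacity multiplied by foreground colors r and background colors k):

```python
def code_capacity(n_bits, n_colors=1, n_backgrounds=1, two_sided=False):
    """Number of usable codes: (2**n - 2) for a one-sided n-bit code
    (all-zeros and all-ones forbidden), multiplied by the number of
    foreground and background colors."""
    if two_sided:
        # Two n-bit sides read together give 2**(2n) patterns, with the
        # four degenerate patterns forbidden, per the example in the text.
        base = 2 ** (2 * n_bits) - 4
    else:
        base = 2 ** n_bits - 2
    return base * n_colors * n_backgrounds

print(code_capacity(5))                               # -> 30
print(code_capacity(5, n_colors=6))                   # -> 180
print(code_capacity(5, n_colors=6, n_backgrounds=6))  # -> 1080
print(code_capacity(5, two_sided=True))               # -> 1020
```

Each of the four printed results matches the worked example given for that variation of the scheme.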
Detection and identification of a bar or code in a captured image may entail some or all of the following processes.
• Extraction of edges captured in an image, where such extraction is performed by a suitable edge extraction method, such as for example non-maximal suppression as may be implemented in the Canny operator for edge extraction.
• Identification of captured edges that form closed contours, where such identification may be performed using a connected component algorithm such
as a regular labeling operation. Such identification may detect the presence of edge contours which comply with the pre-defined ratios of the alignment bar, code or object to one or more of each other.
• Extraction of the corners of the identified closed contours, by way of for example a corner detector such as calculation of the curvature of some or all of the pixels that are part of the closed edge contours.
• Calculation of the maximum distance, on one or more contours, between two adjacent corners, so as to identify for example the alignment bar length.
• Calculation of the number of pixels that are contained in one or more bits by dividing the length of the alignment bar by the number of bits forming the code.
• Reading the intensity of one or more bits from the image by for example checking the value of a pixel in the middle or in another area of such bit.
• Comparing an intensity of a bit to the intensity outside the closed contour and inside it, such that if the intensity of a bit is closer to the intensity outside the closed contour it may be identified as "0", whereas otherwise it may be identified as "1".
• Determining the apparent code by identifying the code with the maximum repetitions from among the various possible codes that were extracted from the image. In some embodiments, a length of the alignment bar may be used to measure the absolute distance of the object containing the code from the imager, where for example the real-world size of the printed code is known a priori.
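The bit-reading steps above (dividing the alignment bar length by the bit count, sampling each bit near its centre, and comparing its intensity to the bar and background intensities) can be sketched on a one-dimensional strip of pixel intensities. This is a simplified model, not the full contour-based pipeline; the darkest-sample estimate of the bar intensity is an assumption:

```python
def read_code(row, n_bits, background):
    """Read an n-bit code from a 1-D strip of pixel intensities spanning
    the alignment bar.  Each bit is sampled at the centre of its cell
    and classified as 1 if it is closer to the bar (dark) intensity
    than to the background intensity, else 0."""
    cell = len(row) / n_bits          # pixels per bit, from the bar length
    bar_intensity = min(row)          # darkest sample approximates the bar
    bits = []
    for i in range(n_bits):
        sample = row[int((i + 0.5) * cell)]
        bits.append(1 if abs(sample - bar_intensity) <
                         abs(sample - background) else 0)
    return bits

# A synthetic 5-bit strip (0 = dark ink, 255 = light background):
strip = [0] * 10 + [255] * 10 + [0] * 30
print(read_code(strip, 5, background=255))  # -> [1, 0, 1, 1, 1]
```

In a full implementation the strip would be extracted between two adjacent corners of a closed contour found by edge extraction and connected-component labeling, as the preceding steps describe.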
In some embodiments, the shape, alignment bars and codes may be repetitive, so it is likely that more than one sequence of the pattern appears in the image. A probability function for the pattern may be used to verify the object classification by taking into account, not only the presence of a code or pattern and the number of times it appears, but also the number of adjacent sequences or other spatial based relation between such repetitions such as distance or dispersion. The pattern with the highest probability may be chosen as representing the pattern associated with the correct code and the associated object. In some embodiments, this procedure may be applied in consecutive frames of an image and the confidence of classification may be increased if a same pattern is selected on a few or consecutive frames.
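The probability-style selection described above, which favors a code both for the number of times it repeats and for spatial adjacency of its repetitions, can be sketched as a weighted vote. The adjacency bonus weight and the (code, position) input format are illustrative assumptions:

```python
from collections import Counter

def most_probable_code(detections, adjacency_bonus=0.5):
    """Pick the code with the highest score: each detection counts once,
    and each pair of spatially adjacent repetitions of the same code
    adds a bonus.  `detections` is a list of (code, position_index)
    pairs; positions differing by 1 are treated as adjacent units."""
    scores = Counter()
    by_code = {}
    for code, pos in detections:
        scores[code] += 1.0
        by_code.setdefault(code, []).append(pos)
    for code, positions in by_code.items():
        positions.sort()
        adjacent = sum(1 for a, b in zip(positions, positions[1:])
                       if b - a == 1)
        scores[code] += adjacency_bonus * adjacent
    return max(scores, key=scores.get)

# Code 23 repeats in adjacent units; 17 repeats but far apart:
detections = [(23, 4), (23, 5), (17, 0), (17, 9)]
print(most_probable_code(detections))  # -> 23
```

Accumulating such scores over consecutive frames would implement the confidence increase the text describes for multi-frame classification.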
In some embodiments, shape patterns may be attached to objects such as empty cards with no figure on them, and the user may adhere an image or draw an image on the blank cards. For example, a pattern associated in memory with "My Pet", may be attached to a picture of a user's pet, and the blank card may be recognized by the system as being associated with the pet.
In some embodiments, patterns of shapes may be camouflaged or even non-discernible to the human eye, as they may be implanted within a picture or item that is observed by the imager in a way which is integrated into the printed figure itself. For example, a letter, number or figure may be drawn to include a shaped pattern that may be identified by the imager but hidden from obvious discernment by a user. For example, shape patterns within a picture or on an item may be printed using an ink that is reflective outside the visible light spectrum, such as in the IR or near-IR, and the imager that captures the image may be an IR camera. In some embodiments, a micro-code version of shapes and codes may be affixed to an object to create a more camouflaged version of coded objects.
In some embodiments, detection of a shape that includes a code may include a subtraction process to isolate objects in a second image that were not included in a first or prior image, on the assumption that the object that includes the shape was introduced into the view of the imager subsequent to the capture of the first image. Such subtraction may reduce the number of attempts or calculations required to detect the object in a series of images.
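The frame-subtraction step can be sketched as a thresholded per-pixel difference between the prior and current images, leaving a mask of newly introduced pixels to which code detection can be restricted. The threshold value and list-of-lists image representation are illustrative assumptions:

```python
def changed_pixels(first, second, threshold=30):
    """Subtract a prior frame from the current one, keeping only pixels
    whose intensity changed by more than `threshold`.  Returns a binary
    mask marking regions introduced since the first frame."""
    return [
        [1 if abs(b - a) > threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(first, second)
    ]

before = [[10, 10, 10], [10, 10, 10]]
after_ = [[10, 200, 10], [10, 200, 10]]
print(changed_pixels(before, after_))  # -> [[0, 1, 0], [0, 1, 0]]
```

Only the masked region would then be passed to edge extraction and code reading, reducing per-frame computation as the text suggests.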
In some embodiments, the recognition of the coded objects may be used as part of an educational curriculum for children or students. The recognition of coded objects may be associated with answers to questions, for example, upon an instruction to "Hold up an animal card!", the student holds up an animal card coded with the monochromatic shapes that are associated with cards showing pictures or names of animals. The system may detect the codes and recognize the code number and respond with "That's a bear!" This process may be expanded to a classroom equipped with a computer and one or more imagers where the computer recognizes different objects or cards for a full class experience with multiple students, one or more of whom may hold up a correct or incorrect card or object.
In some embodiments, recognition of the objects having shapes and codes may be used as part of an interaction with a customer. For example: a customer waiting in
a restaurant who wants to get the attention of a waiter may hold up a coded image or other coded object, and a camera in the restaurant may detect the coded image and send the location of the table along with the code number, which can be associated with "Please bring me the check" or other message.
In some embodiments, detection or identification of a particular code may elicit a computerized or automated response, by for example a mobile phone whose camera may detect a presence of a shape or code and display on the phone's screen an animation sequence related to that object associated with the code, or dial a number related to the object.
In another embodiment, coded patches, images or even shirts can be used to track the location of people or animals on a farm. Different locations in the surroundings may have cameras connected to a computer which recognizes the people or animals when they cross a field of view of a camera.
In another embodiment, coded objects or cards can be used to navigate to different websites by showing the coded objects or cards to a webcam associated with a computer, for example, showing a coded car from the movie "Transformers™" may send the browser in a computer to the relevant Transformers™ website or showing a coded image of a Barbie™ doll may direct the browser to the Barbie™ website.
Reference is made to Fig. 3, a flow chart of a method in accordance with an embodiment of the invention. Some embodiments may include a method of identifying an object, where such method includes, as in block 300, detecting in an image a repeating reference form and a set of elements, where the set of elements is at a pre-defined distance from and orientation to the repeating reference form. For example, a reference form may be a curved or straight alignment bar, and the elements may jut out from the alignment bar at for example a perpendicular angle. In block 302, a value may be derived from the image of the elements. In block 304, the derived value may be compared to a value stored in a memory, where the stored value is associated with the object.
In some embodiments, an image of the object may be captured at a distance of up to 5 meters from the imager.
In some embodiments, the detecting may include detecting elements that are connected to the alignment bar, and where deriving a value includes deriving a first value from a first set of elements and a second value from a second set of elements,
and comparing the values to determine if they are the same. If the values are not the same, in some embodiments a third value may be selected from a third set of elements and a comparison of the third value to each of the first and second values may be made. A value that is derived from two or more of the sets may be assumed to be the true value.
In some embodiments, the elements may be recognized as elements only if they are connected or at a defined distance or orientation from the alignment bar or reference form.
In some embodiments, if the value derived from the elements matches a value stored or designated from a memory, then a signal may be issued indicating such match to the user.
In some embodiments, the nature or meaning of an element may be derived from a size or length of the element or from a distance of the element from the reference or alignment bar. In some embodiments, a single element of the set of elements may represent either a zero or a 1. In some embodiments, the elements may have different colors and may be differentiated by such colors.
In some embodiments, a series of instructions in the form of, for example, software may be stored in an electronic storage medium and executed by a processor to translate a numeric value into a set of non-numeric binary shapes such as those that may extend from an alignment bar. The processor may orient the shapes at a known position relative to the alignment bar, such as connecting at a perpendicular angle to the bar, and add repeating units of alignment bars and the set of shapes to a print file that includes an image around which are to appear the repeating units. When an image of the object is printed, the repeating units may appear around or on a side or elsewhere on the printed image. The shape of the printed repeating units may conform to an outline of the printed image and may be printed at a given or minimum distance from an edge of the printed image.
In some embodiments, the instructions may store the value represented by the elements along with an association of such value with the printed object.
It will be appreciated by persons skilled in the art that embodiments of the invention are not limited by what has been particularly shown and described hereinabove. Rather, the scope of at least one embodiment of the invention is defined by the claims below.
Claims
1. A method of identifying an object, comprising:
detecting in an image a repeating reference form and a set of elements, said set of elements at a pre-defined distance from and orientation to said repeating reference form;
deriving a value associated with said set of elements; and
comparing said value to a value stored in a memory, where said value stored in said memory is associated with said object.
2. The method as in claim 1, wherein
said detecting comprises detecting a set of said elements connected to each of said detected repeating reference forms, and said deriving said value comprises deriving a first value from a first set of said elements,
and the method further comprising:
deriving a second value from a second set of elements; and determining if said first value is equal to said second value.
3. The method as in claim 2, further comprising deriving a third value from a third set of said repeating elements, and determining if said third value is equal to either of said first value or said second value.
4. The method as in claim 1, wherein said detecting comprises detecting that said set of elements is connected to said reference form.
5. The method of claim 1, further comprising determining an orientation of one of said set of elements relative to one of said repeating reference forms.
6. The method as in claim 1, further comprising issuing a signal that said object is associated with said value.
7. The method as in claim 1, further comprising capturing an image of said object at a distance from an imager of up to 5 meters.
8. The method as in claim 1, further comprising:
identifying a first of said elements and a second of said elements; and differentiating said first element from said second element on the basis of a size of each of said elements relative to said reference form.
9. The method as in claim 8, wherein said identifying comprises identifying said first element as either a 0 or a 1.
10. The method as in claim 1, further comprising:
identifying a first of said elements and a second of said elements; and differentiating said first of said elements from said second of said elements on the basis of a color of said first element and a color of said second element.
11. A system for identifying an object comprising:
an object having appearing thereon repeating reference forms and a set of elements, said set of elements at a pre-defined orientation and distance from one of said repeating reference forms;
an imager to capture an image of at least one of said repeating
reference forms and said set of elements; and
a processor to calculate a value associated with said set of elements and to compare said value to a value stored in a memory.
12. The system as in claim 11, wherein said processor is to
detect a set of elements associated with a first instance of said repeating
reference forms and a set of elements associated with a second instance of said repeating reference forms;
calculate a first value from said first set of elements, and a second value from said second set of elements; and
determine if said first value is equal to said second value.
13. The system as in claim 12, wherein said processor is to derive a third value from a third set of said elements, and determine if said third value is equal to either of said first value and said second value.
14. The system as in claim 11, wherein said processor is to detect that said set of elements is connected to one of said repeating reference forms.
15. The system as in claim 11, wherein said processor is to determine an orientation of said set of elements relative to one of said repeating reference forms.
16. A medium to store instructions, said instructions when executed on a processor resulting in:
translating a numeric value into a set of binary shapes;
orienting said set of shapes at a known position relative to a reference form; and adding repeating units, said units comprising said set of shapes and said reference form, to a print file of an image, said units to appear at a known position relative to said image in said print file.
17. The medium as in claim 16, including instructions that when executed further result in conforming a configuration of said repeating unit to a shape of said image.
18. The medium as in claim 16, including instructions that when executed further result in associating said set of shapes and said value with said image in a memory.
19. The medium as in claim 16, wherein said orienting comprises connecting said shapes to a first side of said reference form at a perpendicular orientation to said side.
20. The medium as in claim 16, wherein said known position comprises at least a minimum distance from said image.
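The method of claims 11, 12, and 16 — translating a numeric value into a set of binary shapes, attaching the shape set to a reference form at a known position, then decoding and comparing detected values — can be sketched as follows. This is a minimal illustration, not the patented implementation: the shape glyphs (`#`/`o`), the function names, and the choice to place the shapes immediately after the reference form are all assumptions made for the example.

```python
def value_to_shapes(value, width=8):
    """Translate a numeric value into a set of binary shapes (claim 16).

    Each bit of the value maps to one of two shapes; here '#' stands in
    for a filled element and 'o' for an open one. The glyphs are
    illustrative only -- the claims do not fix a particular shape pair.
    """
    bits = format(value, f"0{width}b")
    return ["#" if b == "1" else "o" for b in bits]


def shapes_to_value(shapes):
    """Recover the numeric value from a detected shape set (claim 11)."""
    return int("".join("1" if s == "#" else "0" for s in shapes), 2)


def make_unit(value, reference_form="R", width=8):
    """Build one repeating unit: the reference form followed by the shape
    set at a pre-defined position (here, immediately after the form)."""
    return [reference_form] + value_to_shapes(value, width)


def units_match(unit_a, unit_b):
    """Compare the values decoded from two detected instances of the
    repeating unit (claim 12): strip the reference form, decode each
    shape set, and test the two values for equality."""
    return shapes_to_value(unit_a[1:]) == shapes_to_value(unit_b[1:])
```

For example, `units_match(make_unit(42), make_unit(42))` is `True`, while two units encoding different values do not match — mirroring the claim-12 step of determining whether the first value equals the second.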
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US27442709P | 2009-08-17 | 2009-08-17 | |
| PCT/IL2010/000668 WO2011021193A1 (en) | 2009-08-17 | 2010-08-17 | Device and method for identification of objects using morphological coding |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP2467808A1 (en) | 2012-06-27 |
Family
ID=43606696
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP10809632A Withdrawn EP2467808A1 (en) | 2009-08-17 | 2010-08-17 | Device and method for identification of objects using morphological coding |
Country Status (4)
| Country | Link |
|---|---|
| EP (1) | EP2467808A1 (en) |
| JP (1) | JP2013502634A (en) |
| CN (1) | CN102696043A (en) |
| WO (1) | WO2011021193A1 (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2013065045A1 (en) * | 2011-10-31 | 2013-05-10 | Eyecue Vision Technologies Ltd | System for vision recognition based toys and games operated by a mobile device |
| JP5862623B2 (en) * | 2013-08-08 | 2016-02-16 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
| DE102014009686A1 (en) | 2014-07-02 | 2016-01-07 | Csb-System Ag | Method for detecting slaughter-related data on a slaughtered animal |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8301893B2 (en) * | 2003-08-13 | 2012-10-30 | Digimarc Corporation | Detecting media areas likely of hosting watermarks |
| US7204428B2 (en) * | 2004-03-31 | 2007-04-17 | Microsoft Corporation | Identification of object on interactive display surface by identifying coded pattern |
| CN101163234A (en) * | 2006-10-13 | 2008-04-16 | Hangzhou Bird Software Co., Ltd. | Method of implementing pattern recognition and image monitoring using data processing device |
2010
- 2010-08-17 CN CN2010800470100A patent/CN102696043A/en active Pending
- 2010-08-17 WO PCT/IL2010/000668 patent/WO2011021193A1/en active Application Filing
- 2010-08-17 JP JP2012525251A patent/JP2013502634A/en active Pending
- 2010-08-17 EP EP10809632A patent/EP2467808A1/en not_active Withdrawn
Non-Patent Citations (1)
| Title |
|---|
| See references of WO2011021193A1 * |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2011021193A8 (en) | 2011-04-28 |
| WO2011021193A1 (en) | 2011-02-24 |
| CN102696043A (en) | 2012-09-26 |
| JP2013502634A (en) | 2013-01-24 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10713528B2 (en) | | System for determining alignment of a user-marked document and method thereof |
| US8606000B2 (en) | | Device and method for identification of objects using morphological coding |
| US8941487B2 (en) | | Transferring a mobile tag using a light based communication handshake protocol |
| JP4032776B2 (en) | | Mixed reality display apparatus and method, storage medium, and computer program |
| CN1698357B (en) | | Method for displaying an output image on an object |
| CN109583304A (en) | | A kind of quick 3D face point cloud generation method and device based on structure optical mode group |
| US11948042B2 (en) | | System and method for configuring an ID reader using a mobile device |
| JP2013105258A (en) | | Information presentation apparatus, method therefor, and program therefor |
| KR20150039252A (en) | | Apparatus and method for providing application service by using action recognition |
| KR20110128574A (en) | | Method for recognizing human face and recognizing apparatus |
| KR101700120B1 (en) | | Apparatus and method for object recognition, and system including the same |
| CN111353325A (en) | | Key point detection model training method and device |
| CN112200230A (en) | | Training board identification method and device and robot |
| WO2011021193A1 (en) | | Device and method for identification of objects using morphological coding |
| JP6915611B2 (en) | | Information processing equipment, information processing methods and programs |
| US11610375B2 (en) | | Modulated display AR tracking systems and methods |
| KR101360999B1 (en) | | Real time data providing method and system based on augmented reality and portable terminal using the same |
| US20210176375A1 (en) | | Information processing device, information processing system, information processing method and program |
| Beglov | | Object information based on marker recognition |
| CN108363980B (en) | | Sign language translation device and sign language translation method based on 3D imaging technology |
| US20210390325A1 (en) | | System for determining alignment of a user-marked document and method thereof |
| Lámer et al. | | Marker based attendance systems in education process |
| CN109117844A (en) | | A kind of password determines method and apparatus |
| JP5937745B1 (en) | | Image display device, image display method, and program |
| CN117812232A (en) | | Identification control method, device and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | 17P | Request for examination filed | Effective date: 20120316 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
| | DAX | Request for extension of the european patent (deleted) | |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| | 18D | Application deemed to be withdrawn | Effective date: 20140301 |