KR101700120B1 - Apparatus and method for object recognition, and system including the same - Google Patents
- Publication number
- KR101700120B1 (application KR1020150150379A)
- Authority
- KR
- South Korea
- Prior art keywords
- content
- objects
- application
- providing
- arrangement
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/60—Rotation of a whole image or part thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Abstract
Description
The present invention relates to an object recognizing apparatus, and more particularly, to an object recognizing apparatus, and a system including the same, which can capture objects such as tangram blocks by using a camera apparatus and recognize their shape and arrangement.
2. Description of the Related Art [0002] With the development of electronic information technology, digital image processing technology, which processes images obtained electronically by a camera or the like through various mathematical operations, has been developed and is applied in various fields such as the environment and education.
In particular, individuals now carry portable or stationary electronic information terminals equipped with high-performance processors, such as smartphones or tablet PCs, and digital image processing technology is widely utilized in the various applications provided by such terminals, offering new services.
Currently, services provided by applications running on such electronic information terminals include services that capture backgrounds, buildings, and people, identify them through image processing techniques, and provide related information such as location and name, as well as Augmented Reality (AR) services in which a generated image and a photographed image are superimposed on a single screen. To realize such services smoothly and without error, a high recognition rate for the photographed object is a prerequisite, and various image processing techniques have been proposed for this purpose.
The object of the present invention is to improve the accuracy of object recognition, and thereby the reliability of the device, in an object recognition apparatus that provides a tangram ("chilgyo") application on an electronic information terminal such as a smartphone or tablet PC.
In order to solve the above problems, an object recognizing apparatus according to an embodiment of the present invention includes an interface module connected to a camera device and receiving a photographing signal for a plurality of objects; A memory for storing an application including contents based on recognition results of the plurality of objects; An AP executing the application; And a display module for displaying a screen including a design made up of a combination of the plurality of objects corresponding to the execution of the application.
Further, to solve the aforementioned problems, the present invention provides a system comprising: a plurality of objects; a camera device for photographing an imaging area in which the plurality of objects are disposed; and an object recognition device connected to the camera device and providing content based on recognition results of the plurality of objects.
According to still another aspect of the present invention, there is provided a method for recognizing an object, comprising: receiving an imaging signal for a plurality of objects from a connected camera device; Executing an application including content based on recognition results of a plurality of objects; Displaying a screen including a design made of a combination of the plurality of objects corresponding to the execution of the application; And advancing the content corresponding to the photographing signal and providing a result through the screen.
According to the object recognition apparatus and method, and the system including the same, according to embodiments of the present invention, the image signal obtained through the camera device is preprocessed, and the object image included in the preprocessed image data is transformed and rotated, thereby improving recognition accuracy and the reliability of the system as a whole.
FIG. 1 is a diagram showing an example of a system using an object recognizing apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating a structure of an object recognizing apparatus according to an embodiment of the present invention.
3 is a diagram illustrating a structure of an application according to an embodiment of the present invention.
4 is a diagram illustrating a structure of a classification table of a database included in the object recognizing apparatus according to the embodiment of the present invention.
5A and 5B are views illustrating a method of recognizing an object through the object recognition apparatus according to an embodiment of the present invention.
6 is a diagram illustrating an overall content providing procedure according to an embodiment of the present invention.
7 is a diagram illustrating a method of recognizing objects according to an embodiment of the present invention.
Figures 8 and 9A to 9C illustrate block matching content according to an embodiment of the present invention.
Prior to the description, when an element is referred to as "comprising" or "including" another element throughout the specification, it is to be understood that it may further comprise other elements unless stated otherwise. It should also be noted that terms such as "part," "module," and "component," as used in the specification, mean a unit that processes at least one function or operation, and may be implemented in hardware, software, or a combination of the two.
Furthermore, the term "embodiment" is used herein to mean serving as an example, instance, or illustration, and the subject matter of the invention is not limited by such examples. Terms such as "including" and "having" do not exclude additional or different components when used in the claims, and are used in a manner similar to the term "comprising."
The various techniques described herein may be implemented with hardware or software, or with a combination of both where appropriate. The terms "component," "module," "system," and the like as used herein likewise refer to computer-related entities: hardware, a combination of hardware and software, or software in execution. For example, a program module may be composed of one component or a combination of two or more components, and may be, but is not limited to, a process running on a processor, an object, an executable, or a thread of execution. By way of example, both an application executed in the electronic information terminal and the hardware can be organized on a module basis. One or more program modules may reside in a running process and/or thread, may be stored in one physical memory, or may be distributed across two or more memories and recording media.
Hereinafter, an object recognition apparatus and method according to a preferred embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a diagram showing an example of a system using an object recognizing apparatus according to an embodiment of the present invention.
The system (1) illustrated in the figure is directed to learning or play instruction for infants and young children; it uses an electronic information terminal such as a smartphone or tablet PC to provide block matching content such as a tangram game, so that objects such as blocks placed directly on the floor or a table allow the related content to proceed.
For example, the installed application performs preprocessing, such as noise removal, on the photographed signal, determines the type of each block through color detection, and extracts the corresponding figures. Next, the arrangement of the objects is recognized through transformation and rotation of the extracted figures, and the content proceeds according to whether the objects match the design presented in the content.
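The color-detection step described above can be sketched as a nearest-color classification. This is an illustrative assumption about how block types could be identified from color; the palette values and block IDs below are not taken from the patent.

```python
# Hypothetical sketch of the color-based block identification step:
# classify a measured RGB color by nearest reference color
# (squared Euclidean distance). The reference palette is assumed.

REFERENCE_COLORS = {
    "B1": (154, 95, 166),  # purple triangle (web color #9A5FA6)
    "B2": (255, 0, 0),     # red triangle (assumed)
    "B6": (0, 128, 0),     # green square (assumed)
}

def classify_block(rgb):
    """Return the ID of the reference block whose color is closest to rgb."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE_COLORS, key=lambda bid: dist2(REFERENCE_COLORS[bid], rgb))

print(classify_block((150, 100, 160)))  # B1
```

A real implementation would average pixel colors inside each detected contour before classification, which makes the comparison robust to per-pixel noise.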
A detailed description of the structure of the application installed in the object recognition device is given below with reference to FIG. 3.
That is, when the system using the object recognition apparatus of the present invention provides content such as learning or play to a child, it correctly recognizes the blocks manually arranged by the child through image processing techniques and reflects the result in the content, thereby providing learning and play content that develops children's intelligence and interest.
Although not shown, the object recognition apparatus of the present invention may be connected to a content providing server (not shown) over an information communication network, which collects scores according to content progress from a plurality of object recognition apparatuses and provides statistical information thereon, thereby further providing services associated with a plurality of users.
Hereinafter, a structure of an object recognizing apparatus according to an embodiment of the present invention will be described in detail with reference to the drawings.
FIG. 2 is a diagram illustrating a structure of an object recognizing apparatus according to an embodiment of the present invention.
Referring to FIG. 2, an object recognition apparatus according to an embodiment of the present invention receives a photographing signal from an external camera apparatus (not shown) and recognizes the type and arrangement state of the objects included in the photographing signal.
The AP (Application Processor) 33 is a system-on-chip (SoC) integrating a central processing unit (CPU), a graphics processing unit (GPU), and the like. In particular, the AP executes the application stored in the memory.
For example, the photographing signal is an image of the objects placed on the photographing area P/A.
Further, although not shown, a communication module (not shown) may be provided to connect to an external source for updates of the content installed in the memory and for exchanging other information.
Hereinafter, the structure of the application installed in the object recognizing apparatus according to the embodiment of the present invention will be described in detail with reference to the drawings.
3 is a diagram illustrating a structure of an application according to an embodiment of the present invention.
Referring to FIG. 3, an application according to an embodiment of the present invention includes an image processing module for performing image processing on the photographing signal, and a content providing module for carrying out the procedure of the content.
In detail, when the contrast between the background and the blocks is small in an image taken under the lighting conditions of the place where the shooting area is defined, or when noise affects the colors or outlines, it is difficult to extract the accurate original color and shape of the objects; to increase recognition accuracy, the preprocessing unit therefore removes such noise from the photographing signal.
The object discrimination unit determines the type of each object based on the color information, and the figure extracting unit extracts a figure corresponding to the object, and its arrangement, based on the outline information.
More specifically, the output coordinates O(x', y'), obtained through a geometric transformation of the input coordinates I(x, y) based on the contour data of the object input to the figure extracting unit 113, are defined by the following Equation (1).
Geometric transformations include nonlinear transformations, which bend the original image, and linear transformations, which do not. Because the image data deforms the shape of the blocks according to the angle of the camera device, the transformation is applied so that blocks arranged on a plane appear as if photographed from the front.
For the nonlinear geometric transformation, for example warping, when OpenCV is used, the getPerspectiveTransform() function is first called to derive the transform matrix, from which the output coordinates O(x', y') are computed. Accordingly, the output coordinates O(x', y') are information from which the position of each block on the plane can be determined.
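The 3x3 matrix that OpenCV's getPerspectiveTransform() returns maps input coordinates to output coordinates with a projective normalization (division by w). A minimal pure-Python sketch of that mapping, using an assumed matrix rather than one derived from real block corners:

```python
# Apply a 3x3 perspective (homography) matrix H to an input coordinate
# I(x, y) to obtain the output coordinate O(x', y'). H here is an
# illustrative matrix (pure translation), not a real calibration result.

def apply_homography(H, x, y):
    """Map (x, y) through H; divide by w for the projective normalization."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Pure translation by (10, 20):
H = [[1, 0, 10],
     [0, 1, 20],
     [0, 0, 1]]
print(apply_homography(H, 5, 5))  # (15.0, 25.0)
```

With a genuine perspective matrix the bottom row is not (0, 0, 1), and the division by w is what straightens a block photographed at an angle into its front view.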
Next, the block is rotated to determine its layout. Referring to FIGS. 5A and 5B, when the object is a triangular block B3, its outline is obtained through the warping process described above, and the center point of the block and its three vertices (vertex1 to vertex3) are identified. Extension lines (L1 to L3) between the center point and each vertex, and the angles between them, are then calculated; the angle between the extension line L1 and the horizontal indicates the arrangement (rotation) of the block.
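The center-point-to-vertex angle computation just described can be sketched as follows; the triangle coordinates are illustrative, and only the idea (centroid, extension line, angle against the horizontal) comes from the description.

```python
# Sketch of the rotation-estimation step: compute the centroid of a
# triangular block's vertices and the angle between the centroid->vertex1
# extension line and the horizontal axis.
import math

def centroid(vertices):
    xs, ys = zip(*vertices)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def rotation_angle(vertices):
    """Angle (degrees) between the line from the centroid to the first
    vertex and the horizontal axis."""
    cx, cy = centroid(vertices)
    vx, vy = vertices[0]
    return math.degrees(math.atan2(vy - cy, vx - cx))

tri = [(0.0, 2.0), (-1.0, -1.0), (1.0, -1.0)]  # upright triangle
print(rotation_angle(tri))  # 90.0 (vertex1 straight above the centroid)
```

Comparing this angle against the reference angle stored for each block (see the FEATURE_VALUE field of FIG. 4) would tell how far the physical block is rotated from its canonical pose.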
The content providing module includes a plurality of components. The pattern component provides the design, together with an associated image and sound. The block design is an image that combines blocks into the shapes of letters, animals, and objects so that children can easily guess them.
The blind component blacks out the design for the progress of the content. At the beginning of execution of the tangram application, all blocks are blinded so that the user can see only the rough shape of the design; as the user arranges the blocks, the blind is removed sequentially from each matched part, displaying its original color.
The hint component provides assistance when the user has difficulty progressing through the content. When the user requests a hint on the displayed screen, the blind over a predetermined block blinks at a predetermined cycle, making its original color visible.
The timer component provides the elapsed time according to the progress of the content, and can calculate the elapsed time at each matching of a block as well as the total progress time. In addition, when a time limit is set, the timer component can reset the content progressed so far and restart the content from the beginning.
The score processing component provides the result according to the progress of the content: it accumulates a score for each block matched during progress and provides the final summed score data at the end of the content. The cumulative score may take into account the elapsed time and the total progress time calculated by the timer component.
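The timer and score bookkeeping described above can be sketched as a small session object. The point value per match and the time-limit behavior are illustrative assumptions; the patent only states that scores accumulate per match and that a time limit may reset the content.

```python
# Sketch of the timer/score components: accumulate a score per matched
# block and track elapsed time against an optional time limit.
import time

class ContentSession:
    def __init__(self, time_limit=None, points_per_match=10):
        self.start = time.monotonic()
        self.time_limit = time_limit      # seconds, or None for no limit
        self.points_per_match = points_per_match  # assumed value
        self.score = 0

    def elapsed(self):
        return time.monotonic() - self.start

    def record_match(self):
        """Called each time a block is matched to the design."""
        self.score += self.points_per_match

    def expired(self):
        """True when a time limit is set and has been exceeded."""
        return self.time_limit is not None and self.elapsed() > self.time_limit

s = ContentSession(time_limit=300)
s.record_match()
s.record_match()
print(s.score)  # 20
```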
The application may further include an application DB that stores, for each of the plurality of objects, at least one of an identification code, shape, color, image, and feature value, and the image processing module refers to this DB during recognition.
4 is a diagram illustrating a structure of a classification table of a database included in the object recognizing apparatus according to the embodiment of the present invention.
FIG. 4 shows an example of a classification data table, used for block matching, for a tangram set divided into seven pieces.
Referring to FIG. 4, the table for block recognition according to an embodiment of the present invention is categorized and stored per block, with the fields "ID", "SHAPE", "COLOR", "IMAGE", and "FEATURE_VALUE".
The "identification code (ID)" is a unique code for identification between blocks, and is assigned to each block without redundancy.
"SHAPE" classifies each block by its shape. A standard tangram set consists of five triangles, one square, and one parallelogram, with triangles of the same size and shape occurring in pairs.
"COLOR" refers to the color of a block and, like the identification code, is assigned uniquely per block. In the drawing, the color classification is defined by a hexadecimal web color code, although other color codes such as RGB or CMYK color codes may be used. As an example, the triangle with identification code B1 can be set to "#9A5FA6" as a web color code, or expressed with an RGB color code such as R: 66, G: 172, B: 255.
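Converting the hexadecimal web color code stored in the COLOR field into an (R, G, B) tuple, so it can be compared with colors measured by the camera, is a one-liner:

```python
# Parse a '#RRGGBB' web color code into an (R, G, B) tuple of ints.

def web_color_to_rgb(code):
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(web_color_to_rgb("#9A5FA6"))  # (154, 95, 166)
```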
"IMAGE" is stored in an image format such as png, jpeg, or bmp to represent the shape of the corresponding block.
"FEATURE_VALUE" defines values for the geometric features of the figure corresponding to each block. The "FEATURE_VALUE" field stores values relating the center point (center_point), the vertices (point), and the angle (angular_value) at each vertex, and the object recognition procedure is performed based on these values.
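The classification table of FIG. 4 could be represented in memory as records keyed by the fields just described. Only the field names come from the description; the concrete values below are illustrative assumptions.

```python
# Illustrative in-memory form of the block classification table
# (fields ID, SHAPE, COLOR, IMAGE, FEATURE_VALUE per the description;
# the values themselves are assumed).

BLOCK_TABLE = [
    {"ID": "B1", "SHAPE": "triangle", "COLOR": "#9A5FA6",
     "IMAGE": "b1.png",
     "FEATURE_VALUE": {"center_point": (0, 0), "points": 3,
                       "angular_value": (90, 45, 45)}},
    {"ID": "B6", "SHAPE": "square", "COLOR": "#FFCC00",
     "IMAGE": "b6.png",
     "FEATURE_VALUE": {"center_point": (0, 0), "points": 4,
                       "angular_value": (90, 90, 90, 90)}},
]

def lookup_by_color(color):
    """Return the record whose COLOR field matches, or None."""
    return next((r for r in BLOCK_TABLE if r["COLOR"] == color), None)

print(lookup_by_color("#9A5FA6")["ID"])  # B1
```

In the recognition flow, the object discrimination unit would consult such records by color, then validate the shape against the stored vertex count and angles.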
Hereinafter, a content providing method based on a method of recognizing objects according to an embodiment of the present invention will be described with reference to the drawings. 6 is a diagram illustrating an overall content providing procedure according to an embodiment of the present invention.
Referring to FIG. 6, a user connects the camera device, or an electronic information terminal with a built-in camera module, installs the tangram application of the present invention to prepare the object recognition device, then selects the tangram application icon and runs it (S100). Here, although not shown, procedures for membership registration and device registration on the vendor's web service may precede this step.
Accordingly, the tangram application provides tangram block matching content based on the settings (S110). The content provided here may be a form in which a specific object is shaped by the arrangement of a plurality of blocks, and the child proceeds by directly fitting physical blocks to the presented design.
To this end, while the content is displayed on the screen, the tangram application may transmit a control signal to the connected camera device to drive it, so as to control the photographing of the shooting area in which the objects are placed (S120).
As the photographing progresses, the camera apparatus transmits a photographing signal to the object recognizing apparatus (S130).
Next, the tangram application analyzes the photographing signal, recognizes the type and shape of one or more objects included in the signal, and proceeds with the content according to the recognition result (S140). In detail, the tangram application extracts image data through noise elimination using a filter, performs object recognition through color information and figure determination based on contour detection, and identifies the arrangement, thereby recognizing the types and arrangement states of the blocks on the shooting area; it then judges whether they match the design presented in the content and performs block matching.
Then, when the block matching prescribed by the content is completed, score data according to the progress is calculated and displayed on the screen of the object recognition device in the form of an image or a number (S150). Through the above steps, the object recognition method according to the embodiment of the present invention provides content such as block matching: when a child places blocks according to the design presented in the content, the apparatus recognizes the block types and their placement status and reflects the result in the content.
Hereinafter, a process of matching an object and a graphic presented in a content based on a photographed signal in a method of recognizing objects according to an embodiment of the present invention will be described in detail with reference to the drawings.
7 is a diagram illustrating a method of recognizing objects according to an embodiment of the present invention. Referring to FIG. 7, in the object recognition method of the present invention, a photographing signal generated by the camera device is received, and the pre-processing unit (111 in FIG. 3) preprocesses it and outputs image data (S141).
Next, the object determination unit (112 in FIG. 3) identifies the type of each block included in the image data (S142). In this step, the type of the object is determined by referring to the identification table stored in the application DB.
Subsequently, the figure extracting unit (113 in FIG. 3) extracts the figure corresponding to each object included in the image data (S143). The image data includes the outline information of the object, and the figure is extracted from the outline information by geometric transformation. Here, the geometric features of the center point and the center-point-to-vertex lines of the figure are used, and the arrangement of the object is further determined in consideration of the outline angles.
Next, the matching unit (114 in FIG. 3) proceeds with the content according to whether the figure arrangement matches the design (S144). A tangram set has seven blocks, and the matching of each block against the design is determined according to the user's manipulation until all the blocks are arranged and the content proceeds.
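The matching step (S144) can be sketched as comparing each recognized (block type, position) pair against the design layout. The design layout, target coordinates, and tolerance below are illustrative assumptions, not values from the patent.

```python
# Sketch of the matching step: report which target positions of the
# design are filled by a block of the correct type. Layout is assumed.

DESIGN = {  # target position name -> required block ID (assumed layout)
    "P1": "B4",
    "P2": "B1",
}
TARGETS = {"P1": (0.0, 0.0), "P2": (2.0, 0.0)}  # assumed coordinates

def match_blocks(detections, tolerance=0.5):
    """detections: list of (block_id, (x, y)); returns matched target names."""
    matched = set()
    for block_id, (x, y) in detections:
        for name, (tx, ty) in TARGETS.items():
            close = abs(x - tx) <= tolerance and abs(y - ty) <= tolerance
            if close and DESIGN[name] == block_id:
                matched.add(name)
    return matched

print(sorted(match_blocks([("B4", (0.1, -0.2)), ("B1", (2.1, 0.0))])))  # ['P1', 'P2']
```

A correct block at the wrong position, or the wrong block at a correct position (as in the B1-at-P3 case described later), simply produces no match for that target.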
Hereinafter, aspects of the content provided by the object recognition apparatus, and a system including the same, according to an embodiment of the present invention will be described by way of example with reference to the drawings.
Figures 8 and 9A to 9C illustrate block matching content according to an embodiment of the present invention.
Referring to FIGS. 8 and 9A to 9C, when the application of the object recognition apparatus according to the embodiment of the present invention is executed, it displays a design 40 composed of combined blocks as tangram block matching content. At the beginning of execution, the entire design 40 is displayed in black by the blind component; when the user places blocks at the positions corresponding to the design 40, the blind effect is removed from those parts.
As the application is executed, as shown in FIG. 9A, when the user places the block B4 on the photographing area P/A, the application recognizes it and removes the blind from the corresponding area of the design 40.
As shown in FIG. 9B, the user advances the content by placing the next block B1 on the photographing area P/A. When B1 is placed at a position other than its proper position P2, the application recognizes the type and arrangement of the block and determines that the block and the design do not match. The block recognition process proceeds in real time as the camera device photographs the area.
Even if the block is placed at position P3, which corresponds to a block of the same size as B1, the application determines from the color information that the block and the design do not match.
Next, when the user moves the block B1 from position P1 and places it at the corresponding position P2, the application recognizes this, determines that the type and arrangement of the block B1 match, and removes the blind at that position of the design.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is neither limited to the specific embodiments described nor restricted to specific platforms.
Further, the program modules disclosed in the embodiments of the present invention may be embodied in the form of a recording medium containing executable instructions executed by an electronic information terminal. The medium readable by the electronic information terminal may be any available medium that can be accessed by the application processor and the memory built into the apparatus, including volatile and nonvolatile media and removable and non-removable media. The readable medium may include both storage media in stand-alone form and communication media.
In addition, storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as processor-readable instructions, data structures, program modules, or other data. Communication media typically include processor-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or any other transport mechanism.
While many details are set forth in the foregoing description, they should be construed as illustrating preferred embodiments rather than limiting the scope of the invention. Accordingly, the scope of the invention should be determined not by the embodiments described, but by the appended claims and their equivalents.
1: System 10: Tangram block
20: camera device 30: object recognition device
40: Design P/A: Shooting area
Claims (16)
The application includes an image processing module for performing image processing on the photographing signal; And a content providing module for performing a procedure for the content,
Wherein the image processing module comprises: a preprocessor for preprocessing the photographing signal and outputting image data having color information and outline information; An object discrimination unit for discriminating the type of the object based on the color information; A figure extracting unit for extracting a figure and an arrangement of the figure corresponding to the object based on the outline information; And a matching unit for providing the content providing module with a result according to the arrangement of the figure and the matching of the figure,
Wherein the figure extracting unit extracts the figure by converting the object, according to the outline information, into coordinate values in a state in which the object is arranged on a plane, and extracts the arrangement of the figure according to the geometric features of the figure, the arrangement being calculated from an extension line between the center point of the figure and a vertex and extracted corresponding to the angle between the extension line and the horizontal line.
Wherein the content providing module comprises a plurality of components,
The plurality of components comprising:
A pattern component for providing the pattern;
A blind component for providing blinds for said artwork in accordance with said content progression;
A hint component for displaying the blind in a blinking manner according to a user's request;
A timer component for calculating and displaying the progress time of the content; And
And a score processing component for calculating and displaying score data as the content progresses.
The application comprises:
Further comprising an application DB for storing at least one of an identification code, a shape, a color, an image, and a feature value for the plurality of objects.
A camera device for photographing an imaging area in which the plurality of objects are disposed; And
And an object recognizing device connected to the camera device and providing contents based on recognition results of the plurality of objects,
The object recognizing device
An interface module for receiving a photographing signal for the plurality of objects from the camera device; A memory for storing an application including content based on the plurality of objects; A display module for displaying a screen including a design made up of a combination of the plurality of objects according to the execution of the application; And an AP for executing the application and progressing the content based on recognition results of the plurality of objects according to the photographing signal,
The application includes an image processing module for performing image processing on the photographing signal; And a content providing module for performing a procedure for the content,
Wherein the image processing module comprises: a preprocessor for preprocessing the photographing signal and outputting image data having color information and outline information; An object discrimination unit for discriminating the type of the object based on the color information; A figure extracting unit for extracting a figure and an arrangement of the figure corresponding to the object based on the outline information; And a matching unit for providing the content providing module with a result according to the arrangement of the figure and the matching of the figure,
Wherein the figure extracting unit extracts the figure by converting the object, according to the outline information, into coordinate values in a state in which the object is arranged on a plane, and extracts the arrangement of the figure according to the geometric features of the figure, the arrangement being calculated from an extension line between the center point of the figure and a vertex and extracted corresponding to the angle between the extension line and the horizontal line.
Wherein the plurality of objects comprises:
Characterized in that the plurality of objects consists of five triangular blocks, a square block, and a parallelogram block, each having a different color.
Receiving photographing signals for a plurality of objects from the camera device;
Executing an application including content related to the plurality of objects;
Displaying a screen including a design made of a combination of the plurality of objects corresponding to the execution of the application; And
And proceeding with the content corresponding to the recognition result of the object included in the photographing signal and providing the progress result through the screen,
Wherein the step of proceeding with the content corresponding to the recognition result of the object included in the photographing signal and providing the progress result through the screen comprises:
Pre-processing the photographing signal to output image data having color information and outline information; Determining a type of the object based on the color information; Extracting a figure and an arrangement of the figure corresponding to the object based on the outline information; And reflecting the result of the arrangement of the figure and the matching of the figure on the screen,
Wherein the step of extracting the figure and the arrangement of the figure corresponding to the object on the basis of the outline information includes: converting the photographed object, according to the outline information, into coordinate values in a state of being arranged on a plane and extracting the figure; And extracting the arrangement of the figure according to the geometric features of the figure,
Wherein the arrangement of the figure is calculated from an extension line between the center point of the figure and a vertex, and corresponds to the angle between the extension line and the horizontal line.
The method of claim 1,
Providing the graphic;
Blind processing of the graphic according to progress of the content;
Displaying the blind in a flashing manner according to a user's request;
Calculating and displaying a progress time of the content; And
And when the progress of the content is completed, calculating and displaying score data according to progress.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150150379A KR101700120B1 (en) | 2015-10-28 | 2015-10-28 | Apparatus and method for object recognition, and system inculding the same |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150150379A KR101700120B1 (en) | 2015-10-28 | 2015-10-28 | Apparatus and method for object recognition, and system inculding the same |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101700120B1 true KR101700120B1 (en) | 2017-01-31 |
Family
ID=57990691
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150150379A KR101700120B1 (en) | 2015-10-28 | 2015-10-28 | Apparatus and method for object recognition, and system inculding the same |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101700120B1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10105045A (en) * | 1996-09-26 | 1998-04-24 | Sony Corp | Forming operation teaching device and information recording medium for forming operation teaching device |
KR20060115023A (en) * | 2005-05-03 | 2006-11-08 | (주)아이미디어아이앤씨 | Image code and method and apparatus for recognizing thereof |
KR20070050878A (en) * | 2004-05-28 | 2007-05-16 | 내셔널 유니버시티 오브 싱가포르 | An interactive system and method |
KR20100112309A (en) * | 2009-04-09 | 2010-10-19 | 의료법인 우리들의료재단 | Method and system for automatic leading surgery position and apparatus having surgery position leading function |
- 2015-10-28: application KR1020150150379A filed in KR; patent KR101700120B1 (en), active, IP Right Grant
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190011034A (en) * | 2017-07-24 | 2019-02-01 | 한동대학교 산학협력단 | Educational Game Device Based on Augmented Reality And Operating Method |
KR101985367B1 (en) * | 2017-07-24 | 2019-06-25 | 한동대학교 산학협력단 | Educational Game Device Based on Augmented Reality And Operating Method |
WO2021001692A1 (en) * | 2019-07-04 | 2021-01-07 | Bilous Nazar | Lighting systems and methods for displaying colored light in isolated zones and displaying information |
KR102083787B1 (en) * | 2019-10-11 | 2020-03-03 | 두꺼비학교 협동조합 | System for cognitive learning of infirm |
WO2021125472A1 (en) * | 2019-12-16 | 2021-06-24 | 김현기 | Online mission gaming device, online mission game system, and control method for online mission game system |
KR102241435B1 (en) * | 2020-06-26 | 2021-04-15 | 김현기 | Online Mission Game Machine, Online Mission Game System and Control Method of Online Mission Game System |
Similar Documents
Publication | Title |
---|---|
KR101700120B1 (en) | Apparatus and method for object recognition, and system inculding the same |
US9669312B2 (en) | System and method for object extraction |
CN106228628B (en) | Check-in system, method and device based on face recognition |
US11830230B2 (en) | Living body detection method based on facial recognition, and electronic device and storage medium |
CN105283905B (en) | Robust tracking using point and line features |
US20170228880A1 (en) | System and method for object extraction |
US20140357369A1 (en) | Group inputs via image sensor system |
CN112543343B (en) | Live broadcast picture processing method and device based on live broadcast with wheat |
WO2013158750A2 (en) | System and method for providing recursive feedback during an assembly operation |
KR20150039252A (en) | Apparatus and method for providing application service by using action recognition |
AU2020309094B2 (en) | Image processing method and apparatus, electronic device, and storage medium |
CN111833457A (en) | Image processing method, apparatus and storage medium |
CN111325107A (en) | Detection model training method and device, electronic equipment and readable storage medium |
CN110751728B (en) | Virtual reality equipment with BIM building model mixed reality function and method |
CN112967180A (en) | Training method for generating countermeasure network, and image style conversion method and device |
US8606000B2 (en) | Device and method for identification of objects using morphological coding |
CN111079470B (en) | Method and device for detecting human face living body |
CN112200230A (en) | Training board identification method and device and robot |
CN116899205A (en) | Interaction method and device for building block game, electronic equipment and storage medium |
CN109803450A (en) | Wireless device and computer connection method, electronic device and storage medium |
CN112598728A (en) | Projector attitude estimation and trapezoidal correction method and device, projector and medium |
CN108965859B (en) | Projection mode identification method, video playing method and device and electronic equipment |
JP2012226085A (en) | Electronic apparatus, control method and control program |
EP2467808A1 (en) | Device and method for identification of objects using morphological coding |
JP7471030B1 (en) | Display control device, method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| GRNT | Written decision to grant | |