WO2011078596A2 - Method, system, and computer-readable recording medium for adaptively performing image matching according to situations - Google Patents
- Publication number
- WO2011078596A2 (PCT/KR2010/009258)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- situation
- distance
- reference image
- photographed
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
Definitions
- The present invention relates to a method, a system, and a computer-readable recording medium for performing image matching adaptively according to the situation. More specifically, the present invention recognizes the situation of an object that is the target of image matching — such as its shooting distance, shooting location, and shooting time — and adaptively determines the reference images for matching that object accordingly, so that image matching for an object contained in an input image can be performed more accurately and quickly.
- An image matching technique compares an object detected in an image with reference images stored in a predetermined database and determines the reference image most similar to the object as the matching result.
- One object of the present invention is to solve all of the above-mentioned problems.
- Another object of the present invention is to recognize the situation of the object to be matched — its shooting distance, shooting location, shooting time, and so on — and to adaptively determine the reference images for image matching of that object accordingly.
- According to one aspect of the present invention, there is provided a method for adaptively performing image matching according to a situation, comprising: (a) recognizing a situation of an object included in an input image; and (b) determining, according to the recognized situation, an image group including at least one of a plurality of images stored in a database as the reference image group to be matched against the object, wherein the set of images included in the reference image group can change dynamically according to the recognized situation.
- According to another aspect of the present invention, there is provided a system for adaptively performing image matching according to a situation, comprising: a situation recognizer for recognizing a situation of an object included in an input image; and a reference image determiner configured to determine, according to the recognized situation, an image group including at least one of a plurality of images stored in a database as the reference image group to be matched against the object, wherein the set of images included in the reference image group can change dynamically according to the recognized situation.
- According to the present invention, the range of the database searched for reference images can be adaptively determined according to the situation, such as the photographing distance, photographing location, and photographing time of the object.
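The adaptive narrowing described above can be sketched as a simple filter: reference images carry context tags, and only those matching the recognized situation of the object are kept as the matching candidates. This is an illustrative sketch; the class and function names, field values, and flat-list database are assumptions of ours, not structures defined by the patent.

```python
# Sketch: restrict the reference-image search range to images whose stored
# context matches the recognized shooting situation of the object.
from dataclasses import dataclass


@dataclass
class ReferenceImage:
    name: str
    distance_range: str  # e.g. "near" or "far"
    place_type: str      # e.g. "indoor" or "outdoor"
    time_zone: str       # e.g. "day" or "night"


def select_reference_group(images, situation):
    """Keep only reference images whose context matches the situation."""
    return [
        img for img in images
        if img.distance_range == situation["distance_range"]
        and img.place_type == situation["place_type"]
        and img.time_zone == situation["time_zone"]
    ]


db = [
    ReferenceImage("book_cover", "near", "indoor", "day"),
    ReferenceImage("city_hall", "far", "outdoor", "day"),
    ReferenceImage("city_hall_night", "far", "outdoor", "night"),
]

# A building shot outdoors in daytime only needs to be compared against
# far/outdoor/day references, not the whole database.
group = select_reference_group(
    db, {"distance_range": "far", "place_type": "outdoor", "time_zone": "day"}
)
```

Only one of the three stored images survives the filter here, so the subsequent feature-level matching runs against a much smaller candidate set.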
- FIGS. 1 and 2 are views exemplarily illustrating the internal configuration of an image matching system according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating an object included in an input image according to an embodiment of the present invention.
- FIGS. 4 and 5 are diagrams exemplarily illustrating images including an object photographed during a day time zone and a night time zone, respectively, according to an embodiment of the present invention.
- FIG. 6 is a diagram illustrating a configuration of determining a reference image according to an embodiment of the present invention.
- A reference image refers to an image that is compared against an object included in an input image during image matching performed to find an image similar to that object; it may be stored in a predetermined database.
- The input image may be digital data for which photographing has been completed, or digital data shown on the screen in a preview state before photographing.
- Any device equipped with a microprocessor and having computing capability — for example, a personal computer (desktop computer, notebook computer, etc.), a server, a workstation, a PDA, a web pad, a mobile phone, or a camera device — may be adopted as the image matching system 100 of the present invention.
- FIG. 1 is a diagram illustrating an internal configuration of an image matching system according to an embodiment of the present invention.
- The image matching system 100 may include a situation recognition unit 110, a reference image determination unit 120, a database 130, a communication unit 140, and a controller 150.
- At least some of the situation recognition unit 110, the reference image determination unit 120, the database 130, the communication unit 140, and the controller 150 may be program modules that communicate with an external system (not shown). Such program modules may be included in the image matching system 100 in the form of an operating system, application modules, or other program modules, and may be physically stored on various known storage devices.
- these program modules may be stored in a remote storage device that can communicate with the image matching system 100.
- Program modules include, but are not limited to, routines, subroutines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types described below in accordance with the present invention.
- The situation recognition unit 110 recognizes the situation of an object included in an input image. That is, by recognizing situations such as the distance from the photographing position to the actual position of the object, the place where the object is photographed, and the time zone in which the object is photographed, the reference images targeted for image matching can be determined adaptively for each situation.
- the situation recognizer 110 may include a distance recognizer 111, a place recognizer 112, and a time recognizer 113.
- The distance recognizing unit 111 may recognize, in an input image including an object targeted for image matching, the distance from the position where the object is photographed to the actual position of the object.
- The distance from the photographing position to the actual position of the object may vary depending on the subject being photographed.
- Small-sized objects such as writing instruments and books tend to be photographed at a distance of less than 1 m.
- Medium-sized objects, such as people and cars, tend to be photographed at a distance of 1 m or more and 10 m or less, and large-sized objects such as buildings and landscapes tend to be photographed at a distance of tens to hundreds of meters.
- For example, when performing image matching on a small-sized object such as a writing instrument or book photographed at a short distance, the depth of the object is not large, so a planar matching technique such as an affine transform can be applied. In contrast, when performing image matching on a large-sized object such as a building or landscape photographed at a long distance, the depth of the object is large, so a planar matching technique such as an affine transform is not sufficient and a non-planar matching technique (e.g., one suited to street-view imagery) may be required; repetitive patterns (e.g., multiple windows in a building) can further complicate the matching.
- In order to recognize the distance from the photographing location to the actual location of the object, the distance recognizing unit 111 may use a predetermined distance recognition technique.
- As such a distance recognition technique, reference may be made to the paper "Depth Estimation using Monocular and Stereo Cues" by Ashutosh Saxena et al. (the content of which is to be considered incorporated herein in its entirety). The paper describes a method for estimating the depth of a monocular or stereo image using cues such as texture variation, blur, focus change, and the known size of objects in the photographed image.
- The distance recognition technology applicable to the present invention is not limited to the method described in the above paper; various modified techniques may be applied to implement the present invention.
- Another example is a technique of recognizing the distance from the photographing position to the actual object by referring to how much the position and size of the object change in the captured image as the photographing device moves slightly. That is, as the photographing device moves, an object located relatively close to the photographing position changes its size or position in the captured image relatively greatly, whereas an object located relatively far from the photographing position changes relatively little for the same movement; the distance from the photographing position to the actual object can be recognized in consideration of this.
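The parallax cue just described can be made concrete under a pinhole-camera assumption: for a small sideways camera translation t, an object at depth Z shifts by roughly f·t/Z pixels in the image, so depth can be recovered as Z ≈ f·t/shift. The pinhole model, the focal length, and all numeric values below are illustrative assumptions of ours, not figures from the patent.

```python
# Sketch of the parallax-based distance cue: a nearby object shifts far more
# in the image than a distant one for the same small camera translation.
def depth_from_parallax(focal_px: float, camera_shift_m: float,
                        pixel_shift: float) -> float:
    """Estimate object distance (in meters) from the image shift caused by a
    known sideways camera translation, using the pinhole approximation
    shift ≈ focal_px * camera_shift_m / Z  =>  Z ≈ focal_px * camera_shift_m / shift."""
    if pixel_shift <= 0:
        raise ValueError("object must shift in the image to estimate depth")
    return focal_px * camera_shift_m / pixel_shift


# A 2 cm camera move shifts a close-up book by 40 px but a distant
# building by only 0.5 px (assumed measurements):
near = depth_from_parallax(focal_px=1000, camera_shift_m=0.02, pixel_shift=40)   # 0.5 m
far = depth_from_parallax(focal_px=1000, camera_shift_m=0.02, pixel_shift=0.5)   # 40 m
```

The large gap between the two estimates is exactly the effect the text relies on: image-space motion falls off inversely with distance, so even a coarse shift measurement separates near objects from far ones.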
- FIG. 3 is a diagram illustrating an object included in an input image according to an embodiment of the present invention.
- The input image may include a book 310, which is an object located relatively close to the photographing position, and a building 320, which is an object located relatively far from the photographing position.
- The distance recognizer 111 estimates and recognizes the distances to the book 310 and the building 320, respectively, so that the reference images targeted for image matching can be determined adaptively for each of them.
- The place recognizing unit 112 may recognize the place where the object targeted for image matching is photographed (that is, the type of place where the object is photographed).
- images captured at a specific location generally contain objects that are more likely to exist at that particular location. For example, images captured indoors may contain objects such as a desk.
- Images taken outdoors are likely to contain objects such as mountains, seas, and buildings; images taken underground are likely to contain objects such as lighting fixtures and elevators; and images taken on the ground are likely to contain objects such as buildings and cars.
- In order to recognize the place where the object included in the input image is photographed (that is, the type of place where the object is photographed), the place recognition unit 112 may use a predetermined place recognition technique.
- As such a place recognition technique, reference may be made to the paper "Recognizing Indoor Scenes" by Ariadna Quattoni et al., published at the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (the content of which is to be considered incorporated herein in its entirety). The paper describes a method of recognizing the scene depicted by an image in consideration of the global characteristics of the image and the types of objects it contains.
- The place recognition technology applicable to the present invention is not limited to the method described in the above paper; various modified techniques may be applied to implement the present invention.
- In addition, the place recognizing unit 112 can recognize the place where the object is photographed (that is, the place where the object exists) by referring to a wireless signal such as a received GPS signal. For example, if the strength of the received wireless signal such as a GPS signal exceeds a preset value, the place may be determined to be on the ground or outdoors; if the strength is less than or equal to the preset value, the place may be determined to be underground or indoors. The intensity of natural light and the like may also be referred to in determining whether the object is located underground or indoors.
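The GPS-strength heuristic described above amounts to a single threshold test. A minimal sketch, assuming a decibel-milliwatt reading and an illustrative cutoff value (the patent specifies only "a preset value", not any concrete number):

```python
# Sketch of the place heuristic: strong received GPS signal suggests the
# shot was taken outdoors or above ground; weak signal suggests indoors
# or underground. The -140 dBm cutoff is an assumed, illustrative value.
OUTDOOR_THRESHOLD_DBM = -140.0


def classify_place(gps_strength_dbm: float) -> str:
    """Classify the shooting place from received GPS signal strength."""
    if gps_strength_dbm > OUTDOOR_THRESHOLD_DBM:
        return "outdoor_or_ground"
    return "indoor_or_underground"
```

In practice this coarse cue would be combined with the scene-recognition technique cited above (and, per the text, with natural-light intensity) rather than used alone.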
- The time recognizer 113 may recognize the time zone in which an object targeted for image matching is photographed.
- FIGS. 4 and 5 are diagrams exemplarily illustrating images including an object photographed during a day time zone and a night time zone, respectively, according to an embodiment of the present invention.
- In FIGS. 4 and 5, the feature points of the objects detected in each image are indicated by red cross marks. Referring to these figures, even when the same objects are included in an image captured during the day and an image captured at night, the feature points appearing in the daytime image and those appearing in the nighttime image may differ.
- The reference image determiner 120 may adaptively determine the reference images targeted for image matching according to the situation (or condition) of the object recognized by the situation recognizer 110. That is, by determining only the images corresponding to the object's situation, among the plurality of images stored in the database 130, as the reference images to be matched against the object, the image matching result can be derived more accurately and quickly.
- the reference image determiner 120 may determine the reference images based on the distance from the position at which the object is photographed to the actual position of the object.
- For example, for an object photographed at a short distance, an image containing an object that is likely to be photographed at a short distance, such as a writing instrument, a book, or a person, may be determined as a reference image from among the plurality of reference images stored in the database 130; for an object photographed at a long distance, an image containing an object that is likely to be photographed at a long distance, such as a building or a landscape, may be determined as a reference image.
- Although the foregoing example divides the distance range from the photographing point into two (that is, near and far), the invention is not necessarily limited thereto. A case in which the distance range is divided into three may also be described as an example.
- For an object A recognized as being at a distance of less than 1 m from the photographing point, the reference image targeted for image matching may be determined to be a reference image containing a small-sized object, such as a writing instrument or book, that tends to be photographed at close range within about 1 m.
- For an object B recognized as being at a distance of 1 m or more and less than 10 m from the photographing point, the reference image targeted for image matching may be determined to be a reference image containing a medium-sized object, such as a person or car, that tends to be photographed at an intermediate distance of 1 m to 10 m.
- For an object C recognized as being at a distance of 10 m or more from the photographing point, the reference image targeted for image matching may be determined to be a reference image containing a large-sized object, such as a building or landscape, that tends to be photographed at a long distance of tens to hundreds of meters.
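The three distance ranges above map directly onto a small classification function. The range boundaries (1 m and 10 m) follow the text; the function and category names are our own labels:

```python
# Sketch: map a recognized shooting distance to the size category of the
# reference images that should be searched (three-range variant above).
def distance_range(distance_m: float) -> str:
    """Return the reference-image category for a recognized distance."""
    if distance_m < 1.0:
        return "small"   # writing instruments, books (object A)
    if distance_m < 10.0:
        return "medium"  # people, cars (object B)
    return "large"       # buildings, landscapes (object C)
```

The two-range (near/far) variant described earlier is the same idea with a single boundary instead of two.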
- the reference image determiner 120 may determine the reference image based on the type of the place where the object is photographed.
- For example, for an object photographed underground, images that are unlikely to be photographed underground, such as images of ground-level buildings, may be excluded from the reference images stored in the database 130; likewise, for an object photographed indoors, images that are unlikely to be taken indoors, such as images of cars, may be excluded from the reference images.
- the reference image determiner 120 may determine a reference image based on the time at which the object was photographed.
- For example, for an object photographed during the day time zone, an image taken during the day time zone among the plurality of reference images stored in the database 130 can be determined as a reference image; for an object photographed during the night time zone, an image taken during the night time zone among the plurality of reference images stored in the database 130 can be determined as a reference image.
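The day/night selection just described can be sketched by bucketing the capture timestamp into a time zone and filtering the stored references by that zone. The 06:00–18:00 day window and the (name, zone) pair representation are illustrative assumptions; the patent does not fix the boundary hours.

```python
# Sketch: keep only reference images captured in the same time zone
# (day vs. night) as the input image.
from datetime import datetime


def time_zone_of(ts: datetime) -> str:
    """Bucket a capture timestamp into a day or night zone (assumed 06-18 day)."""
    return "day" if 6 <= ts.hour < 18 else "night"


def filter_by_time(references, capture_time: datetime):
    """references: list of (name, zone) pairs stored in the database."""
    zone = time_zone_of(capture_time)
    return [name for name, z in references if z == zone]


refs = [("plaza_day", "day"), ("plaza_night", "night")]
matches = filter_by_time(refs, datetime(2010, 12, 23, 21, 0))  # a 21:00 shot
```

This keeps the nighttime reference only, which matters because, as FIGS. 4 and 5 illustrate, the same scene yields different feature points by day and by night.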
- Each image may be stored in the database 130 differentially according to the situation information linked to it.
- The situation information stored in association with a reference image may include information about the photographing distance, the photographing place, the photographing time zone, and the like.
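One way to realize the differential storage described above is to index reference images by their situation information, so that a recognized situation retrieves its reference image group in a single lookup. The (distance range, place, time zone) key schema is an illustrative assumption:

```python
# Sketch: store each reference image under its linked situation information,
# so the reference group for a recognized situation is one lookup away.
from collections import defaultdict

index = defaultdict(list)


def store(name: str, distance_range: str, place: str, time_zone: str) -> None:
    """Store a reference image keyed by its situation information."""
    index[(distance_range, place, time_zone)].append(name)


store("textbook", "near", "indoor", "day")
store("skyline", "far", "outdoor", "night")

group = index[("far", "outdoor", "night")]  # -> ["skyline"]
```

A real system would likely add per-axis indexes (so a query can constrain only some of the situation fields), but the single-key lookup shows the core design: context is attached at storage time, not recomputed per query.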
- The database 130 is a concept that includes not only a narrow-sense database but also a broad-sense database including data records based on a computer file system and the like; any collection of data from which records can be extracted by search should be understood as falling within the database of the present invention.
- The database 130 according to an embodiment of the present invention may be included in the image matching system 100 or may be configured separately from the image matching system 100, according to the needs of those skilled in the art implementing the present invention.
- FIG. 6 is a diagram illustrating a configuration of determining a reference image according to an embodiment of the present invention.
- Referring to FIG. 6, it may be assumed that image matching is performed on objects A, B, and C (610, 620, and 630) included in three different input images, respectively.
- For the object A 610, which is likely to have been photographed at a short distance, images 615 containing objects whose probability of being photographed at a short distance — such as writing instruments or books — is higher than a preset threshold may be determined, from among the plurality of reference images in the database, as the reference images targeted for image matching.
- For the object B 620, photographed from the ground, images 625 containing objects with a high probability of being photographed from the ground, such as buildings or landscapes, may be determined as the reference images targeted for image matching.
- For the object C 630, photographed during the 15:00-18:00 time zone, images 635 taken during the 15:00-18:00 time zone among the plurality of reference images in the database may be determined as the reference images targeted for image matching.
- The communication unit 140 enables the image matching system 100 to communicate with an external device such as a mobile communication server (not shown) or a web server (not shown).
- The controller 150 controls the flow of data among the situation recognition unit 110, the reference image determination unit 120, the database 130, and the communication unit 140. That is, by controlling the flow of data from the outside or between the components of the image matching system, the controller 150 causes the situation recognition unit 110, the reference image determination unit 120, the database 130, and the communication unit 140 each to perform its own function.
- Embodiments according to the present invention described above may be implemented in the form of program instructions that may be executed by various computer components, and may be recorded in a computer-readable recording medium.
- the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
- Program instructions recorded on the computer-readable recording medium may be those specially designed and configured for the present invention, or may be known and available to those skilled in the computer software arts.
- Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
- the hardware device may be configured to operate as one or more software modules to perform the process according to the invention, and vice versa.
Claims (21)
- 1. A method for adaptively performing image matching according to a situation, the method comprising: (a) recognizing a situation of an object included in an input image; and (b) determining, according to the recognized situation, an image group including at least one of a plurality of images stored in a database as a reference image group to be matched against the object, wherein the set of images included in the reference image group can change dynamically according to the recognized situation.
- 2. The method of claim 1, wherein the situation of the object includes at least one of a distance from the photographing position of the object to the actual position of the object, a type of place where the object is photographed, and a photographing time of the object.
- 3. The method of claim 1, wherein in step (a), a distance range corresponding to the distance from the photographing position of the object to the actual position of the object is recognized, and in step (b), at least one image among the images stored in the database that falls within the distance range corresponding to the recognized distance is determined as the reference image group to be matched against the object.
- 4. The method of claim 3, wherein in step (b), the reference image group to be matched against the object includes an image containing an object whose probability of being photographed at a distance within the distance range is higher than a preset threshold.
- 5. The method of claim 1, wherein in step (a), a place range corresponding to the photographing place of the input image is recognized, and in step (b), at least one image among the images stored in the database that falls within the place range corresponding to the recognized photographing place is determined as the reference image group to be matched against the object.
- 6. The method of claim 5, wherein in step (a), the photographing place of the object is recognized based on the strength of a Global Positioning System (GPS) signal received by the photographing device.
- 7. The method of claim 5, wherein in step (b), the reference image group to be matched against the object includes an image containing an object whose probability of being photographed at a place within the place range is higher than a preset threshold.
- 8. The method of claim 1, wherein in step (a), a time range corresponding to the photographing time of the input image is recognized, and in step (b), at least one image among the images stored in the database that falls within the time range corresponding to the recognized photographing time is determined as the reference image group to be matched against the object.
- 9. The method of claim 1, wherein the input image includes at least one of an image for which photographing has been completed and an image shown on the screen in a preview state before photographing.
- 10. The method of claim 1, further comprising, before step (a), building a database including at least one image stored in association with corresponding situation information.
- 11. A system for adaptively performing image matching according to a situation, the system comprising: a situation recognition unit for recognizing a situation of an object included in an input image; and a reference image determination unit for determining, according to the recognized situation, an image group including at least one of a plurality of images stored in a database as a reference image group to be matched against the object, wherein the set of images included in the reference image group can change dynamically according to the recognized situation.
- 12. The system of claim 11, wherein the situation of the object includes at least one of a distance from the photographing position of the object to the actual position of the object, a type of place where the object is photographed, and a photographing time of the object.
- 13. The system of claim 11, wherein the situation recognition unit recognizes a distance range corresponding to the distance from the photographing position of the object to the actual position of the object, and the reference image determination unit determines at least one image among the images stored in the database that falls within the distance range corresponding to the recognized distance as the reference image group to be matched against the object.
- 14. The system of claim 13, wherein the reference image group to be matched against the object includes an image containing an object whose probability of being photographed at a distance within the distance range is higher than a preset threshold.
- 15. The system of claim 11, wherein the situation recognition unit recognizes a place range corresponding to the photographing place of the input image, and the reference image determination unit determines at least one image among the images stored in the database that falls within the place range corresponding to the recognized photographing place as the reference image group to be matched against the object.
- 16. The system of claim 15, wherein the situation recognition unit recognizes the photographing place of the object based on the strength of a Global Positioning System (GPS) signal received by the photographing device.
- 17. The system of claim 15, wherein the reference image group to be matched against the object includes an image containing an object whose probability of being photographed at a place within the place range is higher than a preset threshold.
- 18. The system of claim 11, wherein the situation recognition unit recognizes a time range corresponding to the photographing time of the input image, and the reference image determination unit determines at least one image among the images stored in the database that falls within the time range corresponding to the recognized photographing time as the reference image group to be matched against the object.
- 19. The system of claim 11, wherein the input image includes at least one of an image for which photographing has been completed and an image shown on the screen in a preview state before photographing.
- 20. The system of claim 11, further comprising a database including at least one image stored in association with corresponding situation information.
- 21. A computer-readable recording medium on which is recorded a computer program for executing the method according to any one of claims 1 to 10.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2010335126A AU2010335126B2 (en) | 2009-12-24 | 2010-12-23 | Method, system, and computer-readable recording medium for adaptively performing image-matching according to conditions |
CN201080059282.2A CN102792675B (zh) | 2009-12-24 | 2010-12-23 | Method, system and computer-readable recording medium for adaptively performing image matching according to conditions |
US13/378,166 US20120087592A1 (en) | 2009-12-24 | 2010-12-23 | Method, system, and computer-readable recording medium for adaptively performing image-matching according to situations |
EP10839793.6A EP2518998A4 (en) | 2009-12-24 | 2010-12-23 | METHOD, SYSTEM AND COMPUTER READABLE RECORDING MEDIUM FOR ADAPTIVE REALIZING OF IMAGE ADAPTATION ACCORDING TO CERTAIN CONDITIONS |
JP2012545855A JP5330606B2 (ja) | 2009-12-24 | 2010-12-23 | Method, system, and computer-readable recording medium for adaptively performing image matching according to situations |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2009-0131220 | 2009-12-24 | ||
KR1020090131220A KR100970121B1 (ko) | 2009-12-24 | Method, system, and computer-readable recording medium for adaptively performing image matching according to situations |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2011078596A2 true WO2011078596A2 (ko) | 2011-06-30 |
WO2011078596A3 WO2011078596A3 (ko) | 2011-11-17 |
Family
ID=42645562
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2010/009258 WO2011078596A2 (ko) | 2010-12-23 | Method, system, and computer-readable recording medium for adaptively performing image matching according to situations |
Country Status (7)
Country | Link |
---|---|
US (1) | US20120087592A1 (ko) |
EP (1) | EP2518998A4 (ko) |
JP (1) | JP5330606B2 (ko) |
KR (1) | KR100970121B1 (ko) |
CN (1) | CN102792675B (ko) |
AU (1) | AU2010335126B2 (ko) |
WO (1) | WO2011078596A2 (ko) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101102896B1 * | 2011-03-04 | 2012-01-09 | Olaworks, Inc. | Method, server, and computer-readable recording medium for supporting simultaneous collection by a plurality of users |
WO2014113451A1 (en) | 2013-01-15 | 2014-07-24 | Intel Corporation | A rack assembly structure |
US9866900B2 (en) * | 2013-03-12 | 2018-01-09 | The Nielsen Company (Us), Llc | Methods, apparatus and articles of manufacture to detect shapes |
CN105701804A (zh) * | 2016-01-05 | 2016-06-22 | 百度在线网络技术(北京)有限公司 | 物体材质的识别方法及装置 |
CN108426578A (zh) * | 2017-12-29 | 2018-08-21 | 达闼科技(北京)有限公司 | 一种基于云端的导航方法、电子设备和可读存储介质 |
CN110647603B (zh) * | 2018-06-27 | 2022-05-27 | 百度在线网络技术(北京)有限公司 | 图像标注信息的处理方法、装置和系统 |
KR20210028934A (ko) | 2019-09-05 | 2021-03-15 | 삼성전자주식회사 | 관련 정보에 기반하여 외부 객체를 특정하는 전자 장치 및 그의 동작 방법 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100319452B1 * | 1998-11-26 | 2002-04-22 | Oh Gil-rok | Video browsing method for content-based retrieval |
JP2002007432A * | 2000-06-23 | 2002-01-11 | Ntt Docomo Inc | Information retrieval system |
JP2003323440A * | 2002-04-30 | 2003-11-14 | Japan Research Institute Ltd | System for providing information on photographed images using a mobile terminal, method for providing the information, and program causing a computer to execute the method |
JP4281402B2 * | 2003-04-21 | 2009-06-17 | Sony Corporation | Image management system, image management method, and computer program |
JP2005108027A * | 2003-09-30 | 2005-04-21 | Ricoh Co Ltd | Subject information providing method and subject information providing program |
US8379990B2 (en) * | 2006-05-10 | 2013-02-19 | Nikon Corporation | Object recognition apparatus, computer readable medium storing object recognition program, and image retrieval service providing method |
CN101460947A * | 2006-05-29 | 2009-06-17 | University of Wollongong | Content-based image retrieval |
US20080208791A1 (en) * | 2007-02-27 | 2008-08-28 | Madirakshi Das | Retrieving images based on an example image |
JP4914268B2 * | 2007-03-29 | 2012-04-11 | Hitachi, Ltd. | Information retrieval method for a search service server |
US8558883B2 (en) * | 2007-07-27 | 2013-10-15 | Sportvision, Inc. | Providing graphics in images depicting aerodynamic flows and forces |
JP2009260800A * | 2008-04-18 | 2009-11-05 | Fujifilm Corp | Imaging device, encyclopedia database system, main subject image output method, and main subject image output program |
US20100235356A1 (en) * | 2009-03-10 | 2010-09-16 | Microsoft Corporation | Organization of spatial sensor data |
US8189964B2 (en) * | 2009-12-07 | 2012-05-29 | Google Inc. | Matching an approximately located query image against a reference image set |
2009
- 2009-12-24: KR KR1020090131220 patent KR100970121B1/ko (not active: IP Right Cessation)
2010
- 2010-12-23: JP JP2012545855 patent JP5330606B2/ja (not active: Expired - Fee Related)
- 2010-12-23: WO PCT/KR2010/009258 patent WO2011078596A2/ko (active: Application Filing)
- 2010-12-23: EP EP10839793.6 patent EP2518998A4/en (not active: Withdrawn)
- 2010-12-23: CN CN201080059282.2 patent CN102792675B/zh (not active: Expired - Fee Related)
- 2010-12-23: US US13/378,166 patent US20120087592A1/en (not active: Abandoned)
- 2010-12-23: AU AU2010335126 patent AU2010335126B2/en (not active: Ceased)
Non-Patent Citations (1)
None
Also Published As
Publication number | Publication date |
---|---|
CN102792675B (zh) | 2016-08-17 |
AU2010335126B2 (en) | 2014-01-09 |
EP2518998A2 (en) | 2012-10-31 |
US20120087592A1 (en) | 2012-04-12 |
WO2011078596A3 (ko) | 2011-11-17 |
JP2013515317A (ja) | 2013-05-02 |
AU2010335126A1 (en) | 2012-07-12 |
CN102792675A (zh) | 2012-11-21 |
KR100970121B1 (ko) | 2010-07-13 |
EP2518998A4 (en) | 2014-07-30 |
JP5330606B2 (ja) | 2013-10-30 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | WWE | WIPO information: entry into national phase | Ref document number: 201080059282.2; Country of ref document: CN |
 | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 10839793; Country of ref document: EP; Kind code of ref document: A1 |
 | WWE | WIPO information: entry into national phase | Ref document number: 13378166; Country of ref document: US |
 | WWE | WIPO information: entry into national phase | Ref document number: 2010335126; Country of ref document: AU |
 | WWE | WIPO information: entry into national phase | Ref document number: 2012545855; Country of ref document: JP. Ref document number: 2010839793; Country of ref document: EP |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 2010335126; Country of ref document: AU; Date of ref document: 20101223; Kind code of ref document: A |
Ref document number: 2010335126 Country of ref document: AU Date of ref document: 20101223 Kind code of ref document: A |