WO2014192612A1 - Image recognition apparatus, processing method thereof, and program - Google Patents
- Publication number
- WO2014192612A1 (PCT/JP2014/063428)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target object
- processing
- image recognition
- processing target
- detection target
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Definitions
- the present invention relates to an image recognition apparatus, a processing method thereof, and a program.
- in conventional image recognition, a detection target is detected by collating an image feature amount extracted from a captured image with the feature amount of a reference image registered in a dictionary in advance.
- the inventors have found the following problem. For example, when recognizing an item in a restaurant menu or a mail-order catalog, the photograph of the target item may be small, the item may be white with almost no pattern, or the item may be presented only as a character string without a photograph. In such cases, a sufficient amount of feature information to maintain recognition accuracy cannot be obtained from the captured image or the registered image, and some items become difficult to recognize.
- the present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image recognition apparatus, a processing method thereof, and a program capable of accurately specifying a target object regardless of the recognition target.
- the first image recognition apparatus of the present invention comprises: object specifying means for specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in the shooting target according to a processing target object and has characteristics corresponding to the processing target object; and processing means for specifying, based on object position data indicating the relative position between the detection target object and the processing target object in the shooting target, the processing target object at that relative position in the captured image from the position of the detection target object specified by the object specifying means, and executing the process assigned to the specified processing target object.
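The identification step described above can be sketched as a simple lookup: the position of a processing target object is obtained by adding a pre-registered relative offset to the detected object's position in the captured image. All identifiers, the flat table layout, and the pixel offsets below are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical object position data: detection target ID ->
# (processing target ID, relative offset of the processing target in pixels).
OBJECT_POSITION_DATA = {
    "det-001": ("proc-A", (40, -15)),  # processing target 40 px right, 15 px up
    "det-002": ("proc-B", (-30, 25)),
}

def specify_processing_target(detected_id, detected_xy):
    """Return (processing target ID, its position in the captured image)."""
    proc_id, (dx, dy) = OBJECT_POSITION_DATA[detected_id]
    x, y = detected_xy
    return proc_id, (x + dx, y + dy)
```

For a detection target found at (120, 200), `specify_processing_target("det-001", (120, 200))` would place the associated processing target at (160, 185) under these assumed offsets.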
- the second image recognition apparatus of the present invention comprises: object specifying means for specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in the shooting target according to a processing target object and has characteristics corresponding to the processing target object; and processing means for identifying, based on position information of the detection target object in the shooting target and object position data that associates the detection target object with the processing target object, the processing target object in the captured image from the position in the captured image of the detection target object specified by the object specifying means, and executing the process assigned to the identified processing target object.
- the processing method of the first image recognition apparatus of the present invention is as follows.
- the image recognition device specifies, by image recognition, the position of a detection target object that is set in a predetermined arrangement in the shooting target according to a processing target object and has characteristics corresponding to the processing target object; specifies, based on object position data indicating the relative position between the detection target object and the processing target object in the shooting target, the processing target object at that relative position in the captured image from the position of the specified detection target object; and executes the process assigned to the specified processing target object.
- the processing method of the second image recognition apparatus of the present invention is as follows.
- the image recognition device specifies, by image recognition, the position of a detection target object that is set in a predetermined arrangement in the shooting target according to a processing target object and has characteristics corresponding to the processing target object; identifies, based on position information of the detection target object in the shooting target and object position data that associates the detection target object with the processing target object, the processing target object in the captured image from the position of the specified detection target object; and executes the process assigned to the identified processing target object.
- the first computer program of the present invention is a program for causing a computer to execute: a procedure for specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement according to a processing target object and has characteristics corresponding to the processing target object; and a procedure for specifying, based on object position data indicating the relative position between the detection target object and the processing target object in the shooting target, the processing target object at that relative position in the captured image from the position of the specified detection target object, and executing the process assigned to the specified processing target object.
- the second computer program of the present invention is a program for causing a computer to execute: a procedure for specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement according to a processing target object and has characteristics corresponding to the processing target object; and a procedure for identifying, based on position information of the detection target object in the shooting target and object position data that associates the detection target object with the processing target object, the processing target object in the captured image from the position of the specified detection target object, and executing the process assigned to the identified processing target object.
- a plurality of components may be formed as a single member, a single component may be formed of a plurality of members, a certain component may be a part of another component, and a part of a certain component may overlap with a part of another component.
- the plurality of procedures of the processing method and the computer program of the present invention are not limited to being executed at mutually different timings. For this reason, another procedure may start during the execution of a certain procedure, and the execution timing of one procedure may partially or entirely overlap the execution timing of another.
- according to the present invention, there are provided an image recognition apparatus, a processing method thereof, and a program capable of accurately specifying a target object regardless of the recognition target.
- FIG. 1 is a block diagram illustrating a configuration example of a mail order system 1 as an example of a system using an image recognition apparatus according to an embodiment of the present invention.
- the mail order system 1 of the present embodiment includes a smartphone 10 that is a mobile terminal used by a user, a server device 60 that can communicate with the smartphone 10 via the network 3, and a database 50 connected to the server device 60 (indicated as “DB” in the drawing).
- the image recognition apparatus of the present invention recognizes each element of the image recognition target included in the shooting target, and performs processing corresponding to each recognized element.
- a user holds a mobile terminal such as the smartphone 10 over the catalog 7 and photographs a product in the catalog 7 (the shooting target). The smartphone 10 then displays a marker on the screen, opens a website where information related to the product can be browsed, and accepts an order for the product.
- the user can browse the product information or place an order on the video preview screen 9 displayed in real time using the smartphone 10.
- the image recognition device can be realized by a mobile terminal (smartphone 10), a server device 60 that can communicate with the mobile terminal, or a combination thereof.
- FIG. 3 is a functional block diagram showing the configuration of the image recognition apparatus 100 according to the embodiment of the present invention.
- the image recognition apparatus 100 includes an object specifying unit 102 that specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement according to a processing target object and has characteristics corresponding to the processing target object, and a processing unit 104 that specifies the processing target object in the captured image from the position in the captured image of the detection target object specified by the object specifying unit 102, based on position information of the detection target object in the shooting target and object position data that associates the detection target object with the processing target object, and executes the process assigned to the specified processing target object.
- the shooting target is a target that a user or the like intends to shoot using a terminal such as the smartphone 10.
- the object to be imaged is, for example, the surface of a booklet such as a mail-order catalog.
- the shooting target includes a recognition target that the user identifies by looking at the shooting target. For example, a product listed in a catalog corresponds to the recognition target.
- the photographing target includes a processing target object to which some processing is assigned and a detection target object for the image recognition apparatus 100 to detect by image recognition.
- the processing target object is an object that a user can see and recognize, such as a product image in a catalog, but is not limited thereto.
- the entire paper surface with only the background image can be set as the processing target object.
- the detection target object only needs to have a feature amount that can be detected by image recognition by the image recognition apparatus 100 regardless of whether or not a human can recognize it.
- the detection target object and the processing target object may be the same, or may be completely different objects.
- the detection target object needs to be included in the shooting range of the user's imaging unit when the processing target object is shot. That is, the detection target object is set so that the processing target object and at least a part of the detection target object associated with it are included in the same shooting range.
- processing target objects not included in the shooting range may also be specified as processing target objects corresponding to the specified detection target object.
- the shooting target may be at least one of: a screen displaying an Internet shopping website; a sign installed in a street or store; a screen of digital signage installed in a street or store; a TV screen displaying a shopping program; a floor guide map of a building, shopping mall, or station; a landscape visible from a specific point (scenery, buildings); and a work of art such as a painting.
- examples other than mail order include recognizing a restaurant menu and then displaying information related to the recognized menu, such as allergy information, displaying coupon information and recommended menus, or accepting a menu order.
- by recognizing a floor guide map, processing related to a building can be performed, for example, displaying a telephone number, opening a website, presenting navigation to the building, or displaying sale information of each store.
- the processing target object is an object recognized by the user (an object, an image, a character string, or the like presented to the shooting target) that is included in the shooting target and to which some processing is assigned.
- the processing target object is a target (for example, an image of a product) that is recognized when the user views a catalog that is a shooting target, and that obtains information or performs a purchase procedure.
- the processing target object is a target to be recognized by the user, photographed, and subjected to associated processing.
- the processing target object may not be recognizable with a certain recognition accuracy by image recognition from the photographed image.
- therefore, the image recognition apparatus detects a detection target object, described later, from the captured image and specifies the processing target object from the detected detection target object, so that the process associated with the specified processing target object can be performed.
- the object to be processed can include not-for-sale items such as exhibits and prototypes, in addition to the items to be sold.
- the object to be processed may indicate an option of an item, for example, designation of type, color, pattern, size, or name input, or a selectable part constituting the item, for example, an aero part of a car, and combinations thereof.
- a logo mark, a symbol mark, an icon, a character string, a photograph, a design, etc. can also be included.
- the processing target object may present a plurality of options to the user and allow the user to designate one or more of them, for example, a logo mark, symbol mark, icon, character string, photograph, or design indicating each option when selecting a questionnaire or quiz answer.
- the processing target object information 110 includes an ID for identifying the processing target object, image data (file name, storage location of the image data, etc.), and the position of the processing target object in the shooting target.
- the processing target object information 110 can be held in the database 50 (FIG. 1). The object ID and the image data are not both required; either one may be used, or other information indicated by the image of the object to be processed may be substituted, such as product information (product ID, product name, model number, price, specification, product description, or the URL of a web page on which the product information is posted).
- the processing target object information 110 is not necessarily required. Information on the processing target object only needs to be held as object position data indicating at least a relative position of the processing target object with respect to the position of the detection target object.
- the present invention is particularly effective if the positional relationship between at least some of the processing target objects included in the shooting target and the detection target object is fixed.
- the present invention does not exclude the case where the shooting target is a video and the relative positional relationship of the processing target object or the detection target object included in the shooting target changes as in digital signage or the like.
- in that case, the image recognition apparatus may prepare, for each reproduction time of the video, information on the relative positional relationship between at least some of the processing target objects and the detection target objects. By doing so, the image recognition apparatus can acquire the relative positional relationship between at least a part of the processing target objects and the detection target objects included in the shooting target from the reproduction time of the video at the time of shooting.
- the detection target object is a target, included in the photographing target, to be detected by image recognition from a photographed image obtained by photographing the photographing target.
- the detection target object is set in a predetermined arrangement according to the processing target object, and has characteristics according to the processing target object. For example, when a user shoots a shooting target including a processing target object, it is preferable that an area in the shooting target from which sufficient feature information can be extracted to obtain a certain recognition accuracy by image recognition be set as the detection target object. The detection target object may or may not include at least a part of the processing target object. However, as described above, since the user shoots the processing target object, it is assumed that at least a part of the detection target object associated with the processing target object is included in the shooting range when the processing target object is shot.
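As a toy illustration of this selection criterion, one might score candidate regions near a processing target and keep the highest-scoring region that clears a minimum feature threshold. The region names, scores, and threshold value below are invented for the sketch; the patent does not prescribe a scoring method.

```python
FEATURE_SCORE_THRESHOLD = 50  # assumed minimum for reliable image recognition

def choose_detection_target(candidate_regions):
    """candidate_regions: list of (region_id, feature_score) tuples.
    Return the highest-scoring region meeting the threshold, or None."""
    usable = [r for r in candidate_regions if r[1] >= FEATURE_SCORE_THRESHOLD]
    return max(usable, key=lambda r: r[1])[0] if usable else None
```

A nearly featureless region (a plain white item, for instance) would be rejected in favor of a richer nearby area such as a page header.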
- the detection target object is set within the shooting target.
- the detection target object may be embedded in the imaging target in advance so that it maintains a predetermined positional relationship with the processing target object and does not impair the visibility of the processing target object.
- the detection target object of the present invention differs from a general marker provided for detecting the position of the shooting target (for example, markers provided at the four corners of a target frame): its arrangement, range, feature information amount, and the like can be changed according to the processing target object included in the shooting target, and its presence can be hidden from the user.
- the detection target object can be indicated by feature information of at least a part of an image area included in the shooting target; this may be the image data itself of the corresponding area, or feature information extracted or generated for image recognition based on the image area.
- the feature information is a feature amount of the image area, and may be, for example, a ratio of a red component included in the image area or an average luminance of the image area.
- alternatively, the distribution (positions, number) of feature points extracted under a predetermined condition in the image area may be used, and information such as the conditions for extracting each feature point may further be included.
- as for the feature information, various modes are conceivable depending on the image recognition method; appropriate information is therefore adopted according to the method used.
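Two of the feature measures mentioned above can be sketched for an image region held as a nested list of (r, g, b) tuples. The red-ratio formula (red over the sum of all channels) and the Rec. 601 luminance weights are common conventions assumed here; the patent does not fix either formula.

```python
def red_component_ratio(pixels):
    """Ratio of the red channel to the sum of all channels in the region."""
    total = sum(r + g + b for row in pixels for (r, g, b) in row)
    red = sum(r for row in pixels for (r, _, _) in row)
    return red / total if total else 0.0

def average_luminance(pixels):
    """Mean per-pixel luminance using Rec. 601 weights."""
    n = sum(len(row) for row in pixels)
    lum = sum(0.299 * r + 0.587 * g + 0.114 * b
              for row in pixels for (r, g, b) in row)
    return lum / n if n else 0.0
```

Either scalar could serve as (part of) the feature information registered for a detection target region.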
- the detection target object information 112 includes an ID for identifying the detection target object, feature information of the detection target object (or of a plurality of feature points included in the detection target object), and the position of the detection target object in the shooting target (or the positions of the plurality of feature points in the shooting target). The detection target object information 112 can be held in the database 50 (FIG. 1).
- the photographed image is an image obtained as a result of photographing the photographing object.
- the photographed image obtained by photographing the photographing target includes at least a part of the processing target object recognized by the user, and can further include a background.
- the photographed image preferably includes at least a part of the detection target object within a range having a certain image recognition accuracy.
- the object position data indicates a relative position (which may be an arrangement and a range) between a photographing target or a detection target object in a photographed image and a processing target object. That is, the object position data is data for specifying the processing target object at the relative position in the captured image from the position of the detection target object in the captured image obtained by capturing the imaging target.
- a processing target object that is included in the photographic target but not in the photographed image may also be identified.
- the object position data only needs to include at least association information indicating a correspondence relationship between the detection target object and the processing target object.
- the correspondence relationship between detection target objects and processing target objects may be one-to-one, one-to-many, many-to-one, or many-to-many.
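These correspondence relationships can be represented as a simple association table. The dict-of-lists layout and the IDs below are illustrative assumptions: one detection target may map to several processing targets, and one processing target may be reachable from several detection targets.

```python
# Assumed association table (part of the object position data).
ASSOCIATIONS = {
    "det-001": ["proc-A"],            # one-to-one
    "det-002": ["proc-B", "proc-C"],  # one-to-many
    "det-003": ["proc-C"],            # proc-C also from det-002: many-to-one
}

def processing_targets_for(detected_ids):
    """Order-preserving, duplicate-free union of processing targets
    associated with any of the detected objects."""
    found = []
    for det in detected_ids:
        for proc in ASSOCIATIONS.get(det, []):
            if proc not in found:
                found.append(proc)
    return found
```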
- the process assigned to the object to be processed is at least one of displaying various items such as markers, menus, icons, operation buttons (operation reception), realizing a user interface function, sending detection results to the server, and window operations. Can be included.
- the marker is displayed superimposed on the image by, for example, surrounding the product image with a line, or blinking or highlighting information such as the product name.
- the marker may be displayed in a balloon shape, and information regarding the processing target object may be displayed therein, or an operation button for accepting processing such as purchase may be included.
- Menus, icons, operation buttons, and the like are for receiving an instruction to execute a predetermined process assigned to the object to be processed from the user, and may also receive designation of processing conditions and the like.
- in response to the result of recognition of the processing target object, the processing may include, automatically or in response to a user operation, jumping to a predetermined URL address assigned to the processing target object, browsing a website, starting or terminating a predetermined application, or opening, switching, or closing another window.
- the process information 114 of the process assigned to the processing target object includes an ID for identifying the process and the position in the captured image where the process is executed (or the position relative to the detection target object). The processing information 114 can be held in the database 50 (FIG. 1).
- the process assignment information 116 associates each processing target object with its assigned process: for each processing target object ID, the corresponding process ID is held in the database 50 (FIG. 1). Note that processing target objects and processes need not be assigned one-to-one; the assignment may be one-to-many, many-to-one, many-to-many, or a combination thereof.
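The assignment table itself might look as follows; the process IDs and the two-processes-per-object example are invented to show the non-one-to-one case.

```python
# Hypothetical process assignment information 116:
# processing target ID -> list of assigned process IDs.
PROCESS_ASSIGNMENT = {
    "proc-A": ["show-marker", "open-product-page"],  # one object, two processes
    "proc-B": ["show-marker"],                       # the same process reused
}

def processes_for(processing_target_id):
    """Processes assigned to a processing target (empty list if none)."""
    return PROCESS_ASSIGNMENT.get(processing_target_id, [])
```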
- the smartphone 10 is described as an example of a mobile terminal used by the user, but the present invention is not limited to this.
- a mobile wireless communication terminal such as a mobile phone, a PDA (Personal Digital Assistant), a tablet terminal, a game machine, or another electronic device can be used.
- the mobile terminal of the present invention is not limited to a terminal carried by the user; it may be a terminal deployed in a store or a product exhibition hall and shared by the users who visit that place.
- in particular, the image recognition apparatus assumes a situation in which the user shoots a shooting target in which a plurality of processing target objects are arranged side by side, changing the direction and position of the camera at least in part, and captures the processing target objects for image recognition while sequentially browsing them on a mobile-size screen such as that of the smartphone 10.
- in other words, what the user recognizes and shoots is the processing target object, but what the image recognition apparatus 100 recognizes through image recognition is the detection target object.
- the image recognition apparatus 100 notifies the user by, for example, adding a marker to the processing target object and displaying it as if the processing target object itself had been recognized. Information corresponding to the processing target object specified by the image recognition apparatus 100 can then be displayed on the touch panel of the smartphone 10 in association with the processing target object, and an operation such as an order can be received on the touch panel.
- FIG. 4 is a block diagram illustrating a hardware configuration of the smartphone 10 as an example of the mobile terminal that configures the image recognition apparatus 100 according to the embodiment of the present invention.
- the smartphone 10 of this embodiment includes a CPU (Central Processing Unit) 12, a ROM (Read Only Memory) 14, a RAM (Random Access Memory) 16, a mobile phone network communication unit 18, a wireless LAN (Local Area Network) communication unit 20, an operation unit 22, an operation reception unit 24, a display unit 26, a display control unit 28, an imaging unit 30, a speaker 32, a microphone 34, and an audio control unit 36.
- the CPU 12 is connected to each element of the smartphone 10 via the bus 40 and controls the entire smartphone 10 together with each element.
- the ROM 14 stores programs and various application programs for operating the smartphone 10, various setting data used when these programs operate, and user data including address data and various content data.
- the RAM 16 has an area for temporarily storing data, such as a work area for operating the program.
- each component of the smartphone 10 is realized by an arbitrary combination of hardware and software of an arbitrary computer having the CPU 12, the RAM 16, a program loaded in the RAM 16 that realizes the components shown in FIG. 3, the ROM 14 that stores the program, and a network connection interface (the mobile phone network communication unit 18 and the wireless LAN communication unit 20). It will be understood by those skilled in the art that there are various modifications to the implementation method and apparatus. Each figure described below shows functional-unit blocks, not hardware-unit configurations.
- the ROM 14 and the RAM 16 described above may be other devices having a function for storing application programs, setting data for operating the programs, temporary data, user data, and the like, such as a flash memory or a disk drive.
- the operation unit 22 includes operation keys, operation buttons, switches, a jog dial, a touch pad, a touch panel integrated with the display unit 26, and the like.
- the operation reception unit 24 receives an operation of the operation unit 22 by the user and notifies the CPU 12 of the operation.
- the display unit 26 includes an LED (Light Emitting Diode) display, a liquid crystal display, an organic EL (ElectroLuminescence) display, and the like.
- the display control unit 28 displays various screens on the display unit 26 in accordance with instructions from the CPU 12.
- the voice control unit 36 performs voice output from the speaker 32 and voice input from the microphone 34 in accordance with instructions from the CPU 12.
- the mobile phone network communication unit 18 connects to and communicates with a mobile communication network (not shown) via a base station (not shown), for example in a 3G (3rd-generation mobile phone) system, through the mobile phone network antenna 19.
- the smartphone 10 is connected to the network 3 (FIG. 1) such as the Internet from the mobile communication network, and can communicate with the server device 60 (FIG. 1).
- the wireless LAN communication unit 20 performs wireless LAN communication with a relay device (not shown) via a wireless LAN antenna 21 by a method compliant with, for example, IEEE 802.11 standard.
- the smartphone 10 performs wireless LAN communication with a relay device (not shown) installed indoors by the wireless LAN communication unit 20 and connects to a home network (not shown), and the like via the home network. It connects to the network 3 (FIG. 1) and can communicate with the server device 60 (FIG. 1).
- the smartphone 10 can realize at least a part of the functions of the image recognition apparatus 100 by installing and executing in advance an application program for realizing the image recognition apparatus 100 according to the embodiment of the present invention.
- alternatively, the smartphone 10 can use the functions of the image recognition apparatus 100 by providing a web page on a web server (not shown) and having the user access it with the smartphone 10.
- FIG. 5 is a block diagram showing a hardware configuration of server device 60 constituting image recognition device 100 according to the embodiment of the present invention.
- the server device 60 of the present embodiment can be realized by a server computer or personal computer connected to the database 50 (FIG. 1), or a device corresponding to these. It may also be configured as a virtual server or the like.
- each component of the server device 60 of the mail order system 1 is realized by an arbitrary combination of hardware and software of an arbitrary computer having a CPU 62, a RAM 66, a program loaded in the RAM 66 that realizes the illustrated components, a ROM 64 that stores the program, a network connection interface, and an I/O (Input/Output) 68.
- the CPU 62 is connected to each element of the server device 60 via the bus 69 and controls the entire server device 60 together with each element. It will be understood by those skilled in the art that there are various modifications to the implementation method and apparatus. Each figure described below shows a functional unit block, not a hardware unit configuration.
- the server device 60 can also be connected to an input / output device (not shown) via the I / O 68.
- the smartphone 10 sequentially acquires video data in which at least a part of an imaging target (catalog 7 in FIG. 2) that presents images of a plurality of products is captured by the imaging unit 30 (FIG. 4).
- that is, the user holds the smartphone 10 over the catalog 7, and at least a part of the images of the plurality of products presented in the catalog 7 is displayed as a real-time live view on the display unit 26 (FIG. 4) of the smartphone 10.
- the size of the video data corresponds to the size displayed on the mobile-terminal-size screen of the smartphone 10.
- the smartphone 10 of the above embodiment is configured to realize the imaging unit with a built-in or connected camera, but is not limited thereto.
- the imaging unit can be realized by the server device 60.
- the video data acquired by the imaging unit of the server device 60 may be streamed to the user's smartphone 10 and displayed on the display unit 26 (FIG. 4) of the smartphone 10. Further, the video data captured on the server device 60 side may be stream-distributed and displayed on the smartphone 10 while operating the video by remotely operating the server device 60 from the smartphone 10 side.
- an image captured by remotely operating the show window of the store with a live camera from the smartphone 10 may be streamed to the smartphone 10 via the server device 60 and displayed.
- the above-described object position data can be further held in the database 50.
- The object position data 118 holds in association, as an example, a detection target object ID in a photographing target, its position information (or the positions of a plurality of feature points included in the detection target object), the processing target object ID associated with the detection target object, and the relative position between the detection target object (or the positions of the plurality of feature points included in it) and the processing target object.
- the position information of the detection target object can be indicated by at least one of the following information or a combination thereof.
- (a1) Information indicating the absolute position of the image area corresponding to at least one detection target object in the imaging target (for example, the coordinates of a predetermined position (centroid, center, end point, etc.) of the image area)
- (a2) Information indicating the absolute position (for example, in coordinates) of a plurality of feature points included in an image region corresponding to the detection target object in the imaging target
- (a3) Information indicating the relative positions of the image areas corresponding to a plurality of detection target objects in the shooting target (for example, a vector amount indicating the positional relationship, together with the feature amounts)
- For example, the lower left corner of the magazine page is defined as the reference point (0, 0) of the coordinate axes; for (a1), the centroids of two image regions R1 and R2 corresponding to detection target objects placed on the page can be indicated by coordinates (x1, y1) and (x2, y2), respectively.
- the positions of a plurality of feature points f11, f12, f13 and f21, f22 respectively included in the image regions R1 and R2 corresponding to the detection target object are represented by coordinates (x11, y11), (x12, y12), (x13, y13) and (x21, y21), (x22, y22), respectively.
- the number of image areas and the number of feature points corresponding to the detection target object are not limited to this.
- For (a3), the relative positions of the centroids of the image regions R1 and R2 corresponding to detection target objects may be indicated by a vector representing the direction and length of a straight line connecting the coordinates (x1, y1) and (x2, y2). Further, the feature amounts of the image regions R1 and R2 may be included with the vectors.
- the information indicating the position of the processing target object can be indicated by at least one of the following information or a combination thereof.
- (b1) Information indicating the absolute position of the image area of the processing target object in the photographing target (for example, the coordinates of a predetermined position (centroid, center, end point, etc.) of the image area)
- (b2) Information indicating the position (for example, in coordinates) within the photographing target or within the photographed image at which the process assigned to the processing target object is executed
- (b3) Information indicating the relative position between the detection target object and the processing target object (for example, a vector quantity indicating the positional relationship)
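- As a concrete illustration, one record of the object position data 118 combining the position styles above might look as follows. This is a minimal sketch: every field name, identifier, and coordinate is an assumption for illustration, not part of the specification.

```python
# Hypothetical record of the object position data 118: a detection
# target object, the coordinates of its feature points, the processing
# target object placed at a relative position from it, and the marker
# display position (also relative).
object_position_record = {
    "detection_target_id": "D001",
    # (a2)-style: absolute coordinates of feature points in the shooting target
    "feature_points": [(10.0, 12.0), (14.0, 12.5), (11.0, 18.0)],
    "processing_target_id": "P001",
    # (b3)-style: vector from the first feature point to the centroid
    # of the processing target object's image area
    "relative_position": (4.0, 3.0),
    # marker display position, relative to the same feature point
    "marker_offset": (3.0, 2.0),
}

def processing_target_position(record, detected_origin):
    """Resolve the absolute position of the processing target object
    from the detected position of the detection target object."""
    ox, oy = detected_origin
    dx, dy = record["relative_position"]
    return (ox + dx, oy + dy)
```

Holding the relative position rather than an absolute one is what lets the processing target object (and its marker) be located wherever the detection target object appears in the photographing range.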
- the information indicating the detection target object that is the target for holding the position data in the object position data 118 is the detection target object ID, but is not limited thereto.
- It may also be at least any one of the image data itself of the image area of the detection target object, the storage location and file name (path) of separately held image data, the feature information of the image area of the detection target object, and the information of a plurality of feature points included in the image area, or a combination thereof.
- the information indicating the processing target object included in the object position data 118 is the processing target object ID in the example of FIG. 20A, but is not limited thereto.
- It may also be the image data itself of the product (within the shooting target) presented in the catalog 7, the storage location and file name (path) of separately held image data, or a combination thereof.
- The detection target object and the processing target object can thus be held together with information indicating their relative position.
- the object position information is not included, and information indicating the correspondence between the detection target object ID and the processing target object ID may be held in the database 50 as the object correspondence data 122.
- the object position data 118 may be configured by combining the processing target object information 110 in FIG. 19A and the detection target object information 112 in FIG. 19B and the object correspondence data 122.
- information indicating the correspondence between the detection target object ID and the processing ID may be held in the database 50 as the object correspondence data 122.
- The object position data 118 may be configured by combining the processing target object information 110 in FIG. 19A and the processing information 114 in FIG. 19C and the object correspondence data 122 (FIG. 20D).
- The object position data 118 indicates the relative position (arrangement and range) between the detection target object in the imaging target or captured image and the processing target object. It may be held as information on the relative positions of the feature points included in the image area of the detection target object that is associated with the processing target object (FIG. 20A).
- the plurality of feature points included in the image area of the detection target object can indicate relative positions with respect to a predetermined location such as the center of the image area of the detection target object.
- The processing position assigned to the processing target object, for example, the display position of the marker display processing (in this example, the position of the frame surrounding the product image that is the processing target object), may be held as a position relative to the detection target object (or to a plurality of feature points included in its image area) (FIG. 20B).
- In this example, the process assigned to the processing target object is a process of displaying a marker on the product image that is the processing target object listed in the catalog, but the process is not limited to this.
- The relative position between any of the above-described information indicating the position of the detection target object and the display position of the corresponding marker can be included in the object position data 118.
- If the shooting range can be specified, the detection target object or the marker display position may also be held as an arrangement within a predetermined shooting range, in addition to their relative positions.
- the marker is, for example, a mark, a frame, a balloon, an icon, or image processing for notifying the user that the smartphone 10 has recognized the product image on the catalog 7.
- the position to be displayed with respect to the processing target object can be designated as appropriate.
- The size and attributes of the marker may be included in the object position data. For example, if the marker is a rectangular frame surrounding the processing target object, the marker display position can include the position of the upper left corner of the frame and the vertical and horizontal sizes, or information on the center position of the frame, the slope of the diagonal, and the length from the center.
- product information corresponding to the product image can be stored in the database 50 as a product table 120.
- The product table 120 holds in association, for example, a product ID as product identification information, a product name, an image ID indicating image information of the product (processing target object), a marker ID identifying the marker displayed on the shooting screen in association with the processing target object (or a process ID classifying the process associated with the processing target object), the product unit price, the tax-inclusive price, product discount information, coupon information related to the product, and the like.
- the marker ID is not set for each product, but may be set for each catalog 7, each page, each product series, or the like.
- The image ID does not necessarily need to be set for each product; the same image ID (processing target object) may be assigned to different products of the same series, to the same page, or to different products included in a predetermined area. That is, the image recognition apparatus 100 according to the present embodiment may be provided with a table that associates an image ID with a plurality of products (processing target objects) and with the marker IDs (processes) assigned to the respective products (processing target objects).
- the database 50 can further hold information on the marker associated with the marker ID.
- In this example, since the process corresponding to the processing target object is marker display, marker information is retained; when other processes are associated, various types of information related to the corresponding process can be retained instead. As marker information, information regarding marker attributes such as the marker type (mark, text, frame, balloon, icon, pop-up window, operation menu, replacement image, image processing, etc.), display format (flashing, 3D, zoom, color change, animation, etc.), shape, size, color, and pattern can be stored in the database 50 in association with the marker ID or the image ID.
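- To make the shape of these tables concrete, the following sketch shows how a row of the product table 120 and the associated marker information might be held. All field names and values are illustrative assumptions, not part of the specification.

```python
# Hypothetical rows of the product table 120 and the marker
# information, keyed respectively by product ID and marker ID.
product_table = {
    "P001": {
        "product_name": "White coffee cup",
        "image_id": "IMG-07",      # may be shared across a product series
        "marker_id": "M-frame",    # or a process ID for other processes
        "unit_price": 1200,
        "tax_included_price": 1296,
        "coupon": "10% off on 2+",
    },
}

marker_info = {
    "M-frame": {
        "type": "frame",           # mark / text / frame / balloon / icon ...
        "display_format": "flashing",
        "shape": "rectangle",
        "color": "red",
    },
}

def marker_for_product(product_id):
    """Look up the marker attributes assigned to a product via its marker ID."""
    return marker_info[product_table[product_id]["marker_id"]]
```

A marker ID set per catalog, page, or product series rather than per product would simply mean several rows of the product table sharing the same `marker_id`.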
- As an example of image processing, the region other than the specified processing target object may be masked so that the user can watch only the specified processing target object.
- Information such as the content displayed in a marker balloon or on the operation menu, an associated operation, or the like may be held in the database 50 in association with the processing target object (product ID or image ID) or the process (marker ID or the like).
- The object specifying unit 102 uses pattern recognition or the like to extract, from image data obtained by imaging at least a part of the imaging target with the imaging unit 30 (FIG. 4), at least one region having a feature amount with which a certain recognition accuracy can be obtained in the captured image. Then, the object specifying unit 102 searches the database 50 and specifies a detection target object having a feature amount at least partially matching the feature amount of the extracted region within a certain accuracy range. At this time, the object specifying unit 102 may extract a plurality of feature points included in the extracted region and specify a detection target object based on parameter information including the position information of each extracted feature point. As the recognition accuracy, it is desirable to use an optimum value as appropriate depending on the image recognition processing accuracy and the photographing object.
- the object specifying unit 102 can simultaneously identify a plurality of detection target objects from the imaging data.
- the area having a feature amount equal to or greater than a predetermined threshold obtained by image recognition from the imaging target (captured image) includes at least one of the processing target object, the detection target object, and other areas.
- the object specifying unit 102 can recognize the detection target object by collating the feature information of the region obtained by the image recognition with the feature information of the detection target object in the database 50.
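- The collation step can be sketched as follows. This is a toy illustration: the cosine-style similarity, the 0.9 default threshold, and the flat feature vectors are assumptions — a real implementation would match local image descriptors produced by pattern recognition.

```python
import math

def similarity(f1, f2):
    """Cosine-style similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def identify_detection_targets(extracted, registered, threshold=0.9):
    """Return the IDs of registered detection target objects whose
    feature information matches an extracted region at or above the
    threshold (several objects can be identified simultaneously)."""
    found = []
    for region_features in extracted:
        for obj_id, obj_features in registered.items():
            if similarity(region_features, obj_features) >= threshold:
                found.append(obj_id)
    return found
```

Because every extracted region is checked against every registered object, a single frame can yield multiple detection target objects at once, as described above.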
- the object specifying unit 102 can be realized by either the smartphone 10 or the server device 60.
- the object position data, the product table 120 (processing target object information), the marker (processing content) information, and the like are stored in the database 50, but are not limited thereto.
- At least a part of the information can be held in the ROM 14 of the smartphone 10 or in a recording medium attached to the smartphone 10 and readable by the smartphone 10 (hereinafter both are also abbreviated as the "memory of the smartphone 10").
- the mail order system 1 may be configured such that update information of these pieces of information is transmitted from the server device 60 to the smartphone 10 and can be updated by the smartphone 10.
- A user may specify necessary information using the smartphone 10, for example, a catalog number, a product field, a type, and the like, and the information on the specified catalog may be selectively downloaded from the server device 60 to the smartphone 10 and stored.
- the processing unit 104 refers to the object position data, specifies a processing target object at a position associated with the position of the recognized detection target object, and displays a marker display position corresponding to the specified processing target object To get.
- In the above description, the object specifying unit 102 specifies a detection target object from the captured image, and the processing unit 104 specifies a processing target object corresponding to the detection target object.
- However, it is not always necessary for the object specifying unit 102 or the processing unit 104 to specify the "processing target object" itself; the processing unit 104 only needs to be able to identify, based on the object position data, the process assigned to the processing target object corresponding to the detection target object.
- the process assigned to the processing target object is marker display on the processing target object, and the processing unit 104 displays a marker corresponding to the processing target object at the acquired display position.
- the processing unit 104 can identify a plurality of processing target objects from the captured image and simultaneously display a plurality of markers corresponding to the plurality of processing target objects on the screen of the smartphone 10.
- The database 50 holds feature information (such as feature amounts of an image region) of a region including the periphery of the processing target object within the photographing target as information on the detection target object.
- the processing unit 104 of the image recognition apparatus 100 searches the database 50 based on a region having a predetermined feature amount extracted from the imaging target by image recognition.
- The processing unit 104 specifies the processing target object at the relative position from the position of the image area having the feature amount of the detection target object in the captured image. The processing unit 104 can then acquire the display position of the marker for performing the process assigned to the specified processing target object, for example, a marker display process, and displays the marker corresponding to the processing target object at the acquired display position.
- FIG. 7 is a diagram for explaining the relationship among the processing target object, the detection target object (feature point thereof), and the marker in the image recognition apparatus 100 according to the embodiment of the present invention.
- feature points are indicated by “points (circles)” in the drawings, but this is a description for the sake of simplicity and does not limit the shape of the feature points.
- As shown in FIG. 7A, when the only feature point having a predetermined feature amount in the processing target object 130 is a1, it is conceivable that the recognition accuracy of the processing target object 130 is lowered. Also, when the size of the processing target object 130 is small, it may be difficult to recognize because the feature amount is small or there are no feature points.
- the range of the detection target object region associated with the processing target object 130 is expanded to the region 132 including the periphery of the processing target object 130. It is desirable that the area 132 as the detection target object is determined so as to have a feature amount with a recognition accuracy equal to or higher than a predetermined value.
- In this case, a plurality of feature points a1 to a7 are set as the detection target object, and information on these feature points, that is, their relative positions, is associated with the processing target object 130 (its relative position) and held in the object position data as the detection target object.
- the display position of the marker 136 is also held in the object position data in association with the processing target object 130.
- That is, the database 50 holds in advance, in the object position data, the feature points included in an area including the periphery of the processing target object and their relative positions (information on the detection target object in the shooting target (captured image)), together with the display position of the marker corresponding to the processing target object.
- the processing unit 104 identifies a plurality of processing target objects corresponding to the detection target objects detected from the captured image.
- First, the object specifying unit 102 extracts feature points and their relative positions for at least a part of the area 132 including the periphery of the processing target object 130. For example, as illustrated in FIG. 7B, when at least a part of the photographing target including the processing target object 130 is photographed while holding up the smartphone 10, the feature points a1, a4, a5, a6, and a7 included in the photographing range 134 of the photographed image are extracted and their relative positions are determined. At this time, the feature amount of at least a part of the photographed region is extracted from the region 132 corresponding to the detection target object.
- the object specifying unit 102 includes feature points a1, a4, a5, a6, a7 extracted by image recognition and their relative positions, and feature information of the detection target object in the database 50 (object position data). When at least a part of them matches, the position of the detection target object in the captured image is recognized. Then, the processing unit 104 identifies the processing target object 130 at the relative position from the recognized position of the detection target object based on the object position data. Then, the processing unit 104 acquires the display position of the marker 136 that is information for performing marker display as the process assigned to the specified processing target object 130.
- In this way, the object specifying unit 102 recognizes that at least a part of the region 132 (detection target object) has been shot, and the processing unit 104 can specify the processing target object 130 corresponding to the recognized region 132 (detection target object).
- For the processing target object 130 identified based on the relative positions of the detected feature points a1, a4, a5, a6, and a7 (information on the detection target object in the shooting target (captured image)), the processing unit 104 acquires the display position 138 of the marker 136 for the marker display process.
- the processing unit 104 displays a marker 136 corresponding to the processing target object 130 in the imaging range 134 with the acquired display position 138 as a reference.
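- The marker placement described above can be sketched as follows, under the simplifying assumption that the captured view differs from the registered one only by translation; a real system would also estimate rotation and scale, for example via a homography. Function names are illustrative.

```python
# Place the marker by estimating the offset between the stored feature
# points and the ones detected in the photographing range, then
# shifting the stored marker display position by the same amount.
def estimate_translation(stored_points, detected_points):
    """Average displacement from stored to detected feature points
    (points are matched by index)."""
    n = len(stored_points)
    dx = sum(d[0] - s[0] for s, d in zip(stored_points, detected_points)) / n
    dy = sum(d[1] - s[1] for s, d in zip(stored_points, detected_points)) / n
    return dx, dy

def marker_display_position(stored_points, detected_points, stored_marker_pos):
    """Absolute marker position in the captured frame."""
    dx, dy = estimate_translation(stored_points, detected_points)
    mx, my = stored_marker_pos
    return (mx + dx, my + dy)
```

Even when only a subset of the registered feature points (for example, a1, a4 to a7) falls inside the photographing range, the same computation works on the matched subset.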
- the image recognition apparatus 100 may be configured such that the constituent elements are allocated to the smartphone 10 and the server apparatus 60 in any combination.
- the image recognition apparatus 100 realizes the following functions.
- (a) A function of extracting, by image recognition, a region having a predetermined feature amount from a captured image.
- (b) A function of collating the feature information of the extracted region with the feature information of the detection target objects in the database 50, and recognizing that a detection target object is included in the captured image when there is a detection target object that matches at or above a threshold.
- (c) A function of specifying, based on the object position data, the processing target object corresponding to the detection target object recognized in (b) as included in the captured image.
- (d) A function of acquiring the display position of the marker assigned to the specified processing target object.
- (e) A function of displaying the marker at the acquired display position.
- Alternatively, a function of specifying the process assigned to the processing target object directly from the position of the detection target object recognized in (b) may be provided; that is, only the process may be performed without specifying the processing target object.
- the following 10 function sharing combinations are conceivable.
- All functions are realized by the smartphone 10.
- the function (a) is realized by the smartphone 10, the result is transmitted to the server device 60, and the functions (b) to (e) are realized by the server device 60.
- the functions (a) to (b) are realized by the smartphone 10, the result is transmitted to the server device 60, and the functions (c) to (e) are realized by the server device 60.
- the functions (a) to (c) are realized by the smartphone 10, the result is transmitted to the server device 60, and the functions (d) to (e) are realized by the server device 60.
- the functions (a) to (d) are realized by the smartphone 10, the result is transmitted to the server device 60, and the function (e) is realized by the server device 60.
- All functions are realized by the server device 60.
- the function (a) is realized by the server device 60, the extracted area is received from the server device 60, and the functions (b) to (e) are realized by the smartphone 10.
- At least the function (b) is realized by the server device 60, the specified detection target object is received from the server device 60, and the functions (c) to (e) are realized by the smartphone 10.
- At least the functions (b) and (c) are realized by the server device 60, the specified processing target object is received from the server device 60, and the functions (d) and (e) are realized by the smartphone 10.
- At least the function (d) is realized by the server device 60, the marker display position is received from the server device 60, and the function (e) is realized by the smartphone 10.
- An augmented reality (AR) technique, which can use a computer to additionally present information on a real environment photographed by a camera or the like, can be applied to the image recognition apparatus 100.
- Using AR, a three-dimensional coordinate system whose XY plane is the area where a processing target object is specified on the video photographed by a camera such as that of the smartphone 10 can be recognized, and the corresponding marker can be displayed on the display unit 26 as, for example, a 3D object.
- the marker corresponding to the processing target object may have a user interface function that accepts a user operation on the processing target object.
- the image recognition apparatus 100 may further include a reception unit that receives a user operation using a user interface function of a marker corresponding to the processing target object displayed by the processing unit 104.
- When a selection operation is accepted, an instruction to perform a predetermined process, such as a purchase process for the selected product, may be output.
- The computer program of the present embodiment causes a computer for realizing the image recognition apparatus 100 to execute: a procedure for specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and that has characteristics according to the processing target object; a procedure for specifying, based on object position data indicating the relative position between the detection target object in the shooting target and the processing target object that is set in a predetermined arrangement according to the shooting target and has characteristics according to the shooting target, the processing target object in the captured image at the relative position from the specified position in the captured image of the detection target object; and a procedure for executing the process assigned to the specified processing target object. These procedures are executed by the CPU of the smartphone 10 or the server device 60, whereby the various units described above are realized as various functions.
- In the above procedure for executing the process, the processing target object in the captured image at the relative position is specified from the position in the captured image of the specified detection target object, and the process assigned to the specified processing target object is executed. However, it is not always necessary to specify the position of the processing target object, as long as at least the processing target object can be specified.
- Alternatively, the computer program of the present invention may be described so as to cause the computer, based on object position data associating the position information of the detection target object in the shooting target with the processing target object that is set in a predetermined arrangement according to the shooting target and has characteristics according to the shooting target, to specify the associated processing target object in the captured image from the position in the captured image of the specified detection target object, and to execute the process assigned to the specified processing target object.
- the computer program of this embodiment may be recorded on a computer-readable recording medium.
- the recording medium is not particularly limited, and various forms can be considered.
- the program may be loaded from a recording medium into a computer memory, or downloaded to a computer through a network and loaded into the memory.
- FIG. 8 is a flowchart illustrating an example of the operation of the image recognition apparatus 100 according to the present embodiment.
- In the processing method of the image recognition apparatus 100 according to the embodiment of the present invention, the image recognition apparatus 100 specifies, by image recognition, the position in the captured image of a detection target object that is set in a predetermined arrangement in the shooting target according to the processing target object and has characteristics according to the processing target object (step S11); specifies, based on object position data indicating the relative position between the detection target object in the shooting target and the processing target object that is set in a predetermined arrangement according to the shooting target and has characteristics according to the shooting target, the processing target object in the captured image at the relative position from the specified position in the captured image of the detection target object (step S13); and executes the process assigned to the specified processing target object (step S15).
- More specifically, at least a part of the imaging target is imaged by the imaging unit 30 (FIG. 4) (step S101), and the object specifying unit 102 extracts a region having a feature amount with which a certain recognition accuracy can be obtained by image recognition (step S103).
- the object specifying unit 102 searches the database 50 based on the feature information of the region extracted from the captured data by image recognition (step S105).
- When the object specifying unit 102 finds in the database 50 a detection target object whose feature information matches that of the extracted region at or above a threshold (YES in step S107), the detection target object is specified (step S109).
- Next, the processing unit 104 specifies a processing target object at a relative position from the position of the detection target object based on the object position data (step S111). Then, the processing unit 104 acquires the display position of the marker for the marker display process assigned to the specified processing target object (step S113), and displays the marker corresponding to the processing target object at the acquired display position (step S115).
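- Condensed into code, the loop of steps S101 to S115 might look like the following sketch, where the database is reduced to a plain mapping and all names and data shapes are illustrative assumptions.

```python
def recognize_and_mark(extracted_regions, db):
    """extracted_regions: list of (feature_id, position) pairs found in
    the captured frame (steps S101-S103). db maps a detection target
    feature_id to (processing_target_id, marker_offset), standing in
    for the object position data (steps S105-S113). Returns the markers
    to display as (processing_target_id, absolute position) pairs
    (step S115)."""
    markers = []
    for feature_id, (x, y) in extracted_regions:
        entry = db.get(feature_id)          # S105: search the database
        if entry is None:                   # NO at S107: no match found
            continue
        target_id, (dx, dy) = entry         # S109-S111: resolve target
        markers.append((target_id, (x + dx, y + dy)))  # S113-S115
    return markers
```

Several markers can be produced from one frame, matching the behavior where multiple processing target objects are identified and displayed simultaneously.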
- According to the image recognition apparatus 100 of the embodiment of the present invention, it is possible to prevent a reduction in recognition accuracy even when the feature amount of the processing target object is small or its size is small.
- The reason is that a region including the periphery of the processing target object is associated with the processing target object and held as a detection target object, and instead of detecting the processing target object itself, the detection target object is detected and the processing target object can then be specified from it.
- That is, even if image recognition of the processing target object itself does not give a good result, the processing target object is specified by using the detection target object set according to it, so the processing target object is accurately specified, and furthermore the process assigned to it, such as displaying the marker at an appropriate position, can be performed appropriately.
- Furthermore, since the database 50 does not hold information about the entire image of the processing target object but holds only information about at least a part of the imaging target as the detection target object corresponding to the processing target object, the storage capacity required for holding can be greatly reduced.
- In addition, compared with the case where collation against the database 50 is performed using information on the entire image of the processing target object, the image recognition apparatus 100 according to the embodiment of the present invention only needs to perform collation of the detection target object corresponding to a part of the region, so the recognition processing speed is remarkably improved.
- the image recognition apparatus according to the embodiment of the present invention is different from the image recognition apparatus 100 according to the above-described embodiment in that an easily recognizable area is used as a detection target object in a shooting target.
- the image recognition apparatus according to the present embodiment has the same configuration as the image recognition apparatus 100 according to the above-described embodiment of FIG. 3 and will be described below with reference to FIG. This embodiment is different from the above embodiment in the detection target object held in the database 50.
- FIG. 10 is a diagram for explaining the relationship among the processing target object, the detection target object, and the marker (processing position) in the image recognition apparatus 100 according to the embodiment of the present invention.
- feature points corresponding to the processing target object indicated by the marker 136A are a11 to a13.
- the feature points corresponding to the processing target object indicated by the marker 136B are b1 to b3.
- the feature points corresponding to the processing target object indicated by the marker 136C are c1 to c4.
- the feature points corresponding to the processing target object indicated by the marker 136D are d1 to d3.
- Information on these feature points may be stored in the database 50 as detection target objects in association with the respective processing target objects; in the present embodiment, however, feature information of feature points common to these four processing target objects, for example, the area 142 where feature points are dense, is stored in the database 50 as a single detection target object in association with the plurality of processing target objects. That is, the feature amounts of the feature points a11, a12, b1, b2, c1, c2, and d1 included in the region 142 and their relative positions are stored in the database 50 as a detection target object in association with each of the plurality of processing target objects.
- the relative positions of the marker display positions 138A to 138D for marker display processing assigned to each processing target object are stored in the database 50 in association with the processing target object.
- the region 142 is a region including a feature point having a feature amount equal to or greater than a threshold or a likelihood equal to or greater than the threshold.
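The association between one shared detection target object and several processing target objects can be sketched as a simple lookup structure. The following Python fragment is only an illustration of this data layout; all identifiers, coordinates, and offsets are hypothetical and not taken from the embodiment:

```python
# Hypothetical sketch of records held in the database 50: one detection
# target object (region 142) associated with several processing target
# objects and their marker display positions. All values illustrative.

detection_target = {
    # Feature point identifier -> (x, y) position within region 142.
    "feature_points": {
        "a11": (12, 30), "a12": (18, 34),
        "b1": (40, 28), "b2": (46, 33),
        "c1": (70, 29), "c2": (75, 35),
        "d1": (98, 31),
    },
}

# Each processing target object stores the display position of its
# marker relative to the detection target object's origin.
processing_targets = {
    "140A": {"marker": "136A", "marker_offset": (15, 80)},
    "140B": {"marker": "136B", "marker_offset": (45, 80)},
    "140C": {"marker": "136C", "marker_offset": (73, 80)},
    "140D": {"marker": "136D", "marker_offset": (100, 80)},
}

def markers_for(detected_origin):
    """Absolute marker positions once the detection target object has
    been located at detected_origin in the captured image."""
    ox, oy = detected_origin
    return {
        info["marker"]: (ox + info["marker_offset"][0],
                         oy + info["marker_offset"][1])
        for info in processing_targets.values()
    }

print(markers_for((200, 100)))
```

Because all four processing target objects share one detection target object, locating region 142 once is enough to place every marker.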
- the detection target object includes a predetermined region (position or size) that is easily detected in the photographing target in accordance with the processing target object.
- The object specifying unit 102 extracts at least a part of the feature information in the recognition area 142 by image recognition, and searches the database 50 based on the extracted feature information. When the object specifying unit 102 finds in the database 50 a detection target object whose feature information matches the feature information in the captured image at or above a threshold, it specifies that detection target object. The processing unit 104 then specifies, based on the object position data, the processing target object at the relative position from the position of the detection target object, acquires the display position for the marker display processing assigned to the specified processing target object, and displays a marker corresponding to the processing target object at the acquired display position.
- the object specifying unit 102 extracts, for example, feature points included in the photographing range 134 and their relative positions. Then, the object specifying unit 102 searches the database 50 to find a detection target object that matches at least a part of the feature points a11, a12, b1, b2, c1, c2, and d1 extracted from the captured image. Then, the processing unit 104 specifies the processing target object at the relative position from the position of the detected detection target object (region 142) based on the object position data. In the present embodiment, four processing target objects are specified, and the processing unit 104 acquires display positions 138A to 138D of the four markers 136A to 136D respectively assigned to the four processing target objects.
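The matching step, in which a detection target object is accepted when its stored feature points match the points extracted from the captured image at or above a threshold, can be sketched roughly as follows. This is a simplified set-based stand-in for real descriptor matching; the point names and the 0.5 threshold are assumptions:

```python
# Illustrative sketch (not the patented implementation) of the matching
# step used by the object specifying unit 102.

stored_points = {"a11", "a12", "b1", "b2", "c1", "c2", "d1"}

def matches(extracted_points, threshold=0.5):
    """True when the fraction of stored feature points of the detection
    target object found in the captured image meets the threshold."""
    hits = stored_points & set(extracted_points)
    return len(hits) / len(stored_points) >= threshold

# Only part of region 142 lies inside the shooting range 134, yet the
# detection target object is still identified from the partial match.
print(matches({"a11", "a12", "b1", "b2"}))
print(matches({"a11"}))
```

This also illustrates why a partial view of region 142 inside the shooting range 134 suffices for identification.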
- The image recognition apparatus 100 may display all four markers based on these display positions. However, as illustrated in FIG. 10C, only the markers 136A and 136B for the processing target objects at the center of the shooting range 134 may be displayed.
- a plurality of markers 136A to 136D respectively corresponding to the plurality of processing target objects 140A to 140D are displayed on the preview screen 9 of the catalog 7 photographed while holding the smartphone 10.
- In the smartphone 10, at least a part of the area 142 is extracted from the imaging data, and the corresponding detection target object is specified.
- four processing target objects 140A to 140D associated with the detection target object are specified.
- the relative position of the marker display position with respect to the position of the detection target object is acquired, and a plurality of markers 136A to 136D respectively corresponding to the processing target objects 140A to 140D are displayed.
- As described above, the image recognition device of the present invention can prevent a decrease in recognition accuracy even when the feature amount of each processing target object is small.
- the recognition area 142 includes all objects to be processed, but is not limited to this.
- the recognition area 142 may be an area in the image other than the processing target object. Alternatively, it may be an area including at least a part of the processing target object.
- For example, the feature points e1 to e5 included in a recognition area 142 other than the image areas of the processing target objects 130A to 130D, together with their relative positions, may be stored in the database 50 as a detection target object.
- The display positions 138A to 138D of the markers 136A to 136D of the processing target objects 130A to 130D may then be held in the database 50 in association with the relative positions of the feature points e1 to e5.
- the same effects as those of the above-described embodiment can be obtained.
- The reason is that an image region having an easily recognizable feature amount, selected from an image containing a plurality of processing target objects, can be held as a detection target object associated with those processing target objects and used to specify them. Since the feature information of a common area can serve as the detection target object for a plurality of processing target objects, the required storage capacity can be reduced compared with the case where image feature information is held in the database 50 for each processing target object individually.
- the image recognition apparatus according to the embodiment of the present invention is different from the image recognition apparatus 100 of the above-described embodiment in that an area including a plurality of processing target objects adjacent in the image is used as a detection target object.
- The image recognition apparatus according to the present embodiment has the same configuration as the image recognition apparatus 100 of the above-described embodiment, and will be described below with reference to FIG. 3. This embodiment differs from the above embodiment in the detection target object held in the database 50.
- the detection target object includes at least a part of a region including the periphery of the processing target object. Furthermore, in the image recognition device according to the embodiment of the present invention, the detection target object is associated with a plurality of processing target objects (relative positions thereof).
- FIG. 13 is a diagram for explaining an image recognition method in the image recognition apparatus 100 according to the embodiment of the present invention.
- The object specifying unit 102 searches the database 50 and finds a detection target object that at least partially matches the feature information of a region that is extracted from the captured image by image recognition and has a feature amount with which a certain recognition accuracy can be obtained.
- The processing unit 104 detects a plurality of detection target objects, for example, a plurality of detection target objects included in the adjacent target object region 242 containing a plurality of processing target objects (in the figure, three items: a sofa, a coffee table, and a chair) adjacent to each other in the photographing target.
- Based on the object position data, the processing target objects at the relative positions are specified from the positions of the detected detection target objects. The processing unit 104 then acquires the marker display positions for the marker display processing assigned to the identified processing target objects, and displays the markers 230A to 230C corresponding to the processing target objects at the acquired display positions.
- “Adjacent” processing target objects in the shooting target do not necessarily have to be in contact with each other.
- The term covers cases where the same recognition target screen contains multiple processing target objects (items) to be recognized individually, including images that are difficult to recognize individually, for example because multiple processing target objects (items) are mixed together, or because processing target objects (items) and background are mixed.
- The “adjacent” processing target objects also include an imaging target in which one processing target object (item) contains, or overlaps at least a part of, another processing target object (item).
- the image recognition apparatus according to the embodiment of the present invention is different from the image recognition apparatus 100 of the above-described embodiment in that a detection target object embedded for recognition in a shooting target is used.
- The image recognition apparatus according to the present embodiment has the same configuration as the image recognition apparatus 100 of the above-described embodiment, and will be described below with reference to FIG. 3. This embodiment differs from the above embodiment in the detection target object held in the database 50.
- the detection target object includes at least information for detection.
- The object specifying unit 102 searches the database 50 and finds a detection target object that at least partially coincides with a region that is extracted by image recognition and has a feature amount with which a certain recognition accuracy can be obtained.
- The processing unit 104 specifies, based on object position data indicating the relative position between the processing target object and the detection target object for recognition embedded in advance in the shooting target so that at least a part of it is included in the shooting screen of the imaging unit, the processing target object at the relative position from the position of the found detection target object. The processing unit 104 then acquires the marker display position for the marker display processing assigned to the identified processing target object, and displays a marker corresponding to the processing target object at the acquired display position.
- the detection target object for recognition is an image region having a predetermined feature amount that is intentionally embedded in the photographing target in advance.
- the detection target object for recognition may be, for example, a digital watermark or a two-dimensional code such as a QR code (registered trademark).
- the position of the detection target object and the position of the corresponding processing target object can be individually set according to the shooting target.
- That is, an area within the imaging target from which sufficient feature information can be extracted to obtain a certain recognition accuracy by image recognition, when the user shoots an arbitrary area satisfying a certain requirement (for example, a shooting target in which a detection target object is embedded), can be set as a detection target object.
- By deliberately embedding a detection target object for recognition, the processing target object can be specified and the marker displayed, as in the above-described embodiments, even for a shooting target image in which it is difficult to set a detection target object (for example, a one-color image).
- When a digital watermark is used, the user cannot see the detection target object, so browsing of the image is not hindered.
- The image recognition apparatus of this embodiment differs in that a plurality of detection target objects are set so as to be arranged roughly evenly in the shooting target, and all the processing target objects included in the shooting target can be specified by their relative positions with respect to those detection target objects.
- The image recognition apparatus according to the present embodiment has the same configuration as the image recognition apparatus 100 of the above-described embodiment, and will be described below with reference to FIG. 3. This embodiment differs from the above embodiment in the detection target object held in the database 50.
- the detection target object is arranged on the shooting target so that at least one detection target object is included in the shot image.
- FIGS. 17 and 18 are diagrams for explaining a state where detection target objects are uniformly arranged in the image recognition apparatus 100 according to the embodiment of the present invention.
- The object specifying unit 102 searches the database 50, which stores as detection target objects regions having a feature amount with which a certain recognition accuracy can be obtained, selected so that at least a part of such a region is included in any shooting screen captured by the imaging unit.
- As shown in FIG. 17, when a plurality of processing target objects 140A, 140B, 140C, 140D, 260A, 260B, 260C, 420A, 420B, and 420C are included in the shooting target (catalog 7), a plurality of detection target objects 410A, 410B, 410C, 410D, 410E, and 410F are arranged in the shooting target.
- The detection target objects 410A to 410F are arranged evenly within the shooting target so that, taking the assumed size of the shooting screen into consideration, at least a part of one of them is included in any shooting screen.
- For example, the detection target object 410A is included in the shooting range 430A indicated by the dotted line in FIG. 18; when the shooting range shifts to the shooting range 430B, the detection target object 410B is included in it; and the detection target object 410C is included in the shooting range 430C.
- When arranging the detection target objects, in addition to selecting areas from which a sufficient amount of features can be extracted to maintain a certain recognition accuracy, it is preferable to consider the viewing angle of the photographing camera, the size of the photographing target, and the distance between the photographing target and the camera.
- As for the distance between the shooting target and the camera-equipped terminal, in the case of catalogs and menus the shooting target is held at most several tens of centimeters from the face, and the distance between the shooting target and the terminal can be assumed to be about several centimeters. From these, the distance between the detection target objects can be set appropriately in consideration of the viewing angle of the camera.
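The spacing consideration can be made concrete with simple viewing-angle geometry. This sketch assumes a pinhole-style model in which the width covered by the shooting screen on the target plane is 2·d·tan(θ/2); the numeric figures are illustrative assumptions, not values from the embodiment:

```python
import math

def frame_coverage(distance_cm, view_angle_deg):
    """Width of the shooting range on the target plane for a camera at
    distance_cm with the given horizontal viewing angle."""
    return 2 * distance_cm * math.tan(math.radians(view_angle_deg) / 2)

def max_spacing(distance_cm, view_angle_deg, object_size_cm):
    """Coarse upper bound on the spacing between detection target
    objects so that one fits wholly inside any shooting range
    (coverage minus one object size)."""
    return frame_coverage(distance_cm, view_angle_deg) - object_size_cm

# Catalog held ~30 cm from the camera, 60-degree viewing angle,
# 2 cm detection target objects (all figures assumed for illustration).
print(round(frame_coverage(30, 60), 1))
print(round(max_spacing(30, 60, 2), 1))
```

Under these assumed figures, detection target objects spaced no more than roughly 32 cm apart would guarantee that every shooting screen contains at least one.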
- the detection target object may be information on the distribution of feature amounts in the shooting target.
- The distribution of the feature amount indicates not only regions having a large feature amount in the photographing target but also regions having a small feature amount, regions having a middle feature amount, and so on. From these, at least a part of the information is selected as a detection target object so as to be included in the captured image. When the detection target object is set in this way, a feature amount distribution is acquired from the captured image by image recognition, and the detection target object may be specified by searching the database 50 for a feature amount distribution that at least partially matches the acquired distribution.
- the detection target objects for recognition described in the above embodiment can be uniformly arranged in the photographing target.
- As described above, a plurality of detection target objects are set so as to be arranged roughly evenly in the shooting target, and all the processing target objects included in the shooting target can be specified by their relative positions with respect to those detection target objects. The number of detection target objects to be prepared in advance can therefore be kept small and set efficiently while preventing a decrease in recognition accuracy.
- The image recognition apparatus according to this embodiment of the present invention differs from the image recognition apparatus 100 of the above-described embodiments in that no detection target object is set at positions where recognition conditions are likely to deteriorate, such as portions that are easily distorted in the recognition target.
- For example, when the photographing target is a booklet, the detection target object is set in a region excluding the curved gutter region around the binding portion.
- The image recognition apparatus according to the present embodiment has the same configuration as the image recognition apparatus 100 of the above-described embodiment, and will be described below with reference to FIG. 3. This embodiment differs from the above embodiment in the detection target object held in the database 50.
- FIG. 14 is a diagram for explaining the distribution range of the detection target object in the image recognition apparatus 100 according to the embodiment of the present invention.
- FIG. 14 exemplifies a case where a plurality of regions (indicated by circles in the figure) having a feature amount that provides a certain detection accuracy exist in the imaging target. As shown in the drawing, regions that can actually be detected by photographing may exist roughly uniformly over the entire surface of the booklet 250.
- However, the gutter region 254 around the binding portion 252 has a curved surface compared with the fore edge 256 on the opposite side of the spread from the binding portion 252. For this reason, distortion, light reflection, and the like are likely to occur in the gutter region 254, and there is a high possibility that recognition accuracy for the captured image is reduced. Therefore, in the present embodiment, at least one detection target object is set from among the plurality of regions existing over the entire paper surface of the booklet 250 while excluding the gutter region 254 around the binding portion 252.
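The exclusion of the gutter region when selecting detection target objects can be sketched as a simple filter over candidate regions; the normalized coordinates and the width of the excluded band below are hypothetical:

```python
# Sketch of selecting detection target objects while excluding the
# gutter region around the binding portion of a booklet. Page width is
# normalized to 0..1 with the binding at 0.5; all values illustrative.

GUTTER_HALF_WIDTH = 0.08  # excluded band on each side of the binding

def outside_gutter(region_x, binding_x=0.5):
    """True when a candidate region lies outside the gutter band."""
    return abs(region_x - binding_x) > GUTTER_HALF_WIDTH

# Horizontal positions of candidate regions with sufficient features.
candidate_regions = [0.10, 0.30, 0.47, 0.52, 0.55, 0.80, 0.95]
selected = [x for x in candidate_regions if outside_gutter(x)]
print(selected)
```

Candidates near the binding (0.47, 0.52, 0.55 in this sketch) are dropped, leaving only regions where distortion and reflection are less likely.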
- With this embodiment, the same effects as those of the above-described embodiments can be obtained. Further, when the photographing target is a booklet, the detection target objects are set so as to be included in a range excluding the gutter region around the binding portion, so a decrease in recognition accuracy can be prevented.
- the image recognition apparatus according to the embodiment of the present invention is different from the image recognition apparatus 100 of the above-described embodiment in that the process target objects of an image in which a plurality of process target objects are shown in a list format are individually specified.
- The image recognition apparatus according to the present embodiment has the same configuration as the image recognition apparatus 100 of the above-described embodiment, and will be described below with reference to FIG. 3. This embodiment differs from the above embodiment in the detection target object held in the database 50.
- FIG. 15 is a diagram for explaining recognition processing by the image recognition apparatus according to the embodiment of the present invention.
- the image 310 may include a plurality of processing target objects (images including a plurality of character strings) 320a, 320b, and 320c presented in a list format.
- Here, the detection target object can be, for example, feature information (for example, feature points and their positions) in an area including at least a part of the list containing the plurality of processing target objects 320a, 320b, and 320c.
- feature information of a region within the imaging target other than the list may be used as the detection target object.
- The object specifying unit 102 searches the database 50 and finds a detection target object that at least partially matches the feature information of the region extracted by image recognition. Then, based on the object position data indicating the relative positions between the detection target object (feature points and their relative positions) included in the image 310 containing the list of the plurality of processing target objects 320a, 320b, and 320c and each of the processing target objects 320a, 320b, and 320c in the list, the processing unit 104 specifies each processing target object 320a, 320b, and 320c at its relative position from the position of the detected detection target object.
- the processing unit 104 acquires a marker display position for marker display processing assigned to each of the specified processing target objects 320a, 320b, and 320c. Then, the processing unit 104 displays markers corresponding to the character strings 320a, 320b, and 320c of the processing target object at the acquired display position.
- each processing target object can be specified in the same manner as in the above-described embodiment described with reference to FIG.
- the marker corresponding to each processing target object in the list format has a user interface function that receives a user's predetermined operation on each processing target object.
- The processing unit 104 specifies each of the processing target objects 320a, 320b, and 320c from the captured image containing the list, and acquires information corresponding to each specified processing target object 320a, 320b, and 320c from the database 50.
- Based on the acquired information, the processing unit 104 displays, at the marker display positions, a user interface for receiving a predetermined operation on the processing target objects 320a, 320b, and 320c.
- FIG. 16 is a diagram illustrating an example of a graphical user interface of the image recognition apparatus according to the present embodiment.
- FIG. 16A shows an example of a drum type user interface 330.
- the processing unit 104 specifies each character string 320a, 320b, and 320c from an image including a plurality of processing target objects 320a, 320b, and 320c, and information corresponding to each specified processing target object 320a, 320b, and 320c Is obtained from the database 50. For example, information on a character string corresponding to each processing target object is acquired, and the processing unit 104 displays a user interface 330 for selecting the acquired character string superimposed on an image 310 including a list.
- the character string information may be stored in advance in the database 50 in association with each character string.
- Alternatively, the processing unit 104 may acquire the display area of each processing target object as the corresponding information, cut out the display area of each processing target object from the image, and extract the text corresponding to each processing target object using an OCR (Optical Character Reader) function.
- FIG. 16B shows an example of a jog dial type user interface 340.
- the user interface 340 for selecting the character string acquired by the processing unit 104 is displayed superimposed on the image 310 including the list.
- In this way, each element, such as the plurality of character strings included in a processing target object that is generally difficult to recognize, can be specified based on the feature information of a region of the image having a sufficient feature amount.
- a flyer on which a list of product names is presented can be photographed, and the product names in the list can be presented on the user interface. This facilitates an operation for selecting a specific product in a list that is generally difficult to operate.
- the image recognition apparatus may be configured by combining any of the configurations of the above-described embodiments within a range in which no contradiction occurs.
- Depending on the state of each image that is a processing target object on each sheet of the catalog, for example, the image being small, the image color being light (small feature information amount), or the product image being buried in the background (hard to identify), appropriate ones of the configurations of the above-described embodiments can be adopted and combined.
- each element may be selected by receiving an operation of touching a drum slider on the touch panel. Or you may receive operation which rotates a drum by tilting the main body of the smart phone 10 back and forth in the rotation direction of a drum.
- each element may be selected by receiving an operation of rotating the jog dial on the touch panel.
- an operation for rotating the jog dial may be received by moving the main body of the smartphone 10 left and right in the rotation direction of the jog dial.
- Furthermore, the processing unit 104 can control the processing assigned to the processing target object according to the difference in scale between the detection target object specified by the object specifying unit 102 and the processing target object specified from that detection target object. For example, if the imaging unit 30 of the smartphone 10 is far from the shooting target, the detection target object appears small in the captured image, and if it is close, the detection target object appears large. The detection target object reflected in the captured image may not be visible to the user, like the above-described digital watermark. As the shooting distance decreases, the detection target object in the captured image grows larger; that is, the position and size of the display processing for the processing target object change according to the size of the detection target object, and their relative position and size change as well.
- The relative position of the processing target object with respect to the detection target object is expressed, for example, as vector data: a side connecting one vertex of the detection target object to another vertex of the same detection target object is taken as a reference vector, and the relative position is indicated by lengths relative to this vector. Based on the length of the reference side of the detected detection target object, the position and size of the display processing for the processing target object can then be specified using the vector data.
- As the vertices used here, feature points having a feature amount equal to or higher than a threshold obtained from the captured image by image recognition may be used, or feature points of the detection target object obtained by collating the feature information extracted by image recognition with the database 50 may be used.
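One possible reading of this vector-data scheme is sketched below: the detected side p0→p1 serves as the reference vector, and a marker position is stored as coefficients (a, b) along that vector and its perpendicular, so the displayed position follows the detected scale automatically. The coefficients and coordinates are illustrative assumptions:

```python
# Illustrative sketch of the vector-data scheme: a side of the
# detection target object (vertex p0 to vertex p1) is the reference
# vector; the marker position is given by coefficients (a, b) relative
# to that vector and its perpendicular. All values hypothetical.

def marker_position(p0, p1, a, b):
    """Absolute marker position from the detected reference side
    p0->p1 and coefficients (a along the side, b perpendicular)."""
    vx, vy = p1[0] - p0[0], p1[1] - p0[1]
    px, py = -vy, vx  # perpendicular of the reference vector
    return (p0[0] + a * vx + b * px, p0[1] + a * vy + b * py)

# The same relative data at two shooting distances: the marker position
# (and implicitly its scale) follows the detected reference side.
print(marker_position((0, 0), (10, 0), 0.5, 2.0))  # distant shot
print(marker_position((0, 0), (20, 0), 0.5, 2.0))  # closer shot
```

Doubling the detected side length doubles the marker offset, which matches the statement that the position and size of the display processing change with the size of the detection target object.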
- In this way, the size and position of the display processing for the processing target object are changed appropriately according to the detection target object, and the displayed content can also be changed.
- When the detection target object appears small in the captured image, a plurality of corresponding processing target objects may be specified; when the detection target object is photographed in close-up and appears large in the captured image, only a specific processing target object may be specified. Therefore, the image recognition device may switch processing accordingly: when the detection target object in the captured image is small, it performs processing that treats the plurality of specified processing target objects as a group, such as displaying rough information for the group, and when the detection target object in the captured image is large, it performs processing that displays detailed information on the specific processing target object.
- For example, when warning of allergens in menu items: when the detection target object appears small, that is, when shooting from a distance and a plurality of menu items are included in the screen, a frame painted in red is displayed on each menu area containing an allergen. Conversely, when the detection target object appears large (when the shooting distance is short and only one specific menu item is shown in the captured image), the display processing can be changed to show, in addition to the red-painted frame, the specific allergen name, such as "uses wheat".
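The scale-dependent switching of the display processing, as in the allergen example, can be sketched as follows; the 0.3 threshold and the message strings are hypothetical:

```python
# Sketch of switching the display process by the apparent size of the
# detection target object in the captured image. Threshold and
# messages are illustrative assumptions, not part of the embodiment.

def display_for(detected_width_px, frame_width_px):
    """Choose rough group-level or detailed item-level display from the
    ratio of the detected object's width to the frame width."""
    ratio = detected_width_px / frame_width_px
    if ratio < 0.3:
        # Distant shot, several menu items in the frame: rough display.
        return "red frame on menus containing allergens"
    # Close-up of one menu item: detailed display.
    return "red frame plus allergen name, e.g. 'uses wheat'"

print(display_for(100, 1000))  # distant shot
print(display_for(700, 1000))  # close-up
```

The apparent width of the detection target object thus acts as a proxy for shooting distance, selecting between group-level and item-level processing.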
- 1. A processing method of an image recognition apparatus, wherein the image recognition apparatus: specifies, by image recognition, the position of a detection target object that is set in a predetermined arrangement according to a processing target object in a shooting target and has characteristics according to the processing target object; specifies, from the position of the specified detection target object in the captured image, the processing target object in the captured image at the relative position, based on object position data indicating the relative position between the detection target object in the shooting target and the processing target object that is set in a predetermined arrangement according to the shooting target and has characteristics according to the shooting target; and executes the processing assigned to the specified processing target object.
- 2. A processing method of an image recognition apparatus, wherein the image recognition apparatus: specifies, by image recognition, the position of a detection target object that is set in a predetermined arrangement according to a processing target object in a shooting target and has characteristics according to the processing target object; specifies the processing target object in the associated captured image from the position of the specified detection target object in the captured image, based on object position data that associates position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has characteristics according to the shooting target; and executes the processing assigned to the specified processing target object.
- 3. The processing method of the image recognition apparatus according to 1. or 2., wherein the detection target object includes a predetermined region that is easy to detect within the shooting target in accordance with the processing target object.
- 7. The processing method of the image recognition apparatus, wherein the photographing target is a booklet and the detection target object is included in an area excluding the gutter region around the binding portion of the booklet.
- 8. The processing method of the image recognition apparatus according to any one of 1. to 7., wherein the processing includes at least one of marker display, balloon display, menu display, realization of a user interface function, and transmission of a detection result to a server.
- 9. The processing method of the image recognition apparatus according to 8., wherein the processing includes processing for realizing a user interface function that enables a plurality of processing target objects to be selectively processed.
- 10. The processing method of the image recognition apparatus according to any one of 1. to 9., wherein the image recognition apparatus controls the processing assigned to the processing target object according to a difference in scale between the detection target object and the processing target object specified from the detection target object.
- 11. The processing method of the image recognition apparatus according to any one of 1. to 10., wherein the detection target object is arranged on the shooting target so that at least one detection target object is included in the captured image.
- 12. The processing method of the image recognition apparatus, wherein the object position data includes, as information indicating the position of the detection target object, positional information in the captured image of a plurality of feature points of the detection target object, and the image recognition apparatus specifies the processing target object in the shooting target from the positions in the shooting target of the plurality of feature points of the specified detection target object, based on the object position data.
- 13. The processing method of the image recognition apparatus according to any one of 1. to 12., wherein the image recognition apparatus is a mobile terminal, a server apparatus that can communicate with the mobile terminal, or a combination thereof.
- 16. The program, wherein the detection target object includes a predetermined easily detectable area within the shooting target in accordance with the processing target object.
- 17. The program, wherein the detection target object includes at least a part of an area including the periphery of the processing target object.
- 18. The program according to any one of 14. to 17., wherein the detection target object is associated with a plurality of the processing target objects.
- 19. The program according to any one of 14. to 18., wherein the detection target object includes at least information for detection.
- 20. The program according to any one of 14. to 19., wherein the shooting target is a booklet and the detection target object is included in a region excluding the gutter region around the binding portion of the booklet.
- 21. The program according to any one of 14. to 20., wherein the processing includes at least one of marker display, balloon display, menu display, realization of a user interface function, and transmission of a detection result to a server.
- 22. The program according to 21., wherein the processing includes processing for realizing a user interface function that allows a plurality of processing target objects to be selectively processed.
- 23. The program, wherein the processing assigned to the processing target object is controlled according to a difference in scale between the detection target object specified by the object specifying means and the processing target object specified from the detection target object.
- the program arrange
- the object position data includes, as information indicating the position of the detection target object, positional information in the captured image of a plurality of feature points of the detection target object, In the procedure of executing the assigned processing, based on the object position data, the processing target object in the shooting target from the positions in the shooting target of the plurality of feature points of the identified detection target object
- the image recognition apparatus realized by the computer executing the program is a portable terminal, a server apparatus that can communicate with the portable terminal, or a combination thereof.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
For example, when recognizing items in a restaurant menu or in a mail-order catalog, the photo of the target item may be small, the item may be plain white with almost no pattern, or it may only be listed as a text string with no photo at all. As a result, a sufficient amount of feature information to maintain recognition accuracy cannot be obtained from the captured image or the registered image of the item, and some items become difficult to recognize.
The apparatus comprises: an object specifying means that specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and
a processing means that, based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies, from the position in the captured image of the detection target object specified by the object specifying means, the processing target object in the captured image located at that relative position, and executes the processing assigned to the specified processing target object.
The apparatus comprises: an object specifying means that specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and
a processing means that, based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies the associated processing target object in the captured image from the position in the captured image of the detection target object specified by the object specifying means, and executes the processing assigned to the specified processing target object.
In this method, the image recognition apparatus
specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object, and,
based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies, from the specified position of the detection target object in the captured image, the processing target object in the captured image located at that relative position, and executes the processing assigned to the specified processing target object.
In this method, the image recognition apparatus
specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object, and,
based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies the associated processing target object in the captured image from the specified position of the detection target object in the captured image, and executes the processing assigned to the specified processing target object.
The program causes a computer to execute: a procedure of specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and
a procedure of specifying, based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, the processing target object in the captured image located at that relative position from the specified position of the detection target object in the captured image, and executing the processing assigned to the specified processing target object.
The program causes a computer to execute: a procedure of specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and
a procedure of specifying, based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, the associated processing target object in the captured image from the specified position of the detection target object in the captured image, and executing the processing assigned to the specified processing target object.
FIG. 1 is a block diagram showing a configuration example of a mail-order system 1 as an example of a system using the image recognition apparatus according to the embodiment of the present invention.
The mail-order system 1 of this embodiment includes a smartphone 10, which is a mobile terminal used by a user; a server apparatus 60 that can communicate with the smartphone 10 via a network 3; and a database 50 (denoted "DB" in the figure) connected to the server apparatus 60.
The image recognition apparatus 100 according to the embodiment of the present invention includes: an object specifying unit 102 that specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and a processing unit 104 that, based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies the associated processing target object in the captured image from the position in the captured image of the detection target object specified by the object specifying unit 102, and executes the processing assigned to the specified processing target object.
The shooting target is what a user or the like intends to photograph using a terminal such as the smartphone 10. As described above, the shooting target is, for example, a page of a booklet such as a mail-order catalog. The shooting target contains objects that the user perceives when looking at it; for example, products listed in the catalog correspond to such perceived objects.
Here, in the case of a captured image covering only a part of the shooting target, among the processing target objects that can be specified from the position of the detection target object in the captured image, processing target objects that are included in the shooting target but not in the captured image may also be specifiable. The object position data need only include at least linking information indicating the correspondence between detection target objects and processing target objects. This correspondence may be at least one of one-to-one, one-to-many, many-to-one, and many-to-many.
The image recognition apparatus of the embodiment of the present invention is particularly suited to a shooting target in which multiple processing target objects are arranged side by side, photographed at least part by part while the orientation and position of the camera change, with the user sequentially viewing a mobile-sized screen such as that of the smartphone 10 and photographing processing target objects for image recognition. In the present invention, what the user perceives and photographs is the processing target object, but what the image recognition apparatus 100 recognizes by image recognition is the detection target object. When the detection target object is recognized, the image recognition apparatus 100 notifies the user as though the processing target object itself had been recognized, for example by displaying a marker on the processing target object. Information corresponding to the processing target object specified by the image recognition apparatus 100 can then be displayed on the touch panel of the smartphone 10 in association with the processing target object, and operations such as placing an order can be received on the touch panel of the smartphone 10.
The server apparatus 60 of this embodiment can be realized by a server computer or personal computer connected to the database 50 (FIG. 1), or by an equivalent apparatus. It may also be configured as a virtual server or the like.
For example, in the case of the smartphone 10 of this embodiment, the size of the video data is the size displayed on the mobile-terminal-sized screen of the smartphone 10.
As shown in FIG. 20(a), the object position data 118 can hold, as one example, the following in association with one another: the ID of a detection target object in the shooting target; its position information (or the positions of multiple feature points included in the detection target object); the ID of the processing target object associated with the detection target object; and the relative position between the detection target object and the processing target object (or the relative position between the positions of the multiple feature points included in the detection target object and the processing target object).
(a1) Information indicating the absolute position of an image region corresponding to at least one detection target object in the shooting target (for example, by the coordinates of a predetermined position of the image region, such as its centroid, center, or an end point)
(a2) Information indicating (for example, by coordinates) the absolute positions, within the shooting target, of multiple feature points included in the image region corresponding to the detection target object
(a3) Information indicating the relative positions of image regions corresponding to multiple detection target objects in the shooting target (for example, by vector quantities representing feature amounts and positional relationships)
In (a3), for example, the relative positions of the centroids of image regions R1 and R2 corresponding to the detection target objects may be mutually indicated by a vector representing the direction and length of the straight line connecting the centroid coordinates (x1, y1) and (x2, y2) of image regions R1 and R2. Furthermore, the feature amounts of image regions R1 and R2 may each be included in the vector.
(b1) Information indicating the absolute position of the image region of a processing target object in the shooting target (for example, by the coordinates of a predetermined position of the image region, such as its centroid, center, or an end point)
(b2) Information indicating (for example, by coordinates) the position, within the shooting target or the captured image, at which the processing assigned to the processing target object is executed
(b3) Information indicating the relative positions of the image regions of multiple processing target objects in the shooting target (for example, by vector quantities representing feature amounts and positional relationships)
When one or more markers (that is, their display positions) are associated with a detection target object (or with multiple feature points included in its image region), the object position data 118 can include the relative position between any of the above-described position information of the detection target object and the display position of the corresponding marker.
The detection target object or the marker display position (the position at which the processing is executed) may also be held as an arrangement within a predetermined shooting range, rather than as mutual relative positions, when the shooting range can be specified in advance.
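The object position data described above can be pictured as a small record type. The following is a minimal illustrative sketch, not the publication's actual schema: the field names, the single relative-offset representation, and the helper function are all assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class ObjectPositionRecord:
    # Hypothetical record mirroring the object position data 118:
    # detection object ID, its feature-point positions, the linked
    # processing target object, and the relative position between them.
    detection_object_id: str
    feature_points: list       # (x, y) positions of the detection object's feature points
    processing_object_id: str
    relative_offset: tuple     # offset from detection object to processing target object

def locate_processing_object(record, detected_at):
    """Given where the detection target object was found in the captured
    image, return the expected position of the associated processing
    target object using the stored relative offset."""
    dx, dy = record.relative_offset
    x, y = detected_at
    return (x + dx, y + dy)

record = ObjectPositionRecord(
    detection_object_id="det-001",
    feature_points=[(12, 30), (40, 33), (25, 60)],
    processing_object_id="item-123",
    relative_offset=(80, -15),
)
print(locate_processing_object(record, (200, 100)))  # (280, 85)
```

In practice the record could instead hold feature-point-wise offsets or an arrangement within a known shooting range, as the text notes; the single offset here is only the simplest case.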
As marker information, information on marker attributes such as the marker type (mark, text, frame, balloon, icon, pop-up window, operation menu, replacement image, image processing, etc.), display format (blinking, 3D, zoom, color change, animation, etc.), shape, size, color, and pattern can be held in the database 50 in association with a marker ID or image ID. Alternatively, everything other than the specified processing target object may be masked so as to close up only the specified processing target object, allowing the user to focus on it alone.
Furthermore, as additional information for the processing corresponding to a processing target object, information such as the content and operations displayed in a marker balloon or operation menu may be held in the database 50 in association with the processing target object (a product ID, image ID, or the like) or with the processing (a marker ID or the like).
In this embodiment, it is desirable that the processing unit 104 can specify multiple processing target objects from a captured image and simultaneously display multiple markers corresponding to them on the screen of the smartphone 10.
The processing unit 104 of the image recognition apparatus 100 searches the database 50 based on a region having predetermined feature amounts extracted from the shooting target by image recognition. When the captured image contains a detection target object whose feature amounts at least partially match the feature amounts contained in a region, including the surroundings of a processing target object, that is held in the database 50 as a detection target object, the processing unit 104 specifies, based on the object position data, the processing target object located at the relative position from the position of the image region having the feature amounts of the detection target object in the captured image. The processing unit 104 can then acquire the processing assigned to the specified processing target object, for example the marker display position for performing marker display processing, and displays the marker corresponding to the processing target object at the acquired display position.
Consider, for example, a case where a processing target object has few feature amounts. As shown in FIG. 7(a), when the processing target object 130 has only a single feature point a1 with predetermined feature amounts, the recognition accuracy for the processing target object 130 may decrease. Recognition may also become difficult when the processing target object 130 is small, because there are few feature amounts or no feature points. Therefore, in this embodiment, the range of the detection target object region associated with the processing target object 130 is widened to a region 132 that also includes the surroundings of the processing target object 130. The region 132 serving as the detection target object is desirably determined so as to have feature amounts with which the recognition accuracy reaches or exceeds a predetermined level.
In this embodiment, the database 50 holds in advance, in the object position data, the feature points contained in the region including the surroundings of the processing target object and their relative positions (information on the detection target object in the shooting target (captured image)), together with the display position of the marker corresponding to the processing target object.
As shown in FIG. 7(c), the processing unit 104 displays the marker 136 corresponding to the processing target object 130 within the shooting range 134, using the acquired display position 138 as a reference.
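The idea in FIG. 7, widening the detection region around a feature-poor processing target object until it carries enough features to recognize reliably, can be sketched as follows. This is a simplified illustration under assumed parameters (square regions, a fixed feature-count threshold), not the publication's actual algorithm.

```python
def grow_detection_region(center, feature_points, min_features=3, step=10, max_half=100):
    """Expand a square region around `center` until it contains at least
    `min_features` feature points, or the size limit is reached.
    Returns the half-width of the chosen region and the points inside it."""
    half = step
    inside = []
    while half <= max_half:
        cx, cy = center
        inside = [(x, y) for (x, y) in feature_points
                  if abs(x - cx) <= half and abs(y - cy) <= half]
        if len(inside) >= min_features:
            return half, inside
        half += step
    return half - step, inside

# The target object at (0, 0) has only one nearby feature point, so the
# region must grow to take in surrounding features.
points = [(5, 5), (60, 10), (-40, 35), (15, -70)]
half, found = grow_detection_region((0, 0), points)
print(half, len(found))  # 60 3
```

Choosing the stopping criterion from a measured recognition-accuracy threshold, as the text suggests, would replace the simple feature count used here.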
(a) A function of extracting, by image recognition, a region of the captured image having feature amounts that yield a certain recognition accuracy
(b) A function of collating the feature information of the region extracted from the captured image against the feature information of detection target objects in the database 50, and recognizing that the captured image contains a detection target object when a match at or above a threshold is found
(c) A function of specifying, based on the object position data, the processing target object located at the relative position from the position of the detection target object recognized as being contained in the captured image
(d) A function of acquiring, as the processing assigned to the specified processing target object, the marker display position for marker display processing
(e) A function of displaying the marker corresponding to the processing target object at the acquired display position
Instead of (c) and (d), a function may be used that specifies the processing assigned to the processing target object directly from the position of the detection target object recognized in (b); that is, only the processing may be performed without specifying the processing target object itself.
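Functions (a) through (e) above can be sketched end to end as one small pipeline. Everything here is an illustrative stand-in: the signature-keyed "database", the region extractor, and the flat marker structure are assumptions, not the publication's API.

```python
# Illustrative "database": a feature signature maps to a registered
# detection target object, with the relative offset to its linked
# processing target object (the object position data).
DATABASE = {
    "sig-A": {"processing_object_id": "item-1", "rel_dx": 50, "rel_dy": 0},
}

def extract_regions(captured_image):
    # (a) stand-in for feature extraction: each region carries a signature
    return captured_image["regions"]

def recognize_and_mark(captured_image):
    markers = []
    for region in extract_regions(captured_image):
        # (b) collate the region's features against registered detection objects
        match = DATABASE.get(region["signature"])
        if match is None:
            continue
        # (c) the processing target object lies at the stored relative position
        pos = (region["x"] + match["rel_dx"], region["y"] + match["rel_dy"])
        # (d)/(e) fetch the marker display position and emit the marker
        markers.append((match["processing_object_id"], pos))
    return markers

image = {"regions": [{"signature": "sig-A", "x": 10, "y": 20},
                     {"signature": "sig-unknown", "x": 0, "y": 0}]}
print(recognize_and_mark(image))  # [('item-1', (60, 20))]
```

The split of these steps between the smartphone 10 and the server apparatus 60, listed as configurations (1) to (10) below, only moves the boundary between local and remote calls; the data flow stays the same.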
(1) All functions are realized on the smartphone 10.
(2) Function (a) is realized on the smartphone 10, the result is transmitted to the server apparatus 60, and functions (b) to (e) are realized on the server apparatus 60.
(3) Functions (a) to (b) are realized on the smartphone 10, the result is transmitted to the server apparatus 60, and functions (c) to (e) are realized on the server apparatus 60.
(4) Functions (a) to (c) are realized on the smartphone 10, the result is transmitted to the server apparatus 60, and functions (d) to (e) are realized on the server apparatus 60.
(5) Functions (a) to (d) are realized on the smartphone 10, the result is transmitted to the server apparatus 60, and function (e) is realized on the server apparatus 60.
(6) All functions are realized on the server apparatus 60.
(7) Function (a) is realized on the server apparatus 60, the extracted region is received from the server apparatus 60, and functions (b) to (e) are realized on the smartphone 10.
(8) At least function (b) is realized on the server apparatus 60, the specified detection target object is received from the server apparatus 60, and functions (c) and (e) are realized on the smartphone 10.
(9) At least functions (b) and (c) are realized on the server apparatus 60, the specified processing target object is received from the server apparatus 60, and functions (d) and (e) are realized on the smartphone 10.
(10) At least function (d) is realized on the server apparatus 60, the marker display position is received from the server apparatus 60, and function (e) is realized on the smartphone 10.
The image recognition apparatus 100 may further include a reception unit that receives user operations using the user interface function of the marker corresponding to the processing target object displayed by the processing unit 104.
The computer program of this embodiment is written so as to cause a computer realizing the image recognition apparatus 100 to execute: a procedure of specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and a procedure of specifying, based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, the processing target object in the captured image located at that relative position from the specified position of the detection target object in the captured image, and executing the processing assigned to the specified processing target object.
Accordingly, instead of the above procedure, the computer program of the present invention may, for example, be written so as to cause a computer to specify, based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, the associated processing target object in the captured image from the specified position of the detection target object in the captured image, and to execute the processing assigned to the specified processing target object.
In the processing method of the image recognition apparatus 100 according to the embodiment of the present invention, the image recognition apparatus 100: specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object (step S11); specifies, based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, the processing target object in the captured image located at that relative position from the specified position of the detection target object in the captured image (step S13); and executes the processing assigned to the specified processing target object (step S15).
The image recognition apparatus according to this embodiment of the present invention differs from the image recognition apparatus 100 of the above embodiment in that an easily recognizable region within the shooting target is used as the detection target object. Since the image recognition apparatus of this embodiment has the same configuration as the image recognition apparatus 100 of FIG. 3, it is described below using FIG. 3. This embodiment differs from the above embodiment in the detection target objects held in the database 50.
Suppose, for example, that four images are adjacent to one another as processing target objects, each having multiple feature points. As shown in FIG. 10(a), the feature points corresponding to the processing target object indicated by marker 136A are a11 to a13; those for marker 136B are b1 to b3; those for marker 136C are c1 to c4; and those for marker 136D are d1 to d3.
Here, the region 142 is a region containing feature points whose feature amounts are at or above a threshold, or whose likelihood is at or above a threshold.
In the image recognition apparatus 100 of this embodiment, the object specifying unit 102 extracts, by image recognition, at least part of the feature information within the recognition region 142, and searches the database 50 based on the extracted feature information. When the object specifying unit 102 finds in the database 50 a detection target object whose feature information matches the feature information in the captured image at or above a threshold, it specifies that detection target object. The processing unit 104 then specifies, based on the object position data, the processing target object located at the relative position from the position of that detection target object, acquires the display position for the marker display processing assigned to the specified processing target object, and displays the marker corresponding to the processing target object at the acquired display position.
Since feature information of a common region can be used as the detection target object corresponding to multiple processing target objects, the required storage capacity can be further reduced compared with holding image feature information in the database 50 for each processing target object.
The image recognition apparatus according to this embodiment of the present invention differs from the image recognition apparatus 100 of the above embodiment in that a region containing multiple processing target objects adjacent to one another in the image is used as the detection target object. Since the image recognition apparatus of this embodiment has the same configuration as the image recognition apparatus 100 of FIG. 3, it is described below using FIG. 3. This embodiment differs from the above embodiment in the detection target objects held in the database 50.
Furthermore, in the image recognition apparatus according to this embodiment, the detection target object is linked to (the relative positions of) multiple processing target objects.
In the image recognition apparatus 100 of this embodiment, the object specifying unit 102 searches the database 50 and finds a detection target object at least partially matching the feature information of a region, extracted from the captured image by image recognition, that has feature amounts yielding a certain recognition accuracy. The processing unit 104 then specifies, based on object position data indicating the relative positions between the processing target objects and the detection target object (for example, multiple feature points and their relative positions) contained in an adjacent-object region 242 that includes multiple adjacent processing target objects in the shooting target (in the figure, three items: a sofa, a coffee table, and a chair), the processing target objects located at the relative positions from the position of the found detection target object. The processing unit 104 acquires the marker display positions for the marker display processing assigned to the specified processing target objects, and displays the corresponding markers 230A to 230C at the acquired display positions.
The image recognition apparatus according to this embodiment of the present invention differs from the image recognition apparatus 100 of the above embodiment in that detection target objects embedded in the shooting target for recognition are used. Since the image recognition apparatus of this embodiment has the same configuration as the image recognition apparatus 100 of FIG. 3, it is described below using FIG. 3. This embodiment differs from the above embodiment in the detection target objects held in the database 50.
In the image recognition apparatus 100 of this embodiment, the object specifying unit 102 searches the database 50 and finds a detection target object at least partially matching a region, extracted by image recognition, that has feature amounts yielding a certain recognition accuracy. The processing unit 104 then specifies, based on object position data indicating the relative position between the processing target object and a recognition detection target object embedded in advance in the shooting target so that at least part of it is included in the shooting screen of the imaging unit, the processing target object located at the relative position from the position of the found detection target object. The processing unit 104 acquires the marker display position for the marker display processing assigned to the specified processing target object, and displays the corresponding marker at the acquired display position.
The image recognition apparatus according to this embodiment of the present invention differs from the image recognition apparatus 100 of the above embodiment in that multiple detection target objects are set so as to be distributed more or less evenly across the shooting target, so that all processing target objects contained in the shooting target can be specified by their relative positions to those detection target objects. Since the image recognition apparatus of this embodiment has the same configuration as the image recognition apparatus 100 of FIG. 3, it is described below using FIG. 3. This embodiment differs from the above embodiment in the detection target objects held in the database 50.
In the image recognition apparatus 100 of this embodiment, the object specifying unit 102 searches the database 50, which sets and holds as detection target objects regions selected so that a region having feature amounts yielding a certain recognition accuracy is at least partly included in any shooting screen obtained by photographing at least a part of the image with the imaging unit.
As shown in FIG. 17, when the shooting target (catalog 7) contains multiple processing target objects 140A, 140B, 140C, 140D, 260A, 260B, 260C, 420A, 420B, and 420C, multiple detection target objects 410A, 410B, 410C, 410D, 410E, and 410F are arranged in the shooting target. In this way, the detection target objects are distributed evenly throughout the shooting target.
Likewise, the recognition detection target objects described in the above embodiment can also be distributed evenly throughout the shooting target.
The image recognition apparatus according to this embodiment of the present invention differs from the image recognition apparatus 100 of the above embodiment in that detection target objects are not set at locations where recognition conditions tend to deteriorate, such as parts of the recognition target that distort easily. For example, when the shooting target is a booklet, detection target objects are set in a region that excludes the curved gutter region around the binding.
Since the image recognition apparatus of this embodiment has the same configuration as the image recognition apparatus 100 of FIG. 3, it is described below using FIG. 3. This embodiment differs from the above embodiment in the detection target objects held in the database 50.
FIG. 14 is a diagram for explaining the distribution range of detection target objects in the image recognition apparatus 100 according to the embodiment of the present invention.
Therefore, in this embodiment, at least one detection target object is set from among the multiple regions existing across the entire page of the booklet 250, excluding the gutter region 254 around the binding 252.
The image recognition apparatus according to this embodiment of the present invention differs from the image recognition apparatus 100 of the above embodiment in that processing target objects in an image presenting multiple processing target objects in list form are specified individually.
Since the image recognition apparatus of this embodiment has the same configuration as the image recognition apparatus 100 of FIG. 3, it is described below using FIG. 3. This embodiment differs from the above embodiment in the detection target objects held in the database 50.
As shown in FIG. 15, in the image recognition apparatus of this embodiment, an image 310 may contain multiple processing target objects (images containing multiple character strings) 320a, 320b, and 320c presented in list form. In this embodiment, the detection target object can be, for example, feature information (such as feature points and their positions) within a region containing at least part of the list that includes the multiple processing target objects 320a, 320b, and 320c. Alternatively, feature information of a region in the shooting target other than the list may be used as the detection target object.
As described above, the processing unit 104 specifies each of the processing target objects 320a, 320b, and 320c from the captured image containing the list, and acquires from the database 50 the information corresponding to each specified processing target object 320a, 320b, and 320c.
Based on the acquired information corresponding to each of the processing target objects 320a, 320b, and 320c, the processing unit 104 displays, at the marker display position, a user interface for receiving predetermined operations on each of the processing target objects 320a, 320b, and 320c.
FIG. 16(a) shows an example of a drum-type user interface 330.
The processing unit 104 specifies each of the character strings 320a, 320b, and 320c from the image containing the multiple processing target objects 320a, 320b, and 320c, and acquires from the database 50 the information corresponding to each specified processing target object 320a, 320b, and 320c. For example, it acquires the character string information corresponding to each processing target object, and displays the user interface 330 for selecting the acquired character strings, superimposed on the image 310 containing the list.
Thus, according to the image recognition apparatus of this embodiment, each element that is generally difficult to recognize, such as the multiple character strings contained in processing target objects, can be individually specified based on the feature information of regions extracted from the image that have feature amounts yielding a certain recognition accuracy.
With the image recognition apparatus, for example, a flyer presenting a list of product names can be photographed and the product names in the list can be presented on a user interface. This facilitates the operation of selecting a specific product in a list, which is generally difficult.
For example, in the user interface 330 of FIG. 16(a), each element may be selected by receiving an operation of touching the drum's slider on the touch panel. Alternatively, an operation of rotating the drum may be received by tilting the body of the smartphone 10 back and forth in the rotation direction of the drum.
For example, if the imaging unit 30 of the smartphone 10 is far from the shooting target, the detection target object appears small in the captured image; if the imaging unit 30 is close, it appears large. Here, the detection target object appearing in the captured image need not be visible to the user, like the digital watermark described above. The size of the detection target object in the captured image varies with the shooting distance; that is, the position and size of the display processing for the processing target object change according to the size at which the detection target object appears. Its relative position, or its size, also changes.
For example, when the detection target object is captured small in the captured image, multiple corresponding processing target objects may be specified, whereas when the detection target object is captured in a large close-up, only a specific processing target object may be specified. Therefore, the image recognition apparatus may vary the executed processing based on the group attributes or the like of the specified processing target objects: when the detection target object in the captured image is small, it performs processing linked to the multiple processing target objects as a group, such as displaying rough information for each group of the specified processing target objects; when the detection target object in the captured image is large, it performs processing such as displaying detailed information on the specific processing target object.
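The scale-dependent behavior just described can be sketched as a simple switch on the apparent size of the detection target object. This is an illustrative sketch only; the 0.3 threshold and the two processing labels are arbitrary assumptions, not values from the publication.

```python
def select_processing(detected_width, image_width, threshold=0.3):
    """Choose the processing based on how large the detection target
    object appears relative to the captured image: small means many
    objects are likely in view (show coarse group-level information),
    large means a specific object is in close-up (show its details)."""
    scale = detected_width / image_width
    if scale < threshold:
        return "group_summary"
    return "detail_view"

print(select_processing(60, 640))   # group_summary
print(select_processing(400, 640))  # detail_view
```

A real implementation could derive the scale from the homography or feature-point spread estimated during recognition rather than a raw width ratio.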
When information about users is acquired and used in the present invention, this shall be done lawfully.
1. A processing method of an image recognition apparatus, wherein the image recognition apparatus
specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object, and,
based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies, from the specified position of the detection target object in the captured image, the processing target object in the captured image located at that relative position, and executes the processing assigned to the specified processing target object.
2. A processing method of an image recognition apparatus, wherein the image recognition apparatus
specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object, and,
based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies the associated processing target object in the captured image from the specified position of the detection target object in the captured image, and executes the processing assigned to the specified processing target object.
3. The processing method of the image recognition apparatus according to 1. or 2., wherein
the detection target object includes a predetermined easy-to-detect region within the shooting target, in accordance with the processing target object.
4. The processing method of the image recognition apparatus according to any one of 1. to 3., wherein
the detection target object includes at least part of a region including the surroundings of the processing target object.
5. The processing method of the image recognition apparatus according to any one of 1. to 4., wherein
the detection target object is linked to multiple processing target objects.
6. The processing method of the image recognition apparatus according to any one of 1. to 5., wherein
the detection target object includes at least information for detection.
7. The processing method of the image recognition apparatus according to any one of 1. to 6., wherein
the shooting target is a booklet, and the detection target object is included in a region excluding the gutter region around the binding of the booklet.
8. The processing method of the image recognition apparatus according to any one of 1. to 7., wherein
the processing includes at least one of display of a marker, a balloon, or a menu, realization of a user interface function, and transmission of a detection result to a server.
9. The processing method of the image recognition apparatus according to 8., wherein
the processing includes processing that realizes a user interface function enabling multiple processing target objects to be selectively processed.
10. The processing method of the image recognition apparatus according to any one of 1. to 9., wherein
the image recognition apparatus
controls the processing assigned to the processing target object according to the difference in scale between the specified detection target object and the processing target object specified from that detection target object.
11. The processing method of the image recognition apparatus according to any one of 1. to 10., wherein
the detection target object is arranged in the shooting target so that at least one detection target object is included in the captured image.
12. The processing method of the image recognition apparatus according to any one of 1. to 11., wherein
the object position data uses, as information indicating the position of the detection target object, position information in the captured image of multiple feature points of the detection target object, and
the image recognition apparatus specifies, based on the object position data, the processing target object in the shooting target from the positions in the shooting target of the multiple feature points of the specified detection target object.
13. The processing method of the image recognition apparatus according to any one of 1. to 12., wherein
the image recognition apparatus is a mobile terminal, a server apparatus capable of communicating with a mobile terminal, or a combination thereof.
14. A program for causing a computer to execute: a procedure of specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and a procedure of specifying, based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, the processing target object in the captured image located at that relative position from the specified position of the detection target object in the captured image, and executing the processing assigned to the specified processing target object.
15. A program for causing a computer to execute: a procedure of specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and
a procedure of specifying, based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, the associated processing target object in the captured image from the specified position of the detection target object in the captured image, and executing the processing assigned to the specified processing target object.
16. The program according to 14. or 15., wherein
the detection target object includes a predetermined easy-to-detect region within the shooting target, in accordance with the processing target object.
17. The program according to any one of 14. to 16., wherein
the detection target object includes at least part of a region including the surroundings of the processing target object.
18. The program according to any one of 14. to 17., wherein
the detection target object is linked to multiple processing target objects.
19. The program according to any one of 14. to 18., wherein
the detection target object includes at least information for detection.
20. The program according to any one of 14. to 19., wherein
the shooting target is a booklet, and the detection target object is included in a region excluding the gutter region around the binding of the booklet.
21. The program according to any one of 14. to 20., wherein
the processing includes at least one of display of a marker, a balloon, or a menu, realization of a user interface function, and transmission of a detection result to a server.
22. The program according to 21., wherein
the processing includes processing that realizes a user interface function enabling multiple processing target objects to be selectively processed.
23. The program according to any one of 14. to 22., further causing the computer to execute,
in the procedure of executing the assigned processing, a procedure of controlling the processing assigned to the processing target object according to the difference in scale between the detection target object specified by the object specifying means and the processing target object specified from that detection target object.
24. The program according to any one of 14. to 23., wherein
the detection target object is arranged in the shooting target so that at least one detection target object is included in the captured image.
25. The program according to any one of 14. to 24., wherein
the object position data uses, as information indicating the position of the detection target object, position information in the captured image of multiple feature points of the detection target object, and
the program further causes the computer to execute, in the procedure of executing the assigned processing, a procedure of specifying the processing target object in the shooting target from the positions in the shooting target of the multiple feature points of the specified detection target object, based on the object position data.
26. The program according to any one of 14. to 25., wherein
the image recognition apparatus realized by the computer executing the program is a mobile terminal, a server apparatus capable of communicating with a mobile terminal, or a combination thereof.
Claims (17)
- An image recognition apparatus comprising: an object specifying means that specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and
a processing means that, based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies, from the position in the captured image of the detection target object specified by the object specifying means, the processing target object in the captured image located at that relative position, and executes the processing assigned to the specified processing target object.
- An image recognition apparatus comprising: an object specifying means that specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and
a processing means that, based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies the associated processing target object in the captured image from the position in the captured image of the detection target object specified by the object specifying means, and executes the processing assigned to the specified processing target object.
- The image recognition apparatus according to claim 1 or 2, wherein the detection target object includes a predetermined easy-to-detect region within the shooting target, in accordance with the processing target object.
- The image recognition apparatus according to any one of claims 1 to 3, wherein the detection target object includes at least part of a region including the surroundings of the processing target object.
- The image recognition apparatus according to any one of claims 1 to 4, wherein the detection target object is linked to multiple processing target objects.
- The image recognition apparatus according to any one of claims 1 to 5, wherein the detection target object includes at least information for detection.
- The image recognition apparatus according to any one of claims 1 to 6, wherein the shooting target is a booklet, and the detection target object is included in a region excluding the gutter region around the binding of the booklet.
- The image recognition apparatus according to any one of claims 1 to 7, wherein the processing includes at least one of display of a marker, a balloon, or a menu, realization of a user interface function, and transmission of a detection result to a server.
- The image recognition apparatus according to claim 8, wherein the processing includes processing that realizes a user interface function enabling multiple processing target objects to be selectively processed.
- The image recognition apparatus according to any one of claims 1 to 9, wherein the processing means controls the processing assigned to the processing target object according to the difference in scale between the detection target object specified by the object specifying means and the processing target object specified from that detection target object.
- The image recognition apparatus according to any one of claims 1 to 10, wherein the detection target object is arranged in the shooting target so that at least one detection target object is included in the captured image.
- The image recognition apparatus according to any one of claims 1 to 11, wherein the object position data uses, as information indicating the position of the detection target object, position information in the captured image of multiple feature points of the detection target object, and
the processing means specifies, based on the object position data, the processing target object in the shooting target from the positions in the shooting target of the multiple feature points of the specified detection target object.
- The image recognition apparatus according to any one of claims 1 to 12, wherein the image recognition apparatus is a mobile terminal, a server apparatus capable of communicating with a mobile terminal, or a combination thereof.
- A processing method of an image recognition apparatus, wherein the image recognition apparatus
specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object, and,
based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies, from the specified position of the detection target object in the captured image, the processing target object in the captured image located at that relative position, and executes the processing assigned to the specified processing target object.
- A processing method of an image recognition apparatus, wherein the image recognition apparatus
specifies, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object, and,
based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, specifies the associated processing target object in the captured image from the specified position of the detection target object in the captured image, and executes the processing assigned to the specified processing target object.
- A program for causing a computer to execute: a procedure of specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and
a procedure of specifying, based on object position data indicating the relative position between the detection target object in the shooting target and a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, the processing target object in the captured image located at that relative position from the specified position of the detection target object in the captured image, and executing the processing assigned to the specified processing target object.
- A program for causing a computer to execute: a procedure of specifying, by image recognition, the position in a captured image of a detection target object that is set in a predetermined arrangement in a shooting target according to a processing target object and has features corresponding to the processing target object; and
a procedure of specifying, based on object position data associating position information of the detection target object in the shooting target with a processing target object that is set in a predetermined arrangement according to the shooting target and has features corresponding to the shooting target, the associated processing target object in the captured image from the specified position of the detection target object in the captured image, and executing the processing assigned to the specified processing target object.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015519809A JP6179592B2 (ja) | 2013-05-31 | 2014-05-21 | 画像認識装置、その処理方法、およびプログラム |
US14/893,249 US10650264B2 (en) | 2013-05-31 | 2014-05-21 | Image recognition apparatus, processing method thereof, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013115029 | 2013-05-31 | ||
JP2013-115029 | 2013-05-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014192612A1 true WO2014192612A1 (ja) | 2014-12-04 |
Family
ID=51988645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/063428 WO2014192612A1 (ja) | 2013-05-31 | 2014-05-21 | 画像認識装置、その処理方法、およびプログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US10650264B2 (ja) |
JP (3) | JP6179592B2 (ja) |
WO (1) | WO2014192612A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017033278A (ja) * | 2015-07-31 | 2017-02-09 | 三菱電機ビルテクノサービス株式会社 | 設備管理台帳作成支援システム、設備管理台帳作成支援装置及びプログラム |
CN108664829A (zh) * | 2017-03-27 | 2018-10-16 | 三星电子株式会社 | 用于提供与图像中对象有关的信息的设备 |
JP2018198060A (ja) * | 2017-05-23 | 2018-12-13 | アバイア インコーポレーテッド | 画像解析に基づくワークフローを実施するサービス |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6213177B2 (ja) * | 2013-11-18 | 2017-10-18 | 株式会社リコー | 表示処理装置及び表示処理方法 |
DE102014009686A1 (de) * | 2014-07-02 | 2016-01-07 | Csb-System Ag | Verfahren zur Erfassung schlachttierbezogener Daten an einem Schlachttier |
US10185976B2 (en) * | 2014-07-23 | 2019-01-22 | Target Brands Inc. | Shopping systems, user interfaces and methods |
US9818038B2 (en) * | 2016-01-06 | 2017-11-14 | Toshiba Tec Kabushiki Kaisha | Image recognition apparatus |
WO2018151008A1 (ja) * | 2017-02-14 | 2018-08-23 | 日本電気株式会社 | 画像認識システム、画像認識方法及び記録媒体 |
CN108510917A (zh) * | 2017-02-27 | 2018-09-07 | 北京康得新创科技股份有限公司 | 基于讲解装置的事件处理方法和讲解装置 |
US10740613B1 (en) | 2017-04-20 | 2020-08-11 | Digimarc Corporation | Hybrid feature point/watermark-based augmented reality |
KR102387767B1 (ko) * | 2017-11-10 | 2022-04-19 | 삼성전자주식회사 | 사용자 관심 정보 생성 장치 및 그 방법 |
JP6839391B2 (ja) * | 2018-03-29 | 2021-03-10 | 京セラドキュメントソリューションズ株式会社 | アイテム管理装置、アイテム管理方法及びアイテム管理プログラム |
KR102468309B1 (ko) * | 2018-04-26 | 2022-11-17 | 한국전자통신연구원 | 영상 기반 건물 검색 방법 및 장치 |
CN111354038B (zh) * | 2018-12-21 | 2023-10-13 | 广东美的白色家电技术创新中心有限公司 | 锚定物检测方法及装置、电子设备及存储介质 |
SG10201913005YA (en) * | 2019-12-23 | 2020-09-29 | Sensetime Int Pte Ltd | Method, apparatus, and system for recognizing target object |
CN111161346B (zh) * | 2019-12-30 | 2023-09-12 | 北京三快在线科技有限公司 | 将商品在货架中进行分层的方法、装置和电子设备 |
JP2021149439A (ja) * | 2020-03-18 | 2021-09-27 | 富士フイルムビジネスイノベーション株式会社 | 情報処理装置及び情報処理プログラム |
CN117235831B (zh) * | 2023-11-13 | 2024-02-23 | 北京天圣华信息技术有限责任公司 | 一种零件自动标注方法、装置、设备及存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004178111A (ja) * | 2002-11-25 | 2004-06-24 | Castle Computer Co Ltd | 電子広告提供システム、電子広告提供方法およびこの方法に用いられる広告媒体並びにプログラム |
JP2006229894A (ja) * | 2005-02-21 | 2006-08-31 | Ibm Japan Ltd | 表示装置、表示システム、表示方法、及びプログラム |
JP2011035800A (ja) * | 2009-08-05 | 2011-02-17 | National Institute Of Information & Communication Technology | 電子価格提示システム、電子価格提示装置、及び電子価格提示方法 |
JP2011107740A (ja) * | 2009-11-12 | 2011-06-02 | Nomura Research Institute Ltd | 標識および標識使用方法 |
JP2012164157A (ja) * | 2011-02-07 | 2012-08-30 | Toyota Motor Corp | 画像合成装置 |
WO2013012013A1 (ja) * | 2011-07-21 | 2013-01-24 | 株式会社日立ソリューションズ | 電子透かし広告コンテンツサービスシステム |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09146755A (ja) | 1995-06-16 | 1997-06-06 | Seiko Epson Corp | 端末装置 |
JP2003271942A (ja) * | 2002-03-18 | 2003-09-26 | Ricoh Co Ltd | バーコード記録方法、画像補正方法および画像補正装置 |
JP4345622B2 (ja) | 2003-11-05 | 2009-10-14 | オムロン株式会社 | 瞳色推定装置 |
JP2005208706A (ja) | 2004-01-20 | 2005-08-04 | Dainippon Printing Co Ltd | 位置情報取得装置及び方法、並びに、表示メディア作成装置等 |
JP3929450B2 (ja) | 2004-03-30 | 2007-06-13 | 株式会社エム・エム・シー | 商品販売システム、それに用る商品販売用印刷物及びその印刷方法 |
WO2006070476A1 (ja) * | 2004-12-28 | 2006-07-06 | Fujitsu Limited | 画像内の処理対象の位置を特定する画像処理装置 |
US8787706B2 (en) * | 2005-03-18 | 2014-07-22 | The Invention Science Fund I, Llc | Acquisition of a user expression and an environment of the expression |
GB2452512B (en) * | 2007-09-05 | 2012-02-29 | Sony Corp | Apparatus and method of object tracking |
JP5596273B2 (ja) | 2008-03-31 | 2014-09-24 | 生活協同組合コープさっぽろ | 商品情報提供サーバ、及び商品情報提供システム |
US20090319388A1 (en) | 2008-06-20 | 2009-12-24 | Jian Yuan | Image Capture for Purchases |
US8811742B2 (en) * | 2009-12-02 | 2014-08-19 | Google Inc. | Identifying matching canonical documents consistent with visual query structural information |
JP5280475B2 (ja) | 2010-03-31 | 2013-09-04 | 新日鉄住金ソリューションズ株式会社 | 情報処理システム、情報処理方法及びプログラム |
JP5776218B2 (ja) * | 2011-02-24 | 2015-09-09 | 株式会社大林組 | 画像合成方法 |
JP5664346B2 (ja) | 2011-03-04 | 2015-02-04 | 富士ゼロックス株式会社 | 画像処理装置、情報提供システム及びプログラム |
JP5412457B2 (ja) | 2011-03-29 | 2014-02-12 | 東芝テック株式会社 | 商品購入装置およびプログラム |
CA3164530C (en) * | 2011-10-28 | 2023-09-19 | Magic Leap, Inc. | System and method for augmented and virtual reality |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004178111A (ja) * | 2002-11-25 | 2004-06-24 | Castle Computer Co Ltd | 電子広告提供システム、電子広告提供方法およびこの方法に用いられる広告媒体並びにプログラム |
JP2006229894A (ja) * | 2005-02-21 | 2006-08-31 | Ibm Japan Ltd | 表示装置、表示システム、表示方法、及びプログラム |
JP2011035800A (ja) * | 2009-08-05 | 2011-02-17 | National Institute Of Information & Communication Technology | 電子価格提示システム、電子価格提示装置、及び電子価格提示方法 |
JP2011107740A (ja) * | 2009-11-12 | 2011-06-02 | Nomura Research Institute Ltd | 標識および標識使用方法 |
JP2012164157A (ja) * | 2011-02-07 | 2012-08-30 | Toyota Motor Corp | 画像合成装置 |
WO2013012013A1 (ja) * | 2011-07-21 | 2013-01-24 | 株式会社日立ソリューションズ | 電子透かし広告コンテンツサービスシステム |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017033278A (ja) * | 2015-07-31 | 2017-02-09 | 三菱電機ビルテクノサービス株式会社 | 設備管理台帳作成支援システム、設備管理台帳作成支援装置及びプログラム |
CN108664829A (zh) * | 2017-03-27 | 2018-10-16 | 三星电子株式会社 | 用于提供与图像中对象有关的信息的设备 |
CN108664829B (zh) * | 2017-03-27 | 2023-10-17 | 三星电子株式会社 | 用于提供与图像中对象有关的信息的设备 |
JP2018198060A (ja) * | 2017-05-23 | 2018-12-13 | アバイア インコーポレーテッド | 画像解析に基づくワークフローを実施するサービス |
US10671847B2 (en) | 2017-05-23 | 2020-06-02 | Avaya Inc. | Service implementing a work flow based on image analysis |
Also Published As
Publication number | Publication date |
---|---|
JP6659041B2 (ja) | 2020-03-04 |
US10650264B2 (en) | 2020-05-12 |
US20160125252A1 (en) | 2016-05-05 |
JP2017174444A (ja) | 2017-09-28 |
JP2018152075A (ja) | 2018-09-27 |
JP6179592B2 (ja) | 2017-08-16 |
JPWO2014192612A1 (ja) | 2017-02-23 |
JP6687051B2 (ja) | 2020-04-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6687051B2 (ja) | 画像認識装置、その処理方法、およびプログラム | |
US10339383B2 (en) | Method and system for providing augmented reality contents by using user editing image | |
US9172879B2 (en) | Image display control apparatus, image display apparatus, non-transitory computer readable medium, and image display control method | |
JP2012094138A (ja) | 拡張現実ユーザインタフェース提供装置および方法 | |
JP5948842B2 (ja) | 情報処理装置、情報処理方法およびプログラム | |
WO2014017392A1 (ja) | 情報処理装置、そのデータ処理方法、およびプログラム | |
CN107341185A (zh) | 信息显示的方法及装置 | |
WO2013145566A1 (en) | Information processing apparatus, information processing method, and program | |
KR102330637B1 (ko) | 증강현실 포토카드를 제공하는 시스템, 서버, 방법 및 그 기록매체 | |
JP6120467B1 (ja) | サーバ装置、端末装置、情報処理方法、およびプログラム | |
WO2014017393A1 (ja) | 情報処理装置、そのデータ処理方法、およびプログラム | |
US9607094B2 (en) | Information communication method and information communication apparatus | |
US8941767B2 (en) | Mobile device and method for controlling the same | |
US8866953B2 (en) | Mobile device and method for controlling the same | |
WO2014027433A1 (ja) | 情報提供装置、情報提供方法、及び、プログラム | |
KR101809673B1 (ko) | 단말 및 그의 제어 방법 | |
JP2017228278A (ja) | サーバ装置、端末装置、情報処理方法、およびプログラム | |
US10069984B2 (en) | Mobile device and method for controlling the same | |
KR20200008359A (ko) | Mr 콘텐츠 제공 시스템 및 그 방법 | |
US20220343571A1 (en) | Information processing system, information processing apparatus, and method of processing information | |
US20210158595A1 (en) | Information processing apparatus, information processing method, and information processing system | |
KR20190043001A (ko) | 단말 및 그의 제어 방법 | |
KR101722053B1 (ko) | 목적별 관심 영상으로 반복 전환하는 안내 장치 및 그 방법 | |
KR20170022029A (ko) | 디지털 액자 제어방법 | |
KR20190140311A (ko) | 단말 및 그의 제어 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14804595 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015519809 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14893249 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14804595 Country of ref document: EP Kind code of ref document: A1 |