WO2022044923A1 - Information processing device - Google Patents

Information processing device Download PDF

Info

Publication number
WO2022044923A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature amount
subject
extracted
candidate
Prior art date
Application number
PCT/JP2021/030217
Other languages
French (fr)
Japanese (ja)
Inventor
久央 勝見
渉 山田
桂一 落合
Original Assignee
NTT DOCOMO, INC.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NTT DOCOMO, INC.
Priority to JP2022544500A priority Critical patent/JP7412575B2/en
Publication of WO2022044923A1 publication Critical patent/WO2022044923A1/en

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis

Definitions

  • One aspect of the present invention relates to an information processing device.
  • Conventionally, a mechanism is known in which the similarity between two images is calculated by comparing multiple types of feature amounts of an input search image with multiple types of feature amounts of registered images registered in advance, and registered images similar to the search image are searched for based on the similarity (see, for example, Patent Document 1).
  • One aspect of the present invention aims to provide an information processing device capable of easily and appropriately extracting pairs of subjects that are similar to each other.
  • The information processing device according to one aspect of the present invention includes: a first acquisition unit that acquires one or more first images for each of a plurality of first subjects; a second acquisition unit that acquires one or more second images for each of a plurality of second subjects; a feature amount calculation unit that uses each of a plurality of feature amount extraction methods to calculate, for each first image and each second image, a feature amount for each feature amount extraction method; a similarity calculation unit that calculates, for each pair of a first image and a second image, a similarity for each feature amount extraction method based on the feature amounts for each feature amount extraction method calculated by the feature amount calculation unit; and an extraction unit that extracts a similar pair, which is a pair of a first subject and a second subject that are similar to each other, based on the similarities calculated by the similarity calculation unit for each combination of an image pair and a feature amount extraction method.
  • In this information processing device, the similarity for each feature amount extraction method is calculated for all combinations of the one or more first images of each of the plurality of first subjects and the one or more second images of each of the plurality of second subjects. Then, based on the similarities calculated in this way for each combination of an image pair and a feature amount extraction method, a similar pair, which is a pair of a first subject and a second subject that are similar to each other, is extracted.
  • This makes it possible to provide an information processing device capable of easily and appropriately extracting pairs of subjects that are similar to each other.
  • FIG. 1 is a diagram showing a configuration of a server 10 which is an information processing device according to an embodiment.
  • the server 10 is a device configured to be able to execute a process of extracting similar pairs, each of which is a pair of a candidate spot (first subject) and a famous spot (second subject) that are similar to each other.
  • Famous spots are tourist spots (sightseeing spots) that have a certain degree of name recognition. Examples of famous spots include Niagara Falls, Machu Picchu and other well-known tourist destinations. Candidate spots are less well-known than famous spots and have fewer tourists than famous spots. Famous spots and candidate spots are listed in advance by, for example, the operator of the server 10.
  • the server 10 includes a first acquisition unit 11, a second acquisition unit 12, a feature amount calculation unit 13, a similarity calculation unit 14, an extraction unit 15, and a presentation unit 16.
  • the first acquisition unit 11 acquires a plurality of candidate spot images X ik for each candidate spot X i .
  • the candidate spot image X ik denotes the k-th image that includes the candidate spot X i as a subject.
  • the plurality of candidate spots Xi may include candidate spots having only one candidate spot image.
  • the first acquisition unit 11 acquires a plurality of candidate spot images X ik from, for example, the candidate spot image DB 20.
  • the candidate spot image DB 20 is a database that stores one or more candidate spot images X ik for each of the plurality of candidate spots X i .
  • candidate spot images X ik collected in advance by an operator or the like are accumulated.
  • the candidate spot image X ik is stored in the candidate spot image DB 20 in advance by being extracted from images provided on, for example, Google Maps (registered trademark).
  • the second acquisition unit 12 acquires a plurality of famous spot images Y jm for each famous spot Y j .
  • the famous spot image Y jm indicates the m-th image including the famous spot Y j as a subject.
  • the plurality of famous spots Yj may include a famous spot having only one famous spot image.
  • the second acquisition unit 12 acquires a plurality of famous spot images Y jm from, for example, the famous spot image DB 30.
  • the famous spot image DB 30 is a database that stores one or more famous spot images Y jm for each of the plurality of famous spots Y j .
  • the famous spot image Y jm collected in advance by an operator or the like is stored in the famous spot image DB 30.
  • the famous spot image Y jm is stored in the famous spot image DB 30 in advance by being extracted from images provided on, for example, websites accessible via a communication network such as the Internet, or Google Maps (registered trademark).
  • the feature amount calculation unit 13 calculates the feature amount (CNN feature amount) obtained by the convolutional neural network, the feature amount based on the Visual Concept, and the GIST feature amount for each image X ik and Y jm .
  • As a result, F n (X ik ) and F n (Y jm ), which are the feature amounts (feature amount vectors) of each image X ik and Y jm for each feature amount extraction method (that is, for each feature amount extraction function F n ), are obtained.
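As a rough sketch of this step (the actual CNN, Visual Concept, and GIST extractors are not specified in this document, so simple placeholder functions stand in for them), the feature amounts for every image and every extraction function can be computed as follows:

```python
import numpy as np

# Stand-ins for the feature amount extraction functions F_1..F_Nf.
# The document names CNN features, Visual Concept features, and GIST
# features, but gives no implementations, so these placeholders simply
# map an image array to a small feature vector.
def f1(image):
    return image.mean(axis=(0, 1))   # dummy stand-in for a CNN feature

def f2(image):
    return image.std(axis=(0, 1))    # dummy stand-in for a GIST feature

extraction_functions = [f1, f2]      # the functions F_n

def compute_features(images):
    """images: dict mapping an image key such as (i, k) to an array.
    Returns {image key: [F_1(image), ..., F_Nf(image)]}."""
    return {key: [f(img) for f in extraction_functions]
            for key, img in images.items()}
```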
  • the similarity calculation unit 14 calculates, based on the feature amounts F n (X ik ) and F n (Y jm ) for each feature amount extraction method calculated by the feature amount calculation unit 13, the similarity Sim n (X ik , Y jm ) for each feature amount extraction method, for each pair (X ik , Y jm ) of a candidate spot image X ik and a famous spot image Y jm .
  • Sim n (X ik , Y jm ) indicates the degree of similarity between the feature amount F n (X ik ) of the candidate spot image X ik obtained by using the feature amount extraction function F n and the feature amount F n (Y jm ) of the famous spot image Y jm obtained by using the same function.
  • the similarity calculation unit 14 may calculate the similarity for each feature amount extraction method by a calculation method predetermined according to the feature amount extraction method. For example, the similarity calculation unit 14 may calculate the cosine similarity between the feature amounts as the above-mentioned similarity for the feature amount obtained by the convolutional neural network and the feature amount based on the Visual Concept. Further, the similarity calculation unit 14 may calculate the L2 distance between the feature amounts as the similarity for the GIST feature amount.
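A minimal sketch of these two similarity measures, assuming NumPy feature vectors. Negating the L2 distance is one possible convention for turning a distance into a score that grows with similarity; the text does not fix that detail:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity, used in the embodiment for the CNN features
    # and the Visual Concept features.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def l2_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # For the GIST features the text uses the L2 distance. A smaller
    # distance means more similar, so the distance is negated here to
    # obtain a score that grows with similarity (an assumption; the
    # text does not state this convention).
    return -float(np.linalg.norm(a - b))
```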
  • the extraction unit 15 extracts, based on the similarities Sim n (F n (X ik ), F n (Y jm )) calculated by the similarity calculation unit 14 for each combination of an image pair (X ik , Y jm ) and a feature amount extraction method (that is, a feature amount extraction function F n ), a similar pair, which is a pair (X i , Y j ) of a candidate spot X i and a famous spot Y j that are similar to each other.
  • the presentation unit 16 presents information on famous spots to the user.
  • the presentation unit 16 presents the information of the candidate spot similar to the famous spot to the user in association with the information of the famous spot based on the information of the similar pair extracted by the extraction unit 15.
  • the presentation unit 16 extracts, for example, a famous spot that matches the condition in response to a search request from the user (for example, an operation that requests information on a tourist spot that matches the condition input by the user). Then, the presentation unit 16 presents to the user information about the candidate spots having a similar pair relationship with the famous spots, together with the information about the extracted famous spots.
  • step S2 may be executed before step S1 or may be executed in parallel with step S1.
  • in step S3, the feature amount calculation unit 13 calculates, for each candidate spot image X ik and each famous spot image Y jm , the feature amounts F n (X ik ) and F n (Y jm ) for each feature amount extraction function F n (feature amount extraction method).
  • in step S4, the similarity calculation unit 14 calculates the similarity Sim n (X ik , Y jm ) for each feature amount extraction function F n , for each pair (X ik , Y jm ) of a candidate spot image X ik and a famous spot image Y jm .
  • in step S5, the extraction unit 15 extracts similar pairs based on the similarities Sim n (X ik , Y jm ) for each feature amount extraction function F n for each pair (X ik , Y jm ) calculated in step S4.
  • in step S11, the extraction unit 15 calculates, for each pair (X ik , Y jm ) of a candidate spot image X ik and a famous spot image Y jm , the total similarity Sim total , which is the sum of the similarities for the respective feature amount extraction functions F n .
  • in step S12, the extraction unit 15 extracts, from among all pairs (X ik , Y jm ) of candidate spot images X ik and famous spot images Y jm , the pairs having a total similarity Sim total up to the top N (a predetermined number of top ranks).
  • N is a number arbitrarily determined in advance by, for example, an operator or the like.
  • in step S13, the extraction unit 15 extracts, as a similar pair, the pair (X i , Y j ) of the candidate spot X i and the famous spot Y j corresponding to each image pair (X ik , Y jm ) extracted in step S12.
  • For example, when the pair (X 11 , Y 23 ) of the candidate spot image X 11 (the first image of the candidate spot X 1 ) and the famous spot image Y 23 (the third image of the famous spot Y 2 ) is extracted in step S12 as a pair having a total similarity within the top N, the extraction unit 15 extracts the pair (X 1 , Y 2 ) of the candidate spot X 1 and the famous spot Y 2 as a similar pair.
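The flow of steps S11 to S13 can be sketched as follows. This assumes the per-method similarities have already been computed and brought to comparable scales (the text does not specify how a cosine similarity and an L2-based similarity are made summable), keyed by the image-pair indices (i, k, j, m):

```python
def extract_similar_pairs_by_sum(sims, top_n):
    """sims: dict mapping an image-pair key (i, k, j, m) -- candidate
    spot i, image k, famous spot j, image m -- to a list of per-method
    similarities [Sim_1, ..., Sim_Nf]. Returns the set of spot pairs
    (X_i, Y_j) extracted as similar pairs."""
    # Step S11: total similarity = sum over the feature extraction functions.
    totals = {key: sum(values) for key, values in sims.items()}
    # Step S12: keep the image pairs with the top-N total similarity.
    top = sorted(totals, key=totals.get, reverse=True)[:top_n]
    # Step S13: map each image pair (X_ik, Y_jm) to its spot pair (X_i, Y_j).
    return {(i, j) for (i, k, j, m) in top}
```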
  • According to this method, similar pairs can be extracted with higher accuracy than when only the similarity corresponding to a single feature amount extraction method is used.
  • in step S21, the extraction unit 15 extracts, for each feature amount extraction function F n , the pairs (X ik , Y jm ) having a similarity Sim n up to the top N (a predetermined number of top ranks) from among all pairs (X ik , Y jm ) of candidate spot images X ik and famous spot images Y jm .
  • N is a number arbitrarily determined in advance by, for example, an operator or the like.
  • in step S22, the extraction unit 15 extracts, as similar candidates, the pairs (X i , Y j ) of candidate spots X i and famous spots Y j corresponding to the image pairs (X ik , Y jm ) extracted in step S21. That is, for each of the feature amount extraction functions F 1 to F Nf , a set TopN_F 1 to TopN_F Nf containing N similar candidates is extracted.
  • The extraction unit 15 then extracts, as a similar pair, each pair (X i , Y j ) of a candidate spot X i and a famous spot Y j that has been extracted as a similar candidate for a predetermined number or more of the feature amount extraction functions F n .
  • For example, the extraction unit 15 extracts, as a similar pair, each pair (X i , Y j ) that has been extracted as a similar candidate for all of the feature amount extraction functions F 1 to F Nf . For example, when three feature amount extraction functions F 1 to F 3 are used, a pair (X i , Y j ) included in every one of the set TopN_F 1 containing N similar candidates for the feature amount extraction function F 1 , the set TopN_F 2 containing N similar candidates for the feature amount extraction function F 2 , and the set TopN_F 3 containing N similar candidates for the feature amount extraction function F 3 is extracted as a similar pair.
  • That is, the pairs (X i , Y j ) included in the intersection (TopN_F 1 ∩ TopN_F 2 ∩ TopN_F 3 ) of the sets TopN_F 1 , TopN_F 2 , and TopN_F 3 are extracted as similar pairs.
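A sketch of this per-function top-N intersection, using the same hypothetical similarity dictionary keyed by image-pair indices; the strictest variant is shown, where a spot pair must appear in the top-N set of every feature amount extraction function:

```python
def extract_similar_pairs_by_intersection(sims, top_n):
    """sims: dict mapping an image-pair key (i, k, j, m) to a list of
    per-method similarities. Returns the spot pairs present in every
    method's top-N candidate set (TopN_F1 ∩ ... ∩ TopN_FNf)."""
    n_funcs = len(next(iter(sims.values())))
    candidate_sets = []
    for n in range(n_funcs):
        # Step S21: top-N image pairs for feature extraction function F_n.
        top = sorted(sims, key=lambda key: sims[key][n], reverse=True)[:top_n]
        # Step S22: reduce image pairs to spot pairs (the similar candidates).
        candidate_sets.append({(i, j) for (i, k, j, m) in top})
    # Spot pairs that are similar candidates for every function F_n.
    return set.intersection(*candidate_sets)
```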
  • in step S31, the extraction unit 15 extracts, for each combination (X i , F n ) of a candidate spot X i and a feature amount extraction function F n , the pairs (X ik , Y jm ) having a similarity up to the top N (a predetermined number of top ranks) from among all pairs (X ik , Y jm ) of candidate spot images X ik and famous spot images Y jm .
  • Next, the extraction unit 15 extracts, for each combination (X i , F n ), the pairs (X i , Y j ) of candidate spots X i and famous spots Y j corresponding to the image pairs (X ik , Y jm ) extracted in step S31 as first similar candidates. That is, for each combination (X i , F n ), a set TopN_F n (X i ) containing N first similar candidates is extracted.
  • The extraction unit 15 then extracts, for each candidate spot X i , each famous spot Y j that has been extracted as a first similar candidate for a predetermined number or more of the feature amount extraction functions F n as a second similar candidate.
  • For example, the extraction unit 15 extracts, as second similar candidates, the famous spots Y j included in the pairs (X i , Y j ) extracted as first similar candidates for all of the feature amount extraction functions F 1 to F Nf . For example, focusing on a certain candidate spot X i , when three feature amount extraction functions F 1 to F 3 are used, the famous spots Y j included in every one of the sets TopN_F 1 (X i ), TopN_F 2 (X i ), and TopN_F 3 (X i ) are extracted as second similar candidates.
  • As a result of step S33, the sets g(X 1 ), ..., g(X Nx ) of the second similar candidates corresponding to the respective candidate spots X 1 , ..., X Nx are obtained.
  • step S34 the extraction unit 15 calculates tf (X i , Y j ), which is a score (first evaluation value) for each set (X i , Y j ) of the candidate spot X i and the famous spot Y j .
  • tf (X i , Y j ) is a score that applies the idea of tf (Term Frequency) in tf-idf, which is one of the methods for evaluating the importance of words contained in a document.
  • the tf (X i , Y j ) for a pair (X i , Y j ) of a specific candidate spot X i and a specific famous spot Y j is the value obtained by dividing the number of occurrences of the specific famous spot Y j in the set g(X i ) of second similar candidates corresponding to the specific candidate spot X i by the total number of famous spots included in the set g(X i ). That is, tf (X i , Y j ) is expressed by the following equations (2-1) to (2-3).
  • step S35 the extraction unit 15 calculates idf (Y j ), which is a score (second evaluation value) for each famous spot Y j .
  • idf (Y j ) is a score that applies the concept of idf (Inverse Document Frequency) in tf-idf.
  • the idf (Y j ) for a specific famous spot Y j is the value obtained by dividing the total number of candidate spots (N X in this embodiment) by the number of candidate spots that include the specific famous spot Y j as a second similar candidate (that is, the number of candidate spots X i whose set g(X i ) includes the famous spot Y j ). That is, idf (Y j ) is expressed by the following equation (3). Equation (3): idf (Y j ) = (total number of candidate spots) / (number of candidate spots X i whose set g(X i ) includes the famous spot Y j ).
  • in step S36, the extraction unit 15 extracts similar pairs based on tf (X i , Y j ) and idf (Y j ) for each pair (X i , Y j ) of a candidate spot X i and a famous spot Y j .
  • Specifically, the extraction unit 15 calculates tf-idf (X i , Y j ) as the final score of the spot pair (X i , Y j ).
  • The extraction unit 15 then extracts, as similar pairs, the spot pairs (X i , Y j ) having a high tf-idf (X i , Y j ) (for example, the spot pairs having a tf-idf (X i , Y j ) up to the top M among all spot pairs).
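The tf-idf scoring can be sketched as follows. The input g is assumed to map each candidate spot to the list of its second similar candidates (a famous spot may appear several times, once per feature extraction function that selected it), and tf-idf is taken as the plain product tf × idf with no logarithm, matching the ratio form of equation (3) in the text:

```python
from collections import Counter

def tfidf_scores(g):
    """g: dict mapping each candidate spot X_i to the list of its second
    similar candidates. Returns {(X_i, Y_j): tf-idf score}."""
    n_candidates = len(g)                       # total number of candidate spots
    # Number of candidate spots whose set g(X_i) contains each famous spot.
    doc_freq = Counter()
    for spots in g.values():
        for y in set(spots):
            doc_freq[y] += 1
    scores = {}
    for x, spots in g.items():
        counts = Counter(spots)
        for y, c in counts.items():
            tf = c / len(spots)                 # tf(X_i, Y_j)
            idf = n_candidates / doc_freq[y]    # equation (3): idf(Y_j)
            scores[(x, y)] = tf * idf           # final score tf-idf(X_i, Y_j)
    return scores
```

A famous spot that appears in the set g(X i ) of almost every candidate spot gets a small idf, so every pair containing it is pushed down the ranking, as the surrounding text describes.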
  • similar pairs can be extracted more appropriately by using a score to which the tf-idf method is applied. For example, it is possible to prevent a famous spot corresponding to a famous spot image having an average feature amount from being ranked high.
  • An example of a famous spot image having an average feature amount is an image in which a certain degree of similarity with any candidate spot image is calculated. It is not desirable for such famous spots to be ranked high because they are not particularly similar to specific candidate spots.
  • By using the score tf-idf (X i , Y j ) based on the tf-idf method described above, it is possible to prevent pairs including such famous spots from being extracted as similar pairs. When a famous spot Y j is included as a second similar candidate of many candidate spots, its idf (Y j ) becomes small, so the score tf-idf (X i , Y j ) of any pair including that famous spot Y j can be lowered.
  • in step S6, the presentation unit 16 extracts, in response to a search request from the user (for example, an operation requesting information on tourist spots that match conditions input by the user), famous spots that match the conditions. Then, the presentation unit 16 presents to the user, together with the information on the extracted famous spots, information on the candidate spots that form similar pairs with those famous spots.
  • As described above, in the server 10, the similarity Sim n (X ik , Y jm ) is calculated for each feature amount extraction method (feature amount extraction function F n ) for all combinations of candidate spot images X ik and famous spot images Y jm . Then, based on the similarities Sim n (X ik , Y jm ) calculated in this way for each combination of an image pair (X ik , Y jm ) and a feature amount extraction function F n , similar pairs, each of which is a pair of a candidate spot X i and a famous spot Y j that are similar to each other, are extracted.
  • the server 10 includes a presentation unit 16.
  • According to the above configuration, for example, information on candidate spots similar to a famous spot can be presented to a user who is interested in that famous spot. This makes it possible to present the user with new spots that the user did not previously know.
  • In addition, this can prevent tourists from concentrating on specific famous spots, which in turn can be expected to contribute to measures against overtourism and infectious diseases, and to regional revitalization.
  • the server 10 is not limited to the above embodiment.
  • the server 10 does not have to include the presentation unit 16.
  • the presentation unit 16 described above may be implemented on an external server different from the server 10. In this case, the server 10 may transmit the information on the similar pairs extracted by the extraction unit 15 to the external server, and the external server may execute the same processing as the presentation unit 16 described above based on that information.
  • In the above embodiment, a pair of a candidate spot and a famous spot has been shown as an example of the pair of subjects to be extracted as a similar pair, but the pair of subjects to be extracted as a similar pair is not limited to the above example.
  • each functional block may be realized using one physically or logically coupled device, or may be realized by directly or indirectly connecting (for example, by wire or wirelessly) two or more physically or logically separated devices and using these plurality of devices.
  • the functional block may be realized by combining the software with the one device or the plurality of devices.
  • Functions include judging, determining, calculating, computing, processing, deriving, investigating, searching, confirming, receiving, transmitting, outputting, accessing, resolving, selecting, choosing, establishing, comparing, assuming, expecting, and considering.
  • the server 10 in one embodiment of the present disclosure may function as a computer that performs the communication control method of the present disclosure.
  • FIG. 6 is a diagram showing an example of the hardware configuration of the server 10 according to the embodiment of the present disclosure.
  • the server 10 described above may be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
  • the word “device” can be read as a circuit, device, unit, etc.
  • the hardware configuration of the server 10 may be configured to include one or more of the devices shown in FIG. 6, or may be configured not to include some of the devices.
  • each function in the server 10 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, whereby the processor 1001 performs operations, controls communication by the communication device 1004, and controls at least one of reading and writing of data in the memory 1002 and the storage 1003.
  • the processor 1001 operates, for example, an operating system to control the entire computer.
  • the processor 1001 may be configured by a central processing unit (CPU: Central Processing Unit) including an interface with a peripheral device, a control device, an arithmetic unit, a register, and the like.
  • the processor 1001 reads a program (program code), a software module, data, etc. from at least one of the storage 1003 and the communication device 1004 into the memory 1002, and executes various processes according to these.
  • the extraction unit 15 may be realized by a control program stored in the memory 1002 and operating in the processor 1001, and may be realized in the same manner for other functional blocks.
  • the memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory).
  • the memory 1002 may be referred to as a register, a cache, a main memory (main storage device), or the like.
  • the memory 1002 can store a program (program code), a software module, or the like that can be executed to implement the communication control method according to the embodiment of the present disclosure.
  • the storage 1003 is a computer-readable recording medium, and may be composed of at least one of, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, stick, or key drive), a floppy (registered trademark) disk, a magnetic strip, and the like.
  • the storage 1003 may be referred to as an auxiliary storage device.
  • the storage medium described above may be, for example, a database, server or other suitable medium containing at least one of the memory 1002 and the storage 1003.
  • the communication device 1004 is hardware (transmission / reception device) for communicating between computers via at least one of a wired network and a wireless network, and is also referred to as, for example, a network device, a network controller, a network card, a communication module, or the like.
  • the input device 1005 is an input device (for example, a keyboard, a mouse, a microphone, a switch, a button, a sensor, etc.) that accepts an input from the outside.
  • the output device 1006 is an output device (for example, a display, a speaker, an LED lamp, etc.) that outputs to the outside.
  • the input device 1005 and the output device 1006 may have an integrated configuration (for example, a touch panel).
  • each device such as the processor 1001 and the memory 1002 is connected by the bus 1007 for communicating information.
  • the bus 1007 may be configured by using a single bus, or may be configured by using a different bus for each device.
  • the server 10 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP: Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), and an FPGA (Field Programmable Gate Array).
  • the hardware may implement some or all of each functional block.
  • For example, the processor 1001 may be implemented using at least one of these types of hardware.
  • the input / output information and the like may be stored in a specific location (for example, a memory) or may be managed using a management table. Information to be input / output may be overwritten, updated, or added. The output information and the like may be deleted. The input information or the like may be transmitted to another device.
  • the determination may be made by a value represented by one bit (0 or 1), by a Boolean value (true or false), or by a comparison of numerical values (for example, a comparison with a predetermined value).
  • the notification of predetermined information (for example, the notification of "being X") is not limited to an explicit notification, and may be performed implicitly (for example, by not notifying the predetermined information).
  • Software, whether referred to as software, firmware, middleware, microcode, hardware description language, or by any other name, should be broadly interpreted to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, execution threads, procedures, functions, and the like.
  • software, instructions, information, etc. may be transmitted and received via a transmission medium.
  • For example, when software is transmitted from a website, a server, or another remote source using at least one of a wired technology (coaxial cable, optical fiber cable, twisted pair, digital subscriber line (DSL: Digital Subscriber Line), etc.) and a wireless technology (infrared, microwave, etc.), at least one of these wired and wireless technologies is included within the definition of a transmission medium.
  • the information, signals, etc. described in this disclosure may be represented using any of a variety of different techniques.
  • data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination of these.
  • the information, parameters, and the like described in the present disclosure may be expressed using absolute values, relative values from predetermined values, or other corresponding information.
  • references to elements using designations such as "first" and "second" as used in this disclosure do not generally limit the quantity or order of those elements. These designations can be used in the present disclosure as a convenient way to distinguish between two or more elements. Therefore, references to first and second elements do not mean that only two elements can be adopted, or that the first element must in some way precede the second element.
  • the term "A and B are different” may mean “A and B are different from each other”.
  • the term may also mean that "A and B are each different from C".
  • Terms such as “separate” and “combined” may be interpreted in the same way as “different”.
  • 10 ... server (information processing device), 11 ... first acquisition unit, 12 ... second acquisition unit, 13 ... feature amount calculation unit, 14 ... similarity calculation unit, 15 ... extraction unit, 16 ... presentation unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A server 10 of one embodiment of the present invention comprises: a first acquisition unit 11 that acquires one or more candidate spot images for each of a plurality of candidate spots; a second acquisition unit 12 that acquires one or more famous spot images for each of a plurality of famous spots; a feature amount calculation unit 13 that uses each of a plurality of feature amount extraction methods to calculate, for each candidate spot image and each famous spot image, a feature amount by each feature amount extraction method; a similarity degree calculation unit 14 that calculates, on the basis of the feature amounts of each image according to each feature amount extraction method, a degree of similarity for each feature amount extraction method, for each pair consisting of a candidate spot image and a famous spot image; and an extraction unit 15 that, on the basis of the similarity degrees for each combination of a feature amount extraction method and a pair consisting of a candidate spot image and a famous spot image, extracts a similar pair, which is a pair consisting of a candidate spot and a famous spot that are similar to each other.

Description

情報処理装置Information processing equipment
 本発明の一側面は、情報処理装置に関する。 One aspect of the present invention relates to an information processing device.
 従来、入力された検索画像の複数種類の特徴量と予め登録された登録画像の複数種類の特徴量とを各々比較することで両画像の類似度を算出し、当該類似度に基づいて検索画像と類似する登録画像を検索する仕組みが知られている(例えば特許文献1参照)。 Conventionally, the similarity between the two images is calculated by comparing the feature quantities of a plurality of types of the input search image with the feature quantities of a plurality of types of the registered images registered in advance, and the search image is based on the similarity degree. A mechanism for searching for registered images similar to the above is known (see, for example, Patent Document 1).
Patent Document 1: Japanese Unexamined Patent Publication No. 2001-319232
In a mechanism of this kind, the input image and a registered image are generally determined to be similar to each other when the similarity between them is equal to or higher than a predetermined threshold value. In such processing, however, it may be difficult to appropriately determine in advance the threshold value that serves as the criterion for similarity. As a result, if an inappropriate threshold value is set, it may not be possible to properly extract pairs of images that are similar to each other (that is, pairs of subjects that are similar to each other).
An object of one aspect of the present invention is to provide an information processing apparatus capable of easily and appropriately extracting pairs of subjects that are similar to each other.
An information processing apparatus according to one aspect of the present invention comprises: a first acquisition unit that acquires one or more first images for each of a plurality of first subjects; a second acquisition unit that acquires one or more second images for each of a plurality of second subjects; a feature amount calculation unit that uses each of a plurality of feature amount extraction methods to calculate, for each first image and each second image, a feature amount per feature amount extraction method; a similarity calculation unit that calculates, on the basis of the per-method feature amounts of each image calculated by the feature amount calculation unit, a similarity per feature amount extraction method for each pair of a first image and a second image; and an extraction unit that extracts a similar pair, which is a pair of a first subject and a second subject that are similar to each other, on the basis of the similarities calculated by the similarity calculation unit for each combination of an image pair and a feature amount extraction method.
In the information processing apparatus according to one aspect of the present invention, the similarity per feature amount extraction method is calculated for every combination of the one or more first images of each of the plurality of first subjects and the one or more second images of each of the plurality of second subjects. Then, on the basis of the similarities calculated in this way for each combination of an image pair and a feature amount extraction method, similar pairs, each of which is a pair of a first subject and a second subject that are similar to each other, are extracted. With this configuration, performing a relative evaluation using the similarities across the plurality of first subjects and the plurality of second subjects makes it unnecessary to set a similarity threshold value in advance. As a result, setting an inappropriate threshold value can be avoided. Moreover, since no threshold value needs to be set, the process of extracting pairs of subjects similar to each other can be performed more easily. Therefore, the information processing apparatus can easily and appropriately extract pairs of subjects that are similar to each other.
According to one aspect of the present invention, it is possible to provide an information processing apparatus capable of easily and appropriately extracting pairs of subjects that are similar to each other.
FIG. 1 is a diagram showing the configuration of a server, which is an information processing device according to an embodiment.
FIG. 2 is a flowchart showing an example of the operation of the server.
FIG. 3 is a flowchart showing the processing procedure of a first example of step S5 in FIG. 2.
FIG. 4 is a flowchart showing the processing procedure of a second example of step S5 in FIG. 2.
FIG. 5 is a flowchart showing the processing procedure of a third example of step S5 in FIG. 2.
FIG. 6 is a diagram showing an example of the hardware configuration of the server.
Hereinafter, an embodiment of the present invention will be described in detail with reference to the accompanying drawings. In the description of the drawings, the same or equivalent elements are denoted by the same reference numerals, and duplicate description is omitted.
FIG. 1 is a diagram showing the configuration of a server 10, which is an information processing device according to an embodiment. The server 10 is a device configured to execute a process of extracting similar pairs, each of which is a pair of a candidate spot (first subject) and a famous spot (second subject) that are similar to each other.
A famous spot is a tourist destination (sightseeing spot) with at least a certain degree of name recognition. Examples of famous spots include widely known tourist destinations such as Niagara Falls and Machu Picchu. A candidate spot is a place that is less well known than the famous spots and attracts fewer tourists. The famous spots and candidate spots are listed in advance by, for example, an operator of the server 10.
As shown in FIG. 1, the server 10 includes a first acquisition unit 11, a second acquisition unit 12, a feature amount calculation unit 13, a similarity calculation unit 14, an extraction unit 15, and a presentation unit 16.
The first acquisition unit 11 acquires one or more candidate spot images X_ik (first images) for each of a plurality (N_X in this embodiment) of candidate spots X_i (i = 1, ..., N_X). In this embodiment, the first acquisition unit 11 acquires a plurality of candidate spot images X_ik for each candidate spot X_i. Here, the candidate spot image X_ik denotes the k-th image that includes the candidate spot X_i as a subject. Note that the plurality of candidate spots X_i may include a candidate spot that has only one candidate spot image.
The first acquisition unit 11 acquires the plurality of candidate spot images X_ik from, for example, a candidate spot image DB 20. The candidate spot image DB 20 is a database storing one or more candidate spot images X_ik for each of the plurality of candidate spots X_i. Candidate spot images X_ik collected in advance by an operator or the like are accumulated in the candidate spot image DB 20. The candidate spot images X_ik are stored in the candidate spot image DB 20 in advance, for example by being extracted from images provided on Google Maps (registered trademark) or the like.
The second acquisition unit 12 acquires one or more famous spot images Y_jm (second images) for each of a plurality (N_Y in this embodiment) of famous spots Y_j (j = 1, ..., N_Y). In this embodiment, the second acquisition unit 12 acquires a plurality of famous spot images Y_jm for each famous spot Y_j. Here, the famous spot image Y_jm denotes the m-th image that includes the famous spot Y_j as a subject. Note that the plurality of famous spots Y_j may include a famous spot that has only one famous spot image.
The second acquisition unit 12 acquires the plurality of famous spot images Y_jm from, for example, a famous spot image DB 30. The famous spot image DB 30 is a database storing one or more famous spot images Y_jm for each of the plurality of famous spots Y_j. Famous spot images Y_jm collected in advance by an operator or the like are accumulated in the famous spot image DB 30. The famous spot images Y_jm are stored in the famous spot image DB 30 in advance, for example by being extracted from images provided on websites accessible via a communication network such as the Internet, on Google Maps (registered trademark), and the like.
The feature amount calculation unit 13 uses each of a plurality of feature amount extraction methods to calculate, for each candidate spot image X_ik and each famous spot image Y_jm, a feature amount per feature amount extraction method. For example, the feature amount calculation unit 13 is programmed in advance to execute calculations using feature amount extraction functions F_n (n = 1, ..., N_f), each corresponding to one of a plurality (N_f in this embodiment) of feature amount extraction methods. More specifically, the feature amount calculation unit 13 converts each image X_ik, Y_jm into an image feature vector by using the feature amount extraction functions F_n. In the following description, the image feature vector obtained by converting an image p with the feature amount extraction function F_n is denoted by F_n(p).
Examples of the plurality of feature amount extraction methods include convolutional neural networks, Visual Concept, and GIST. In this case, the feature amount calculation unit 13 calculates, for each image X_ik, Y_jm, the feature amount obtained by a convolutional neural network (CNN feature), the feature amount based on Visual Concept, and the GIST feature.
The feature amount calculation unit 13 thereby obtains, for each feature amount extraction method (that is, for each feature amount extraction function F_n), the feature amounts (feature vectors) F_n(X_ik) and F_n(Y_jm) of each image X_ik, Y_jm.
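As an illustrative sketch only (the embodiment does not give concrete extractor implementations, so the two toy functions below are hypothetical stand-ins for real methods such as CNN features or GIST), the per-method feature table F_n(p) could be built as follows:

```python
import numpy as np

# Hypothetical stand-ins for the feature extraction functions F_1 ... F_Nf.
# Real implementations (CNN features, Visual Concept, GIST) are not
# specified here; each stand-in simply maps an image to a fixed-length vector.
def f1_cnn_like(image: np.ndarray) -> np.ndarray:
    return image.reshape(-1)[:8].astype(float)           # toy "CNN" feature

def f2_histogram_like(image: np.ndarray) -> np.ndarray:
    hist, _ = np.histogram(image, bins=8, range=(0, 256))
    return hist.astype(float)                            # toy global feature

FEATURE_FUNCS = [f1_cnn_like, f2_histogram_like]         # plays the role of F_n

def feature_table(images: dict) -> dict:
    """Return {(image_id, n): F_n(image)} for every image and every F_n."""
    return {(pid, n): fn(img)
            for pid, img in images.items()
            for n, fn in enumerate(FEATURE_FUNCS, start=1)}
```

Each key (image_id, n) then holds the feature vector of that image under extraction function F_n, which is the input to the per-method similarity calculation.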
The similarity calculation unit 14 calculates, on the basis of the per-method feature amounts F_n(X_ik) and F_n(Y_jm) of each image X_ik, Y_jm calculated by the feature amount calculation unit 13, a similarity Sim_n(X_ik, Y_jm) per feature amount extraction method for each pair (X_ik, Y_jm) of a candidate spot image X_ik and a famous spot image Y_jm. Here, Sim_n(X_ik, Y_jm) denotes the similarity between the feature amount F_n(X_ik) of the candidate spot image X_ik and the feature amount F_n(Y_jm) of the famous spot image Y_jm, both obtained with the feature amount extraction function F_n.
The similarity calculation unit 14 may calculate the similarity for each feature amount extraction method by a calculation method predetermined according to that feature amount extraction method. For example, for the feature amounts obtained by a convolutional neural network and the feature amounts based on Visual Concept, the similarity calculation unit 14 may calculate the cosine similarity between the feature amounts as the above similarity. For the GIST features, the similarity calculation unit 14 may calculate the L2 distance between the feature amounts as the above similarity.
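A minimal sketch of these two per-method similarity computations (assuming NumPy; note that the L2 distance is a dissimilarity, so the sketch negates it so that "larger means more similar" holds for every method — a convention assumed here for illustration, not something the embodiment prescribes):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Used here for CNN-based and Visual-Concept-based feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def l2_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # The embodiment uses the L2 distance for GIST features; since a
    # distance is a dissimilarity, it is negated here (an assumption)
    # so that higher values mean higher similarity for every method.
    return -float(np.linalg.norm(a - b))
```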
The extraction unit 15 extracts similar pairs, each of which is a pair (X_i, Y_j) of a candidate spot X_i and a famous spot Y_j that are similar to each other, on the basis of the similarities Sim_n(F_n(X_ik), F_n(Y_jm)) calculated by the similarity calculation unit 14 for each combination of an image pair (X_ik, Y_jm) and a feature amount extraction method (that is, a feature amount extraction function F_n). Specific examples of the processing of the extraction unit 15 will be described later with reference to flowcharts.
The presentation unit 16 presents information on famous spots to a user. On the basis of the information on the similar pairs extracted by the extraction unit 15, the presentation unit 16 presents information on candidate spots similar to a famous spot to the user in association with the information on that famous spot. For example, in response to a search request from the user (e.g., an operation requesting information on tourist spots that match conditions entered by the user), the presentation unit 16 extracts famous spots that match the conditions. The presentation unit 16 then presents to the user the information on the extracted famous spots together with information on the candidate spots that form similar pairs with those famous spots.
Next, an example of the operation of the server 10 will be described with reference to the flowchart of FIG. 2.
In step S1, the first acquisition unit 11 acquires, from the candidate spot image DB 20, a plurality of candidate spot images X_ik for each of the plurality (N_X in this embodiment) of candidate spots X_i (i = 1, ..., N_X).
In step S2, the second acquisition unit 12 acquires, from the famous spot image DB 30, a plurality of famous spot images Y_jm for each of the plurality (N_Y in this embodiment) of famous spots Y_j (j = 1, ..., N_Y). Note that step S2 may be executed before step S1, or may be executed in parallel with step S1.
In step S3, the feature amount calculation unit 13 calculates, for each candidate spot image X_ik and each famous spot image Y_jm, the feature amounts F_n(X_ik), F_n(Y_jm) for each feature amount extraction function F_n (feature amount extraction method).
In step S4, the similarity calculation unit 14 calculates, for each pair (X_ik, Y_jm) of a candidate spot image X_ik and a famous spot image Y_jm, the similarity Sim_n(X_ik, Y_jm) for each feature amount extraction function F_n.
In step S5, the extraction unit 15 extracts similar pairs on the basis of the per-function similarities Sim_n(X_ik, Y_jm) calculated in step S4 for each pair (X_ik, Y_jm). Hereinafter, first to third examples of the processing of the extraction unit 15 will be described with reference to FIGS. 3 to 5.
(First example)
A first example of the processing of the extraction unit 15 (step S5 in FIG. 2) will be described with reference to FIG. 3.
In step S11, the extraction unit 15 calculates, for each pair (X_ik, Y_jm) of a candidate spot image X_ik and a famous spot image Y_jm, the total similarity Sim_total(X_ik, Y_jm), which is the sum of the similarities over the feature amount extraction functions F_n. The total similarity Sim_total(X_ik, Y_jm) is expressed by the following equation (1).
Equation (1): Sim_total(X_ik, Y_jm) = Sim_1(X_ik, Y_jm) + ... + Sim_Nf(X_ik, Y_jm)
In step S12, the extraction unit 15 extracts, from among all pairs (X_ik, Y_jm) of candidate spot images X_ik and famous spot images Y_jm, the pairs (X_ik, Y_jm) whose total similarity Sim_total ranks within the top N (a predetermined number from the top). N is a number arbitrarily determined in advance by, for example, an operator.
In step S13, the extraction unit 15 extracts, as similar pairs, the pairs (X_i, Y_j) of candidate spots X_i and famous spots Y_j corresponding to the image pairs (X_ik, Y_jm) extracted in step S12. For example, if the pair (X_11, Y_23) of the candidate spot image X_11 (the first image of the candidate spot X_1) and the famous spot image Y_23 (the third image of the famous spot Y_2) is extracted in step S12 as a pair whose total similarity ranks within the top N, the extraction unit 15 extracts the pair (X_1, Y_2) of the candidate spot X_1 and the famous spot Y_2 as a similar pair.
According to the first example above, taking into account the similarities Sim_n corresponding to each of the plurality of feature amount extraction functions F_n makes it possible to extract similar pairs more accurately than when only the similarity corresponding to a single feature amount extraction method is used.
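The first example (steps S11 to S13) can be sketched as follows; the input sim maps each image pair ((spot, k), (famous, m)) to its per-function similarities {n: Sim_n}, and all names are illustrative rather than part of the embodiment:

```python
def extract_similar_pairs_total(sim, top_n):
    """First example (steps S11-S13).

    sim maps an image pair ((spot_id, k), (famous_id, m)) to a dict
    {n: Sim_n} holding its similarity under each extraction function F_n.
    """
    # S11: total similarity = sum of the per-function similarities (eq. (1))
    total = {pair: sum(per_fn.values()) for pair, per_fn in sim.items()}
    # S12: keep the top-N image pairs by total similarity
    ranked = sorted(total, key=total.get, reverse=True)[:top_n]
    # S13: map each surviving image pair to its (candidate spot, famous spot) pair
    return {(xik[0], yjm[0]) for xik, yjm in ranked}
```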
(Second example)
A second example of the processing of the extraction unit 15 (step S5 in FIG. 2) will be described with reference to FIG. 4.
In step S21, the extraction unit 15 extracts, for each feature amount extraction function F_n, the pairs (X_ik, Y_jm) whose similarity Sim_n ranks within the top N (a predetermined number from the top) among all pairs (X_ik, Y_jm) of candidate spot images X_ik and famous spot images Y_jm. N is a number arbitrarily determined in advance by, for example, an operator.
In step S22, the extraction unit 15 extracts, as similarity candidates, the pairs (X_i, Y_j) of candidate spots X_i and famous spots Y_j corresponding to the image pairs (X_ik, Y_jm) extracted in step S21. That is, for the feature amount extraction functions F_1 to F_Nf, sets TopN_F_1 to TopN_F_Nf, each containing N similarity candidates, are extracted.
In step S23, the extraction unit 15 extracts, as similar pairs, the pairs (X_i, Y_j) of candidate spots X_i and famous spots Y_j that have been extracted as similarity candidates for a predetermined number or more of the feature amount extraction functions F_n. As one example, the extraction unit 15 extracts, as similar pairs, the pairs (X_i, Y_j) that have been extracted as similarity candidates for all of the feature amount extraction functions F_1 to F_Nf. For example, when three feature amount extraction functions F_1 to F_3 are used, a pair (X_i, Y_j) included in all of the set TopN_F_1 containing the N similarity candidates for the feature amount extraction function F_1, the set TopN_F_2 containing the N similarity candidates for the feature amount extraction function F_2, and the set TopN_F_3 containing the N similarity candidates for the feature amount extraction function F_3 is extracted as a similar pair. In other words, the pairs (X_i, Y_j) included in the intersection of the sets TopN_F_1, TopN_F_2, and TopN_F_3 (TopN_F_1 ∩ TopN_F_2 ∩ TopN_F_3) are extracted as similar pairs.
According to the second example above, taking into account the similarities Sim_n corresponding to each of the plurality of feature amount extraction functions F_n makes it possible to extract similar pairs more accurately than when only the similarity corresponding to a single feature amount extraction method is used. More specifically, pairs (X_i, Y_j) of spots (subjects) that are judged to be similar to each other from a plurality of viewpoints (feature amount extraction methods) can be extracted as similar pairs.
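The second example (steps S21 to S23, in the variant that requires agreement of every feature amount extraction function) can be sketched as follows; sim maps each image pair ((spot, k), (famous, m)) to its per-function similarities {n: Sim_n}, and the function name is illustrative:

```python
def extract_similar_pairs_intersection(sim, top_n, num_funcs):
    """Second example (steps S21-S23), requiring agreement of every F_n.

    sim maps an image pair ((spot_id, k), (famous_id, m)) to a dict
    {n: Sim_n} of its per-function similarities.
    """
    per_fn_spot_pairs = []
    for n in range(1, num_funcs + 1):
        # S21: top-N image pairs under feature extraction function F_n
        ranked = sorted(sim, key=lambda p: sim[p][n], reverse=True)[:top_n]
        # S22: the corresponding spot-level similarity candidates TopN_F_n
        per_fn_spot_pairs.append({(x[0], y[0]) for x, y in ranked})
    # S23: spot pairs present in every TopN_F_n set (the intersection)
    return set.intersection(*per_fn_spot_pairs)
```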
(Third example)
A third example of the processing of the extraction unit 15 (step S5 in FIG. 2) will be described with reference to FIG. 5.
In step S31, the extraction unit 15 extracts, for each combination (X_i, F_n) of a candidate spot X_i and a feature amount extraction function F_n, the pairs (X_ik, Y_jm) whose similarity ranks within the top N (a predetermined number from the top) among all pairs (X_ik, Y_jm) of candidate spot images X_ik and famous spot images Y_jm.
In step S32, the extraction unit 15 extracts, for each combination (X_i, F_n), the pairs (X_i, Y_j) of candidate spots X_i and famous spots Y_j corresponding to the image pairs (X_ik, Y_jm) extracted in step S31, as first similarity candidates. That is, for each combination (X_i, F_n), a set TopN_F_n(X_i) containing N first similarity candidates is extracted.
In step S33, the extraction unit 15 extracts, for each candidate spot X_i, the famous spots Y_j that have been extracted as first similarity candidates for a predetermined number or more of the feature amount extraction functions F_n, as second similarity candidates. As one example, the extraction unit 15 extracts, as second similarity candidates, the famous spots Y_j included in the pairs (X_i, Y_j) that have been extracted as first similarity candidates for all of the feature amount extraction functions F_1 to F_Nf. For example, focusing on a certain candidate spot X_i, when three feature amount extraction functions F_1 to F_3 are used, if a pair (X_i, Y_j) is included in all of the set TopN_F_1(X_i) containing the N first similarity candidates for the feature amount extraction function F_1, the set TopN_F_2(X_i) containing the N first similarity candidates for the feature amount extraction function F_2, and the set TopN_F_3(X_i) containing the N first similarity candidates for the feature amount extraction function F_3, the famous spot Y_j included in that pair is extracted as a second similarity candidate corresponding to the candidate spot X_i.
In other words, the famous spots Y_j included in the pairs (X_i, Y_j) contained in the intersection of the sets TopN_F_1(X_i), TopN_F_2(X_i), and TopN_F_3(X_i) (TopN_F_1(X_i) ∩ TopN_F_2(X_i) ∩ TopN_F_3(X_i)) are extracted as second similarity candidates corresponding to the candidate spot X_i. In the following description, the set of second similarity candidates corresponding to a candidate spot X_i is denoted by g(X_i). For example, when the famous spots Y_2 and Y_3 are extracted as the second similarity candidates corresponding to the candidate spot X_1, g(X_1) is the set containing the famous spots Y_2 and Y_3; that is, g(X_1) = {Y_2, Y_3}. Through step S33, the sets g(X_1), ..., g(X_Nx) of second similarity candidates corresponding to the candidate spots X_1, ..., X_Nx are obtained.
In step S34, the extraction unit 15 calculates tf(X_i, Y_j), which is a score (first evaluation value) for each pair (X_i, Y_j) of a candidate spot X_i and a famous spot Y_j. tf(X_i, Y_j) is a score that applies the idea of tf (Term Frequency) in tf-idf, one of the methods for evaluating the importance of words contained in a document.
The tf(X_i, Y_j) for the pair (X_i, Y_j) of a specific candidate spot X_i and a specific famous spot Y_j is the value obtained by dividing the number of times the specific famous spot Y_j is included in the set g(X_i) of second similarity candidates corresponding to the specific candidate spot X_i by the total number of famous spots included in the set g(X_i). That is, tf(X_i, Y_j) is expressed by the following equations (2-1) to (2-3).
Equation (2-1): tf(X_i, Y_j) = N1 / N2
Equation (2-2): N1 = the number of the famous spot Y_j included in g(X_i)
Equation (2-3): N2 = the total number of famous spots included in g(X_i)
For example, when g(X_1) = {Y_1, Y_2, Y_4}, the number of the famous spot Y_1 included in g(X_1) is 1 and the total number of famous spots included in g(X_1) is 3, so tf(X_1, Y_1) = 1/3.
In step S35, the extraction unit 15 calculates idf(Y_j), which is a score (second evaluation value) for each famous spot Y_j. idf(Y_j) is a score that applies the idea of idf (Inverse Document Frequency) in tf-idf.
The idf(Y_j) for a specific famous spot Y_j is the value obtained by dividing the total number of candidate spots (N_X in this embodiment) by the number of candidate spots that include the specific famous spot Y_j as a second similarity candidate (that is, the number of candidate spots X_i whose set g(X_i) includes the famous spot Y_j). That is, idf(Y_j) is expressed by the following equation (3).
Equation (3): idf(Y_j) = (total number of candidate spots) / (number of candidate spots X_i whose set g(X_i) includes the famous spot Y_j)
In step S36, the extraction unit 15 extracts similar pairs on the basis of tf(X_i, Y_j) and idf(Y_j) for each pair (X_i, Y_j) of a candidate spot X_i and a famous spot Y_j. For example, the extraction unit 15 calculates tf-idf(X_i, Y_j) as the final score of the spot pair (X_i, Y_j). tf-idf(X_i, Y_j) is expressed by the following equation (4).
Equation (4): tf-idf(X_i, Y_j) = tf(X_i, Y_j) × idf(Y_j)
The extraction unit 15 then extracts, as similar pairs, for example the spot pairs (X_i, Y_j) with high tf-idf(X_i, Y_j) (e.g., the spot pairs (X_i, Y_j) whose tf-idf(X_i, Y_j) ranks within the top M among all spot pairs).
According to the third example above, using a score based on the tf-idf method allows similar pairs to be extracted more appropriately. For example, it prevents a famous spot whose images have average feature amounts from being ranked high. An example of a famous spot image with average feature amounts is an image for which a certain degree of similarity is calculated with respect to every candidate spot image. Such a famous spot cannot be said to be particularly similar to any specific candidate spot, so ranking it high is undesirable. Using the score tf-idf(X_i, Y_j) based on the tf-idf method described above prevents pairs containing such famous spots from being extracted as similar pairs. Specifically, since idf(Y_j) tends to be low for such a famous spot Y_j, the score tf-idf(X_i, Y_j) of any pair containing that famous spot can be kept low.
Returning to FIG. 2, in step S6, in response to a search request from the user (for example, an operation requesting information on tourist spots that match conditions entered by the user), famous spots matching the conditions are extracted. The presentation unit 16 then presents to the user, together with information on the extracted famous spots, information on the candidate spots that form similar pairs with those famous spots.
In the server 10 described above, the similarity Sim_n(X_ik, Y_jm) is calculated, for every feature amount extraction method (feature amount extraction function F_n), for every combination of one or more candidate spot images X_ik for each of the plurality of candidate spots X_i with one or more famous spot images Y_jm for each of the plurality of famous spots Y_j. Then, based on the similarity Sim_n(X_ik, Y_jm) calculated in this way for each combination of an image pair (X_ik, Y_jm) with a feature amount extraction function F_n, similar pairs, that is, pairs of a candidate spot X_i and a famous spot Y_j that are similar to each other, are extracted. With this configuration, performing a relative evaluation using similarities between the plurality of candidate spots X_i and the plurality of famous spots Y_j makes it unnecessary to preset a threshold for the similarity, which prevents an inappropriate threshold from being set. Moreover, since no threshold needs to be set, the process of extracting pairs of mutually similar subjects (in this embodiment, pairs of a candidate spot and a famous spot) can be performed more easily. The server 10 can therefore extract pairs of mutually similar subjects easily and appropriately.
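The pairwise similarities Sim_n(X_ik, Y_jm) at the start of this flow can be sketched as below. This is a minimal sketch under assumptions not stated in the patent: cosine similarity is used as the similarity measure, images are represented as plain numeric vectors, and the two extraction functions are trivial stand-ins for real feature amount extraction methods (color histograms, CNN embeddings, and so on).

```python
from math import sqrt

def f1(img):  # stand-in extraction function F_1: raw values
    return list(img)

def f2(img):  # stand-in extraction function F_2: squared values
    return [v * v for v in img]

extractors = [f1, f2]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def similarities(candidate_images, famous_images):
    """Sim_n(X_ik, Y_jm): one similarity per combination of a
    (candidate image, famous image) pair with an extraction function F_n."""
    return {
        (n, k, m): cosine(fn(x), fn(y))
        for n, fn in enumerate(extractors)
        for k, x in enumerate(candidate_images)
        for m, y in enumerate(famous_images)
    }

sims = similarities([(1.0, 0.0)], [(1.0, 0.0), (0.0, 1.0)])
print(sims[(0, 0, 0)])  # identical vectors under F_1: 1.0
```

The resulting dictionary holds one score per (F_n, X_ik, Y_jm) combination, which is the input the extraction unit 15 ranks in the steps described above.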
The server 10 also includes the presentation unit 16. With this configuration, information on candidate spots similar to a famous spot can be presented, for example, to a user who is interested in that famous spot, making it possible to present the user with new spots that the user did not previously know. As a result, by guiding tourists (users) to new tourist spots (candidate spots similar to famous spots), concentration of tourists at specific famous spots can be suppressed, which in turn can be expected to help relieve overtourism, support infection control measures, and promote regional revitalization.
However, the server 10 is not limited to the above embodiment. For example, the server 10 need not include the presentation unit 16; the presentation unit 16 described above may instead be implemented in an external server different from the server 10. In that case, the server 10 transmits the information on the similar pairs extracted by the extraction unit 15 to the external server, and the external server executes the same processing as the presentation unit 16 described above based on that information. Also, although the above embodiment shows pairs of a candidate spot and a famous spot as an example of the subject pairs from which similar pairs are extracted, the subject pairs from which similar pairs are extracted are not limited to this example.
The block diagrams used in the description of the above embodiment show blocks in functional units. These functional blocks (components) are realized by any combination of at least one of hardware and software, and the method of realizing each functional block is not particularly limited. That is, each functional block may be realized using one physically or logically coupled device, or using two or more physically or logically separate devices connected directly or indirectly (for example, by wire or wirelessly). A functional block may also be realized by combining software with the one device or the plurality of devices.
Functions include, but are not limited to, judging, deciding, determining, calculating, computing, processing, deriving, investigating, searching, confirming, receiving, transmitting, outputting, accessing, resolving, selecting, choosing, establishing, comparing, assuming, expecting, regarding, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assigning.
For example, the server 10 in one embodiment of the present disclosure may function as a computer that performs the communication control method of the present disclosure. FIG. 6 is a diagram showing an example of the hardware configuration of the server 10 according to an embodiment of the present disclosure. Physically, the server 10 described above may be configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
In the following description, the term "device" can be read as a circuit, a device, a unit, or the like. The hardware configuration of the server 10 may include one or more of each of the devices shown in FIG. 1, or may be configured without including some of the devices.
Each function in the server 10 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, whereby the processor 1001 performs operations and controls communication by the communication device 1004 and controls at least one of reading and writing of data in the memory 1002 and the storage 1003.
The processor 1001 controls the entire computer by, for example, running an operating system. The processor 1001 may be configured as a central processing unit (CPU) including interfaces with peripheral devices, a control device, an arithmetic device, registers, and the like.
The processor 1001 also reads programs (program code), software modules, data, and the like from at least one of the storage 1003 and the communication device 1004 into the memory 1002 and executes various processes in accordance with them. As the program, a program that causes a computer to execute at least part of the operations described in the above embodiment is used. For example, the extraction unit 15 may be realized by a control program stored in the memory 1002 and run on the processor 1001, and the other functional blocks may be realized in the same way. Although the various processes described above have been explained as being executed by a single processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001. The processor 1001 may be implemented with one or more chips. The program may be transmitted from a network via an electric communication line.
The memory 1002 is a computer-readable recording medium and may be composed of, for example, at least one of a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory). The memory 1002 may be called a register, a cache, a main memory (primary storage device), or the like. The memory 1002 can store executable programs (program code), software modules, and the like for implementing the communication control method according to an embodiment of the present disclosure.
The storage 1003 is a computer-readable recording medium and may be composed of, for example, at least one of an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, a magnetic strip, and the like. The storage 1003 may be called an auxiliary storage device. The storage medium described above may be, for example, a database, a server, or another suitable medium including at least one of the memory 1002 and the storage 1003.
The communication device 1004 is hardware (a transmitting/receiving device) for communication between computers via at least one of a wired network and a wireless network, and is also referred to as, for example, a network device, a network controller, a network card, or a communication module.
The input device 1005 is an input device that accepts input from the outside (for example, a keyboard, a mouse, a microphone, a switch, a button, or a sensor). The output device 1006 is an output device that performs output to the outside (for example, a display, a speaker, or an LED lamp). The input device 1005 and the output device 1006 may be integrated (for example, as a touch panel).
The devices such as the processor 1001 and the memory 1002 are connected by a bus 1007 for communicating information. The bus 1007 may be configured using a single bus, or using different buses between different pairs of devices.
The server 10 may also include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array), and some or all of the functional blocks may be realized by such hardware. For example, the processor 1001 may be implemented using at least one of these pieces of hardware.
Although the present embodiment has been described in detail above, it is obvious to those skilled in the art that the present embodiment is not limited to the embodiments described in this specification. The present embodiment can be implemented in modified and altered forms without departing from the spirit and scope of the present invention as defined by the claims. The description in this specification is therefore for illustrative purposes only and is not intended to limit the present embodiment in any way.
The order of the processing procedures, sequences, flowcharts, and the like of each aspect/embodiment described in the present disclosure may be rearranged as long as no contradiction arises. For example, the methods described in the present disclosure present the elements of the various steps in an exemplary order and are not limited to the specific order presented.
Input and output information and the like may be stored in a specific location (for example, a memory) or managed using a management table. Input and output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.
A determination may be made by a value represented by one bit (0 or 1), by a Boolean value (true or false), or by a numerical comparison (for example, a comparison with a predetermined value).
Each aspect/embodiment described in the present disclosure may be used alone, used in combination, or switched as it is carried out. Notification of predetermined information (for example, notification of "being X") is not limited to being performed explicitly and may be performed implicitly (for example, by not performing notification of the predetermined information).
Software, whether referred to as software, firmware, middleware, microcode, a hardware description language, or by another name, should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, execution threads, procedures, functions, and the like.
Software, instructions, information, and the like may also be transmitted and received via a transmission medium. For example, when software is transmitted from a website, a server, or another remote source using at least one of wired technology (such as coaxial cable, optical fiber cable, twisted pair, or digital subscriber line (DSL)) and wireless technology (such as infrared or microwave), at least one of these wired and wireless technologies is included within the definition of a transmission medium.
The information, signals, and the like described in the present disclosure may be represented using any of a variety of different technologies. For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination of these.
The information, parameters, and the like described in the present disclosure may be expressed using absolute values, using relative values from predetermined values, or using other corresponding information.
The names used for the parameters described above are not limiting in any respect. Furthermore, the formulas and the like that use these parameters may differ from those explicitly disclosed in the present disclosure. Since the various information elements can be identified by any suitable names, the various names assigned to these various information elements are not limiting in any respect.
As used in the present disclosure, the phrase "based on" does not mean "based only on" unless otherwise specified. In other words, the phrase "based on" means both "based only on" and "based at least on".
Any reference to elements using designations such as "first" and "second" as used in the present disclosure does not generally limit the quantity or order of those elements. These designations may be used in the present disclosure as a convenient way of distinguishing between two or more elements. Therefore, references to first and second elements do not mean that only two elements may be employed or that the first element must precede the second element in some way.
Where "include", "including", and variations thereof are used in the present disclosure, these terms are intended to be inclusive, like the term "comprising". Furthermore, the term "or" as used in the present disclosure is not intended to be an exclusive or.
In the present disclosure, where articles have been added by translation, for example a, an, and the in English, the present disclosure may include the case where the nouns following these articles are plural.
In the present disclosure, the phrase "A and B are different" may mean "A and B are different from each other". The phrase may also mean "A and B are each different from C". Terms such as "separated" and "coupled" may be interpreted in the same way as "different".
10 ... server (information processing device), 11 ... first acquisition unit, 12 ... second acquisition unit, 13 ... feature amount calculation unit, 14 ... similarity calculation unit, 15 ... extraction unit, 16 ... presentation unit.

Claims (5)

  1.  An information processing device comprising:
      a first acquisition unit that acquires one or more first images for each of a plurality of first subjects;
      a second acquisition unit that acquires one or more second images for each of a plurality of second subjects;
      a feature amount calculation unit that calculates, by using each of a plurality of feature amount extraction methods, a feature amount for each feature amount extraction method for each of the first images and each of the second images;
      a similarity calculation unit that calculates, for each pair of a first image and a second image, a similarity for each feature amount extraction method based on the feature amount for each feature amount extraction method of each image calculated by the feature amount calculation unit; and
      an extraction unit that extracts similar pairs, each being a pair of a first subject and a second subject that are similar to each other, based on the similarity calculated by the similarity calculation unit for each combination of a pair of a first image and a second image with a feature amount extraction method.
  2.  The information processing device according to claim 1, wherein the extraction unit:
      calculates, for each pair of a first image and a second image, a total similarity that is the sum of the similarities for the respective feature amount extraction methods;
      extracts, from among all the pairs of a first image and a second image, the pairs having the top predetermined number of total similarities; and
      extracts, as the similar pairs, the pairs of a first subject and a second subject corresponding to the extracted pairs of a first image and a second image.
  3.  The information processing device according to claim 1, wherein the extraction unit:
      extracts, for each feature amount extraction method, the pairs of a first image and a second image having the top predetermined number of similarities among all the pairs of a first image and a second image;
      extracts, for each feature amount extraction method, the pairs of a first subject and a second subject corresponding to the extracted pairs of a first image and a second image as similarity candidates; and
      extracts, as the similar pairs, the pairs of a first subject and a second subject extracted as similarity candidates for a predetermined number or more of the feature amount extraction methods.
  4.  The information processing device according to claim 1, wherein the extraction unit:
      extracts, for each combination of a first subject and a feature amount extraction method, the pairs of a first image and a second image having the top predetermined number of similarities among all the pairs of a first image and a second image;
      extracts, for each combination of a first subject and a feature amount extraction method, the pairs of the first subject and a second subject corresponding to the extracted pairs of a first image and a second image as first similarity candidates;
      extracts, for each first subject, a second subject extracted as a first similarity candidate for a predetermined number or more of the feature amount extraction methods as a second similarity candidate;
      calculates a first evaluation value for each pair of a first subject and a second subject, and a second evaluation value for each second subject; and
      extracts the similar pairs based on the first evaluation value and the second evaluation value for each pair of a first subject and a second subject,
     wherein the first evaluation value for a pair of a specific first subject and a specific second subject is the number of the specific second subject included in the set of second similarity candidates corresponding to the specific first subject, divided by the number of second subjects included in the set, and
     the second evaluation value for a specific second subject is the total number of first subjects divided by the number of first subjects that include the specific second subject as a second similarity candidate.
  5.  The information processing device according to any one of claims 1 to 4, comprising a presentation unit that presents information on the second subjects to a user,
     wherein the presentation unit presents, based on the information on the similar pairs extracted by the extraction unit, information on a first subject similar to a second subject to the user in association with the information on the second subject.
PCT/JP2021/030217 2020-08-26 2021-08-18 Information processing device WO2022044923A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022544500A JP7412575B2 (en) 2020-08-26 2021-08-18 information processing equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020142725 2020-08-26
JP2020-142725 2020-08-26

Publications (1)

Publication Number Publication Date
WO2022044923A1 true WO2022044923A1 (en) 2022-03-03

Family

ID=80354211

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/030217 WO2022044923A1 (en) 2020-08-26 2021-08-18 Information processing device

Country Status (2)

Country Link
JP (1) JP7412575B2 (en)
WO (1) WO2022044923A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007094962A (en) * 2005-09-30 2007-04-12 Seiko Epson Corp Specifying of object expressed in image
WO2017006648A1 (en) * 2015-07-03 2017-01-12 Necソリューションイノベータ株式会社 Image discrimination device, image discrimination method, and computer-readable recording medium
JP2020095408A (en) * 2018-12-11 2020-06-18 日本電信電話株式会社 List generating device, subject discriminating device, list generating method, and program


Also Published As

Publication number Publication date
JP7412575B2 (en) 2024-01-12
JPWO2022044923A1 (en) 2022-03-03

Similar Documents

Publication Publication Date Title
US7769771B2 (en) Searching a document using relevance feedback
US9087111B2 (en) Personalized tag ranking
US10585915B2 (en) Database sharding
US9846708B2 (en) Searching of images based upon visual similarity
US20150234927A1 (en) Application search method, apparatus, and terminal
JP6517352B2 (en) Method and system for providing translation information
CN106528579A (en) Search method, device and system based on sharding structure databases
CN106095738B (en) Recommending form fragments
JP6390139B2 (en) Document search device, document search method, program, and document search system
JP6020191B2 (en) Display control apparatus and program
CN111914020A (en) Data synchronization method and device and data query method and device
WO2021027149A1 (en) Portrait similarity-based information retrieval recommendation method and device and storage medium
JP2010102593A (en) Information processing device and method, program, and storage medium
US9036946B2 (en) Image processing apparatus that retrieves similar images, method of controlling the same, and storage medium
WO2022044923A1 (en) Information processing device
JP2016167237A (en) Image searching device and program
JP6515457B2 (en) INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS
WO2020235135A1 (en) Interactive system
JP7339148B2 (en) Search support device
US9501534B1 (en) Extreme value computation
JP6797618B2 (en) Search device, search method, program and search system
WO2021111769A1 (en) Retrieval device
JP7490670B2 (en) Search Device
WO2021010290A1 (en) Search device
JP6282051B2 (en) Data processing apparatus, data processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21861351

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022544500

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21861351

Country of ref document: EP

Kind code of ref document: A1