WO2023281903A1 - Image matching device, image matching method, and program - Google Patents

Image matching device, image matching method, and program

Info

Publication number
WO2023281903A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
similarity
attribute
candidate images
group
Prior art date
Application number
PCT/JP2022/018752
Other languages
English (en)
Japanese (ja)
Inventor
Shunsuke Yasuki
Takumi Kojima
Yusuke Kato
Original Assignee
Panasonic IP Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2023281903A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • The present disclosure relates to an image matching device, an image matching method, and a program.
  • Non-Patent Document 1 discloses a matching technique for matching a person image with a query and displaying matching results in a ranking format.
  • An object of the present disclosure is to provide an image matching device, an image matching method, and a program that make it easier to grasp information indicating matching results compared to conventional techniques.
  • One aspect of the present disclosure provides an image matching device that matches each of a plurality of candidate images with a query image.
  • The image matching device includes: an attribute determination unit that determines an object attribute of the object to be matched included in each candidate image; a similarity determination unit that determines a similarity indicating the degree of similarity between each candidate image and the query image using a predetermined similarity calculation algorithm; a classification unit that classifies the plurality of candidate images into a plurality of groups for each object attribute determined by the attribute determination unit; a rank assigning unit that ranks the one or more candidate images classified into each group in descending order of similarity for each group; and an output unit that, under the control of a control unit, outputs information about the one or more candidate images classified into each group for two or more of the plurality of groups.
  • Another aspect of the present disclosure provides an image matching method that matches each of a plurality of candidate images with a query image. The image matching method includes: an attribute determination step of determining an object attribute of the object to be matched included in each candidate image; a similarity determination step of determining a similarity indicating the degree of similarity of each candidate image to the query image using a predetermined similarity calculation algorithm; a classification step of classifying the plurality of candidate images into a plurality of groups for each object attribute determined in the attribute determination step; a ranking step of ranking the one or more candidate images classified into each group in descending order of similarity for each group; and an output step of outputting information about the one or more candidate images classified into each group for two or more of the plurality of groups.
  • Yet another aspect of the present disclosure provides a program for causing a control unit to execute the above image matching method.
  • According to the image matching device, the image matching method, and the program of the present disclosure, information indicating the matching result can be grasped more easily than with conventional techniques.
  • FIG. 1 is a schematic diagram showing an outline of an image matching device according to an embodiment of the present disclosure. FIG. 2 is a block diagram showing a configuration example of the image matching device of FIG. 1. FIG. 3 is a flowchart illustrating the procedure of processing executed by the control unit of the image matching device of FIG. 2. FIG. 4 is a schematic diagram illustrating the distance between the feature vector of a candidate image and the feature vector of the query image. FIG. 5 is a schematic diagram for explaining an example of step S5 of FIG. 3, in which the degree of similarity is determined. FIG. 6 is a schematic diagram illustrating a conventional technique for displaying matching results in a similarity ranking format.
  • A matching technique is known for searching for a person from among a plurality of captured images generated by a plurality of surveillance cameras installed in towns, on premises, and the like.
  • An example of such a matching technique is a technique of detecting an image of a person from the plurality of captured images, using it as a candidate image, and calculating a similarity indicating the degree of similarity between the candidate image and a query image.
  • Techniques are also known for determining whether the degree of similarity is equal to or greater than a predetermined threshold, and for arranging and displaying candidate images in ranking order in descending order of similarity.
  • The query image to be matched is selected by the user from among the plurality of captured images, or is selected in advance from existing images. Alternatively, the query image may be automatically selected by a program from an externally input image, the plurality of captured images, or the like.
  • Non-Patent Document 1 discloses a technique for arranging and displaying candidate images in a ranking format in descending order of similarity.
  • FIG. 6 is a schematic diagram illustrating a conventional technique for arranging and displaying matching results in a similarity ranking format.
  • In such a technique, the similarity of a candidate image may be calculated to be high when the orientation of the person in the candidate image matches the orientation of the person in the query image Q.
  • Therefore, candidate images showing candidates facing the same direction as the person in the query image Q (T1 to T3 in FIG. 6) tend to be assigned higher similarities than candidate images showing candidates facing a different direction from the person in the query image Q (T4 to T6 in FIG. 6).
  • In particular, a candidate image showing a candidate facing the same direction as the person in the query image Q (T1 to T3 in FIG. 6) may be assigned a higher similarity than a candidate image showing the same person as the query image Q facing a different direction (T4 in FIG. 6), even when that candidate is a different person.
  • Consequently, when the candidate images are arranged in a ranking format, the candidate images showing candidates facing the same direction as the person in the query image Q (T1 to T3 in FIG. 6) appear at the top of the ranking, while the candidate image showing the same person facing a different direction (T4 in FIG. 6) appears at a lower rank, and the information indicating the matching result for finding the same person is buried in other information.
  • The inventors have conducted research to solve the above problem and have developed an image matching device, an image matching method, and a program that make it easier to grasp the information indicating the matching result than with conventional techniques.
  • In the following, the attribute of the object to be matched that can be recognized from the image itself by image recognition technology or the like, that is, the "object attribute", is described using "person orientation" as an example; however, the "object attribute" of the present disclosure is not limited to this. Other examples of objects to be matched and object attributes are described after the description of the embodiment.
  • FIG. 1 is a schematic diagram showing an outline of an image matching device 100 according to an embodiment of the present disclosure.
  • The image matching device 100 detects person images from a plurality of image data 50 generated by a plurality of cameras, uses them as candidate images, and ranks each candidate image according to its similarity to a query image.
  • Here, the image matching device 100 classifies each candidate image into one of a plurality of groups according to the orientation of the person. For example, the image matching device 100 calculates an attribute ranking indicating how similar the orientation of the person in the plurality of candidate images is to the orientation of the person in the query image.
  • For example, when the person in the query image faces forward, the image matching device 100 calculates the attribute ranking in the order of the forward-facing group (first orientation), the backward-facing group (second orientation), and the right-facing group (third orientation).
  • The image matching device 100 classifies the plurality of candidate images into the respective groups as shown in FIG. 1.
  • The image matching device 100 then ranks the candidate images within each group in descending order of similarity and, for two or more of the plurality of groups, displays the ranking of each group on a display device or the like.
  • FIG. 2 is a block diagram showing a configuration example of the image matching device 100 of FIG.
  • The image matching device 100 includes a control unit 1, a storage device 2, an image acquisition unit 3, an input interface (I/F) 5, and an output interface (I/F) 4.
  • The control unit 1 implements the functions of the image matching device 100 by executing information processing. Such information processing is realized, for example, by the control unit 1 executing a program stored in the storage device 2.
  • The control unit 1 includes a person detection unit 11, a query determination unit 12, an orientation detection unit 13, a similarity determination unit 14, a classification unit 15, and a ranking unit 16.
  • The control unit 1 is composed of circuits such as a CPU, an MPU, and an FPGA.
  • The person detection unit 11 detects a person in the image data 50 and uses the image of the detected person as a candidate image.
  • The query determination unit 12 determines a query image to be matched with the candidate images.
  • The orientation detection unit 13 detects the orientation of the face and/or body of the person (hereinafter, "person orientation") included in the query image determined by the query determination unit 12 and in each candidate image detected by the person detection unit 11.
  • The similarity determination unit 14 determines a similarity indicating the degree of similarity of each candidate image to the query image using a predetermined similarity calculation algorithm.
  • The classification unit 15 classifies the plurality of candidate images into a plurality of groups for each person orientation determined by the orientation detection unit 13.
  • The ranking unit 16 ranks the one or more candidate images classified into each group in descending order of similarity for each group. Details of each of the functions described above are further described below in relation to the operation of the image matching device 100.
  • The storage device 2 is a recording medium for recording various information, such as data and programs including the predetermined similarity calculation algorithm, for causing the control unit 1 to execute the image matching method of the image matching device 100.
  • The storage device 2 stores a feature extraction model 21, which is a trained model described later, and an image list 22.
  • The storage device 2 is realized by, for example, a semiconductor storage device such as a flash memory or solid-state drive (SSD), a magnetic storage device such as a hard disk drive (HDD), or other recording media, alone or in combination.
  • The storage device 2 may include volatile memory such as SRAM and DRAM.
  • The image acquisition unit 3 is an interface circuit that connects the image matching device 100 to an external device in order to input information such as the image data 50 to the image matching device 100.
  • Such an external device is, for example, another information processing terminal (not shown) or a device such as a camera that acquires the image data 50.
  • The image acquisition unit 3 may be a communication circuit that performs data communication according to existing wired or wireless communication standards.
  • The input interface 5 is an interface circuit that connects the image matching device 100 to an input device 80, such as a keyboard or a mouse, in order to accept user input.
  • The input interface 5 may be a communication circuit that performs data communication according to existing wired or wireless communication standards.
  • The output interface 4 is an interface circuit that connects the image matching device 100 to an external output device in order to output information from the image matching device 100.
  • Such an output device is, for example, the display device 70.
  • The output interface 4 may be a communication circuit that is connected to the network 60 and performs data communication according to existing wired or wireless communication standards.
  • The image acquisition unit 3, the input interface 5, and the output interface 4 may be realized by separate or common hardware.
  • FIG. 3 is a flowchart illustrating the procedure of processing executed by the control unit 1 of the image matching device 100 of FIG. 2.
  • First, the control unit 1 acquires the image data 50 via the image acquisition unit 3 (S1).
  • The image data 50 is, for example, a plurality of image data captured by a plurality of cameras installed on the premises.
  • Next, the person detection unit 11 detects a person in the image data 50 acquired in step S1 and uses the image of the detected person as a candidate image (S2). When a plurality of persons are detected in the image data 50, the person detection unit 11 generates a candidate image for each detected person.
  • Here, detecting a person in the image data 50 includes detecting an area in which a person exists in the image data 50.
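  • As an illustration only, step S2 might be sketched as follows; the use of OpenCV's built-in HOG pedestrian detector is an assumption for the example, not the detector specified by the present disclosure.

```python
import cv2

# Illustrative sketch of step S2 (person detection). The HOG pedestrian
# detector is an assumed stand-in for the person detection unit 11.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_candidate_images(image):
    """Return one cropped candidate image per person detected in `image`."""
    boxes, _weights = hog.detectMultiScale(image, winStride=(8, 8))
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```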
  • Next, the query determination unit 12 determines a query image to be matched with the candidate images (S3). For example, the query determination unit 12 uses, as the query image, a candidate image selected by the user with the input device 80, such as a keyboard or mouse, from among the plurality of candidate images obtained in step S2.
  • Alternatively, the query determination unit 12 may use, as the query image, an image of a person stored in advance in the storage device 2, an image of a person input via the image acquisition unit 3, or the like.
  • The query determination unit 12 may also operate according to instructions from the program and automatically select a query image from among the plurality of candidate images obtained in step S2, the person images stored in advance in the storage device 2, the person images input via the image acquisition unit 3, and the like.
  • Next, the orientation detection unit 13 detects the orientation of the person included in the query image determined in step S3 and in each candidate image detected in step S2 (S4). Specifically, the orientation detection unit 13 first determines to which of a plurality of predetermined orientations of the face and/or body the orientation of the person included in the query image belongs. The orientation detection unit 13 then determines to which of the plurality of predetermined orientations the orientation of the person included in each candidate image belongs.
  • The orientation detection unit 13 is an example of the "attribute determination unit" of the present disclosure.
  • The "orientation of the person included in the query image" is an example of the "query attribute indicating the object attribute of the subject included in the query image" of the present disclosure.
  • For example, the orientation detection unit 13 detects the orientation of the person in each candidate image by comparing the feature vector of each candidate image output by the feature extraction model 21 with each of the feature vectors of person images in the plurality of predetermined orientations. The orientation of the person is the orientation of the face and/or body of the person in the candidate image, such as the orientation of the face, the orientation of the upper body, the orientation of the lower body, or an orientation determined by combining these pieces of information.
  • Alternatively, the orientation detection unit 13 may use an orientation detection model that, given a person image as input, outputs the orientation of the person in that image.
  • Such an orientation detection model is a trained model constructed by having the model learn the relationship between learning images and ground-truth information.
  • A known skeleton detector, posture detector, or face orientation detector may also be applied to the orientation detection unit 13.
  • The orientation of the person detected in this way can be classified into, for example, eight directions as seen from the person in the candidate image: forward, obliquely forward right, right, obliquely backward right, backward, obliquely backward left, left, and obliquely forward left.
  • However, the orientation of a person is not limited to the eight directions described above, and can be classified into fewer than eight directions or nine or more directions.
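  • As a minimal sketch, the eight-direction classification could be implemented by binning an estimated yaw angle; the angle convention and labels below are assumptions for illustration, not the method of the present disclosure.

```python
# Illustrative sketch of the eight-direction classification in step S4.
# Assumes an upstream detector that yields a body yaw angle in degrees,
# with 0 meaning "forward" and angles increasing toward the person's right.
DIRECTIONS = [
    "forward", "oblique front-right", "right", "oblique back-right",
    "backward", "oblique back-left", "left", "oblique front-left",
]

def classify_orientation(yaw_deg: float) -> str:
    """Bin a yaw angle into one of eight 45-degree sectors."""
    return DIRECTIONS[round((yaw_deg % 360) / 45) % 8]
```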
  • Next, the similarity determination unit 14 uses the predetermined similarity calculation algorithm to determine a similarity indicating the degree of similarity of each candidate image to the query image (S5). For example, the similarity determination unit 14 calculates the similarity based on a comparison between the feature vector of each candidate image and the feature vector of the query image. For example, the predetermined similarity calculation algorithm is an algorithm that calculates the similarity based on the Euclidean distance, the Mahalanobis distance, the inner product, or the like between the feature vector of each candidate image and the feature vector of the query image, such that the closer the two vectors are, the higher the similarity. The predetermined similarity calculation algorithm may also be an algorithm that calculates the distance between a plurality of feature vectors by applying a model constructed by metric learning.
  • The degree of similarity means, for example, that the larger the value, the higher the degree of matching between the candidate image and the query image.
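  • For illustration, a distance-based similarity of this kind might be written as follows; the specific mapping from distance to similarity is an assumption, chosen only so that smaller distances yield larger similarities.

```python
import numpy as np

# Illustrative sketch of step S5: similarity from the Euclidean distance
# between feature vectors, monotonically decreasing in the distance.
def similarity(candidate_vec: np.ndarray, query_vec: np.ndarray) -> float:
    distance = float(np.linalg.norm(candidate_vec - query_vec))
    return 1.0 / (1.0 + distance)
```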
  • FIG. 4 is a schematic diagram illustrating the distance between the feature vector of a candidate image and the feature vector of the query image.
  • FIG. 4 shows an n-dimensional (n: an integer equal to or greater than 1) feature vector space.
  • In the example of FIG. 4, the similarity determination unit 14 determines the similarity of each candidate image by the predetermined similarity calculation algorithm such that the similarity of candidate image X is higher than the similarity of candidate image Y.
  • FIG. 5 is a schematic diagram for explaining an example of step S5 of FIG. 3, in which the degree of similarity is determined.
  • The similarity determination unit 14 may use the feature extraction model 21, which, given an image as input, outputs the feature vector of that image.
  • In this case, the similarity determination unit 14 calculates the similarity based on a comparison between the feature vector of each candidate image output by the feature extraction model 21 and the feature vector of the query image.
  • Such a feature extraction model 21 is a trained model constructed by having the model learn the relationship between learning images and ground-truth information.
  • The feature extraction model 21, which is a trained model, may be a model having the structure of a neural network, for example a convolutional neural network (CNN).
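  • As a sketch of what such a CNN feature extractor might look like, the following uses a torchvision ResNet-18 backbone with its classifier removed; this backbone and preprocessing are assumptions for the example, not the trained feature extraction model 21 itself.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Illustrative CNN feature extractor standing in for the feature
# extraction model 21 (assumed backbone, not the patent's model).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep the 512-d embedding, drop the classifier
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract(image) -> torch.Tensor:
    """Map an HWC uint8 image to a 512-dimensional feature vector."""
    return backbone(preprocess(image).unsqueeze(0)).squeeze(0)
```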
  • Next, the classification unit 15 classifies the plurality of candidate images into a plurality of groups for each person orientation determined in step S4 (S6). For example, if it was determined in step S4 that the orientation of the person in a candidate image belongs to "forward", the classification unit 15 classifies that candidate image into the first group corresponding to "forward" in step S6 (see FIG. 1). Alternatively, the classification unit 15 may classify each candidate image based on the orientation of its person, detected in step S4, relative to the orientation of the person in the query image. For example, the classification unit 15 classifies the plurality of candidate images into a group whose orientation is the same as that of the person in the query image and a group whose orientation is different.
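  • A minimal sketch of step S6 follows, assuming each candidate is carried as an (image, orientation, similarity) tuple produced by the preceding steps; this data layout is illustrative.

```python
from collections import defaultdict

# Illustrative sketch of step S6: group candidates by detected orientation.
def classify_into_groups(candidates):
    groups = defaultdict(list)
    for image, orientation, sim in candidates:
        groups[orientation].append((image, sim))
    return groups
```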
  • Next, the ranking unit 16 ranks the one or more candidate images classified into each group in step S6 in descending order of the similarity determined in step S5 (S7). For example, the ranking unit 16 calculates an attribute ranking indicating how similar the orientation of the person in the plurality of candidate images is to the orientation of the person in the query image. For example, when the person orientation of the query image is forward facing (first orientation) as shown in FIG. 1, the attribute ranking regarding the orientation of the person becomes forward facing (first orientation), backward facing (second orientation), and right facing (third orientation).
  • Then, as shown in FIG. 1, the ranking unit 16 ranks the candidate images classified into the forward-facing group starting from first place, and likewise ranks the candidate images classified into the backward-facing group and those classified into the right-facing group starting from first place within each group.
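  • Continuing the same illustrative data layout, step S7 reduces to a per-group sort in descending order of similarity:

```python
# Illustrative sketch of step S7: within each group, rank candidates in
# descending order of similarity (index 0 corresponds to first place).
def rank_within_groups(groups):
    return {
        orientation: sorted(members, key=lambda m: m[1], reverse=True)
        for orientation, members in groups.items()
    }
```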
  • The above attribute ranking regarding the orientation of a person is determined, for example, based on the degree to which the orientation of the person included in the candidate images classified into each group (for example, forward facing, backward facing, right facing, etc.) is similar to the orientation of the person included in the query image (hereinafter, "orientation similarity").
  • Orientation similarity is an example of the "attribute similarity" of the present disclosure.
  • The orientation similarity is calculated, for example, by the similarity determination unit 14.
  • For example, the similarity determination unit 14 calculates the orientation similarity based on a comparison between the outline of the person included in the candidate images classified into each group and the outline of the person included in the query image.
  • Alternatively, the orientation similarity may be predetermined according to the orientation of the person in the candidate image relative to the orientation of the person in the query image. For example, when the orientation of the person included in the query image is forward facing, the orientation similarity may be set to larger values in the order of the forward-facing group, the backward-facing group, and the right-facing group.
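  • As a sketch of the predetermined variant, the orientation similarity could be a fixed lookup table keyed by group, used to order the groups; the table values below are assumptions mirroring the forward/backward/right example.

```python
# Illustrative sketch of the attribute ranking: order the groups by a
# predetermined orientation similarity (assumed values for the example
# in which the person in the query image faces forward).
ORIENTATION_SIMILARITY = {"forward": 1.0, "backward": 0.6, "right": 0.4}

def order_groups(ranked_groups):
    return sorted(
        ranked_groups.items(),
        key=lambda item: ORIENTATION_SIMILARITY.get(item[0], 0.0),
        reverse=True,
    )
```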
  • Next, for two or more of the plurality of groups, the control unit 1 outputs to the display device 70 information in which the one or more candidate images classified into each group in step S6 are linked with their ranking within each group.
  • Each candidate image is displayed on the display device 70 in the order of its ranking within each group (S8).
  • Specifically, the control unit 1 outputs, via the output interface 4 to the display device 70, the plurality of candidate images, the group to which each candidate image was determined to belong in step S6, and information indicating the rank given within each group in step S7.
  • For example, as shown in FIG. 1, the control unit 1 causes the display device 70 to display two or more of the plurality of groups arranged in the vertical direction (first direction) in order of attribute ranking, and to display the one or more candidate images classified into each group arranged in the horizontal direction (second direction) in order of their rank within each group.
  • Alternatively, the similarity ranking of the object attribute with the first attribute rank may be displayed first, and the similarity rankings of the object attributes with the second and subsequent attribute ranks may be displayed by switching the screen.
  • Alternatively, the object attributes may be displayed in a predetermined order regardless of the attribute ranking.
  • In step S8, the control unit 1 may output to the display device 70 the candidate images from first place down to a predetermined rank in each group, in association with their respective ranks. For example, the control unit 1 selects the first- to tenth-ranked candidate images in each group and causes the display device 70 to display them together with information on the group to which each candidate image belongs and its rank.
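  • A minimal sketch of this top-N selection, with N = 10 as in the example above:

```python
# Illustrative sketch: keep only the first- to Nth-ranked candidate
# images per group before output (N = 10 in the example).
def top_n_per_group(ranked_groups, n=10):
    return {orientation: members[:n]
            for orientation, members in ranked_groups.items()}
```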
  • Alternatively, in step S8, the control unit 1 may display on the display device 70, for two or more of the plurality of groups, the candidate images having a similarity equal to or higher than a predetermined threshold among the one or more candidate images classified into each group.
  • For example, the control unit 1 selects the candidate images having a similarity of 0.7 or higher in each group and causes the display device 70 to display them together with information on the group to which each candidate image belongs and its rank.
  • This reduces the amount of data output to the display device 70 and thus the processing load. The amount of information processing in the display device 70 can also be reduced.
  • The predetermined threshold may be set for each group. For example, if the person in the query image is forward facing, the control unit 1 selects candidate images with a similarity of 0.8 or higher from the forward-facing group and candidate images with a similarity of 0.5 or higher from the backward-facing group, and displays them on the display device 70.
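  • As a sketch, such per-group thresholds might be applied as follows; the threshold values mirror the 0.8/0.5 example above, and the fallback value is an assumption.

```python
# Illustrative sketch of per-group thresholds in step S8: a stricter
# cutoff for the group matching the query person's orientation.
GROUP_THRESHOLDS = {"forward": 0.8, "backward": 0.5}
DEFAULT_THRESHOLD = 0.5  # assumed fallback for groups not listed above

def select_for_display(ranked_groups):
    return {
        orientation: [
            (img, sim) for img, sim in members
            if sim >= GROUP_THRESHOLDS.get(orientation, DEFAULT_THRESHOLD)
        ]
        for orientation, members in ranked_groups.items()
    }
```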
  • Setting a higher threshold than for the other groups for candidate images belonging to the group into which candidate images with the same orientation as the person in the query image are classified has the following advantage. A candidate image showing a candidate facing the same direction as the person in the query image tends to have a higher calculated similarity than a candidate image showing the same person as the query image facing a different direction, even if that candidate is a different person. Therefore, even if a candidate image belonging to the group with the same orientation as the person in the query image has a high similarity, its probability of actually matching the query image may be lower than that of a candidate image belonging to another group with a similarly high similarity. The control unit 1 therefore sets a higher threshold for the candidate images belonging to the group with the same orientation as the person in the query image than for the other groups. By reflecting the substantial degree of matching in the thresholds in this way, only the candidate images whose substantial degree of matching exceeds a predetermined reference value can be selected and displayed on the display device 70.
  • As described above, the image matching device 100, which matches each of a plurality of candidate images with a query image, includes the orientation detection unit 13, which is an example of the attribute determination unit, the similarity determination unit 14, the classification unit 15, the ranking unit 16, and the output interface 4.
  • The orientation detection unit 13 determines the orientation of the person included in each candidate image.
  • The similarity determination unit 14 determines a similarity indicating the degree of similarity of each candidate image to the query image using a predetermined similarity calculation algorithm.
  • The classification unit 15 classifies the plurality of candidate images into a plurality of groups for each orientation determined by the orientation detection unit 13.
  • The ranking unit 16 ranks the one or more candidate images classified into each group in descending order of similarity for each group.
  • The output interface 4 outputs information about the one or more candidate images classified into each group for two or more of the plurality of groups under the control of the control unit 1.
  • Thereby, the information processing apparatus at the output destination, or the user who views the displayed output information, can grasp the information indicating the matching result more easily than with conventional techniques, and the image matching device 100 can solve the problem that the information indicating the matching result is buried in other information.
  • The output interface 4 may output the candidate images from first place down to a predetermined rank in each group, in association with their respective ranks.
  • The output interface 4 may also output, for two or more of the plurality of groups, the candidate images having a similarity equal to or higher than a predetermined threshold among the one or more candidate images classified into each group.
  • The output interface 4 may output information to the display device 70 under the control of the control unit 1, and the one or more candidate images classified into each group may be displayed on the display device 70 in rank order for each of two or more of the plurality of groups.
  • The orientation detection unit 13 may further determine the orientation of the subject included in the query image.
  • In this case, the similarity determination unit 14 further determines an orientation similarity indicating the degree to which the person orientation corresponding to each of the plurality of groups classified by the classification unit 15 is similar to the orientation of the subject included in the query image.
  • Orientation similarity is an example of the "attribute similarity" of the present disclosure.
  • The ranking unit 16 further assigns ranks (orientation ranks) to the plurality of groups in descending order of orientation similarity.
  • The output interface 4 causes the display device 70 to display the plurality of groups arranged in the vertical direction (first direction) in order of orientation rank, and to display the one or more candidate images classified into each group arranged in the horizontal direction (second direction) in order of similarity rank within each group.
  • Thereby, the plurality of groups are arranged in order of orientation rank, and within each group the candidate images are arranged in the horizontal direction in order of similarity rank, so the information indicating the matching result can be grasped even more easily.
  • The object to be matched is not limited to a person, and may be an object other than a person, such as a vehicle, a building, or a robot.
  • In that case, the object attribute may be an attribute of the object other than a person, such as its color, material, or shape.
  • The object attribute is also not limited to the orientation of a person, and may be an attribute such as a person's height, body type, age, age group, or gender.
  • A person's height and body shape can be estimated relatively easily using image recognition technology.
  • Age, generation, and gender can also be estimated using image recognition technology. For example, gender can be estimated from the type of clothes, hairstyle, body shape, and the like of the person in an image, and age can be estimated from the degree of facial wrinkles and hair color.
  • The object attribute may be an attribute representing whether or not a person is wearing clothes of a specific shape, such as a suit.
  • The object attribute may also be an attribute indicating whether a person is holding a bag, carrying a backpack, pulling a suitcase, or making a phone call.
  • The presence or absence of belongings such as a bag as an object attribute includes information indicating not only whether the person actually has the belongings but also whether the belongings are visible in the image.
  • Such belongings may be visible in images taken at one time but not in images taken at other times, for example because they are hidden by the owner's body or because the owner has left them somewhere.
  • Even in such cases, the information processing device at the output destination, or the user viewing the displayed output information, can grasp the information indicating the matching result more easily than with conventional techniques, and the image matching device 100 can solve the problem that the information indicating the matching result is buried in other information.
  • The object attribute may also be an attribute indicating whether the person is riding a vehicle such as a bicycle or motorcycle, walking, running, standing still, standing, or sitting.
  • In this case, the attribute determination unit may detect the posture of the person in each candidate image and estimate the attributes described above based on the detected posture.
  • In the above embodiment, the display device 70 was exemplified as the output destination of information from the image matching device 100.
  • However, the output destination of the information is not limited to this, and the image matching device 100 may, for example, output the information via the network 60 to an information processing terminal such as a smartphone, tablet, or notebook computer.
  • The image matching device 100 may execute rough matching processing in steps S1 to S7 of FIG. 3 and then execute finer matching processing in steps S4 to S8 on the candidate images narrowed down by the rough matching processing.
  • Such a scheme excludes candidate images whose similarity falls below a predetermined threshold in the rough matching processing from the second, finer matching processing, thereby reducing the processing load of the second pass and improving the processing speed.
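  • As an illustrative sketch of this two-pass arrangement, a cheap similarity first prunes the candidate pool and a finer, more expensive similarity is then applied only to the survivors; both scoring functions and the pruning threshold are placeholders, not the algorithms of the present disclosure.

```python
# Illustrative sketch of coarse-to-fine matching. `coarse_sim` and
# `fine_sim` are assumed scoring callables; each candidate is an
# (image, feature_vector) pair.
def two_pass_matching(candidates, query_vec, coarse_sim, fine_sim,
                      prune_threshold=0.3):
    survivors = [(img, vec) for img, vec in candidates
                 if coarse_sim(vec, query_vec) >= prune_threshold]
    return sorted(survivors,
                  key=lambda c: fine_sim(c[1], query_vec),
                  reverse=True)
```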
  • The present disclosure is applicable to image search technology and image matching technology.
  • 1 control unit, 2 storage device, 3 image acquisition unit, 4 output interface, 5 input interface, 11 person detection unit, 12 query determination unit, 13 orientation detection unit, 14 similarity determination unit, 15 classification unit, 16 ranking unit, 21 feature extraction model, 22 image list, 50 image data, 60 network, 70 display device, 80 input device, 100 image matching device

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This image matching device comprises: an attribute determination unit that determines an object attribute of an object to be matched, included in each candidate image; a similarity determination unit that determines a similarity indicating the degree of similarity of each candidate image to a query image using a predetermined similarity calculation algorithm; a classification unit that classifies a plurality of the candidate images into a plurality of groups for each object attribute determined by the attribute determination unit; a ranking unit that ranks the one or more candidate images classified into each group in descending order of similarity for each group; and an output unit that outputs information about the one or more candidate images classified into each group under the control of a control unit.
PCT/JP2022/018752 2021-07-09 2022-04-25 Image matching device, image matching method, and program WO2023281903A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021114129 2021-07-09
JP2021-114129 2021-07-09

Publications (1)

Publication Number Publication Date
WO2023281903A1 (fr) 2023-01-12

Family

ID=84801510

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/018752 WO2023281903A1 (fr) 2022-04-25 Image matching device, image matching method, and program

Country Status (1)

Country Link
WO (1) WO2023281903A1 (fr)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11316846A * Image matching method and image inquiry device using image region information and edge information
JP2014182480A * Person recognition device and method
WO2015001791A1 * Object recognition method for an object recognition device
JP2016103084A * Image search device and image search system
JP2016157165A * Person identification system
JP2017054493A * Information processing device, control method therefor, and program
JP2018054472A * Information processing device and program
WO2019103912A2 * Content-based image retrieval for lesion analysis

Similar Documents

Publication Publication Date Title
US20220327155A1 (en) Method, apparatus, electronic device and computer readable storage medium for image searching
Lu et al. Feature extraction and fusion using deep convolutional neural networks for face detection
US11222044B2 (en) Natural language image search
Ghimire et al. Extreme learning machine ensemble using bagging for facial expression recognition
Tang et al. Facial landmark detection by semi-supervised deep learning
CN110555481A (zh) Portrait style recognition method, device, and computer-readable storage medium
CN104487915A (zh) Maintaining continuity of augmentations
KR102372017B1 (ko) Expression-based content recommendation device, method therefor, and computer-readable recording medium on which a program for performing the method is recorded
JP2011221791A (ja) Face clustering device, face clustering method, and program
Kumar et al. 3D sign language recognition using spatio temporal graph kernels
Al-Akam et al. Local feature extraction from RGB and depth videos for human action recognition
Hsu et al. Fast landmark localization with 3D component reconstruction and CNN for cross-pose recognition
Xia et al. Face occlusion detection using deep convolutional neural networks
Hu et al. Speech Emotion Recognition Model Based on Attention CNN Bi-GRU Fusing Visual Information.
Katkade et al. Advances in real-time object detection and information retrieval: A review
Li et al. Recognizing hand gestures using the weighted elastic graph matching (WEGM) method
Lin et al. Region-based context enhanced network for robust multiple face alignment
JP7409499B2 (ja) Image processing device, image processing method, and program
WO2023281903A1 (fr) Image matching device, image matching method, and program
Axyonov et al. Method of multi-modal video analysis of hand movements for automatic recognition of isolated signs of Russian sign language
CN115240127A (zh) Child monitoring method for smart televisions
Elakkiya et al. Interactive real time fuzzy class level gesture similarity measure based sign language recognition using artificial neural networks
Yun et al. Riemannian manifold-based support vector machine for human activity classification in images
JP7171361B2 (ja) Data analysis system, learning device, and method thereof
Feng et al. Object activity scene description, construction, and recognition

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22837324

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE