WO2023210095A1 - Video identification device, video identification method, and video identification program - Google Patents
Video identification device, video identification method, and video identification program
- Publication number
- WO2023210095A1 (PCT/JP2023/003897)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- video
- public
- target video
- cut
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/10—Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/18—Legal services
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
Definitions
- the present invention relates to a technique for detecting a video based on a target video from among a plurality of public videos.
- Patent Document 1 discloses a technique for calculating the mutual similarity of contents posted by individual users and determining, based on the calculated similarity, whether the posted content is appropriate in terms of copyright.
- in the inappropriate content detection method disclosed in Patent Document 1, the degree of similarity between posted contents is calculated, and a group of mutually similar contents is detected as copyright-inappropriate content. The method of Patent Document 1 is therefore not suited to a content provider's purpose of detecting plagiarism of his or her own specific content by others.
- One purpose of the present disclosure is to provide a technique for appropriately detecting a video that relies on a target video from among multiple public videos.
- a video identification device according to the present disclosure is a video identification device that detects a video that relies on a target video from among a plurality of public videos.
- the video identification device includes: a target video image recording unit that records, for each constituent unit in the time-axis direction of the target video, an image of a frame included in that unit as a representative image; a public video image selection unit that, for each of the plurality of public videos, selects frame images from the public video at predetermined time intervals as cut images; and a determination unit that determines, based on the representative images of the target video and the cut images of the public video, whether the public video may be a video based on the target video.
- FIG. 1 is a block diagram showing the functional configuration of a video identification device.
- FIG. 2 is a block diagram showing the hardware configuration of the video identification device.
- FIG. 3 is a block diagram showing the functional configuration of a determination unit.
- FIG. 4 is a flowchart of the entire process.
- FIG. 5 is an image diagram for explaining the overall processing.
- FIG. 6 is a flowchart of the determination process.
- FIG. 7 is a flowchart of the secondary determination process.
- FIG. 1 is a block diagram showing the functional configuration of a moving image identification device.
- FIG. 2 is a block diagram showing the hardware configuration of the moving image identification device.
- the video identification device 10 includes a target video image recording unit 11, a public video image selection unit 12, a determination unit 13, and a display unit 14.
- the video identification device 10 of this embodiment is a device that detects a video based on the target video TM (see FIG. 4) from among a plurality of public videos RM.
- the public video RM is a video that is made public on the communication network 90 such as the Internet by a company or individual through a web page such as a video sharing site provided by the web server 91.
- the target video TM is a copyrighted video to be checked for plagiarism on the Internet. It is assumed that the public videos RM may include videos that depend on the target video TM. A video created based on the target video TM may depend on the target video TM, and creating a video that relies on the target video TM without permission from its copyright holder may constitute plagiarism.
- in order to discover public videos RM that have plagiarized the target video TM, the video identification device 10 searches for public videos RM that are presumed to be identical to the target video TM or estimated to be based on it.
- the target video image recording unit 11 records, from the target video TM, an image of a frame included in each scene, which is a constituent unit in the time-axis direction of the target video TM, as a representative image TMx representing that scene. That is, the target video TM yields a plurality of representative images TMx.
- besides a scene of the target video TM, the constituent unit may be a cut or a shot.
- a frame is, for example, the entire area of still images that constitute the target video TM.
- the target video TM may be multiple types of video content.
- the video content may be, for example, a moving image of a movie, play, literature, animation, sports, or the like.
- the representative image TMx may be the first image of each scene of the target video TM.
- the representative image TMx may be a characteristic image selected by the user from the target video TM.
- the representative image TMx may include, for example, a thumbnail image.
- the public video image selection unit 12 selects frame images from the public video RM at predetermined time intervals as cut images RMx. That is, the public video RM includes a plurality of cut images RMx.
- the predetermined time interval may be, for example, a fixed interval such as 10 seconds.
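As an illustration of this fixed-interval sampling, the sketch below computes which frame numbers would be selected as cut images RMx from a video's frame count and frame rate. The helper name `cut_frame_indices` is hypothetical; the patent does not name an implementation:

```python
def cut_frame_indices(total_frames, fps, interval_sec=10.0):
    """Indices of the frames sampled every `interval_sec` seconds.

    The sampled frames correspond to the cut images RMx taken from a
    public video RM at a fixed time interval (e.g. 10 seconds).
    """
    step = max(1, int(round(fps * interval_sec)))
    return list(range(0, total_frames, step))
```

For a 30 fps video of 900 frames and a 10-second interval, this selects frames 0, 300, and 600.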
- note that the representative images TMx are selected per scene of the target video TM, whereas the cut images RMx are acquired from the public video RM at predetermined time intervals. This is because, if the public video RM was created illicitly, its scene boundaries may be difficult to identify due to processing or degradation.
- the public video image selection unit 12 may select, from the public video RM, an image that is not an insertion image inserted between frames of the public video RM as the cut image RMx.
- an insertion image is an image, unrelated to the content, that is inserted between frames of the public video RM by someone other than the copyright holder of the target video TM in order to evade detection.
- the determining unit 13 determines whether the public video RM is a video that may be based on the target video TM by comparing the representative images TMx of the target video TM with the cut images RMx of the public video RM. At this time, the determining unit 13 first performs a simple primary judgment to exclude public videos RM that are extremely unlikely to rely on the target video TM, and then performs a detailed secondary judgment on the remaining public videos RM as to whether they may rely on the target video TM. This makes it possible both to reduce the processing time required for the determination and to improve the accuracy of the determination results.
- FIG. 3 is a block diagram showing the functional configuration of the determination section.
- the determination unit 13 includes a feature point extraction unit 131, a first similar video determination unit 132, and a second similar video determination unit 133.
- the feature point extraction unit 131 extracts feature points from each of the representative images TMx of the target video TM and the cut images RMx of the public video RM, and specifies the feature amount of each extracted feature point. At this time, the feature point extraction unit 131 may identify the feature amount from the luminance gradient around each feature point of the representative image TMx and of the cut image RMx.
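The patent only states that feature amounts may be identified from the luminance gradient around each feature point; real systems typically use SIFT/ORB-style descriptors. The toy sketch below (the name `gradient_descriptor` is hypothetical) builds a normalized histogram of luminance-gradient orientations over a small grayscale patch to illustrate the idea:

```python
import math

def gradient_descriptor(patch, bins=8):
    """Normalized histogram of luminance-gradient orientations over a
    square patch (a toy stand-in for SIFT/ORB-style feature amounts).

    `patch` is a list of rows of luminance values.
    """
    hist = [0.0] * bins
    n = len(patch)
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]  # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]  # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)
            hist[int(ang / (2 * math.pi) * bins) % bins] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]
```

On a patch whose luminance increases only from left to right, all gradient magnitude falls into the 0-radian bin.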
- the first similar video determination unit 132 creates a plurality of groups by clustering all the feature points of all the representative images TMx of the target video TM, classifies into those groups the feature points of the representative images TMx and of the cut images RMx of the public video RM extracted by the feature point extraction unit 131, and determines public videos RM similar to the target video TM based on the classification results.
- the first similar video determining unit 132 may use a bag-of-features method as an example of category recognition.
- the first similar video determination unit 132 may classify each feature point of the representative images TMx of the target video TM and of the cut images RMx of the public video RM extracted by the feature point extraction unit 131 according to a feature amount based on the luminance gradient around the feature point.
- the first similar video determination unit 132 classifies the feature points of each representative image TMx of the target video TM and each cut image RMx of the public video RM extracted by the feature point extraction unit 131 to create histograms, and determines whether a representative image TMx and a cut image RMx are similar based on the Bhattacharyya distance between their histograms. This allows public videos RM similar to the target video TM to be compared easily.
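The Bhattacharyya distance between two normalized feature histograms can be computed as in this minimal sketch (OpenCV users would typically call `cv2.compareHist` with the Bhattacharyya method instead). A distance of 0 means identical histograms:

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two normalized histograms p and q.

    The Bhattacharyya coefficient sums sqrt(p_i * q_i); the distance is
    its negative log, so identical histograms give distance 0 and
    disjoint histograms give a large distance.
    """
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return -math.log(max(bc, 1e-12))  # clamp to avoid log(0)
```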
- the second similar video determination unit 133 excludes the combinations of target video TM and public video RM that the first similar video determination unit 132 judged to be dissimilar in the primary determination. For each remaining combination of target video TM and public video RM, it calculates the similarity between images by comparing the feature points of the representative images TMx and the cut images RMx, calculates the similarity between the target video TM and the public video RM based on those inter-image similarities, and determines, based on the inter-video similarity, whether the public video RM is based on the target video TM.
- specifically, the second similar video determining unit 133 first forms, for each combination of target video TM and public video RM not excluded in the primary determination by the first similar video determining unit 132, every round-robin pairing of a representative image TMx and a cut image RMx, and calculates the mutual similarity between their feature points (the feature point similarity) for each pairing.
- next, the second similar video determining unit 133 identifies, for each round-robin pairing of a representative image TMx and a cut image RMx, the pairs of feature points whose feature point similarity exceeds a predetermined threshold (similar feature point pairs).
- the second similar video determining unit 133 identifies a combination of the representative image TMx and the cut image RMx (feature point pairing image pair) in which the number of similar feature point pairs satisfies a predetermined pairing condition.
- the pairing condition is a condition for associating a representative image TMx and a cut image RMx that are likely to show the same scene.
- the second similar video determination unit 133 then calculates an inter-video similarity score based on the number of similar feature point pairs in each feature point pairing image pair, and makes a secondary determination, based on the score, as to whether the public video RM may be based on the target video TM.
- the display unit 14 displays the determination result by the determination unit 13 on the screen of a display device 26, which will be described later.
- according to this configuration, a public video RM that relies on the target video TM can be detected appropriately and quickly from among the multiple public videos RM.
- in addition, since the determination compares the images TMx and RMx, similarity as video content can be judged while taking into account the degree of processing applied to the video.
- furthermore, the degree of similarity between the target video TM and the public video RM can be evaluated appropriately even if unrelated images are inserted into the public video RM.
- the video identification device 10 can also be realized by causing a computer to execute a software program that defines the processing procedure of each unit shown in FIG. 1.
- FIG. 2 shows an example of the hardware configuration of a computer that implements the video identification device.
- the video identification device 10 can be connected to a web server 91 via a communication network 90 such as the Internet.
- the video identification device 10 includes a processing device 21, a main memory 22, a storage device 23, a communication device 24, an input device 25, and a display device 26, which are connected to one another via a bus 27.
- the storage device 23 records data writably and readably, and stores the data of the public videos RM (cut images RMx) and of the target video TM (representative images TMx) used for processing by the video identification device 10.
- data of public videos RM (cut images RMx) collected from a plurality of web servers 91 is stored in the storage device 23.
- the data of the target video TM (representative images TMx), which is the video to be checked for plagiarism, is also recorded in the storage device 23.
- the processing device 21 is a processor that reads data recorded in the storage device 23 to the main memory 22 and executes software program processing using the main memory 22.
- the processing device 21 realizes the target video image recording unit 11, the public video image selection unit 12, the determination unit 13, and the display unit 14 shown in FIG. 1.
- the communication device 24 transmits information processed by the processing device 21 via the communication network 90, which includes wired or wireless communication or both, and passes information received via the communication network 90 to the processing device 21.
- the received information is used by the processing device 21 for software processing.
- the input device 25 is a device, such as a keyboard or a mouse, that accepts information input by an operator. Information input to the input device 25 is used by the processing device 21 for software processing.
- the display device 26 is a device that displays the videos TM and RM, the images TMx and RMx, and text information on a display screen in accordance with software processing by the processing device 21.
- FIG. 4 is a flowchart of the overall process.
- FIG. 5 is an image diagram for explaining the overall processing.
- the video identification device 10 first performs target video image recording processing using the target video image recording unit 11 (step S401).
- specifically, the target video image recording unit 11 first calculates the image difference between adjacent frames of the target video TM to be checked for plagiarism, and identifies scene boundaries where the calculated difference exceeds a predetermined threshold. Further, the target video image recording unit 11 selects, from the frames included in each scene, an image representative of the scene as the representative image TMx, and stores the data of the selected representative images TMx (TM1, TM2, TM3, etc. in FIG. 5) in the storage device 23.
- an image of a frame at a predetermined time position in a scene may be used as the representative image TMx of the scene.
- the image of the first frame of a scene may be used as the representative image TMx, or the image of a frame after a predetermined time has elapsed from the beginning may be used as the representative image TMx.
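The scene-boundary step described above can be sketched as follows; frames are represented as flat lists of pixel luminances, and `scene_boundaries` is a hypothetical helper rather than the patent's exact procedure:

```python
def scene_boundaries(frames, threshold):
    """Indices of frames that start a new scene.

    A frame starts a new scene when the mean absolute luminance
    difference from the previous frame exceeds `threshold`; frame 0
    always starts the first scene.  `frames` is a list of flat pixel
    lists of equal length.
    """
    cuts = [0]
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            cuts.append(i)
    return cuts
```

A representative image TMx would then be picked from each detected scene, e.g. its first frame.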
- the video identification device 10 causes the public video image selection unit 12 to execute a public video image selection process (step S402).
- specifically, the public video image selection unit 12 first visits each website of the web servers 91 via the communication network 90, collects the data of the public videos RM published on the web pages of those websites (crawling), and stores the collected data of the public videos RM in the storage device 23.
- next, the public video image selection unit 12 selects cut images RMx (RM1, RM2, RM3, etc. in FIG. 5) from each public video RM at fixed time intervals, and stores the data of the selected cut images RMx in the storage device 23.
- the target video image recording process is performed and then the public video image selection process is performed, but the processing order is not limited to this.
- the public video image selection process may be executed before the target video image recording process.
- the determination process is a process of determining, based on the representative images TMx of the target video TM and the cut images RMx of the public video RM, whether the public video RM may be a video that depends on the target video TM. Details of the determination process will be described later.
- the video identification device 10 displays the determination result by the determination unit 13 on the screen using the display unit 14 (step S404).
- the degree of similarity used in the determination may be displayed.
- the target video TM and the public video RM that is estimated to have been created by plagiarizing the target video TM may be displayed side by side.
- thereafter, the video identification device 10 issues a warning to the publisher of the public video RM that is presumed to have been created by plagiarizing the target video TM (step S405).
- alternatively, the warning may be issued by the user reviewing the determination result of the determination unit 13 and, based on that result, notifying the publisher of the public video RM that is presumed to have been created by plagiarizing the target video TM.
- in that case, the display of the determination result by the determination unit 13 in step S404 may be omitted.
- FIG. 6 is a flowchart of the determination process.
- in the determination process, the determination unit 13 first acquires the data of all representative images TMx from the storage device 23, extracts the feature points of each representative image TMx, and specifies the feature amounts of those feature points (step S601).
- the method for extracting the feature points of the representative image TMx is not particularly limited; for example, corner points in the representative image TMx may be extracted as feature points.
- the method for identifying the feature amount of the feature point of each representative image TMx is not particularly limited, but the feature amount may be identified from the brightness gradient of each feature point.
- the determination unit 13 acquires the data of all cut images RMx from the storage device 23, extracts the feature points of each cut image RMx, and specifies the feature amount of the feature points (step S602).
- the feature points of the cut image RMx may be extracted using the same method as the method of extracting the feature points of the representative image TMx. Further, the feature amount of the feature point of the cut image RMx may be specified by the same method as the method of specifying the feature amount of the feature point of the representative image TMx.
- the determination unit 13 creates a plurality of groups by clustering all the feature points of all the representative images TMx according to the feature amount (step S603).
- next, the determination unit 13 classifies the feature points of each representative image TMx into the groups by nearest-neighbor search against the cluster centroids in the feature space, and creates a histogram of the feature points for each representative image TMx (step S604).
- similarly, the determination unit 13 classifies the feature points of each cut image RMx into the groups by nearest-neighbor search against the cluster centroids in the feature space, and creates a histogram of the feature points for each cut image RMx (step S605).
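Steps S603 to S605 amount to a bag-of-features histogram: each descriptor is assigned to its nearest cluster centroid and the per-centroid counts are normalized. A minimal sketch, assuming the centroids were obtained beforehand (e.g. by k-means clustering of all representative-image feature points):

```python
import math

def bof_histogram(descriptors, centroids):
    """Normalized bag-of-features histogram for one image.

    Each descriptor is assigned to its nearest centroid (Euclidean
    nearest-neighbor search in feature space), and the per-centroid
    counts are normalized to sum to 1.
    """
    hist = [0] * len(centroids)
    for d in descriptors:
        best = min(range(len(centroids)),
                   key=lambda k: math.dist(d, centroids[k]))
        hist[best] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

The resulting histograms are what the primary determination compares via the Bhattacharyya distance.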
- the determining unit 13 performs a primary determination of the similarity between the target video TM and the public video RM by comparing the histogram of the feature points of the representative image TMx with the histogram of the feature points of the cut image RMx (step S606).
- specifically, the determination unit 13 first determines the degree of similarity between the feature point histograms of a representative image TMx and a cut image RMx (hereinafter also referred to as the "inter-image feature point classification similarity"). The degree of similarity between histograms may be determined based on the Bhattacharyya distance; in that case, the shorter the Bhattacharyya distance, the higher the similarity.
- next, the determination unit 13 identifies combinations of a representative image TMx and a cut image RMx that are similar in terms of the inter-image feature point classification similarity (hereinafter referred to as "feature point classification similar image pairs"). Note that the determination unit 13 may identify, as feature point classification similar image pairs, the combinations with the highest inter-image feature point classification similarities up to rank N.
- then, the determination unit 13 performs a primary determination of the similarity between the target video TM and the public video RM based on the number of feature point classification similar image pairs. This primary determination quickly excludes, by a simple comparison of histograms, unrelated public videos RM that are extremely unlikely to rely on the target video TM. For example, if the number of feature point classification similar image pairs is 0, it may be determined that the possibility that the public video RM depends on the target video TM is extremely low, that is, the pair is determined to be dissimilar.
- next, the determination unit 13 performs a secondary determination of the similarity between the target video TM and the public video RM by comparing the feature points for each combination of target video TM and public video RM that was not determined to be dissimilar in the primary determination (step S607).
- the process of step S607 will also be referred to as secondary determination process hereinafter.
- FIG. 7 is a flowchart of the secondary determination process.
- in the secondary determination process, the determination unit 13 first calculates, for each combination of target video TM and public video RM that was not determined to be dissimilar in the primary determination, the degree of similarity between feature points (hereinafter also referred to as the "feature point similarity") for every round-robin combination of all the representative images TMx and all the cut images RMx (step S701).
- the similarity between feature points may be determined based on the Euclidean distance between feature amounts, for example. In that case, the shorter the Euclidean distance, the higher the similarity between feature points.
- next, for each feature point, the determination unit 13 identifies the combination of feature points of the representative image TMx and the cut image RMx whose feature point similarity is maximum, that is, whose Euclidean distance is minimum, as a candidate "similar feature point pair" (step S702).
- This process is a process of searching for corresponding points between feature points in the representative image TMx and the cut image RMx.
- at this time, the determination unit 13 may adopt the closest combination as a similar feature point pair only if its distance is smaller than n times (n ≤ 1) the distance of the second-closest combination, and may delete candidate pairs that do not satisfy this condition (a ratio test).
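This is essentially Lowe's ratio test. A sketch, where `matches` maps each feature point of the representative image to its two smallest descriptor distances in the cut image (the helper name and data layout are assumptions for illustration):

```python
def ratio_test(matches, ratio=0.7):
    """Keep a match only when its best distance d1 is clearly smaller
    than the second-best distance d2, i.e. d1 < ratio * d2 (ratio <= 1).

    `matches` maps a feature id to a (d1, d2) tuple of the smallest and
    second-smallest distances found in the other image; the returned
    list contains the ids of the surviving similar feature point pairs.
    """
    return [fid for fid, (d1, d2) in matches.items() if d1 < ratio * d2]
```

A smaller ratio is stricter: ambiguous matches, whose two nearest neighbors are almost equally distant, are discarded.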
- the determining unit 13 identifies a pair (hereinafter also referred to as a "feature point pairing image pair") of the representative image TMx and the cut image RMx in which the number of similar feature point pairs satisfies a predetermined pairing condition (step S703).
- the pairing condition is, for example, a condition that the number of similar feature point pairs exceeds a predetermined threshold value and that the number is the maximum for the cut image RMx.
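That pairing condition can be sketched as follows, assuming `pair_counts` maps a (representative image, cut image) index pair to its number of similar feature point pairs (the helper name and data layout are illustrative):

```python
def pairing_image_pairs(pair_counts, min_pairs):
    """Identify feature point pairing image pairs.

    For each cut image, keep the representative image with the largest
    number of similar feature point pairs, provided that number exceeds
    `min_pairs`.  `pair_counts[(tm_idx, rm_idx)]` is the number of
    similar feature point pairs for that image combination.
    """
    best = {}  # rm_idx -> (tm_idx, count)
    for (tm, rm), n in pair_counts.items():
        if n > min_pairs and (rm not in best or n > best[rm][1]):
            best[rm] = (tm, n)
    return {(tm, rm): n for rm, (tm, n) in best.items()}
```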
- then, the determination unit 13 calculates the similarity between the videos for each combination of target video TM and public video RM (step S704). At this time, the determination unit 13 calculates, as inter-video similarities, the integrated value and the average value of the inter-image similarities of the feature point pairing image pairs formed between the target video TM and the public video RM.
- the integrated value representing the degree of similarity between the target video TM and the public video RM may be referred to as a total score.
- the average value representing the degree of similarity between the target video TM and the public video RM may be referred to as an average score.
- the degree of similarity between the images of the target video TM and the public video RM, which form the feature point pairing image pair described above, is determined based on the number of similar feature point pairs.
- for example, the ratio of the number of similar feature point pairs to the total number of feature points in the representative image TMx may be used as the degree of similarity between the representative image TMx and the cut image RMx forming a feature point pairing image pair.
- alternatively, the ratio of the number of similar feature point pairs to the total number of feature points in the cut image RMx may be used as the degree of similarity between the representative image TMx and the cut image RMx forming a feature point pairing image pair.
- the determination unit 13 makes a secondary determination as to whether the public video RM may depend on the target video TM, based on the total score and the average score computed over the representative images TMx and cut images RMx that form the feature point pairing image pairs (step S705).
- the determination unit 13 may determine that the public video RM may depend on the target video TM if the total score and the average score both exceed their respective thresholds. If the public video RM is based on the target video TM, similar scenes exist, so it can be detected by a determination using the integrated value. However, if the target video TM and the public video RM are long videos, the number of scenes is large, so even when the public video RM is not based on the target video TM, the inter-image similarities of coincidentally similar scenes may accumulate and the integrated value (total score) may exceed its threshold; requiring the average score to also exceed its threshold prevents such cases from being misjudged.
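The two-threshold decision can be sketched as follows; `pair_scores` holds the per-image-pair similarities of one target/public video combination, and the threshold values in the usage are assumptions:

```python
def secondary_judgement(pair_scores, total_threshold, average_threshold):
    """Secondary determination for one (target video, public video) pair.

    Flags the public video as possibly relying on the target video only
    when BOTH the total score (sum of per-pair similarities) and the
    average score exceed their thresholds; requiring the average guards
    against long videos accumulating many coincidental similarities.
    """
    if not pair_scores:
        return False
    total = sum(pair_scores)
    average = total / len(pair_scores)
    return total > total_threshold and average > average_threshold
```

A few strong matches pass both thresholds, while many weak coincidental matches can pass the total threshold yet fail the average one.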
- each of the above threshold values may be a predetermined fixed value or a value that can be set or changed by the user.
- the frame image of the target video TM is used as it is as the representative image TMx, but other configurations are also possible.
- alternatively, the frame images of the target video TM may be processed in advance to create processed images, and the above-described processing may be performed using the processed images in the same way as the representative images TMx.
- the processed images include, for example, an image with the content-irrelevant periphery of the representative image TMx cropped off, a horizontally flipped image, a divided image, and a quality-degraded (pixel-reduced) image; it is preferable to prepare processed images created by applying to the representative image TMx the kinds of processing that are expected when it is processed and plagiarized.
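Generating such processed variants might look like this sketch (pixel rows as lists of luminances; the particular variant set and the helper name are illustrative assumptions):

```python
def processed_variants(img):
    """Plausible plagiarism-style variants of a representative image.

    `img` is a list of pixel rows; returns a horizontal flip, a centre
    crop with the periphery removed, and left/right halves (division).
    """
    h, w = len(img), len(img[0])
    return {
        "flip": [row[::-1] for row in img],
        "crop": [row[w // 8: w - w // 8] for row in img[h // 8: h - h // 8]],
        "left": [row[: w // 2] for row in img],
        "right": [row[w // 2:] for row in img],
    }
```

Each variant would then be matched against the cut images RMx in the same way as the original representative image TMx.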
- a person who commits fraud may perform some kind of processing on the plagiarized video and release it to the public in order to make it difficult to be discovered.
- a video that has been processed in some way has a lower degree of similarity with the original target video TM, making identification difficult.
- alternatively, the determination unit 13 may perform a process of identifying the region in the cut image RMx in which the image of the plagiarized video is embedded, and in step S704 may calculate the inter-video similarity based on the number of similar feature point pairs, of each feature point pairing image pair, that lie within the identified region of the cut image RMx.
- for example, a projective transformation matrix that associates the representative image TMx with the region in the cut image RMx in which the plagiarized video is embedded may be created, and the inter-video similarity may be calculated based on the number of similar feature point pairs between the representative image TMx and that region.
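Counting the similar feature point pairs that fall inside the embedded region, given a 3×3 projective transformation (homography) matrix H, can be sketched as below. In practice H would be estimated from the matched points themselves (e.g. with RANSAC), which this sketch assumes has already been done:

```python
def apply_homography(H, pt):
    """Map point (x, y) through a 3x3 projective transformation matrix H,
    e.g. one aligning the representative image TMx with the embedded
    region in the cut image RMx."""
    x, y = pt
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xs / w, ys / w)

def pairs_inside_region(pairs, H, region):
    """Count similar feature point pairs whose representative-image point,
    mapped through H, lands inside the embedded region (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    count = 0
    for p_target, _p_cut in pairs:
        x, y = apply_homography(H, p_target)
        if x0 <= x <= x1 and y0 <= y <= y1:
            count += 1
    return count
```

Pairs that map outside the region are treated as spurious and excluded from the inter-video similarity.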
- in the above description, the public video image selection unit 12 selected cut images RMx from public videos RM published on the communication network 90, such as the Internet, by companies or individuals via web pages provided by the web server 91.
- the present disclosure is not limited to this; the public video image selection unit 12 may instead select public videos RM from a specific web page that publishes a plurality of public videos RM.
- the specific web page may be, for example, a web page of a video posting site where individuals can post. This makes it possible to quickly detect a public video RM that relies on the target video TM from among a plurality of public video RMs.
- the public video image selection unit 12 may select a public video RM from other web pages that link to a specific web page that publishes a plurality of public video RMs. This makes it possible to improve the accuracy of detecting a public video RM that relies on the target video TM from among a plurality of public video RMs.
- SYMBOLS 10... Video identification device, 11... Target video image recording unit, 12... Public video image selection unit, 13... Determination unit, 131... Feature point extraction unit, 132... First similar video determination unit, 133... Second similar video determination unit, RM... Public video, RMx... Cut image, TM... Target video, TMx... Representative image
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Technology Law (AREA)
- Tourism & Hospitality (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Economics (AREA)
- Strategic Management (AREA)
- Primary Health Care (AREA)
- General Business, Economics & Management (AREA)
- Marketing (AREA)
- Human Resources & Organizations (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022074778A JP2023163706A (ja) | 2022-04-28 | 2022-04-28 | Video identification device, video identification method, and video identification program |
JP2022-074778 | 2022-04-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023210095A1 true WO2023210095A1 (ja) | 2023-11-02 |
Family
ID=88518339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/003897 WO2023210095A1 (ja) | 2023-02-07 | Video identification device, video identification method, and video identification program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2023163706A |
WO (1) | WO2023210095A1 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007299312A (ja) * | 2006-05-02 | 2007-11-15 | Toyota Central Res & Dev Lab Inc | Three-dimensional position estimation device for an object |
KR101373176B1 (ko) * | 2013-02-13 | 2014-03-11 | Sogang University Industry-Academy Cooperation Foundation | Method and device for detecting duplicated video information, and storage medium |
JP2015121524A (ja) * | 2013-11-19 | 2015-07-02 | Canon Inc. | Image processing device and control method thereof, imaging device, and program |
JP2021039647A (ja) * | 2019-09-05 | 2021-03-11 | Azbil Corporation | Image data classification device and image data classification method |
2022
- 2022-04-28 JP JP2022074778A patent/JP2023163706A/ja active Pending
2023
- 2023-02-07 WO PCT/JP2023/003897 patent/WO2023210095A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP2023163706A (ja) | 2023-11-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8549627B2 (en) | Detection of objectionable videos | |
JP5482185B2 (ja) | Method and system for searching for and outputting target information | |
US8190621B2 (en) | Method, system, and computer readable recording medium for filtering obscene contents | |
US10643667B2 (en) | Bounding box doubling as redaction boundary | |
JP4139615B2 (ja) | Event clustering of images using foreground/background segmentation | |
US12015807B2 (en) | System and method for providing image-based video service | |
US9098807B1 (en) | Video content claiming classifier | |
US11526586B2 (en) | Copyright detection in videos based on channel context | |
US20100329574A1 (en) | Mixed media reality indexing and retrieval for repeated content | |
CN107408119B (zh) | Image retrieval device, system, and method | |
RU2676247C1 (ru) | Method and computer device for clustering web resources | |
CN101286230B (zh) | Image processing device and image processing method | |
KR20170038040A (ko) | Computerized prominent person recognition in videos | |
US20100268604A1 (en) | Method and system for providing information based on logo included in digital contents | |
Zhou et al. | Visual similarity based anti-phishing with the combination of local and global features | |
CN112199545A (zh) | Keyword display method and device based on locating text in images, and storage medium | |
WO2018068664A1 (zh) | Network information identification method and device | |
US20170103285A1 (en) | Method and device for detecting copies in a stream of visual data | |
JP7284196B2 (ja) | Information processing device, information processing method, and program | |
US20180293461A1 (en) | Method and device for detecting copies in a stream of visual data | |
WO2023210095A1 (ja) | Video identification device, video identification method, and video identification program | |
US9361198B1 (en) | Detecting compromised resources | |
JP4740706B2 (ja) | Unauthorized image detection device, method, and program | |
JP6244887B2 (ja) | Information processing device, image search method, and program | |
JP2010263327A (ja) | Feature value calculation device and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23795843 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 23795843 Country of ref document: EP Kind code of ref document: A1 |