WO2021125501A1 - Video information determination device capable of determining context information on a video through an object recognition model on which machine learning has been completed - Google Patents
- Publication number
- WO2021125501A1 (PCT/KR2020/011973)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- context information
- video
- names
- information
- frames
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
Definitions
- The present invention relates to a video information determination device capable of determining context information on a video through an object recognition model on which machine learning has been completed.
- As object recognition technology using deep learning has developed in recent years, it has become feasible to determine context information for a video on this basis.
- Deep learning is an artificial intelligence technology that gives a computer a human-like learning ability through the accumulation of data. When object recognition technology incorporating deep learning is used, data on objects accumulates, so objects can be analyzed more accurately.
- The video information determination apparatus divides a video composed of a plurality of frames frame by frame and then performs object detection from each of the plurality of frames, thereby generating a plurality of object images into which the objects detected from the frames are inserted one by one. It applies the plurality of object images as inputs to an object recognition model on which machine learning has been completed in advance to generate an object name for the object inserted into each object image, extracts at least two object names in descending order of duplicate count, and displays those object names on the screen as information on the objects included in the video, thereby supporting the determination of context information for the video.
- To this end, the apparatus includes: an object image detection unit that performs object detection from each of the plurality of frames and generates a plurality of object images into which the detected objects are inserted one by one; an object name generation unit that applies the plurality of object images as inputs to an object recognition model on which machine learning has been completed in advance - the object recognition model being a model pre-configured to determine which object is inserted in an object image - and thereby generates a plurality of first object names; a representative object name extraction unit that, when the plurality of first object names are generated, extracts representative object names that do not overlap with each other from the plurality of first object names; and a counting unit that counts, for each of the representative object names, the number of duplicates existing in the plurality of first object names.
- FIG. 1 is a diagram illustrating the structure of an apparatus for determining video information capable of determining situation information on a video through an object recognition model on which machine learning has been completed according to an embodiment of the present invention.
- FIG. 2 is a flowchart illustrating an operation method of a video information determination apparatus capable of determining context information on a video through an object recognition model on which machine learning has been completed according to an embodiment of the present invention.
- Each of the components, functional blocks, or means may be composed of one or more sub-components, and the electrical, electronic, and mechanical functions performed by each component may be implemented with various well-known devices or mechanical elements, such as electronic circuits, integrated circuits, and ASICs (Application Specific Integrated Circuits), and may be implemented separately, or two or more may be integrated into one.
- The blocks in the accompanying block diagram and the steps in the flowchart may be performed by computer program instructions loaded into a processor or memory of equipment capable of data processing, such as a general-purpose computer, a special-purpose computer, a portable notebook computer, or a network computer, and these instructions perform the specified functions.
- These computer program instructions may also be stored in a memory provided in a computer device or in a computer-readable memory, so that the functions described in the blocks of the block diagram or the steps of the flowchart can be produced as articles of manufacture containing instruction means that perform those functions.
- each block or each step may represent a module, segment, or portion of code comprising one or more executable instructions for executing the specified logical function(s).
- FIG. 1 is a diagram illustrating the structure of an apparatus for determining video information capable of determining situation information on a video through an object recognition model on which machine learning has been completed according to an embodiment of the present invention.
- A video information determination device 110 capable of determining contextual information on a video through an object recognition model on which machine learning has been completed according to the present invention includes a frame dividing unit 111, an object image detection unit 112, an object name generation unit 113, a representative object name extraction unit 114, a counting unit 115, an object name extraction unit 116, and an object information display unit 117.
- The frame dividing unit 111 divides a video composed of a plurality of frames frame by frame.
- the object image detection unit 112 generates a plurality of object images in which the objects detected from the plurality of frames are inserted one by one by performing object detection from each of the plurality of frames constituting the moving picture.
- the object image detection unit 112 may simultaneously perform object detection for each of the plurality of frames by utilizing thread parallelism.
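The thread-parallel detection mentioned above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `detect_objects` is a hypothetical stand-in for the actual detector, which the patent does not specify, and each "frame" is modeled simply as a list of the objects visible in it.

```python
from concurrent.futures import ThreadPoolExecutor

def detect_objects(frame):
    # Hypothetical detector: the patent does not specify the model used.
    # Here a "frame" is just a list of the objects it contains.
    return list(frame)

def detect_all_frames(frames):
    # Run object detection over every frame concurrently (thread parallelism),
    # preserving the original frame order in the result.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(detect_objects, frames))

frames = [["truck", "car"], ["car"], ["bus", "car"]]
print(detect_all_frames(frames))  # [['truck', 'car'], ['car'], ['bus', 'car']]
```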
- The object name generation unit 113 applies the plurality of object images as inputs to an object recognition model on which machine learning has been completed in advance, generating an object name for the object inserted into each of the plurality of object images and thereby producing a plurality of first object names.
- the object recognition model refers to a model pre-configured to determine what kind of object the object inserted into the object image is.
- For example, assume that the video is composed of five frames, 'frame 1, frame 2, frame 3, frame 4, and frame 5', and that it is a video related to vehicles.
- The object image detection unit 112 may perform object detection from each of the frames 'frame 1, frame 2, frame 3, frame 4, and frame 5'.
- If the objects detected from these frames are 'object 1, object 2, object 3, ..., object 10', the object image detection unit 112 may generate a plurality of object images, 'object image 1, object image 2, object image 3, ..., object image 10', into which the detected objects are inserted one by one.
- The object name generation unit 113 may then input the object images 'object image 1, object image 2, object image 3, ..., object image 10' into the object recognition model on which the machine learning has been completed, generating an object name for the object inserted into each object image.
- As a result, a plurality of first object names such as 'truck, truck, passenger car, passenger car, passenger car, passenger car, bus, bus, truck, and freight car' may be generated.
- the representative object name extraction unit 114 extracts non-overlapping representative object names from the plurality of first object names.
- the counting unit 115 counts the number of duplicates existing in the plurality of first object names for each of the representative object names.
- The object name extraction unit 116 extracts k object names (k being a natural number of 2 or more) from the representative object names in descending order of duplicate count.
- the object information display unit 117 displays the k object names as information on objects included in the video on the screen.
- In the above example, the representative object name extraction unit 114 may extract 'truck, passenger car, bus, and freight car' as the representative object names that do not overlap with each other from the plurality of first object names 'truck, truck, passenger car, passenger car, passenger car, passenger car, bus, bus, truck, and freight car'.
- The counting unit 115 may then count, for each of the representative object names, the number of duplicates existing in 'truck, truck, passenger car, passenger car, passenger car, passenger car, bus, bus, truck, and freight car' (passenger car: 4, truck: 3, bus: 2, freight car: 1).
- If k is 3, the object name extraction unit 116 may extract the three object names 'passenger car, truck, and bus' from the representative object names 'truck, passenger car, bus, and freight car' in descending order of duplicate count.
- The object information display unit 117 may then display the three object names 'passenger car, truck, and bus' on the screen as information on the objects included in the video.
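Using the worked example above, the representative-name extraction, duplicate counting, and top-k selection steps can be sketched with a standard counter. This is an illustrative sketch, not the patent's implementation:

```python
from collections import Counter

# Plurality of first object names produced by the recognition model
# in the example above.
first_names = ["truck", "truck", "passenger car", "passenger car",
               "passenger car", "passenger car", "bus", "bus",
               "truck", "freight car"]

counts = Counter(first_names)  # representative names -> duplicate counts
k = 3
top_k = [name for name, _ in counts.most_common(k)]
print(top_k)  # ['passenger car', 'truck', 'bus']
```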
- In this way, the video information determination apparatus 110 divides a video composed of a plurality of frames frame by frame, performs object detection from each of the frames constituting the video, generates a plurality of object images into which the detected objects are inserted one by one, and applies those object images as inputs to the object recognition model on which machine learning has been completed in advance.
- The video information determination device 110 may further include an object name information database 118, a context information database 119, an eigenvector generation unit 120, a vector similarity calculation unit 121, a context information extraction unit 122, and a context information display unit 123.
- In the object name information database 118, a plurality of predetermined object names and a predetermined unique number for each of the object names are stored in correspondence with each other, for example as shown in Table 1 below.
- In the context information database 119, a plurality of predetermined pieces of context information and a predetermined k-dimensional context information vector for each are stored in correspondence with each other, for example as shown in Table 2 below.
- Plurality of context information | 3-dimensional context information vector
- Context information 1 (traffic situation) | [a1 a2 a3]
- Context information 2 (meal situation) | [b1 b2 b3]
- Context information 3 (parenting situation) | [c1 c2 c3]
- ... | ...
- When the k object names are extracted, the eigenvector generation unit 120 checks the unique number for each of the k object names by referring to the object name information database 118 and then generates a k-dimensional eigenvector that includes the unique numbers of the k object names as its components.
- When the k-dimensional eigenvector is generated, the vector similarity calculation unit 121 calculates the vector similarity between the k-dimensional context information vector for each piece of context information stored in the context information database 119 and the k-dimensional eigenvector.
- The vector similarity between each k-dimensional context information vector and the k-dimensional eigenvector may be calculated according to Equation 1 below.
- M is the vector similarity between the two vectors, S is the cosine similarity between the two vectors, and D is the Euclidean distance between the two vectors.
- The cosine similarity S and the Euclidean distance D between the two vectors can be calculated according to Equation 2 and Equation 3 below.
- In Equation 2, S is the cosine similarity between vectors A and B; it has a value between -1 and 1, with a larger value indicating more similar vectors. A_i denotes the i-th component of vector A and B_i the i-th component of vector B.
- In Equation 3, D denotes the Euclidean distance, and A_i and B_i denote the i-th components of the two vectors.
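The equations themselves were lost in extraction. Assuming Equations 2 and 3 are the standard cosine-similarity and Euclidean-distance definitions for k-dimensional vectors A and B, consistent with the variable descriptions above, they would read:

```latex
S = \frac{A \cdot B}{\lVert A \rVert \, \lVert B \rVert}
  = \frac{\sum_{i=1}^{k} A_i B_i}{\sqrt{\sum_{i=1}^{k} A_i^{2}} \; \sqrt{\sum_{i=1}^{k} B_i^{2}}}
\quad \text{(Equation 2)}

D = \sqrt{\sum_{i=1}^{k} \left( A_i - B_i \right)^{2}}
\quad \text{(Equation 3)}
```

Equation 1, which combines S and D into the vector similarity M, is not recoverable from the text and is therefore not reproduced here.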
- For example, the vector similarity calculation unit 121 may calculate, according to Equations 1 to 3, the vector similarity between each of the three-dimensional context information vectors '[a1 a2 a3], [b1 b2 b3], [c1 c2 c3], ...' for the plurality of context information stored in the context information database 119 as shown in Table 2 above, and the three-dimensional eigenvector '[3 5 6]'.
- Based on the calculated vector similarities, the context information extraction unit 122 selects, among the k-dimensional context information vectors for the plurality of context information, the k-dimensional context information vector for which the calculated vector similarity has the maximum value, and extracts from the context information database 119 the first context information stored in correspondence with the selected k-dimensional context information vector.
- the context information display unit 123 displays a context information determination message indicating that the video is a video related to the first context information on the screen.
- For example, if the calculated vector similarities are '1.7, 1.02, 1.01, ...', the context information extraction unit 122 selects '[a1 a2 a3]', the three-dimensional context information vector for which the calculated vector similarity has the maximum value, from among '[a1 a2 a3], [b1 b2 b3], [c1 c2 c3], ...', and extracts from the context information database 119 shown in Table 2 'context information 1', which is the first context information stored in correspondence with the selected vector '[a1 a2 a3]'.
- The context information display unit 123 may then display on the screen a context information determination message indicating that the video is a video related to 'context information 1 (traffic situation)'.
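The eigenvector-to-context matching described above can be sketched as follows. Since the exact form of Equation 1 is not reproduced in the text, `vector_similarity` below uses an assumed combination (cosine similarity plus an inverse-distance term) purely for illustration, and the context vectors are made-up numbers rather than values from the patent:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity (assumed Equation 2).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def euclidean_distance(a, b):
    # Standard Euclidean distance (assumed Equation 3).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def vector_similarity(a, b):
    # Hypothetical Equation 1: grows with cosine similarity,
    # shrinks with distance. The patent's exact formula is not given.
    return cosine_similarity(a, b) + 1.0 / (1.0 + euclidean_distance(a, b))

# Made-up 3-dimensional context information vectors (k = 3).
context_db = {
    "context 1 (traffic)":   [3.0, 5.0, 6.0],
    "context 2 (meal)":      [21.0, 1.0, 2.0],
    "context 3 (parenting)": [8.0, 9.0, 1.0],
}
eigenvector = [3.0, 5.0, 6.0]  # unique numbers of 'passenger car, truck, bus'

best = max(context_db, key=lambda k: vector_similarity(context_db[k], eigenvector))
print(best)  # context 1 (traffic)
```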
- The object image detection unit 112 checks the number of the plurality of frames constituting the video; when the checked number exceeds a preset reference number, it may sequentially extract the reference number of frames, starting with the first frame among the plurality of frames, at a frame interval determined by the operation of Equation 4 below, and then perform object detection from each of the extracted frames.
- A is the frame interval
- T is the number of the plurality of frames
- T S is the reference number
- For example, the object image detection unit 112 checks that the number of frames 'frame 1, frame 2, frame 3, ..., frame 10' constituting the video is 10. Since 10 exceeds the reference number of 5, it sequentially extracts 5 frames at the frame interval according to Equation 4, starting with the first frame 'frame 1', and may then perform object detection from each of the extracted frames 'frame 1, frame 3, frame 5, frame 7, and frame 9'.
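The frame-sampling step can be sketched as follows. The patent's Equation 4 is not reproduced in the text; from the worked example (10 frames, reference number 5, yielding frames 1, 3, 5, 7, 9) we assume the interval is A = T // T_S, which is an inference, not the patent's stated formula:

```python
def sample_frames(frames, reference_count):
    # If the frame count exceeds the reference count, extract
    # reference_count frames at a fixed interval, starting from the first.
    t = len(frames)
    if t <= reference_count:
        return list(frames)
    interval = t // reference_count  # assumed Equation 4: A = T // T_S
    return [frames[i * interval] for i in range(reference_count)]

frames = [f"frame {n}" for n in range(1, 11)]
print(sample_frames(frames, 5))
# ['frame 1', 'frame 3', 'frame 5', 'frame 7', 'frame 9']
```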
- FIG. 2 is a flowchart illustrating an operation method of a video information determination apparatus capable of determining context information on a video through an object recognition model on which machine learning has been completed according to an embodiment of the present invention.
- In step S210, a video composed of a plurality of frames is divided frame by frame.
- In step S220, object detection is performed from each of the plurality of frames constituting the video, generating a plurality of object images into which the objects detected from the frames are inserted one by one.
- In step S230, the plurality of object images are applied as inputs to an object recognition model on which machine learning has been completed in advance (the object recognition model being a model pre-configured to determine which object is inserted into an object image),
- and a plurality of first object names is generated by generating an object name for the object inserted into each of the plurality of object images.
- In step S240, when the plurality of first object names are generated, representative object names that do not overlap with each other are extracted from the plurality of first object names.
- In step S250, for each of the representative object names, the number of duplicates existing in the plurality of first object names is counted.
- In step S260, k object names (k being a natural number of 2 or more) are extracted from the representative object names in descending order of duplicate count.
- In step S270, the k object names are displayed on the screen as information on the objects included in the video.
- The method may further include: maintaining an object name information database in which a plurality of predetermined object names and a predetermined unique number for each of the object names are stored in correspondence with each other;
- maintaining a context information database in which a plurality of predetermined pieces of context information and a predetermined k-dimensional context information vector for each are stored in correspondence with each other;
- when the k object names are extracted, checking the unique number for each of the k object names with reference to the object name information database and generating a k-dimensional eigenvector that includes those unique numbers as components; and
- calculating the vector similarity between each k-dimensional context information vector and the k-dimensional eigenvector according to Equation 1 above, extracting the first context information corresponding to the vector with the maximum similarity, and displaying a context information determination message on the screen.
- In step S220, the number of the plurality of frames constituting the video may be checked; when the checked number exceeds a preset reference number, the reference number of frames may be sequentially extracted, starting with the first frame among the plurality of frames, at a frame interval according to the operation of Equation 4, and object detection may then be performed from each of the extracted frames.
- the operation method of the video information determination apparatus capable of determining situation information on a video through the object recognition model on which machine learning has been completed according to an embodiment of the present invention has been described with reference to FIG. 2 .
- Since the operating method according to an embodiment of the present invention may correspond to the operation of the video information determination apparatus 110 described above with reference to FIG. 1, a more detailed description thereof is omitted.
- The operating method of the video information determination apparatus capable of determining situation information on a video through the machine-learning-completed object recognition model may be implemented as a computer program stored in a storage medium for execution in combination with a computer, or may be implemented in the form of program instructions executable through various computer means and recorded on a computer-readable medium.
- the computer-readable medium may include program instructions, data files, data structures, etc. alone or in combination.
- the program instructions recorded on the medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the art of computer software.
- Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes, optical media such as CD-ROMs and DVDs, and magneto-optical media such as floptical disks.
- program instructions include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
Abstract
Description
Plurality of object names | Unique number |
Passenger car | 3 |
Truck | 5 |
Bus | 6 |
Freight car | 7 |
Spoon | 21 |
... | ... |
Plurality of context information | 3-dimensional context information vector |
Context information 1 (traffic situation) | [a1 a2 a3] |
Context information 2 (meal situation) | [b1 b2 b3] |
Context information 3 (parenting situation) | [c1 c2 c3] |
... | ... |
Claims (4)
- A video information determination apparatus capable of determining context information on a video through an object recognition model on which machine learning has been completed, comprising: a frame dividing unit that divides a video composed of a plurality of frames frame by frame; an object image detection unit that performs object detection from each of the plurality of frames constituting the video, thereby generating a plurality of object images into which the objects detected from the plurality of frames are inserted one by one; an object name generation unit that applies the plurality of object images as inputs to an object recognition model on which machine learning has been completed in advance - the object recognition model being a model pre-configured to determine which object is inserted in an object image - and generates an object name for the object inserted into each of the plurality of object images, thereby generating a plurality of first object names; a representative object name extraction unit that, when the plurality of first object names are generated, extracts representative object names that do not overlap with each other from the plurality of first object names; a counting unit that counts, for each of the representative object names, the number of duplicates existing in the plurality of first object names; an object name extraction unit that extracts k object names (k being a natural number of 2 or more) from the representative object names in descending order of duplicate count; and an object information display unit that displays the k object names on the screen as information on the objects included in the video.
- The apparatus of claim 1, further comprising: an object name information database in which a plurality of predetermined object names and a predetermined unique number for each of the plurality of object names are stored in correspondence with each other; a context information database in which a plurality of predetermined pieces of context information and a predetermined k-dimensional context information vector for each are stored in correspondence with each other; an eigenvector generation unit that, when the k object names are extracted, checks the unique number for each of the k object names with reference to the object name information database and then generates a k-dimensional eigenvector that includes the unique numbers of the k object names as components; a vector similarity calculation unit that, when the k-dimensional eigenvector is generated, calculates the vector similarity between the k-dimensional context information vector for each piece of context information stored in the context information database and the k-dimensional eigenvector; a context information extraction unit that, based on the calculated vector similarities, selects the k-dimensional context information vector for which the calculated vector similarity has the maximum value and extracts from the context information database the first context information stored in correspondence with the selected k-dimensional context information vector; and a context information display unit that, when the first context information is extracted, displays on the screen a context information determination message indicating that the video is a video related to the first context information.
- The apparatus of claim 1, wherein the object image detection unit checks the number of the plurality of frames constituting the video and, when the checked number exceeds a preset reference number, sequentially extracts the reference number of frames at a frame interval according to the operation of Equation 2 below, starting with the first frame among the plurality of frames, and then performs object detection from each of the extracted frames. [Equation 2] Here, A is the frame interval, T is the number of the plurality of frames, and TS is the reference number.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2019-0172302 | 2019-12-20 | ||
- KR1020190172302A KR102261928B1 (ko) | 2019-12-20 | 2019-12-20 | Video information determination device capable of determining context information on a video through an object recognition model on which machine learning has been completed |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021125501A1 (ko) | 2021-06-24 |
Family
ID=76391469
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/KR2020/011973 WO2021125501A1 (ko) | 2020-09-04 | Video information determination device capable of determining context information on a video through an object recognition model on which machine learning has been completed |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102261928B1 (ko) |
WO (1) | WO2021125501A1 (ko) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- KR20110105793 * | 2009-01-29 | 2011-09-27 | NEC Corporation | Apparatus for generating a representative feature vector for a time interval |
- KR20180028198 * | 2016-09-08 | 2018-03-16 | Yonsei University Industry-Academic Cooperation Foundation | Image processing method and apparatus for predicting a dangerous situation using real-time video, and method and server for predicting a dangerous situation using the same |
- KR20190024801 * | 2017-08-30 | 2019-03-08 | Baidu Online Network Technology (Beijing) Co., Ltd. | Similar video retrieval method, apparatus, device, and storage medium |
- KR20190106865 * | 2019-08-27 | 2019-09-18 | LG Electronics Inc. | Video retrieval method and video retrieval terminal |
- JP2019212308 * | 2018-06-01 | 2019-12-12 | NAVER Corporation | Video service providing method and service server using the same |
-
2019
- 2019-12-20 KR KR1020190172302A patent/KR102261928B1/ko active IP Right Grant
-
2020
- 2020-09-04 WO PCT/KR2020/011973 patent/WO2021125501A1/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR102261928B1 (ko) | 2021-06-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20903253 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20903253 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.12.2022) |
|