WO2013183978A1 - Image processing device and method - Google Patents

Image processing device and method

Info

Publication number
WO2013183978A1
WO2013183978A1 (PCT/KR2013/006896)
Authority
WO
WIPO (PCT)
Prior art keywords
image
subject
background image
unit
information
Prior art date
Application number
PCT/KR2013/006896
Other languages
French (fr)
Korean (ko)
Inventor
구선회
Original Assignee
주식회사 마이씨에프
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 마이씨에프
Publication of WO2013183978A1 publication Critical patent/WO2013183978A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Definitions

  • the present invention relates to an image processing apparatus and method, and more particularly, to an image processing apparatus and method that can reduce video data capacity without having to encode the video, and that can also facilitate data analysis of the video.
  • this technique is limited in how far it can reduce the data capacity of a video, since it ultimately produces an encoded video file.
  • the present invention has been made to solve the above problems, and an object of the present invention is to provide an image processing apparatus and method capable of reducing video data capacity without encoding a video and of facilitating data analysis of the video.
  • an image input unit through which image frames are sequentially input;
  • a subject extracting unit which detects and extracts a moving subject or a specific subject from an image frame input through the image input unit;
  • a background image extracting unit extracting a background image, which is a region other than the subject extracted by the subject extracting unit, from the image frame as an image;
  • an object generator configured to objectize the subject and the background image extracted by the subject extractor and the background image extractor, respectively, and to generate combination information and frame information of the objectized subject and background image;
  • a first storage unit storing the objectized subject and the background image;
  • a second storage unit having a database storing the combination information and frame information of the objectized subject and background image; and
  • a controller configured to control the subject extractor, the background image extractor, and the object generator so as to extract the subject and the background image from the image frames input through the image input unit, objectize them, and generate the combination information and frame information of the objectized subject and background image, the controller further storing the objectized subject and background image in the first storage unit and the combination information and frame information in the database of the second storage unit.
  • preferably, the apparatus further includes an image combiner that combines the objectized subject and background image stored in the first storage unit according to the combination information and frame information stored in the database of the second storage unit, and a playback unit that receives and reproduces the image combined by the image combiner; when a playback command is input, the controller controls the image combiner and the playback unit so that the objectized subject and background image stored in the first storage unit are combined according to the stored combination information and frame information and the combined image is reproduced.
  • the apparatus may further include an image analysis information generator that generates image analysis information associated with the objectized subject and background image, and an image analyzer that analyzes characteristics of the image reproduced through the playback unit based on the image analysis information; the controller stores the image analysis information generated by the image analysis information generator in the database of the second storage unit and, when an image analysis command is input, controls the image analyzer to analyze the characteristics of the reproduced image based on the stored image analysis information.
  • the image combiner may combine the images by substituting the combination information and the frame information into attribute values of the objectized subject and background image.
  • the combination information includes at least one of the arrangement order, frame position, size, brightness, exposure time, and other effects of the objectized subject and background image, and the frame information includes information indicating which image elements should be combined at which positions during image playback.
  • the image analysis information includes at least one of a title characterizing the objectized subject and background image, the shooting time, the shooting period, and the color, position, and movement direction of the subject.
  • the image processing method includes: sequentially receiving image frames; detecting and extracting a moving subject or a specific subject from the image frames; extracting, as an image, the background image, which is the region of the image frame other than the extracted subject; objectizing the extracted subject and background image, respectively, and generating combination information and frame information of the objectized subject and background image; and storing the objectized subject and background image in a first storage unit and the combination information and frame information in a database of a second storage unit.
  • the method may further include generating image analysis information related to the objectized subject and background image, storing the image analysis information in the database of the second storage unit, and, when an image analysis command is input, analyzing the characteristics of the reproduced image based on the image analysis information stored in the database of the second storage unit.
  • the combining step may combine the images by substituting the combination information and the frame information into the attribute values of the objectized subject and background image.
  • the combination information includes at least one of the arrangement order, frame position, size, brightness, exposure time, and other effects of the objectized subject and background image, and the frame information includes information indicating which image elements should be combined at which positions during image playback.
  • the image analysis information includes at least one of a title characterizing the objectized subject and background image, the shooting time, the shooting period, and the color, position, and movement direction of the subject.
  • FIG. 1 is a control block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present invention.
  • FIG. 1 is a control block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
  • an image processing apparatus may include an image input unit 110, a subject extractor 120, a background image extractor 130, an object generator 140, an image analysis information generator 150, a first storage unit 160, a second storage unit 170, an image combiner 180, a playback unit 190, an image analyzer 210, and a controller 220.
  • such an image processing device may be a digital video recorder (DVR), which converts an analog video signal input through a camera into a digital signal and compresses and restores the video using MPEG (Moving Picture Experts Group), an international video compression standard, so that it can record and play back for long periods.
  • a DVR provides various selection menu functions such as motion detection, scheduled recording, camera control, image magnification, and editing, and can store data semi-permanently.
  • DVRs can be applied to video conferencing, video-based education, or remote medical examination, and are widely used as security surveillance systems that capture, compress, and store images from specific locations in security-sensitive sites such as banks and military areas.
  • the image processing apparatus may be applied to a personal computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a mobile communication terminal, and the like, in addition to a DVR.
  • the image input unit 110 is the part through which image frames are input; images captured by a moving-image photographing means (not shown) such as a video camera, a digital camera, or a camera phone are input sequentially.
  • the image may be either an analog image or a digital image, but it is assumed here that it has been converted into a digital image signal before being input.
  • the image signals converted into digital signals are sequentially input through the image input unit 110 in units of frames.
  • the subject extractor 120 detects and extracts a moving subject or a specific subject (e.g., a person) from the image frames input through the image input unit 110.
  • the moving subject may be extracted by motion tracking.
  • motion tracking computes a motion vector from the difference between two consecutive frame images and tracks the subject using that vector. Specifically, to obtain the motion vector, feature points are set in the currently input image frame, and their coordinates are compared with those in the previously input frame to determine in which direction, and by how much, they have moved. The feature points may be set by the user, or set automatically by comparison with the previously input frame.
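The feature-point comparison described above can be sketched in a few lines of Python (a toy illustration with hypothetical helper names, not the patent's implementation; production systems typically use optical-flow routines from a library such as OpenCV):

```python
def motion_vector(prev_points, curr_points):
    """Average displacement of matched feature points between two consecutive frames."""
    if len(prev_points) != len(curr_points) or not prev_points:
        raise ValueError("need equal-length, non-empty point lists")
    dx = sum(c[0] - p[0] for p, c in zip(prev_points, curr_points)) / len(prev_points)
    dy = sum(c[1] - p[1] for p, c in zip(prev_points, curr_points)) / len(prev_points)
    return (dx, dy)

def track_subject(bbox, vector):
    """Shift a subject bounding box (x, y, w, h) by the motion vector."""
    x, y, w, h = bbox
    return (x + vector[0], y + vector[1], w, h)

# Feature points that moved 5 px right and 2 px down between frames.
prev_pts = [(10, 10), (20, 15), (30, 30)]
curr_pts = [(15, 12), (25, 17), (35, 32)]
v = motion_vector(prev_pts, curr_pts)   # (5.0, 2.0)
moved = track_subject((10, 10, 8, 8), v)
```

The averaged displacement is a deliberate simplification; real trackers match each point independently and reject outliers.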
  • the method of extracting a specific subject includes setting a feature point in a currently input image frame and extracting a specific subject region based on data of feature point coordinates.
  • the background image extractor 130 extracts, as an image, the background image, which is the region of the image frame input through the image input unit 110 other than the subject extracted by the subject extractor 120. To this end, the subject is subtracted from the entire image frame.
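As a toy illustration of the subtraction step (assuming the frame is a 2D array of pixel values and the subject has already been localized to a bounding box, which the patent does not prescribe):

```python
def extract_background(frame, subject_box):
    """Return a copy of the frame with the subject region blanked out (None marks holes)."""
    x, y, w, h = subject_box
    background = [row[:] for row in frame]  # copy so the input frame is untouched
    for r in range(y, y + h):
        for c in range(x, x + w):
            background[r][c] = None
    return background

frame = [[1] * 6 for _ in range(4)]        # toy 4x6 "image" of constant pixels
bg = extract_background(frame, (2, 1, 3, 2))
```

A real implementation would work on per-pixel subject masks rather than rectangles, but the principle, background = frame minus subject region, is the same.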
  • the object generator 140 may objectize the subject and the background image extracted by the subject extractor 120 and the background image extractor 130, respectively, and generate combination information and frame information of the objectized subject and background image.
  • the combination information is the information used to generate an image frame by combining the objectized subject and the background image, for example their arrangement order, frame position, size, brightness, exposure time, and other effects.
  • the frame information indicates which image elements should be combined at which positions during image playback, for example, to which frames, and to how many frames, the subject and background image that make up the video belong, and in which region of each frame's screen they should be located.
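One plausible way to model the combination information and frame information as records (field names are illustrative, not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class CombinationInfo:
    """How an objectized element is placed when a frame is rebuilt."""
    order: int                    # arrangement order (z-order)
    position: tuple               # frame position (x, y)
    size: tuple                   # (width, height)
    brightness: float = 1.0
    exposure_time: float = 0.0

@dataclass
class FrameInfo:
    """Which image elements make up a given frame, and where on its screen."""
    frame_number: int
    element_ids: list = field(default_factory=list)   # subject/background object ids
    placements: dict = field(default_factory=dict)    # element id -> screen region

info = FrameInfo(frame_number=1,
                 element_ids=["subject_1", "background_1"],
                 placements={"subject_1": (120, 80)})
```

Because these records are small compared with pixel data, storing one per frame costs far less than storing a full image per frame.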
  • the image analysis information generator 150 generates specific information related to the objectized subject and the background image.
  • the specific information includes a title characterizing the subject and background image, the shooting time, the shooting period, and the color, position, and movement direction of the subject, and is used for image analysis.
  • the first storage unit 160 stores the subject and background image objectized by the object generator 140, and the second storage unit 170 has a database that stores the combination information, frame information, and image analysis information of the objectized subject and background image.
  • the image combiner 180 reads the objectized subject and background image stored in the first storage unit 160 and the combination information and frame information stored in the database of the second storage unit 170, and combines the objectized subject and background image according to that information.
  • an image frame is generated by substituting the read combination information and frame information into the corresponding attribute values of the objectized subject and background image.
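The attribute-substitution step might be sketched as follows (a hypothetical, simplified model in which each objectized element is a dictionary of attributes):

```python
def combine_frame(objects, combination, frame_info):
    """Rebuild one frame by substituting stored info into object attributes."""
    frame = []
    for oid in frame_info["element_ids"]:
        obj = dict(objects[oid])             # objectized subject / background image
        obj.update(combination[oid])         # substitute position, size, brightness...
        frame.append(obj)
    frame.sort(key=lambda o: o["order"])     # respect the arrangement order
    return frame

objects = {"bg": {"pixels": "<background image>"},
           "person": {"pixels": "<subject image>"}}
combination = {"bg": {"order": 0, "position": (0, 0)},
               "person": {"order": 1, "position": (40, 20)}}
frame = combine_frame(objects, combination, {"element_ids": ["person", "bg"]})
```

Note that no decoding happens here: playback only re-applies lightweight attributes to already-stored image objects, which is the point made later in the description.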
  • the playback unit 190 receives and plays the image combined by the image combiner 180.
  • the playback unit 190 may typically be Flash Player, Silverlight Runtime, or a web browser itself.
  • alternatively, a program serving as the playback unit 190 may be separately written in a language such as C++, C#, or Visual Basic.
  • the image analyzer 210 analyzes the characteristics of the image reproduced through the playback unit 190 based on the image analysis information stored in the second storage unit 170; that is, it can analyze the title characterizing the reproduced image as well as its shooting time, shooting period, and the color, position, and movement direction of the subject.
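Because the analysis information lives in a database rather than inside an encoded video file, feature queries reduce to simple record filtering; a minimal sketch with illustrative field names:

```python
def find_images(database, **criteria):
    """Filter stored image-analysis records by attribute (title, color, direction...)."""
    return [rec for rec in database
            if all(rec.get(k) == v for k, v in criteria.items())]

analysis_db = [
    {"title": "entrance cam", "subject_color": "red", "direction": "left"},
    {"title": "entrance cam", "subject_color": "blue", "direction": "right"},
]
hits = find_images(analysis_db, subject_color="red")
```

In a deployed system this would be an SQL query against the second storage unit's database; the in-memory filter above only illustrates why no decoding pass over the video is needed to answer such questions.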
  • the controller 220 controls the overall operation of the device.
  • the controller 220 controls the subject extractor 120 and the background image extractor 130 to extract the subject and the background image from the image frame input through the image input unit 110.
  • the controller 220 controls the object generator 140 to objectize the extracted subject and background image and to generate the combination information and frame information of the objectized subject and background image.
  • the controller 220 stores the objectized subject and background image in the first storage unit 160, and stores their combination information and frame information in the database of the second storage unit 170. That is, the controller 220 does not store the entire image frame as one file, but stores the objectized subject and background image, the combination information, and the frame information separately.
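The storage split can be illustrated with a toy sketch: heavyweight image objects are stored once, while only lightweight per-frame records accumulate (names and structures are hypothetical):

```python
first_storage = {}    # objectized subject and background images (stored once)
second_storage = []   # database rows: combination info and frame info per frame

def store_frame(frame_no, subject, background, combination, frame_info):
    """Store the image objects once, plus a lightweight record per frame,
    instead of writing every frame out as a full image file."""
    first_storage.setdefault("subject_1", subject)
    first_storage.setdefault("background_1", background)
    second_storage.append({"frame": frame_no,
                           "combination": combination,
                           "frame_info": frame_info})

# Three frames that share one subject object and one background object.
for n in range(3):
    store_frame(n, "<subject image>", "<background image>",
                {"position": (n * 5, 0)},
                {"elements": ["subject_1", "background_1"]})
```

For long surveillance footage with a mostly static background, this is where the capacity saving claimed by the invention comes from: pixel data does not grow with frame count, only the per-frame records do.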
  • the controller 220 controls the image analysis information generator 150 to generate image analysis information such as a title associated with the objectized subject and background image, the shooting time, the shooting period, and the color, position, and movement direction of the subject.
  • the generated image analysis information is stored in a database of the second storage unit 170.
  • when a playback command is input, the controller 220 does not play back the entire image frame as one file; instead, it controls the image combiner 180 to generate image frames by combining the objectized subject and background image stored in the first storage unit 160 according to the combination information and frame information stored in the database of the second storage unit 170.
  • the controller 220 allows the combined image frame to be played back through the playback unit 190.
  • when an image analysis command is input, the controller 220 controls the image analyzer 210 to analyze the characteristics of the image reproduced through the playback unit 190 based on the image analysis information stored in the second storage unit 170.
  • FIG. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present invention.
  • image frames are sequentially input through the image input unit 110 (S110).
  • the subject extractor 120 detects and extracts a moving subject or a specific subject from sequentially input image frames, and the background image extractor 130 extracts a background image, which is a region other than the subject (S120). Since a specific extraction method has been described above, a description thereof will be omitted.
  • the object generator 140 objectizes the extracted subject and the background image, respectively (S130), and generates combination information and frame information of the objectized subject and the background image (S140).
  • the controller 220 stores the objectized subject and background image in the first storage unit 160, and stores the combination information and frame information of the objectized subject and background image in the database of the second storage unit 170 (S150).
  • the image analysis information generator 150 generates image analysis information such as a title associated with the objectized subject and background image, the shooting time, the shooting period, and the color, position, and movement direction of the subject (S160). Then, the controller 220 stores the generated image analysis information in the database of the second storage unit 170 (S170).
  • the image combiner 180 generates an image frame by combining the objectized subject and background image stored in the first storage unit 160 according to the combination information and frame information stored in the database of the second storage unit 170 (S190).
  • the controller 220 allows the combined image frame to be played back through the playback unit 190 (S210).
  • the image analyzer 210 analyzes the characteristics of the image reproduced through the playback unit 190 based on the image analysis information stored in the second storage unit 170, such as the title, shooting time, shooting period, and the color, position, and movement direction of the subject (S230).
  • according to the present invention, when a plurality of image frames are sequentially input to the image processing apparatus, as when video must be recorded continuously for a certain time through a video recording means such as a video camera, digital camera, or camera phone, or when a large video file must be downloaded, the subject and the background image are first detected and extracted from the image frames.
  • the extracted subject and background image are each objectized and stored, and the combination information and frame information are stored separately in the database.
  • as a result, video data capacity can be reduced without encoding the video, and no separate decoding process is required during playback.
  • the invention can also be embodied as computer readable code on a computer readable recording medium.
  • the computer-readable recording medium includes all kinds of recording devices that store data readable by a computer system. Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, as well as implementations in the form of carrier waves (for example, transmission over the Internet).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an image processing device and method, and more specifically, to an image processing device and method capable of reducing a data capacity of a video without encoding the video and easily analyzing data of the video. An image processing device according to one embodiment of the present invention comprises: an image input unit for sequentially inputting image frames therethrough; a subject extraction unit for sensing and extracting a moving subject or a specific subject from the image frames inputted through the image input unit; a background image extraction unit for extracting, as an image, a background image that is the remaining area excluding the subject extracted by the subject extraction unit from the image frames; an object generation unit for objectizing the subject and the background image respectively extracted by the subject extraction unit and the background image extraction unit and generating combination information and frame information on the objectized subject and the background image; a first storage unit in which the objectized subject and the background image are stored; a second storage unit having a database in which the combination information and the frame information on the objectized subject and the background image are stored; and a control unit for controlling the subject extraction unit, the background image extraction unit, and the object generation unit to extract the subject and the background image from the image frames inputted through the image input unit, objectize the extracted subject and background image and generate the combination information and the frame information on the objectized subject and the background image, storing the objectized subject and the background image in the first storage unit, and storing the combination information and the frame information on the objectized subject and the background image in the database of the second storage unit.

Description

Image processing apparatus and method
The present invention relates to an image processing apparatus and method, and more particularly, to an image processing apparatus and method that can reduce video data capacity without having to encode the video, and that can also facilitate data analysis of the video.
In general, large video files such as movies and animations are difficult to watch in real time over wired or wireless Internet because of their size, and are instead downloaded over a long period and then watched. Downloading a large video file takes a long time and requires a great deal of storage space on the computer.
In addition, in the case of a surveillance camera that records for long periods, the longer the recording time, the larger the data capacity of the captured video, so there is a limit to how long video files can be stored continuously.
To solve this problem, methods have been disclosed that reduce the total file size by compressing every frame of a video according to a compression standard such as MPEG (Moving Picture Experts Group).
However, this technique is limited in how far it can reduce the data capacity of the video, since it ultimately produces an encoded video file.
The present invention has been made to solve the above problems, and an object of the present invention is to provide an image processing apparatus and method capable of reducing video data capacity without encoding a video and of facilitating data analysis of the video.
However, the objects of the present invention are not limited to those mentioned above; other objects not mentioned will be clearly understood by those skilled in the art from the following description.
To achieve the above object, an image processing apparatus according to an embodiment of the present invention includes: an image input unit through which image frames are sequentially input; a subject extractor that detects and extracts a moving subject or a specific subject from the image frames input through the image input unit; a background image extractor that extracts, as an image, the background image, which is the region of the image frame other than the subject extracted by the subject extractor; an object generator that objectizes the extracted subject and background image and generates combination information and frame information of the objectized subject and background image; a first storage unit that stores the objectized subject and background image; a second storage unit having a database that stores the combination information and frame information of the objectized subject and background image; and a controller that controls the subject extractor, the background image extractor, and the object generator so as to extract and objectize the subject and background image and generate their combination information and frame information, and that stores the objectized subject and background image in the first storage unit and the combination information and frame information in the database of the second storage unit.
Here, the apparatus preferably further includes an image combiner that combines the objectized subject and background image stored in the first storage unit according to the combination information and frame information stored in the database of the second storage unit, and a playback unit that receives and reproduces the image combined by the image combiner; when a playback command is input, the controller controls the image combiner and the playback unit to combine the stored objectized subject and background image according to the stored combination information and frame information and to reproduce the combined image.
In addition, the apparatus preferably further includes an image analysis information generator that generates image analysis information related to the objectized subject and background image, and an image analyzer that analyzes the characteristics of the image reproduced through the playback unit based on that information; the controller stores the generated image analysis information in the database of the second storage unit and, when an image analysis command is input, controls the image analyzer to analyze the characteristics of the reproduced image based on the stored image analysis information.
The image combiner may combine the images by substituting the combination information and the frame information into the attribute values of the objectized subject and background image.
The combination information includes at least one of the arrangement order, frame position, size, brightness, exposure time, and other effects of the objectized subject and background image, and the frame information includes information indicating which image elements should be combined at which positions during playback.
The image analysis information includes at least one of a title characterizing the objectized subject and background image, the shooting time, the shooting period, and the color, position, and movement direction of the subject.
To achieve the above object, an image processing method according to an embodiment of the present invention includes: sequentially receiving image frames; detecting and extracting a moving subject or a specific subject from the image frames; extracting, as an image, the background image, which is the region of the image frame other than the extracted subject; objectizing the extracted subject and background image, respectively, and generating combination information and frame information of the objectized subject and background image; and storing the objectized subject and background image in a first storage unit and the combination information and frame information in a database of a second storage unit.
Here, the method preferably further includes, when a playback command is input, combining the objectized subject and background image stored in the first storage unit according to the combination information and frame information stored in the database of the second storage unit, and reproducing the combined image.
The method may further include generating image analysis information related to the objectized subject and background image, storing the image analysis information in the database of the second storage unit, and, when an image analysis command is input, analyzing the characteristics of the reproduced image based on the stored image analysis information.
The combining step may combine the images by substituting the combination information and the frame information into the attribute values of the objectized subject and background image.
The combination information includes at least one of the arrangement order, frame position, size, brightness, exposure time, and other effects of the objectized subject and background image, and the frame information includes information indicating which image elements should be combined at which positions during playback.
The image analysis information includes at least one of a title characterizing the objectized subject and background image, the shooting time, the shooting period, and the color, position, and movement direction of the subject.
상기한 바와 같이 본 발명에 의한 영상 처리 장치 및 방법에 따르면, 동영상을 인코딩할 필요 없이 동영상 데이터 용량을 줄일 수 있고, 또한 동영상의 데이터 분석을 용이하게 할 수 있는 효과가 있다. According to the image processing apparatus and method according to the present invention as described above, it is possible to reduce the video data capacity without having to encode the video, and to facilitate the data analysis of the video.
도 1은 본 발명의 실시예에 따른 영상 처리 장치의 제어 블록도이다.1 is a control block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
도 2는 본 발명의 실시예에 따른 영상 처리 방법을 나타낸 흐름도이다. 2 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present invention.
기타 실시예들의 구체적인 사항들은 상세한 설명 및 도면들에 포함되어 있으며, 본 발명의 이점 및 특징, 그리고 그것들을 달성하는 방법은 첨부되는 도면과 함께 상세하게 후술되어 있는 실시예들을 참조하면 명확해질 것이다.Specific details of other embodiments are included in the detailed description and drawings, and the advantages and features of the present invention and methods for achieving them will be apparent with reference to the embodiments described below in detail with the accompanying drawings.
그러나 본 발명은 이하에서 개시되는 실시예들에 한정되는 것이 아니라 서로 다른 다양한 형태로 구현될 수 있으며, 단지 본 실시예들은 본 발명의 개시가 완전하도록 하고, 본 발명이 속하는 기술 분야에서 통상의 지식을 가진 자에게 발명의 범주를 완전하게 알려주기 위해 제공되는 것이며, 본 발명은 청구항의 범주에 의해 정의될 뿐이다. 명세서 전체에 걸쳐 동일 참조 부호는 동일 구성 요소를 지칭한다.However, the present invention is not limited to the embodiments disclosed below, but may be embodied in various different forms, and the present embodiments merely make the disclosure of the present invention complete and common knowledge in the technical field to which the present invention belongs. It is provided to fully inform the person having the scope of the invention, which is defined only by the scope of the claims. Like reference numerals refer to like elements throughout.
이하, 첨부된 블록도 또는 처리 흐름도에 대한 도면들을 참고하여 본 발명의 실시예에 따른 영상 처리 장치 및 방법에 대해 설명하도록 한다. Hereinafter, an image processing apparatus and a method according to an exemplary embodiment of the present invention will be described with reference to the accompanying block diagram or the drawings of the processing flowchart.
도 1은 본 발명의 실시예에 따른 영상 처리 장치의 제어 블록도이다. 1 is a control block diagram of an image processing apparatus according to an exemplary embodiment of the present invention.
도 1에 도시된 바와 같이, 본 발명의 실시예에 따른 영상 처리 장치는 영상 입력부(110), 피사체 추출부(120), 배경 영상 추출부(130), 객체 생성부(140), 영상 분석 정보 생성부(150), 제1 저장부(160), 제2 저장부(170), 영상 조합부(180), 재생부(190), 영상 분석부(210) 및 제어부(220)를 포함한다. As illustrated in FIG. 1, an image processing apparatus according to an exemplary embodiment of the present invention may include an image input unit 110, a subject extractor 120, a background image extractor 130, an object generator 140, and image analysis information. The generator 150, the first storage unit 160, the second storage unit 170, the image combiner 180, the playback unit 190, the image analyzer 210, and the controller 220 are included.
이러한 영상 처리 장치는 카메라를 통해 입력된 아날로그 방식의 영상 신호를 디지털 영상 신호로 전환하여 동영상 국제 압축 방식인 MPEG(Moving Picture Experts Group)으로 영상을 압축/복원하여 장시간 녹화 및 재생하여 볼 수 있는 시스템인 디지털 영상 저장 장치(Digital Video Recorder ; 이하 DVR)일 수 있다. DVR은 녹화 뿐만 아니라 동작 감지(Motion Detction), 연결 녹화, 카메라 제어, 화상 확대, 편집기 등의 다양한 선택 메뉴 기능이 있으며, 데이터를 반영구적으로 저장하는 기능을 갖추고 있다. 특히, DVR은 화상 회의나 화상 교육 또는 원격 진찰 등에 응용될 수 있으며, 특히 은행, 군사지역 등 보안이 필요한 건물이나 공간에 설치되어 특정 위치에서의 영상을 촬영한 후 이를 압축 및 저장하는 보안 감시 시스템으로 널리 사용되고 있다. 이러한 DVR은 종래의 아날로그 방식을 사용하는 CCTV(Closed Circuit Television)와는 달리 영상을 용이하게 저장하고 관리할 수 있으며, 또한 필요한 영상을 쉽게 검색하여 원하는 시간대에 저장된 영상을 짧은 시간에 확인할 수 있다. 물론, 본 영상 처리 장치는 DVR뿐만 아니라, 동영상 재생 프로그램이 설치된 개인 컴퓨터, PDA(Personal Digital Assistant), PMP(Portable Multimedia Player), 이동통신 단말기 등에 적용될 수 있다.Such an image processing device converts an analog image signal input through a camera into a digital image signal, and compresses / restores the image using MPEG (Moving Picture Experts Group), an international video compression method, to record and play back for a long time. Digital Video Recorder (hereinafter referred to as DVR). In addition to recording, DVR has various selection menu functions such as motion detection, connection recording, camera control, image magnification, editor, etc., and has the function to save data semi-permanently. In particular, DVR can be applied to video conferencing, video education or remote medical examination. Especially, it is installed in security buildings such as banks, military areas, etc., and it is a security surveillance system that captures and compresses and saves images from a specific location. Widely used. Unlike conventional CCTV (Closed Circuit Television) using the DVR, such a DVR can easily store and manage an image, and also can easily search for a necessary image and check the image stored in a desired time frame in a short time. 
Of course, the image processing apparatus may be applied to a personal computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a mobile communication terminal, and the like, in addition to a DVR.
영상 입력부(110)는 영상 프레임이 입력되는 부분으로서, 비디오 카메라, 디지털 카메라, 카메라 폰 등의 동영상 촬영 수단(미도시)으로부터 촬영된 영상이 순차적으로 입력된다. 여기서, 영상은 아날로그 영상 또는 디지털 영상을 모두 포함하지만, 디지털 영상 신호로 변환되어 입력되는 것을 전제로 한다. 디지털 신호로 변환된 영상 신호들이 프레임 단위로 순차적으로 영상 입력부(110)를 통해 입력된다. The image input unit 110 is a portion in which an image frame is input, and the images photographed from a moving image photographing means (not shown) such as a video camera, a digital camera, and a camera phone are sequentially input. Here, the image includes both an analog image or a digital image, but it is assumed that the image is converted into a digital image signal and input. The image signals converted into digital signals are sequentially input through the image input unit 110 in units of frames.
피사체 추출부(120)는 영상 입력부(110)를 통해 입력되는 영상 프레임에서 움직이는 피사체 혹은 특정 피사체(예를 들면, 사람)를 감지하여 추출한다. The subject extractor 120 detects and extracts a moving subject or a specific subject (eg, a person) from an image frame input through the image input unit 110.
움직이는 피사체는 모션 트래킹(motion tracking)에 의하여 추출할 수 있다. 모션 트래킹은 연속되는 두 프레임 영상 사이의 차이를 판별하여 움직임 벡터를 계산하고, 움직임 벡터에 의하여 피사체를 추적한다. 구체적으로 움직임 벡터를 구하기 위해, 현재 입력되는 영상 프레임에서 특징점을 설정하고, 이전에 입력된 영상 프레임과 비교하여 특징점 좌표의 데이터가 어느 방향으로 얼마만큼 움직였는가를 계산한다. 이때, 특징점은 사용자가 설정할 수도 있으며, 이전에 입력된 영상 프레임과의 비교에 의하여 자동으로 설정될 수도 있다.The moving subject may be extracted by motion tracking. Motion tracking calculates a motion vector by determining a difference between two consecutive frame images, and tracks a subject by the motion vector. Specifically, in order to obtain a motion vector, a feature point is set in a currently input image frame, and how much data in which direction the feature point coordinates are moved is compared with a previously input image frame. In this case, the feature point may be set by the user, or may be automatically set by comparison with a previously input image frame.
특정 피사체를 추출하는 방법으로는 현재 입력되는 영상 프레임에서 특징점을 설정하고, 특징점 좌표의 데이터를 기초로 특정 피사체 영역을 추출하는 방법이 포함된다. The method of extracting a specific subject includes setting a feature point in a currently input image frame and extracting a specific subject region based on data of feature point coordinates.
배경 영상 추출부(130)는 영상 입력부(110)를 통해 입력되는 영상 프레임에서 피사체 추출부(120)에 의해 추출된 피사체를 제외한 나머지 영역인 배경 영상을 이미지로 추출한다. 이를 위해, 전체 영상 프레임에서 피사체를 뺄셈 연산하는 과정이 포함된다. The background image extractor 130 extracts a background image, which is a region other than a subject extracted by the object extractor 120, from an image frame input through the image input unit 110 as an image. To this end, a process of subtracting the subject from the entire image frame is included.
객체 생성부(140)는 피사체 추출부(120) 및 배경 영상 추출부(130)에서 각각 추출된 피사체 및 배경 영상을 객체화시키고, 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보를 생성한다. 여기서, 결합 정보는 객체화된 피사체 및 배경 영상을 조합시켜서 하나의 영상 프레임을 생성하기 위한 정보로서, 예를 들면 피사체 및 배경 영상의 배열 순서나 프레임 위치, 크기, 밝기, 노출 시간, 기타 효과 등의 정보들이다. 프레임 정보는 영상 재생시 어느 위치에 어떤 영상 요소들을 조합해야 하는지를 나타내는 정보로서, 예를 들면, 영상의 구성 요소가 되는 피사체 및 배경 영상이 몇 번째 프레임과 연결되어야 하는지, 또 해당 프레임의 화면에서 어느 영역에 위치해야 하는지 등의 정보들이다. The object generator 140 may objectize the subject and the background image extracted by the subject extractor 120 and the background image extractor 130, respectively, and generate combination information and frame information of the object and the subject image. Here, the combined information is information for generating an image frame by combining the object and the background image, for example, the arrangement order or frame position, size, brightness, exposure time, and other effects of the subject and the background image. Information. The frame information is information indicating which image elements should be combined at which position during image playback. For example, the frame and the number of frames to which the subject and the background image, which are the components of the image, should be connected, and which of the screens of the frame are used. Information such as whether to be located in the area.
영상 분석 정보 생성부(150)는 객체화된 피사체 및 배경 영상과 관련된 특정 정보를 생성한다. 특정 정보는 피사체 및 배경 영상을 특징하는 타이틀을 비롯해 촬영 시간, 촬영 기간, 피사체의 색상, 위치, 운동 방향 등을 포함하며, 영상 분석을 위해 이용된다. The image analysis information generator 150 generates specific information related to the objectized subject and the background image. The specific information includes a title characterizing a subject and a background image, a shooting time, a shooting period, a color of the subject, a position, a direction of movement, and the like, and is used for image analysis.
제1 저장부(160)는 객체 생성부(140)에 의해 객체화된 피사체 및 배경 영상이 저장되며, 제2 저장부(170)는 객체화된 피사체 및 배경 영상의 결합 정보, 프레임 정보 및 영상 분석 정보가 저장되는 데이터 베이스를 갖는다. The first storage unit 160 stores the object and the background image objectified by the object generator 140, and the second storage unit 170 combines the information, the frame information, and the image analysis information of the object and the background image. Has a database that is stored.
영상 조합부(180)는 제1 저장부(160)에 저장된 객체화된 피사체 및 배경 영상과, 제2 저장부(170)의 데이터베이스에 저장된 결합 정보 및 프레임 정보를 독출하고, 독출된 결합 정보 및 프레임 정보에 따라 객체화된 피사체 및 배경 영상을 조합한다. 구체적으로, 독출된 결합 정보 및 프레임 정보를 객체화된 피사체 및 배경 영상의 해당 속성값에 대입함으로써 영상 프레임을 생성한다. The image combination unit 180 reads the objectized subject and background image stored in the first storage unit 160 and the combined information and frame information stored in the database of the second storage unit 170, and reads the combined information and frame. The object and the background image objectified according to the information are combined. Specifically, an image frame is generated by substituting the read combination information and frame information into corresponding attribute values of the objectized object and the background image.
재생부(190)는 영상 조합부(180)에서 조합한 영상을 전달받아 재생시키는 역할을 한다. 재생부(190)는 일반적으로 FlashPlayer나 SilverlightRuntime, 또는 웹 브라우저 그 자체가 될 수도 있다. 또한, C++, C#, VisualBasic 등과 같은 프로그램으로 재생부(190) 역할을 하는 프로그램을 별도로 제작할 수도 있다. The playback unit 190 plays a role of receiving and playing the image combined by the image combination unit 180. The playback unit 190 may generally be a FlashPlayer, SilverlightRuntime, or a web browser itself. In addition, a program serving as the play unit 190 may be separately manufactured by a program such as C ++, C #, VisualBasic, or the like.
영상 분석부(210)는 제2 저장부(170)에 저장된 영상 분석 정보를 기초로 재생부(190)를 통해 재생되는 영상의 특징을 분석한다. 즉, 재생부(190)를 통해 재생되는 영상을 특징하는 타이틀을 비롯해 촬영 시간, 촬영 기간, 피사체의 색상, 위치, 운동 방향 등을 분석할 수 있다.The image analyzer 210 analyzes the characteristics of the image reproduced through the playback unit 190 based on the image analysis information stored in the second storage unit 170. That is, the image capturing time, the shooting period, the color of the subject, the position, the movement direction, etc., as well as the title characterizing the image reproduced by the playback unit 190 can be analyzed.
제어부(220)는 장치 내의 전반적인 동작을 제어한다. 제어부(220)는 영상 입력부(110)를 통해 입력되는 영상 프레임에서 피사체 및 배경 영상을 추출하도록 피사체 추출부(120) 및 배경 영상 추출부(130)를 제어한다. The controller 220 controls the overall operation of the device. The controller 220 controls the subject extractor 120 and the background image extractor 130 to extract the subject and the background image from the image frame input through the image input unit 110.
그리고, 제어부(220)는 추출된 피사체 및 배경 영상을 객체화시키고, 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보를 생성하도록 객체 생성부(140)를 제어한다. The controller 220 controls the object generator 140 to objectize the extracted subject and the background image and to generate the combined information and the frame information of the object and the background image.
제어부(220)는 객체화된 피사체 및 배경 영상을 제1 저장부(160)에 저장하는 한편, 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보를 제2 저장부(170)의 데이터베이스에 저장한다. 즉, 제어부(220)는 전체 영상 프레임을 하나의 파일로 저장하지 아니하고, 객체화된 피사체 및 배경 영상과, 결합 정보 및 프레임 정보를 분리하여 저장한다. The controller 220 stores the objectized subject and the background image in the first storage unit 160, and stores the combined information and the frame information of the objectized subject and the background image in a database of the second storage unit 170. That is, the controller 220 does not store the entire image frame as one file, but separately stores the objectized object and the background image, combined information, and frame information.
또한, 제어부(220)는 객체화된 피사체 및 배경 영상과 관련된 타이틀을 비롯해 촬영 시간, 촬영 기간, 피사체의 색상, 위치, 운동 방향 등의 영상 분석 정보를 생성하도록 영상 분석 정보 생성부(150)를 제어하고, 생성된 영상 분석 정보를 제2 저장부(170)의 데이터베이스에 저장한다. In addition, the controller 220 controls the image analysis information generation unit 150 to generate image analysis information such as a title associated with the object and the background image, a shooting time, a shooting period, a color, a position, and a movement direction of the subject. The generated image analysis information is stored in a database of the second storage unit 170.
제어부(220)는 재생 명령이 입력되면, 전체 영상 프레임을 하나의 파일로 재생하도록 제어하는 것이 아니라, 제1 저장부(160)에 저장된 객체화된 피사체 및 배경 영상을 제2 저장부(170)의 데이터베이스에 저장된 결합 정보 및 프레임 정보에 따라 조합하여 영상 프레임을 생성하도록 영상 조합부(180)를 제어한다. 그리고, 제어부(220)는 조합된 영상 프레임이 재생부(190)를 통해 재생되도록 한다. When the playback command is input, the controller 220 does not control to play the entire image frame as one file, but instead of controlling the objectized object and the background image stored in the first storage unit 160 of the second storage unit 170. The image combination unit 180 is controlled to generate an image frame by combining the combination information and the frame information stored in the database. In addition, the controller 220 allows the combined image frame to be played back through the playback unit 190.
그리고, 제어부(220)는 영상 분석 명령이 입력되면, 제2 저장부(170)에 저장된 영상 분석 정보를 기초로 재생부(190)를 통해 재생되는 영상의 특징을 분석하도록 영상 분석부(210)를 제어한다. When the image analysis command is input, the controller 220 analyzes the characteristics of the image reproduced through the playback unit 190 based on the image analysis information stored in the second storage unit 170. To control.
도 2는 본 발명의 실시예에 따른 영상 처리 방법을 나타낸 흐름도이다.2 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present invention.
도 2를 참조하면, 먼저 영상 입력부(110)를 통해 영상 프레임이 순차적으로 입력된다(S110). Referring to FIG. 2, first, image frames are sequentially input through the image input unit 110 (S110).
그러면, 피사체 추출부(120)는 순차적으로 입력되는 영상 프레임에서 움직이는 피사체 혹은 특정 피사체를 감지하여 추출하고, 배경 영상 추출부(130)는 피사체를 제외한 나머지 영역인 배경 영상을 추출한다(S120). 구체적인 추출 방법은 상술하였으므로, 그 설명을 생략하기로 한다. Then, the subject extractor 120 detects and extracts a moving subject or a specific subject from sequentially input image frames, and the background image extractor 130 extracts a background image, which is a region other than the subject (S120). Since a specific extraction method has been described above, a description thereof will be omitted.
객체 생성부(140)는 추출된 피사체 및 배경 영상을 각각 객체화시키고(S130), 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보를 생성한다(S140). The object generator 140 objectizes the extracted subject and the background image, respectively (S130), and generates combination information and frame information of the objectized subject and the background image (S140).
제어부(220)는 객체화된 피사체 및 배경 영상을 제1 저장부(160)에 저장하는 한편, 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보를 제2 저장부(170)의 데이터베이스에 저장한다(S150). The controller 220 stores the objectized subject and the background image in the first storage unit 160, and stores the combined information and the frame information of the objectized subject and the background image in a database of the second storage unit 170 ( S150).
한편, 영상 분석 정보 생성부(150)는 객체화된 피사체 및 배경 영상과 관련된 타이틀을 비롯해 촬영 시간, 촬영 기간, 피사체의 색상, 위치, 운동 방향 등의 영상 분석 정보를 생성한다(S160). 그러면, 제어부(220)는 생성된 영상 분석 정보를 제2 저장부(170)의 데이터베이스에 저장한다(S170).On the other hand, the image analysis information generation unit 150 generates image analysis information such as a title associated with the object and the background image, the shooting time, the shooting period, the color of the subject, the position, the movement direction, etc. (S160). Then, the controller 220 stores the generated image analysis information in the database of the second storage unit 170 (S170).
재생 명령이 입력되면(S180), 영상 조합부(180)는 제1 저장부(160)에 저장된 객체화된 피사체 및 배경 영상을 제2 저장부(170)의 데이터베이스에 저장된 결합 정보 및 프레임 정보에 따라 조합하여 영상 프레임을 생성한다(S190). 그리고, 제어부(220)는 조합된 영상 프레임이 재생부(190)를 통해 재생되도록 한다(S210). When the play command is input (S180), the image combination unit 180 may convert the objectized subject and the background image stored in the first storage unit 160 according to the combined information and frame information stored in the database of the second storage unit 170. Combination generates an image frame (S190). In addition, the controller 220 allows the combined image frame to be played back through the playback unit 190 (S210).
한편, 영상 분석 명령이 입력되면(S220), 영상 분석부(210)는 제2 저장부(170)에 저장된 타이틀을 비롯해 촬영 시간, 촬영 기간, 피사체의 색상, 위치, 운동 방향 등의 영상 분석 정보를 기초로 재생부(190)를 통해 재생되는 영상의 특징을 분석한다(S230). Meanwhile, when an image analysis command is input (S220), the image analyzer 210 may analyze the image analysis information such as the title stored in the second storage unit 170, the shooting time, the shooting period, the color of the subject, the position, and the movement direction. Based on the analysis of the characteristics of the image reproduced through the playback unit 190 (S230).
이와 같이, 본 발명은 비디오 카메라, 디지털 카메라, 카메라 폰 등의 동영상 촬영 수단을 통해 일정 시간 지속적으로 영상을 촬영해야 하는 경우나 대용량의 동영상 파일을 다운로드해야 하는 경우 등, 영상 처리 장치에 복수 개의 영상 프레임이 순차적으로 입력될 때, 먼저 영상 프레임에서 피사체 및 배경 영상을 감지하여 추출한다. 그리고, 추출된 피사체 및 배경 영상을 각각 객체화하여 저장하며, 결합 정보 및 프레임 정보를 데이터베이스에 분리하여 저장한다. 이로써, 영상을 인코딩할 필요 없이 영상 데이터 용량을 줄일 수 있으며, 영상 재생시에도 별도의 디코딩 과정이 필요하지 않다. 또한, 영상 분석 정보를 생성하여 데이터베이스에 함께 저장함으로써 영상의 데이터 분석을 용이하게 할 수 있는 장점이 있다.As described above, the present invention provides a plurality of images to an image processing apparatus, such as a case where a video must be continuously recorded for a predetermined time through a video recording means such as a video camera, a digital camera, a camera phone, or a large video file must be downloaded. When the frames are sequentially input, the subject and the background image are first detected and extracted from the image frame. The extracted subject and the background image are respectively objectized and stored, and the combined information and the frame information are separately stored in the database. As a result, video data capacity can be reduced without encoding an image, and a separate decoding process is not required even when playing an image. In addition, there is an advantage that can easily analyze the data of the image by generating the image analysis information and stored in the database together.
본 발명은 또한 컴퓨터로 읽을 수 있는 기록매체에 컴퓨터가 읽을 수 있는 코드로서 구현하는 것이 가능하다. 컴퓨터가 읽을 수 있는 기록매체는 컴퓨터 시스템에 의하여 읽혀질 수 있는 데이터가 저장되는 모든 종류의 기록장치를 포함한다. 컴퓨터가 읽을 수 있는 기록매체의 예로는 ROM, RAM, CD-ROM, 자기 테이프, 플로피 디스크, 광 데이터 저장장치 등이 있으며, 또한 캐리어 웨이브(예를 들어 인터넷을 통한 전송)의 형태로 구현되는 것도 포함한다.The invention can also be embodied as computer readable code on a computer readable recording medium. The computer-readable recording medium includes all kinds of recording devices in which data that can be read by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, optical data storage devices, and the like, which are also implemented in the form of carrier waves (for example, transmission over the Internet). Include.
이상 본 발명의 바람직한 실시예들을 기초로 설명되었지만, 당업자들은 본 발명이 속하는 기술분야의 기술적 사상이나 필수적 특징을 변경하지 않고서 다른 구체적인 형태로 실시될 수 있다는 것을 이해할 수 있을 것이다.Although described above based on the preferred embodiments of the present invention, those skilled in the art will understand that the present invention may be implemented in other specific forms without changing the technical spirit or essential features of the technical field to which the present invention belongs.
그러므로 이상에서 기술한 실시예들은 모든 면에서 예시적인 것이며 한정적인 것이 아닌 것으로서 이해되어야 한다. 본 발명의 범위는 상기 상세한 설명보다는 후술하는 특허청구범위에 의하여 한정되며, 특허청구범위의 의미 및 범위 그리고 그 등가 개념으로부터 도출되는 모든 변경 또는 변형된 형태가 본 발명의 범위에 포함되는 것으로 해석되어야 한다.Therefore, the embodiments described above are to be understood as illustrative in all respects and not as restrictive. The scope of the present invention is defined by the appended claims rather than the foregoing description, and all changes or modifications derived from the meaning and scope of the claims and their equivalent concepts should be construed as being included in the scope of the present invention. do.

Claims (2)

  1. 영상 프레임이 순차적으로 입력되는 영상 입력부; An image input unit configured to sequentially input image frames;
    상기 영상 입력부를 통해 입력되는 영상 프레임에서 움직이는 피사체 혹은 특정 피사체를 감지하여 추출하는 피사체 추출부; A subject extracting unit which detects and extracts a moving subject or a specific subject from an image frame input through the image input unit;
    상기 영상 프레임에서 상기 피사체 추출부에 의해 추출된 피사체를 제외한 나머지 영역인 배경 영상을 이미지로 추출하는 배경 영상 추출부;A background image extracting unit extracting a background image, which is a region other than the subject extracted by the subject extracting unit, from the image frame as an image;
    상기 피사체 추출부 및 상기 배경 영상 추출부에서 각각 추출된 피사체 및 배경 영상을 객체화시키고, 상기 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보를 생성하는 객체 생성부;An object generator configured to objectize the subject and the background image extracted by the subject extractor and the background image extractor, respectively, and to generate combined information and frame information of the object and the subject image;
    상기 객체화된 피사체 및 배경 영상이 저장되는 제1 저장부;A first storage unit storing the objectized subject and the background image;
    상기 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보가 저장되는 데이터베이스를 갖는 제2 저장부; 및A second storage unit having a database storing combined information of the object and the background image and frame information; And
    상기 영상 입력부를 통해 입력되는 영상 프레임에서 상기 피사체 및 배경 영상을 추출하여 객체화시키고, 상기 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보를 생성하도록 상기 피사체 추출부, 상기 배경 영상 추출부, 상기 객체 생성부를 제어하는 한편, 상기 객체화된 피사체 및 배경 영상을 상기 제1 저장부에 저장하고, 상기 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보를 상기 제2 저장부의 데이터베이스에 저장하는 제어부를 포함하는 영상 처리 장치.The subject extracting unit, the background image extracting unit, and the object to extract and subject the subject and background image from the image frame input through the image input unit, and generate the combined information and the frame information of the objectized subject and the background image. And a controller configured to control the generation unit, and to store the objectized subject and the background image in the first storage unit, and to store the combined information and the frame information of the objectized subject and the background image in the database of the second storage unit. Image processing device.
  2. 영상 프레임이 순차적으로 입력되는 단계;Sequentially inputting image frames;
    상기 영상 프레임에서 움직이는 피사체 혹은 특정 피사체를 감지하여 추출하는 단계;Detecting and extracting a moving subject or a specific subject from the image frame;
    상기 영상 프레임에서 상기 추출된 피사체를 제외한 나머지 영역인 배경 영상을 이미지로 추출하는 단계;Extracting a background image, which is a region other than the extracted subject, from the image frame as an image;
    상기 추출된 피사체 및 배경 영상을 각각 객체화시키고, 상기 객체화된 피사체 및 배경 영상의 결합 정보 및 프레임 정보를 생성하는 단계; 및Objectifying the extracted subject and background image, respectively, and generating combination information and frame information of the objectized subject and background image; And
    상기 객체화된 피사체 및 배경 영상을 제1 저장부에 저장하고, 상기 결합 정보 및 프레임 정보를 제2 저장부의 데이터베이스에 저장하는 단계를 포함하는 영상 처리 방법. And storing the objectized object and the background image in a first storage unit and storing the combined information and the frame information in a database of the second storage unit.
PCT/KR2013/006896 2012-06-08 2013-07-31 Image processing device and method WO2013183978A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020120061431A KR101340308B1 (en) 2012-06-08 2012-06-08 Video processing apparatus and method
KR10-2012-0061431 2012-06-08

Publications (1)

Publication Number Publication Date
WO2013183978A1 true WO2013183978A1 (en) 2013-12-12

Family

ID=49712306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/006896 WO2013183978A1 (en) 2012-06-08 2013-07-31 Image processing device and method

Country Status (2)

Country Link
KR (1) KR101340308B1 (en)
WO (1) WO2013183978A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101746074B1 (en) * 2015-12-31 2017-06-12 한양여자대학교 산학협력단 System for analyzing the forgery of digital video and method therefor
US20220044414A1 (en) * 2018-10-18 2022-02-10 Korea Advanced Institute Of Science And Technology Method and device for processing image on basis of artificial neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003032616A (en) * 2001-07-17 2003-01-31 Nhk Computer Service:Kk Image information producing device, image editing device, image information production program and image editing program
JP2008187256A (en) * 2007-01-26 2008-08-14 Fujifilm Corp Motion image creating device, method and program
JP2009296344A (en) * 2008-06-05 2009-12-17 Nippon Telegr & Teleph Corp <Ntt> Apparatus and method of processing video, program, and computer-readable recoding medium
JP2012054912A (en) * 2010-08-02 2012-03-15 Sharp Corp Video processing apparatus, display device and video processing method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003032616A (en) * 2001-07-17 2003-01-31 Nhk Computer Service:Kk Image information producing device, image editing device, image information production program and image editing program
JP2008187256A (en) * 2007-01-26 2008-08-14 Fujifilm Corp Motion image creating device, method and program
JP2009296344A (en) * 2008-06-05 2009-12-17 Nippon Telegr & Teleph Corp <Ntt> Apparatus and method of processing video, program, and computer-readable recoding medium
JP2012054912A (en) * 2010-08-02 2012-03-15 Sharp Corp Video processing apparatus, display device and video processing method

Also Published As

Publication number Publication date
KR101340308B1 (en) 2013-12-11

Similar Documents

Publication Publication Date Title
US10956749B2 (en) Methods, systems, and media for generating a summarized video with video thumbnails
KR102146042B1 (en) Method and system for playing back recorded video
CN106559697B (en) A kind of recorded file cover display methods and system based on PVR set-top boxes
CN101552890B (en) Information processing apparatus, information processing method
EP1347455A2 (en) Contents recording/playback apparatus and contents edit method
WO2009113280A1 (en) Image processing device and imaging device equipped with same
CN106028120A (en) Method and device for performing video direction in mobile terminal
CN106416220A (en) Automatic insertion of video into a photo story
JPH0993588A (en) Moving image processing method
WO2011136418A1 (en) Dvr and method for monitoring image thereof
KR20090007177A (en) Apparatus and method for selective real time recording based on face identification
CN105144700A (en) Image processing apparatus and image processing method
US20120087636A1 (en) Moving image playback apparatus, moving image management apparatus, method, and storage medium for controlling the same
CN107251551A (en) Image processing equipment, image capture apparatus, image processing method and program
JP4075748B2 (en) Image recording device
WO2012137994A1 (en) Image recognition device and image-monitoring method therefor
JP2007266659A (en) Imaging reproducing apparatus
CN101193249A (en) Image processing apparatus
WO2013183978A1 (en) Image processing device and method
KR20080035891A (en) Image playback apparatus for providing smart search of motion and method of the same
WO2013162095A1 (en) Dvr and video monitoring method therefor
KR20130031179A (en) Method and apparatus for displaying a summary video
WO2019039661A1 (en) Method for syntax-based extraction of moving object region of compressed video
KR100490948B1 (en) Method for transferring image signals and system using the method
JP2015119403A (en) Image recording apparatus, image recording method, program, and imaging apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13800689

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13800689

Country of ref document: EP

Kind code of ref document: A1