WO2020085558A1 - High-speed analysis image processing apparatus and driving method for apparatus - Google Patents

High-speed analysis image processing apparatus and driving method for apparatus

Info

Publication number
WO2020085558A1
WO2020085558A1 (application PCT/KR2018/013184)
Authority
WO
WIPO (PCT)
Prior art keywords
image processing
image
video
analysis
communication interface
Prior art date
Application number
PCT/KR2018/013184
Other languages
French (fr)
Korean (ko)
Inventor
고현준
장정훈
최준호
전창원
Original Assignee
주식회사 인텔리빅스
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 인텔리빅스
Publication of WO2020085558A1 publication Critical patent/WO2020085558A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/60: Memory management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning

Definitions

  • The present invention relates to a high-speed analysis image processing apparatus and to a method of driving the apparatus, and more particularly, to a high-speed analysis image processing apparatus that can quickly process image analysis and search for video images of various formats based on, for example, a person's face or person-related event option information, and to a method of driving the apparatus.
  • As CCTV cameras capture public places, there is an increasing demand for intelligent video surveillance systems that automatically analyze the captured images to extract a large number of unspecified objects, analyze their motion, and, when abnormal motion is detected, automatically alert the administrator or deliver the information to other connected automation systems.
  • An embodiment of the present invention aims to provide a high-speed analysis image processing apparatus, and a method of driving the apparatus, that quickly processes image analysis and search for video images of various formats based on, for example, a person's face or person-related event option information.
  • According to an embodiment, the high-speed analysis image processing apparatus includes a communication interface unit that receives a video image, and a control unit that extracts an object to be analyzed from the received video image, analyzes it to generate attribute information of the object, analyzes the video image based on the attribute information, and generates the analysis result as metadata.
  • the communication interface unit interlocks with an external device that performs object tracking-based image processing, and the control unit may perform high-speed analysis image processing based on a designated object using the received video image at the request of the external device.
  • The controller may analyze an event related to the object to be analyzed to further generate event information, and may include the generated event information in the metadata and store it.
  • the controller may further generate deep learning-based metadata using attribute information of the object and the metadata.
  • the control unit may include a video processing unit that processes video images of different formats received through the communication interface unit.
  • the controller may search for and provide the generated metadata matching the search command based on a scenario-based search command received through the communication interface unit.
  • The communication interface unit may selectively receive video images from a photographing device at a designated place, a removable storage medium (e.g., USB), and a third party device.
  • According to an embodiment, a method of driving a high-speed analysis image processing apparatus that includes a communication interface unit and a control unit includes receiving, by the communication interface unit, a video image; extracting, by the control unit, an object to be analyzed from the received video image and analyzing it to generate object attribute information; and analyzing the video image based on the generated object attribute information to generate the analysis result as metadata.
  • The communication interface unit interlocks with an external device that performs object tracking-based image processing, and the method of driving the high-speed analysis image processing apparatus may further include performing high-speed analysis image processing centered on a specified object using the received video image at the request of the external device.
  • the driving method of the high-speed analysis image processing apparatus may further include analyzing an event related to the object to be analyzed, generating event information, and storing the generated event information in the metadata.
  • the driving method of the high-speed analysis image processing apparatus may further include generating deep learning-based metadata using attribute information of the object and the metadata.
  • the method for driving the high-speed analysis image processing apparatus may further include processing video images of different formats received through the communication interface unit.
  • the driving method of the high-speed analysis image processing apparatus may further include searching and providing the generated metadata matching the search command based on a scenario-based search command received through the communication interface unit.
  • The method of driving the high-speed analysis image processing apparatus may further include selectively receiving video images from a photographing apparatus at a designated place, a removable storage medium, and a third party apparatus.
  • As search categories are added (for example, events such as loitering, stopping, or vehicle stopping), the amount of data to be searched decreases, so the search speed increases.
  • In addition, a search window (e.g., a UI) for entering a person, a person-related event, or a scenario-based search term can be composed simply and intuitively, improving the user's search convenience.
  • Furthermore, deep learning may be performed using the object properties, person attribute information, event information, and the like stored as metadata, which increases search accuracy and additionally generates further information, allowing flexible responses to events and accidents.
  • FIG. 1 is a view showing a high-speed analysis video service system according to an embodiment of the present invention
  • FIG. 2 is an exemplary diagram schematically showing FIG. 1,
  • FIG. 3 is a block diagram illustrating the structure of the first image processing apparatus of FIG. 2,
  • FIG. 4 is a block diagram illustrating the structure of the second image processing apparatus of FIG. 2,
  • FIG. 5 is a block diagram illustrating another structure of the second image processing apparatus of FIG. 2,
  • FIG. 6 is a block diagram illustrating the detailed structure of the image high-speed analysis unit of FIG. 5,
  • FIG. 7 is a view showing a high-speed analysis video service process according to an embodiment of the present invention.
  • FIG. 8 is a view for explaining an operation process between a forensic manager and a search client constituting the first image processing apparatus of FIG. 2;
  • FIG. 9 is a view for explaining an operation process of the first image processing apparatus and the third party apparatus of FIG. 2,
  • FIG. 10 is a diagram illustrating a search main screen
  • FIG. 11 is a diagram illustrating an FRS setup process
  • FIGS. 21 to 30 are views for explaining an offline search screen, and
  • FIG. 31 is a flowchart illustrating an operation process of a high-speed analysis image processing apparatus according to an embodiment of the present invention.
  • FIG. 1 is a diagram illustrating a high-speed analysis image service system according to an embodiment of the present invention
  • FIG. 2 is an exemplary diagram schematically showing FIG. 1.
  • As shown in FIG. 1, the high-speed analysis image service system 90 includes some or all of a user device 100, a communication network 110, an image service device 120, and a third party device 130.
  • Here, "includes some or all" means that an element such as the third party device 130 may be omitted, or that an element may be integrated into a network device (for example, a switching device) in the communication network 110; the system is described as including everything in order to help a sufficient understanding of the invention.
  • The user device 100 includes an imaging device, such as a CCTV installed in a designated place for monitoring events and accidents, as well as user-owned devices such as a desktop computer, a laptop computer, a mobile phone (e.g., a smartphone), a tablet PC, and a smart TV.
  • a removable storage medium (eg, USB) 101 may be further included.
  • such a removable storage medium may include a memory provided in a black box of a vehicle.
  • the removable storage medium 101 may be directly connected to the control computer constituting the video service device 120.
  • the user device 100 may store an image captured through a camera (including a temporary storage site), and provide a captured image to the image service device 120 to request image analysis.
  • In the case of a CCTV, the captured video may be provided to the video service device 120 in real time or periodically so that monitoring and control can be performed through analysis.
  • the communication network 110 includes both wired and wireless communication networks.
  • a wired / wireless Internet network may be used or interlocked as the communication network 110.
  • The wired network includes an Internet network such as a cable network or a public switched telephone network (PSTN), and the wireless communication network includes CDMA, WCDMA, GSM, Evolved Packet Core (EPC), Long Term Evolution (LTE), WiBro networks, and the like.
  • the communication network 110 according to an embodiment of the present invention is not limited thereto, and may be used as a cloud computing network under a cloud computing environment, a 5G network, etc. as a connection network of a next-generation mobile communication system to be implemented in the future.
  • When the communication network 110 is a wired communication network, an access point in the network can access a telephone exchange or the like; in the case of a wireless communication network, data can be processed by accessing an SGSN or a Gateway GPRS Support Node (GGSN) operated by a communication company, or by connecting to various base stations such as a BTS (Base Transceiver Station), NodeB, or e-NodeB.
  • the communication network 110 may include an access point.
  • Access points include small base stations, such as femto or pico base stations, which are often installed in buildings.
  • Here, femto and pico base stations are classified, under the small base station classification, according to the maximum number of user devices 100 that can connect to them.
  • In addition, the access point includes a short-range communication module for performing short-range communication, such as Zigbee or Wi-Fi, with the user device 100.
  • the access point can use TCP / IP or RTSP (Real-Time Streaming Protocol) for wireless communication.
  • short-range communication may be performed in various standards such as radio frequency (RF) and ultra-wideband communication (UWB), such as Bluetooth, Zigbee, infrared (IrDA), ultra high frequency (UHF), and very high frequency (VHF), in addition to Wi-Fi.
  • the access point can extract the location of the data packet, designate the best communication path for the extracted location, and transfer the data packet to the next device, for example, the video service device 120 along the designated communication path.
  • An access point can share multiple lines in a typical network environment, and includes, for example, routers, repeaters, and relays.
  • the video service device 120 may serve as a control device for monitoring a corresponding area through a user device 100 installed in a designated area, for example, a captured image provided by CCTV.
  • a server of a company for providing a service by performing a high-speed analysis operation of an image may be included.
  • the video service device 120 includes a DB 120a for storing a large amount of video data, and may further include a server and a control computer.
  • the video service device 120 may be constructed in a variety of forms. For example, it can operate in the form of a single server, and multiple servers can work together.
  • the image service device 120 may include, for example, a first image processing device 121 and a second image processing device 123 as shown in FIG. 2. Through this, the image service device 120 of FIG. 1 can rapidly increase the image processing speed by collaborating or distributing the image processing operation.
  • the second image processing apparatus 123 performs an object image-based image processing operation.
  • Object tracking is a method of extracting an object from the first unit image of a video and predicting the motion of the extracted object. Motion tracking is usually performed in the form of a vector, that is, by calculating a direction and a distance.
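The vector computation described above, in which motion is expressed as a direction and a distance, can be pictured as follows (an illustrative Python sketch, not code from the patent; all names are assumed):

```python
# Sketch: express object motion between frames as a vector (direction, distance)
# and extrapolate the next position assuming constant velocity.
import math

def motion_vector(prev_centroid, curr_centroid):
    """Return (direction_degrees, distance_pixels) between two object centroids."""
    dx = curr_centroid[0] - prev_centroid[0]
    dy = curr_centroid[1] - prev_centroid[1]
    distance = math.hypot(dx, dy)
    direction = math.degrees(math.atan2(dy, dx))
    return direction, distance

def predict_next(curr_centroid, direction, distance):
    """Extrapolate the next centroid along the same direction and distance."""
    rad = math.radians(direction)
    return (curr_centroid[0] + distance * math.cos(rad),
            curr_centroid[1] + distance * math.sin(rad))

direction, distance = motion_vector((0, 0), (3, 4))
print(predict_next((3, 4), direction, distance))
```

A tracker built this way only needs the object's positions in consecutive unit images, which is why it is comparatively cheap but sensitive to occlusion.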
  • In contrast, object image-based image processing extracts the designated object to be analyzed (e.g., a person, a vehicle, etc.) from each unit image (for example, the first to Nth unit images), and compares the attribute information of the objects extracted from each unit image to determine whether they are the same object.
  • the feature point information becomes attribute information.
  • shape or color may be attribute information.
  • the final determination of the object may be made through deep learning. For example, similar objects are classified as candidate groups, and the properties of the object are finally determined through deep learning.
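A minimal sketch of this per-unit-image attribute comparison and candidate grouping might look as follows (hypothetical Python; the attribute keys and threshold are assumptions, and the final deep-learning determination is not shown):

```python
# Sketch: compare attribute information of objects extracted from unit images
# and collect those similar enough to form a candidate group for one identity.

def similarity(attrs_a, attrs_b):
    """Fraction of shared attribute keys on which two detections agree."""
    keys = attrs_a.keys() & attrs_b.keys()
    if not keys:
        return 0.0
    return sum(attrs_a[k] == attrs_b[k] for k in keys) / len(keys)

def candidate_group(target, detections, threshold=0.5):
    """Detections similar enough to be candidates for the same object."""
    return [d for d in detections if similarity(target, d) >= threshold]

target = {"shape": "person", "color": "black", "height": "tall"}
frames = [
    {"shape": "person", "color": "black", "height": "tall"},
    {"shape": "vehicle", "color": "white"},
    {"shape": "person", "color": "black", "height": "short"},
]
print(candidate_group(target, frames))
```

In the scheme the patent describes, a deep-learning model would then make the final same-object decision over this candidate group rather than the raw threshold.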
  • More specifically, the second image processing apparatus 123 determines an object based on the attribute information and analyzes its correlation with surrounding objects or things, and an event is derived through this. Then, part or all of the attribute information and the event information, together with the captured image matching them, are generated as metadata.
  • the second image processing device 123 may refer to a predetermined rule (or policy).
  • the second image processing apparatus 123 may further generate new information by performing a deep learning operation using the stored metadata, and may also generate prediction information in the process.
  • Metadata generated in this way for a specific video image facilitates search.
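The kind of metadata record described above, combining attribute information, event information, and a reference to the matching footage, could be represented as follows (an illustrative sketch only; the field names are invented, not the patent's schema):

```python
# Sketch: package object attributes, derived events, and a pointer to the
# matching video segment as one searchable metadata record.
import json

def build_metadata(object_id, attributes, events, clip_ref, timestamp):
    return {
        "object_id": object_id,
        "attributes": attributes,   # e.g. {"type": "person", "hat": True}
        "events": events,           # e.g. ["loitering"]
        "clip": clip_ref,           # reference to the matching captured image
        "timestamp": timestamp,
    }

record = build_metadata(
    "obj-0001",
    {"type": "person", "color": "black", "hat": True},
    ["loitering"],
    "camera03/clip_140310.mp4",
    "2018-10-26T14:03:10Z",
)
print(json.dumps(record, indent=2))  # serializable for storage in the DB
```

Because the record is plain structured data, attribute-based or event-based search reduces to filtering stored records rather than re-analyzing video.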
  • The first image processing device 121 is a control device that receives, for example, a video analysis request from a third party device 130 such as a police station or a government office.
  • The first image processing device 121 may perform image analysis based on object tracking on its own, or may request the second image processing device 123 to perform object image-based image analysis.
  • The second image processing device 123 may further search the image analysis result using attribute information (e.g., a face), and rather than detecting only a designated event as the first image processing device 121 does, it can extend the search by adding various event options.
  • Here, the event option information may include loitering, stopping, vehicle stopping, and the like. The search is therefore fast, and because search terms can be added, the accuracy of the search increases.
  • the first image processing device 121 may provide a scenario-based search word to search for a sentence, not a word, that is, a natural language, and increase the accuracy of information by a deep learning operation and receive prediction information together.
  • A scenario-based search term may describe a series of events such as, for example, 'a black taxi among illegally U-turning vehicles' or 'a person wearing a hat running with a bag'.
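One way to picture how such a scenario sentence could be reduced to searchable conditions is sketched below (a toy Python example; the keyword table and record layout are invented for illustration and are not the patent's method):

```python
# Sketch: reduce a short natural-language scenario to attribute/event filters,
# then match the filters against stored metadata records.

KEYWORDS = {  # invented mapping from words to metadata conditions
    "hat": ("attributes", "hat", True),
    "bag": ("attributes", "bag", True),
    "running": ("events", "running", True),
    "black": ("attributes", "color", "black"),
    "taxi": ("attributes", "type", "taxi"),
}

def parse_scenario(sentence):
    """Collect the metadata conditions implied by known words in the sentence."""
    return [KEYWORDS[w] for w in sentence.lower().replace(",", " ").split()
            if w in KEYWORDS]

def matches(record, filters):
    for section, key, value in filters:
        if section == "events":
            if key not in record.get("events", []):
                return False
        elif record.get("attributes", {}).get(key) != value:
            return False
    return True

records = [
    {"attributes": {"hat": True, "bag": True}, "events": ["running"]},
    {"attributes": {"type": "taxi", "color": "white"}, "events": []},
]
query = parse_scenario("a person wearing a hat running with a bag")
print([r for r in records if matches(r, query)])
```

A production system would use real natural-language analysis rather than a keyword table, but the end result is the same: a sentence becomes a conjunction of conditions over the stored metadata.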
  • The term i-forensic is used in the sense that video images of different formats obtained through various paths are processed. This can be viewed as implying that, for example, picture-based image analysis is performed, and that images recorded in various formats can be received, analyzed, and searched in a short time.
  • A video image means a video signal. Generally, a video signal is divided into an image signal and an audio signal, and further includes additional information, which covers various items such as date and time information.
  • the term video image is mainly used, but it may not mean only a video signal. Therefore, video images may be used interchangeably with video data and image data.
  • An image processing operation such as the object image-based image analysis performed by the second image processing apparatus 123 analyzes pixel values in a picture (or macroblock) image to delineate object boundaries, and can also determine the type of object by analyzing the pixel values of the delineated object. For example, a person has black hair and a flesh-colored face, and the two parts are connected to each other; therefore, human objects can be classified by extracting black and flesh-colored areas, and feature points can be obtained by analyzing the corresponding parts. Since people and human faces come in various types, classification may be based on data stored in the form of templates. Therefore, once the feature points of a specific person are found, the same person can be recognized even after a change in appearance, such as putting on glasses.
  • For example, the shape of the nose, the shape of the ears, or the shape of the jaw line may serve as a person's feature points, and on this basis it is determined whether the person is the same as in subsequent unit images.
  • An event is derived through the correlation between the detected person and surrounding objects or things. For example, suppose a specific person is found in a unit image and is analyzed as holding garbage. Then, if in the Nth image the garbage is no longer in the person's hand and is found lying at a dump site in the video, it is judged that the person dumped the garbage in an inappropriate place. Since such event generation can be analyzed in various forms, embodiments of the present invention are not limited to any one form.
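One plausible reading of the garbage-dumping example is sketched below as a simple state rule over a tracked person's observations (illustrative Python only; the state fields are hypothetical and a real analysis would operate on image data, not hand-written states):

```python
# Sketch: derive an event by correlating object-state changes across
# time-ordered unit images for one tracked person.

def detect_illegal_dumping(observations):
    """observations: time-ordered per-frame states for one tracked person."""
    was_holding = False
    for obs in observations:
        if obs.get("holding_garbage"):
            was_holding = True
        elif was_holding and obs.get("zone") != "dump_site":
            # Garbage left the person's hand outside a designated dump site.
            return True
    return False

track = [
    {"holding_garbage": True, "zone": "street"},
    {"holding_garbage": True, "zone": "alley"},
    {"holding_garbage": False, "zone": "alley"},  # dropped outside a dump site
]
print(detect_illegal_dumping(track))
```

The point of the sketch is the shape of the rule, per-object state carried across unit images, rather than the specific condition, which would differ per event type.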
  • In FIG. 2, the first image processing apparatus 121 and the second image processing apparatus 123 are shown interlocked as the image service apparatus 120, but they may also be configured in the form of a single server, for example in a form constituting a first module and a second module.
  • the image service device 120 may further interwork a third image processing device for high-speed analysis and search operations.
  • the third image processing apparatus provides an analysis result by performing an analysis operation based on an object image only for vehicle analysis on the received video image. Since various types of system design are possible, the embodiment of the present invention will not be limited to any one form.
  • the third party device 130 includes a server operated by a government office such as a police station, and a server of a company providing other content videos.
  • a control device operated by a local government may be a third party device 130.
  • Such a control device may preferably be the video service device 120 according to an embodiment of the present invention.
  • the third party device 130 can be used for various purposes through image analysis, it should be understood as a provider that provides content, that is, a video image.
  • FIG. 3 is a block diagram illustrating the structure of the first image processing apparatus of FIG. 2.
  • As shown in FIG. 3, the first image processing apparatus 121 includes some or all of a communication interface unit 300, a control unit 310, a forensic image execution unit 320, and a storage unit 330.
  • Here, "includes some or all" means that some components, such as the storage unit 330, may be omitted, or that a component such as the forensic image execution unit 320 may be integrated with another component such as the control unit 310; the apparatus is described as including all of them in order to help a sufficient understanding of the invention.
  • the forensic image execution unit 320 when the forensic image execution unit 320 is integrated with the control unit 310, it may be referred to as a 'forensic image processing unit'.
  • the forensic image processing unit may execute one software to perform a control operation and a forensic image processing operation together.
  • the forensic image processing unit may be configured to operate in hardware, software, or a combination thereof.
  • The control unit 310 may include a CPU and a memory. When configured in the form of a single IC chip, a program for forensic image processing is stored in the memory and executed by the CPU, so that the processing speed increases rapidly.
  • the forensic image execution unit 320 may perform image analysis based on an object image, but may also perform an image processing operation based on object tracking.
  • the former may be executed in the first module, and the latter may be executed in the second module.
  • In this way, the first image processing apparatus 121 may have a different configuration depending on how the system is configured. Above all, it is clear that it performs image analysis based on object images and stores the results in the form of metadata so that search results can be provided quickly when a search request is made; therefore, embodiments of the present invention are not particularly limited to any one form.
  • The communication interface 300 may communicate with the user device 100 and the third party device 130 of FIG. 1, respectively, and may include a communication module for this purpose. Since the communication interface unit 300 processes video images, operations such as modulation and demodulation, encoding and decoding, muxing and demuxing may be performed, although these may also be performed by the controller 310.
  • The communication interface 300 may transmit to the control unit 310, for example, a request from the third party device 130 to analyze a video image, or an image provided from a photographing device such as a CCTV serving as the user device 100.
  • In addition, the communication interface 300 may perform a setting operation for performing a forensic operation with the second image processing device 123 at the request of the control unit 310, through which it can request image processing from the second image processing device 123 and receive the analysis results.
  • the control unit 310 is responsible for the overall control operation of the communication interface unit 300, the forensic image execution unit 320, and the storage unit 330 constituting the first image processing unit 121.
  • the control unit 310 may control the forensic image execution unit 320 to perform an operation for performing a forensic operation with the second image processing apparatus 123.
  • More specifically, by running the forensic image execution unit 320, the control unit 310 may request the second image processing apparatus 123 to analyze the video image, and then provide a search term to receive the searched result.
  • the control unit 310 may search for a video image by selecting a person's face or a person category as a search term according to the operation of the forensic image execution unit 320, and search for scenario-based search terms or event option information. Based on this, various types of searches can be additionally performed.
  • For example, the first image processing device 121 may analyze and search range A, and the second image processing device 123 may analyze and search range B; when the first image processing device 121 requests analysis and search from the second image processing device 123 through the forensic operation, the first image processing device 121 operates so that the analysis or search results of the second image processing device 123 for range B can be viewed.
  • the control unit 310 and the forensic image execution unit 320 perform this.
  • In other words, the forensic image execution unit 320 executes an interlocking program that allows the first image processing unit 121 to view the results of the video image analysis performed by the second image processing unit 123.
  • This may include various UX/UI programs. For example, when a user sets up a forensic operation through a UI window, or provides a search term with a person attribute such as the 'face' category, various search results are provided through the second image processing device 123.
  • the storage unit 330 may store various data processed by the first image processing device 121 and may temporarily store the data. For example, when the first image processing device 121 is interlocked with the DB 120a, temporary data may be stored in the storage unit 330 and permanent data may be stored in the DB 120a. The data stored in the storage unit 330 is output when requested by the control unit 310.
  • FIG. 4 is a block diagram illustrating the structure of the second image processing apparatus of FIG. 2.
  • As shown in FIG. 4, the second image processing apparatus 123 includes some or all of a communication interface 400 and an image high-speed processing unit 410, where "including some or all" has the same meaning as described above.
  • the communication interface 400 may communicate with the first image processing apparatus 121 of FIG. 2 according to an embodiment of the present invention.
  • When the first image processing device 121 provides a video image and requests analysis, the video image and the analysis request are transmitted to the image high-speed processing unit 410.
  • In addition, when requested by the image high-speed processing unit 410, the communication interface unit 400 may store the results of analyzing the video image, such as person attribute information, event information, correlation information, and the video image matching that information, in the DB 120a in the form of metadata.
  • Furthermore, when a search term, such as an attribute-based search term, a scenario-based search term, or an event option-based search term, is provided from the first image processing device 121, the communication interface unit 400 delivers it to the image high-speed processing unit 410 and provides the search results accordingly.
  • the image high-speed processing unit 410 performs an object image-based analysis operation when an analysis request is made for the received video image.
  • As this has been sufficiently described above, further explanation is omitted.
  • The image high-speed processing unit 410 may store the analysis results in the DB 120a so that the first image processing device 121 can access the DB 120a to perform the search described above. That is, the search can be performed by the first image processing device 121 directly accessing the DB 120a, but various methods are possible, such as receiving the search results indirectly via the image high-speed processing unit 410. Since this may vary according to the system designer's intention, embodiments of the present invention are not particularly limited to any one form; however, the former would be preferable in view of data processing speed.
  • FIG. 5 is a block diagram illustrating another structure of the second image processing apparatus of FIG. 2
  • FIG. 6 is a block diagram illustrating a detailed structure of the high-speed analysis image execution unit of FIG. 5.
  • As shown in FIG. 5, the second image processing apparatus 123' includes some or all of a communication interface unit 500, a control unit 510, an image high-speed analysis unit 520, and a storage unit 530.
  • Here, including some or all means that the second image processing apparatus 123' may be configured with some components, such as the storage unit 530, omitted, or that some components, such as the image high-speed analysis unit 520, may be integrated with other components such as the control unit 510; the apparatus is described as including everything in order to help the understanding of the invention.
  • Compared with FIG. 4, the second image processing apparatus 123' of FIG. 5 dualizes the control operation and the high-speed analysis (and search) operation of the image. To this end, the two may be separated by hardware, software, or a combination thereof to perform different operations. That is, the control unit 510 performs only the control operation, while the high-speed analysis of the image is performed by the image high-speed analysis unit 520.
  • Here, the image high-speed analysis unit 520 performs object image-based analysis. For example, this can be considered a method of capturing objects in the form of images and analyzing their attributes by analyzing the pixel values of the objects in the captured images.
  • Other details of the communication interface unit 500, the control unit 510, the image high-speed analysis unit 520, and the storage unit 530 of FIG. 5 are not significantly different from the descriptions of the image service device 120 of FIG. 1 or of the communication interface unit 400 and the image high-speed processing unit 410 of FIG. 4, so those descriptions apply here as well.
  • the image high-speed analysis unit 520 of FIG. 5 may have the same structure as the image high-speed analysis unit 520 'of FIG. 6.
  • As shown in FIG. 6, the image high-speed analysis unit 520 of FIG. 5 may include some or all of a video processing unit 600, a video search unit 610, a scheduling unit 620, a manual processing unit 630, and a bookmark processing unit 640.
  • "including some or all" has the same meaning as above.
  • the image high-speed analysis unit 520 of FIG. 5 may further perform various operations in addition to the high-speed analysis described above.
  • the video processing unit 600 may handle video data provided in various formats, either by processing the data in the format of the corresponding image or by converting the data into a specified format.
  • video data provided in various formats may be converted into a specified format, then analyzed, and then exported in a single common format.
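The normalize-analyze-export flow above can be sketched minimally as follows. This is an assumption for illustration, not the patent's video pipeline; the supported-format set and the function names are hypothetical.

```python
# Hypothetical set of input containers the pipeline accepts.
SUPPORTED = {".avi", ".mp4", ".mkv"}


def normalize(path):
    """Map any supported container to one internal working format,
    rejecting formats the pipeline does not know."""
    ext = path[path.rfind("."):].lower()
    if ext not in SUPPORTED:
        raise ValueError("unsupported format: " + ext)
    return {"source": path, "format": "internal"}


def export(clip, ext=".mp4"):
    """Export the (analyzed) clip under a single common output format."""
    base = clip["source"][:clip["source"].rfind(".")]
    return base + ext
```

The point of the design is that analysis code only ever sees the one internal format, regardless of which container the forensic video arrived in.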
  • the video search unit 610 may support an attribute-based search (for example, a search performed in a 'face' category), a search based on event option information, and a scenario-based search.
  • the scenario-based search provides search results based on the analysis of a search term given in the form of a sentence or, more precisely, a short sentence. In other words, it is similar to a keyword-based search, with the difference that keywords correspond to single words while scenarios are sentences.
  • the scenario sentence may include a complex scenario such as "a black vehicle making an illegal turn, in the case of a taxi" or "a person wearing a hat running with a bag".
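One way the sentence-versus-keyword distinction above could work is to reduce a scenario sentence to a set of attribute filters. The vocabulary table and parser below are illustrative assumptions, not the patent's search engine.

```python
# Hypothetical mapping from scenario words to (object class, attribute slot).
VOCAB = {
    "hat": ("person", "headwear"),
    "bag": ("person", "carrying"),
    "running": ("person", "action"),
    "black": ("vehicle", "color"),
    "taxi": ("vehicle", "type"),
}


def parse_scenario(sentence):
    """Turn a short scenario sentence into (object, slot, value) filter
    triples; unknown words are simply ignored."""
    filters = []
    for word in sentence.lower().replace(",", " ").split():
        if word in VOCAB:
            filters.append(VOCAB[word] + (word,))
    return filters
```

A single keyword would yield one filter; a scenario sentence yields several, which is what lets one query express "a person wearing a hat running with a bag".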
  • the scheduling unit 620 performs schedule management and may be responsible for operations in which a video analysis is automatically registered and automatically run at a specified time, either once or periodically (daily, weekly, monthly).
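The once-or-periodic registration described above can be sketched as a small scheduler. This is a hedged illustration under assumed names (`Scheduler`, `register`, `due`), with days modeled as integer indices and a month simplified to 30 days.

```python
# Period keywords from the text; None marks a one-shot job.
PERIODS = {"once": None, "daily": 1, "weekly": 7, "monthly": 30}


class Scheduler:
    def __init__(self):
        self.jobs = []  # each entry: (name, start_day, period_key)

    def register(self, name, start_day, period="once"):
        """Register an analysis job to run once or periodically."""
        if period not in PERIODS:
            raise ValueError("unknown period: " + period)
        self.jobs.append((name, start_day, period))

    def due(self, day):
        """Return the names of registered analyses due on `day`."""
        out = []
        for name, start, period in self.jobs:
            step = PERIODS[period]
            if step is None:
                if day == start:
                    out.append(name)
            elif day >= start and (day - start) % step == 0:
                out.append(name)
        return out
```

A production scheduler would use real timestamps and calendar months, but the registration/due-check split is the essential structure.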
  • the manual processing unit 630 may perform manual-related functions, such as providing help, so that the manual can be consulted directly on i-Forensics.
  • the bookmark processing unit 640 may perform various operations related to bookmarking images of interest, such as designating a bookmark (interest list), deleting a bookmark, exporting a bookmark list, managing multiple bookmarks, and controlling bookmark deletion (protection).
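The bookmark operations just listed (designate, delete, export, and delete-protection) can be sketched as follows. The class and method names are assumptions for illustration, not the patent's interface.

```python
class BookmarkList:
    """Minimal sketch of an interest list with delete-protection."""

    def __init__(self):
        self.items = {}  # clip_id -> {"protected": bool}

    def add(self, clip_id):
        """Designate a clip as a bookmark (idempotent)."""
        self.items.setdefault(clip_id, {"protected": False})

    def protect(self, clip_id):
        """Enable delete-protection on an existing bookmark."""
        self.items[clip_id]["protected"] = True

    def delete(self, clip_id):
        """Delete a bookmark unless it is delete-protected;
        returns True when the delete actually happened."""
        if self.items.get(clip_id, {}).get("protected"):
            return False
        self.items.pop(clip_id, None)
        return True

    def export(self):
        """Export the bookmark list, e.g. for saving to a file."""
        return sorted(self.items)
```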
  • the image high-speed analysis unit 520 of FIG. 5 may further include components for performing the various operations shown in [Table 1] and [Table 2]. That is, since FIG. 6 is only an example, components may be further added to the configuration of FIG. 6 as SW modules, HW modules, or combinations thereof based on the contents of [Table 1] and [Table 2]; FIG. 6 describes only representative operations in the form of modules.
  • FIG. 7 is a view showing a high-speed analysis video service process according to an embodiment of the present invention.
  • the first image processing apparatus 121 of FIG. 2 may include a forensic manager 121a and a search client 121b. This may be in the form of a SW module, for example.
  • the forensic manager 121a may be in charge of management or control of the first image processing device 121, and the search client 121b may perform a search-related operation.
  • FIG. 7 shows operations among the DB 120a, the third-party device 130, and the forensic manager 121a and search client 121b in the first image processing apparatus 121, and further the second image processing apparatus 123, for example.
  • the first image processing apparatus 121 requests and receives a video image from the DB 120a or a third-party device 130 such as a VMS (Video Management System), and then completes the analysis of the video image through inquiries or requests to the second image processing apparatus 123 (S701 to S712).
  • steps S705 to S707 are a process of requesting an inquiry or analysis from the second image processing apparatus 123,
  • steps S708 to S710 are a process of performing the high-speed analysis operation, and
  • steps S711 and S712 are a process of completing the analysis.
  • FIG. 8 is a diagram illustrating an operation process between a forensic manager and a search client constituting the first image processing apparatus of FIG. 2.
  • the forensic manager 121a and the client 121b may specifically operate in the form of processing a video list as in FIG. 8 (S800 to S804).
  • a user such as a control agent inputs a specific search term through the first image processing apparatus 121.
  • a list of the various video images corresponding to the search term may first be provided to the user. From this list, the user can select the video being looked for, and receive and play the selected video.
  • the analyzed result may be provided and displayed on the screen.
  • FIG. 9 is a view for explaining an operation process of the first image processing apparatus and the third party apparatus of FIG. 2.
  • the first image processing apparatus 121 may request metadata stored in the DB 120a and receive it as a stream along with a video image to display it on the screen (S900 to S905). In this process, through a collaborative operation between the manager 121a and the client 121b, the metadata may be received from the DB 120a, for example a video image corresponding to a specific search term together with the various information matching it, and displayed on the screen.
  • FIG. 10 is a diagram illustrating a search main screen
  • FIG. 11 is a diagram illustrating an FRS setting process.
  • the first image processing apparatus 121 may include a control computer and a control monitor or display board connected to the control computer.
  • an FRS connection establishment operation may be performed as shown in FIGS. 10 and 11.
  • a specific setting screen pops up.
  • an IP address of the second image processing device 123 may be input to the pop-up window to interlock with each other.
  • status information 1100 indicating whether or not a connection is made may be displayed.
  • FIGS. 12 to 20 are views for explaining an offline analysis screen.
  • on the first image processing apparatus 121 of FIG. 2, a desired file can be searched for and loaded by clicking the file import button 1200 displayed on the screen as shown in FIG. 12.
  • the imported files are registered in the analysis channel list area 1210 on the left side of the screen, that is, the first area.
  • the file is stored in the list area 1210, but analysis may not be possible.
  • the video image for which analysis is requested may be played in the video display area.
  • the image is played back small in the form of a thumbnail, and additional information such as the time may be displayed on the image.
  • a pop-up window 1400 for setting the analysis type may be called up and the analysis type set there. It can be seen that a 'face' category is added to the pop-up window 1400; this can be regarded as an item for checking attribute information analyzed based on a person's object image according to an embodiment of the present invention.
  • the selected video image of the analysis target 1410 may be analyzed in three types per file. This follows the method designated according to the embodiment of the present invention and may be changed as needed. However, since the embodiment of the present invention handles video images of various formats, it is desirable to support more than that. In addition, the apparatus can be designed so that re-analysis of an analysis target is not possible, or so that the analysis result values are maintained as long as there is no deletion request.
  • the area may be a third area.
  • the third area may be divided into sub-areas: an area containing the analysis waiting/in-progress list, an area containing the analysis completion list, and an area containing the analysis failure list.
  • various video images contained in the third area may be brought back to the analysis channel list area as shown in FIG. 17 to perform analysis again.
  • in the third area, as shown in FIG. 18, items are classified into the analysis completion area when there is an object having the desired analysis result; items whose analysis is completed but which contain no object, that is, no detected object, or whose analysis could not be completed due to network errors or file problems, are included in the analysis failure area.
  • the analysis may be retried under different conditions as in FIGS. 19 and 20. For example, if the analysis fails after an attempt to search with a general object item as shown in FIG. 19, the analysis is retried as shown in FIG. 20 based on the attribute information of a person according to the embodiment of the present invention.
  • FIGS. 21 to 30 are views for explaining an offline search screen.
  • the first image processing apparatus 121 of FIG. 2 may display a search screen as shown in FIG. 21 on the monitor screen. That is, the search item 2100 is selected on the main screen. Subsequently, by selecting the video list item 2110 to be searched, the searched list may be retrieved as shown in FIG. 22. This list is provided from the completed analyses. Of course, the user can delete specific items from the list.
  • the user selects a specific video image and sets various search expressions.
  • search conditions (expression) customized to the video image are displayed on the screen.
  • the search window 2500 for the first format video image and the second format video image may display different items.
  • a playback method may be determined by selecting it.
  • a search type item in the search window 2500 may be selected to perform a search for people, vehicles, and other unidentified objects constituting general objects.
  • a playback period can be designated through the play bar as shown in FIG. 27, and a playback time can also be set. If continuous playback of object sections is selected as shown in FIG. 28, continuous playback is performed.
  • FIG. 29 shows that multiple thumbnails can be selected and played back continuously, and
  • FIG. 30 shows that a specific video image, such as a clip image, can be set as a favorite by selecting the favorite button 3000.
  • FIG. 31 is a flowchart illustrating an operation process of a high-speed analysis image processing apparatus according to an embodiment of the present invention.
  • the image service device 120 or the first image processing apparatus 121 according to an embodiment of the present invention (hereinafter, the first image processing apparatus) receives a video image (S3100).
  • the received video image includes images of different formats.
  • the first image processing apparatus 121 extracts and analyzes a person's face image from the received video image to generate face attribute information, and analyzes the video image based on the generated face attribute information to generate the analysis result as metadata (S3110).
  • after the metadata generated as the analysis result is stored in the DB 120a as shown in FIG. 1, the first image processing apparatus 121 provides video analysis results according to the user's various search expressions.
  • person-oriented analysis is further performed; the analysis is, of course, performed by analyzing object images in the video, but video image analysis results based on the attribute information of the corresponding person are additionally provided.
  • the search may be performed by adding event option information of the person, and above all, a scenario-based search may be performed.
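The effect of adding event option information to an attribute-based search can be sketched as metadata filtering. The record layout and function below are illustrative assumptions, not the patent's search engine; note how each added condition narrows the candidate set, which is the source of the speed-up described later in the text.

```python
# Hypothetical metadata records as they might be stored in the DB.
RECORDS = [
    {"clip": "v1", "face": "male_30s", "event": "loitering"},
    {"clip": "v2", "face": "male_30s", "event": "stopping"},
    {"clip": "v3", "face": "female_20s", "event": "loitering"},
]


def search(records, face=None, event=None):
    """Return the clip ids whose metadata match every given condition;
    conditions left as None are not applied."""
    hits = []
    for rec in records:
        if face is not None and rec["face"] != face:
            continue
        if event is not None and rec["event"] != event:
            continue
        hits.append(rec["clip"])
    return hits
```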
  • the non-transitory readable recording medium means a medium that stores data semi-permanently and that can be read by a device, rather than a medium that stores data for a short time, such as registers, caches, and memory.
  • the above-described programs may be stored and provided on a non-transitory readable recording medium such as a CD, DVD, hard disk, Blu-ray disk, USB, memory card, ROM, and the like.
  • 121: first image processing apparatus
  • 123, 123': second image processing apparatus
  • control unit; 320: forensic image execution unit


Abstract

The present invention relates to a high-speed analysis image processing apparatus and a driving method for the apparatus, and a high-speed analysis image processing apparatus according to an embodiment of the present invention can comprise: a communication interface unit for receiving a video image; and a control unit, which extracts an object to be analyzed from the received video image and analyzes same so as to generate attribute information of the object, and analyzes the video image on the basis of the generated attribute information of the object so as to generate the analysis result as metadata.

Description

High-speed analysis image processing apparatus and driving method of the apparatus
The present invention relates to a high-speed analysis image processing apparatus and a method of driving the apparatus, and more particularly, to a high-speed analysis image processing apparatus that rapidly performs image analysis and search on video images of various formats based on, for example, a person's face or person-related event option information, and a method of driving the apparatus.
As the lives of modern people become increasingly complex and diversified, incidents and accidents are increasing accordingly. For this reason, there is a growing demand for intelligent video surveillance systems that monitor public places with CCTV cameras, automatically analyze the captured video to extract an unspecified number of objects and analyze their movements, and, when abnormal movement is detected, automatically alert an administrator or pass the information to other connected automation systems.
In recent years, many facilities have been installed with apparatuses and methods for detecting specific events from images or videos acquired through surveillance CCTV. Conventional apparatuses typically extract a tracked object from the video in order to detect specific events that may occur in a particular region (e.g., crossing the center line, speeding), and detect the occurrence of an event when the extracted object performs a certain action defined by a set value.
However, this conventional object-tracking approach suffers from false object detections caused by noise motion arising from the object's various environmental factors, and from frequent object misclassification due to the performance limits of existing rule-based object classifiers. In complex environments where objects overlap heavily and frequently in the video, it is difficult to properly detect the region of each individual object. In addition, because the background changes frequently in video from a moving camera, object detection based on comparison with a background model cannot be applied.
An object of embodiments of the present invention is to provide a high-speed analysis image processing apparatus that rapidly performs image analysis and search on video images of various formats based on, for example, a person's face or person-related event option information, and a method of driving the apparatus.
A high-speed analysis image processing apparatus according to an embodiment of the present invention includes a communication interface unit that receives a video image, and a control unit that extracts and analyzes an object to be analyzed from the received video image to generate attribute information of the object, and analyzes the video image based on the generated attribute information of the object to generate the analysis result as metadata.
The communication interface unit may interwork with an external device that performs object-tracking-based image processing, and the control unit may, at the request of the external device, perform high-speed analysis image processing centered on a designated object using the received video image.
The control unit may analyze an event related to the object to be analyzed to further generate event information, and may include and store the generated event information in the metadata.
The control unit may further generate deep-learning-based metadata using the attribute information of the object and the metadata.
The control unit may include a video processing unit that processes video images of different formats received through the communication interface unit.
The control unit may, based on a scenario-based search command received through the communication interface unit, search for and provide the generated metadata matching the search command.
The communication interface unit may selectively receive video images from a photographing apparatus at a designated place, a removable storage medium (e.g., USB), and a third-party device.
In addition, a method of driving a high-speed analysis image processing apparatus according to an embodiment of the present invention is a method of driving a high-speed analysis image processing apparatus including a communication interface unit and a control unit, and includes receiving a video image at the communication interface unit, and, by the control unit, extracting and analyzing an object to be analyzed from the received video image to generate attribute information of the object, and analyzing the video image based on the generated attribute information of the object to generate the analysis result as metadata.
The communication interface unit may interwork with an external device that performs object-tracking-based image processing, and the driving method may further include performing high-speed analysis image processing centered on a designated object using the received video image at the request of the external device.
The driving method may further include analyzing an event related to the object to be analyzed to further generate event information, and including and storing the generated event information in the metadata.
The driving method may further include generating deep-learning-based metadata using the attribute information of the object and the metadata.
The driving method may further include processing video images of different formats received through the communication interface unit.
The driving method may further include, based on a scenario-based search command received through the communication interface unit, searching for and providing the generated metadata matching the search command.
The driving method may further include selectively receiving video images from a photographing apparatus at a designated place, a removable storage medium, and a third-party device.
According to embodiments of the present invention, video images of different formats provided through various routes, that is, forensic images, can be received and analyzed rapidly, improving accuracy in responding to incidents and accidents.
In addition, instead of an object-tracking approach, an object to be analyzed is extracted and analyzed from the video image to generate attribute information of the object and store metadata; the metadata and the video image matching it are then provided based on attribute-based search terms, so searches can be performed quickly.
Furthermore, when performing an attribute-based search on an object, event information related to a person can additionally be used, further increasing search speed. Since event option information is additionally stored during video analysis according to an embodiment of the present invention, adding a search category (e.g., events such as loitering, stopping, or a stopped vehicle) reduces the search volume, so the search speed naturally increases.
In addition, by allowing searches to be performed through scenario-based (e.g., sentence-form) search terms as well, the diversity of searches can be secured and the search speed increased.
A search window (e.g., a UI) for entering a person, a person-related event, or a scenario-based search term can be formed simply and intuitively, improving search convenience for users.
In addition, deep learning can be performed using the object attributes, person attribute information, and event information stored as metadata, increasing search accuracy and additionally generating further supplementary information so that incidents and accidents can be handled flexibly.
FIG. 1 is a diagram illustrating a high-speed analysis video service system according to an embodiment of the present invention;
FIG. 2 is an exemplary diagram schematically illustrating FIG. 1;
FIG. 3 is a block diagram illustrating the structure of the first image processing apparatus of FIG. 2;
FIG. 4 is a block diagram illustrating the structure of the second image processing apparatus of FIG. 2;
FIG. 5 is a block diagram illustrating another structure of the second image processing apparatus of FIG. 2;
FIG. 6 is a block diagram illustrating the detailed structure of the image high-speed analysis unit of FIG. 5;
FIG. 7 is a diagram illustrating a high-speed analysis video service process according to an embodiment of the present invention;
FIG. 8 is a diagram for explaining an operation process between the forensic manager and the search client constituting the first image processing apparatus of FIG. 2;
FIG. 9 is a diagram for explaining an operation process of the first image processing apparatus and the third-party device of FIG. 2;
FIG. 10 is a diagram illustrating a search main screen;
FIG. 11 is a diagram illustrating an FRS setting process;
FIGS. 12 to 20 are diagrams for explaining an offline analysis screen;
FIGS. 21 to 30 are diagrams for explaining an offline search screen; and
FIG. 31 is a flowchart illustrating an operation process of a high-speed analysis image processing apparatus according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
FIG. 1 is a diagram illustrating a high-speed analysis video service system according to an embodiment of the present invention, and FIG. 2 is an exemplary diagram schematically illustrating FIG. 1.
As shown in FIGS. 1 and 2, a high-speed analysis video service system 90 according to an embodiment of the present invention includes some or all of a user device 100, a communication network 110, a video service device 120, and a third-party device 130.
Here, "including some or all" means that the video service system 90 may be configured with some components, such as the user device 100 or the third-party device 130, omitted, or that a component such as the video service device 120 may be integrated into a network device (e.g., a switching device) in the communication network 110; the system is described as including all of them to aid a sufficient understanding of the invention.
The user device 100 includes a photographing device such as a CCTV camera installed at a designated place to monitor incidents and accidents, as well as desktop computers, laptop computers, mobile phones (e.g., smartphones), tablet PCs, and smart TVs owned by users, and may further include a removable storage medium (e.g., USB) 101. Such a removable storage medium may, of course, also include the memory provided in a vehicle's black box. The removable storage medium 101 may be connected directly to the control computer constituting the video service device 120.
The user device 100 stores images captured through a camera (including temporary storage) and may provide the captured images to the video service device 120 to request video analysis. Of course, when the video service device 120 serves as a monitoring control device and the user device 100 is a photographing device such as a CCTV camera, the captured video may be provided to the video service device 120 in real time or periodically so that monitoring is performed through analysis.
The communication network 110 includes both wired and wireless communication networks. For example, a wired/wireless Internet network may be used as, or interwork with, the communication network 110. Here, the wired network includes Internet networks such as cable networks and the public switched telephone network (PSTN), and the wireless communication networks include CDMA, WCDMA, GSM, EPC (Evolved Packet Core), LTE (Long Term Evolution), and WiBro networks. Of course, the communication network 110 according to an embodiment of the present invention is not limited thereto, and may be used, as an access network of a next-generation mobile communication system to be implemented in the future, in a cloud computing network under a cloud computing environment, a 5G network, and the like. For example, when the communication network 110 is a wired communication network, an access point in the communication network 110 may connect to a switching center of a telephone company, whereas in the case of a wireless communication network, it may connect to an SGSN or a GGSN (Gateway GPRS Support Node) operated by a communication provider to process data, or connect to various relays such as a BTS (Base Transceiver Station), NodeB, or e-NodeB to process data.
The communication network 110 may include an access point. The access point includes small base stations, such as femto or pico base stations, which are often installed in buildings. Femto and pico base stations are classified, within the classification of small base stations, according to the maximum number of user devices 100 that can connect to them. The access point includes a short-range communication module for performing short-range communication with the user device 100, such as ZigBee or Wi-Fi. The access point may use TCP/IP or RTSP (Real-Time Streaming Protocol) for wireless communication. Short-range communication may be performed according to various standards besides Wi-Fi, such as Bluetooth, ZigBee, infrared (IrDA), RF (Radio Frequency) bands such as UHF (Ultra High Frequency) and VHF (Very High Frequency), and ultra-wideband (UWB) communication. Accordingly, the access point can extract the location of a data packet, designate the best communication path for the extracted location, and forward the data packet along the designated path to the next device, for example the video service device 120. An access point can share multiple lines in a typical network environment and includes, for example, routers, repeaters, and relays.
The video service device 120 may serve as a control device that monitors a designated area through footage provided by a user device 100, such as a CCTV camera, installed in that area, though it is of course not limited to this role. For instance, it may include the server of a company that performs the high-speed video analysis operations according to an embodiment of the present invention and provides them as a service. The video service device 120 includes a DB 120a for storing a large amount of video data, and may further include a server, a monitoring computer, and the like.

The video service device 120 may also be built as a system in various forms. It may operate as a single server, or several servers may interwork. For example, as shown in FIG. 2, the video service device 120 may include a first image processing device 121 and a second image processing device 123. In this way, the video service device 120 of FIG. 1 can sharply increase image processing speed by handling image processing operations collaboratively or in a distributed manner.
According to an embodiment of the present invention, the first image processing device 121 performs object-tracking-based image processing, while the second image processing device 123 performs object-image-based image processing. Object tracking can be understood as extracting an object from the first unit image of a video and then tracking it by predicting the extracted object's motion. Motion tracking is usually done by computing a vector, that is, a direction and a distance. By contrast, object-image-based processing extracts the designated objects to be analyzed (e.g., people, vehicles) from each unit image (e.g., the first through N-th unit images) and compares the attribute information extracted from each unit image to determine whether the detections belong to the same object. For a person, if the face has feature points, that feature-point information becomes the attribute information; for a vehicle, shape or color may serve as attribute information. The final determination of an object may also be made through deep learning: similar objects are first grouped as candidates, and deep learning then makes the final decision on the object's attributes.
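The same-object decision described above can be sketched as follows. This is a minimal illustration only: the attribute fields, the detection records, and the matching threshold are assumptions for the example, not the disclosed implementation.

```python
# Illustrative sketch: decide whether detections from two unit images are the
# same object by comparing their attribute information (fields are hypothetical).

def attribute_similarity(a, b):
    """Fraction of shared attribute fields whose values agree."""
    keys = set(a) & set(b)
    if not keys:
        return 0.0
    matches = sum(1 for k in keys if a[k] == b[k])
    return matches / len(keys)

def same_object(det1, det2, threshold=0.8):
    """Treat two per-frame detections as the same object when their
    attributes (e.g. vehicle shape/color, facial feature points) agree."""
    return attribute_similarity(det1, det2) >= threshold

frame1 = {"type": "vehicle", "color": "black", "shape": "sedan", "plate": "12A3456"}
frame9 = {"type": "vehicle", "color": "black", "shape": "sedan", "plate": "12A3456"}
other  = {"type": "vehicle", "color": "white", "shape": "suv",   "plate": "98B7654"}

print(same_object(frame1, frame9))  # True
print(same_object(frame1, other))   # False
```

A real system would compare learned feature vectors rather than exact field values, with deep learning resolving the candidate group as the paragraph above notes.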
In this way, the second image processing device 123 identifies objects based on attribute information and analyzes their correlations with surrounding things or objects, deriving events from these correlations. It then generates metadata from the attribute information, the event information, and part or all of the matching footage. In deriving events, the second image processing device 123 may of course consult preset rules (or policies). In addition, the second image processing device 123 may generate further information by running deep learning on the stored metadata, and may also generate prediction information in the process.
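The metadata record described here might be structured as in the sketch below; the field names and the footage-reference format are illustrative assumptions, not the patent's schema.

```python
# Hypothetical sketch of the metadata bundle: attributes + events + a pointer
# to the matching footage, stored as one searchable record.

def build_metadata(attributes, events, video_ref):
    """Bundle object attributes, derived events, and a footage reference."""
    return {"attributes": attributes, "events": events, "video": video_ref}

record = build_metadata(
    {"category": "person", "hat": True},
    [{"type": "loitering", "frame": 240}],
    {"camera": "cctv-07", "start_frame": 180, "end_frame": 300},
)

def find_by_event(records, event_type):
    """Search side: return records whose event list contains event_type."""
    return [r for r in records if any(e["type"] == event_type for e in r["events"])]

print(len(find_by_event([record], "loitering")))  # 1
```

Because events and attributes are indexed together with the footage reference, a later query never has to re-analyze the video itself, which is what makes the search in the following paragraphs fast.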
The metadata generated this way for a given video makes searching easy. Suppose the first image processing device 121, acting for instance as a control device, receives a video analysis request from a third-party device 130 such as a police station or government office. In this case, the first image processing device 121 may perform object-tracking-based analysis itself, or may request object-image-based analysis from the second image processing device 123, and can then review the analysis results. This can be achieved, for example, by storing the data in the DB 120a and sharing it.

Search is covered in detail later, but note first that the first image processing device 121 and the second image processing device 123 differ in their search categories, and their search results differ greatly as well. That is, the second image processing device 123 can additionally search the analysis results by attribute information (e.g., a face), and rather than detecting only designated events (as, say, the first image processing device 121 does), it can go further and search with various additional event options. Event option information here may include loitering, stopping, a vehicle halting, and the like. Search is therefore fast, and because additional search terms are available, its accuracy improves.

Above all, the first image processing device 121 can offer scenario-based queries, enabling search by sentence, that is, in natural language rather than by single words, and through deep learning it can both raise the accuracy of the information and supply prediction information. Suppose, for example, that the video captures a crime. Because footage of the person involved, that is, big data, can be gathered through various channels, accurate predictions can be made on that basis and the results provided. A scenario-based query can take the form of a sequence of events, such as 'a black taxi among vehicles making an illegal U-turn' or 'a person in a hat running with a bag'.
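One way to picture a scenario query is as an ordered list of event/attribute conditions matched against a tracked object's metadata, as in the sketch below. The condition format, event names, and track structure are assumptions for illustration; the patent does not specify how scenario sentences are parsed.

```python
# Illustrative sketch: a scenario ("a black taxi making an illegal U-turn")
# expressed as structured conditions and matched against per-object metadata.

def matches_scenario(track, conditions):
    """True if the track's event sequence satisfies the scenario's
    conditions in order, with all required attributes matching."""
    i = 0
    for event in track["events"]:
        cond = conditions[i]
        if event["type"] == cond["event"] and all(
            track["attributes"].get(k) == v for k, v in cond.get("attrs", {}).items()
        ):
            i += 1
            if i == len(conditions):
                return True
    return False

scenario = [{"event": "illegal_u_turn", "attrs": {"color": "black", "type": "taxi"}}]
track = {
    "attributes": {"color": "black", "type": "taxi"},
    "events": [{"type": "enter_zone"}, {"type": "illegal_u_turn"}],
}
print(matches_scenario(track, scenario))  # True
```

A natural-language front end would first translate the sentence into such structured conditions; the matching itself then runs against metadata only, not raw video.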
In embodiments of the present invention, the term i-forensic is used in the sense that video of different formats, obtained through various channels, is processed. This implies performing picture-based video analysis while accepting recordings in a variety of formats and completing analysis and search in a short time. A video image here means a video signal. A video signal in the usual sense is divided into a picture signal and an audio signal, and further includes additional information, such as date and time information. Although the term video image is mainly used in the embodiments, it does not necessarily mean the picture signal alone; video image may therefore be used interchangeably with video data or image data.

To elaborate, the object-image-based processing performed by the second image processing device 123 may analyze pixel values in a picture (or macroblock) to delineate the boundaries between objects, and then analyze the pixel values of each delineated object to determine its type. A person, for instance, typically has dark hair and a flesh-toned face, with the two regions connected to each other. Accordingly, a person object can be isolated by extracting the dark and flesh-toned regions, and feature points can then be derived by analyzing those regions. Since people and faces come in many varieties, classification against data stored in template form is equally possible. Once the feature points of a particular person are found, that person can be recognized as the same individual even after minor changes, such as putting on glasses.
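The pixel-value reasoning above can be illustrated with a toy classifier; the RGB thresholds below are arbitrary assumptions chosen for the example, not values from the disclosure (real skin/hair detection is far more involved and usually learned).

```python
# Toy sketch of the "dark hair / flesh-toned face" heuristic: label each pixel
# and observe that a person candidate shows adjacent hair and skin regions.

def classify_pixel(rgb):
    r, g, b = rgb
    if r < 60 and g < 60 and b < 60:
        return "hair"                      # dark region
    if r > 150 and 80 < g < 180 and 60 < b < 160 and r > b:
        return "skin"                      # flesh-toned region
    return "other"

row = [(30, 30, 30), (35, 32, 31), (200, 150, 120), (210, 160, 130)]
labels = [classify_pixel(p) for p in row]
print(labels)  # ['hair', 'hair', 'skin', 'skin']
```

Feature points (nose, ear, jawline shapes, per the next paragraph) would then be extracted within the isolated regions and compared against stored templates.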
Feature points of a person may include, for example, the shape of the nose, the ears, or the jawline, and on this basis subsequent unit images are checked for the same individual. Events are then derived from the correlation between a person detected this way and surrounding things or objects. For example, suppose a particular person is found in a unit image and analysis shows the person holding trash. If, in the N-th image, the trash is gone from the person's hand and is found in that image at a location where dumping is not allowed, the system concludes that the person dumped the trash in an inappropriate place. Because such events can be derived in many forms, embodiments of the present invention are not limited to any particular one.
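The littering example amounts to a rule over the change in person-object correlation between two unit images; a minimal sketch, with hypothetical observation fields, might look like this:

```python
# Hedged sketch: derive an "illegal dumping" event from the state change of a
# tracked person between an early and a later unit image (fields illustrative).

def derive_littering_event(obs_early, obs_late):
    """obs_* are per-frame observations for the same tracked person."""
    dropped = obs_early["holding_trash"] and not obs_late["holding_trash"]
    illegal_spot = obs_late.get("trash_location") not in ("bin", None)
    if dropped and illegal_spot:
        return {"event": "illegal_dumping", "frame": obs_late["frame"]}
    return None

early = {"frame": 1, "holding_trash": True}
late = {"frame": 120, "holding_trash": False, "trash_location": "sidewalk"}
print(derive_littering_event(early, late))
```

Such rules correspond to the preset rules (policies) mentioned earlier; the same state-change pattern generalizes to other event types.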
Meanwhile, although the embodiment has described the video service device 120 as the first image processing device 121 and the second image processing device 123 interworking, it may instead be configured as a single server containing a first module and a second module. Here, the first module performs object-tracking-based image processing, while the second module performs object-image (or photo)-based image processing. Since these operations have already been described in detail above, further explanation is omitted. The video service device 120 may also interwork with a third image processing device for high-speed analysis and search operations; for example, the third image processing device might perform object-image-based analysis only for vehicles in the received video and provide those results. Since system designs of many forms are possible, embodiments of the present invention are not limited to any particular one.

The third-party device 130 includes servers operated by government offices such as police stations, and servers of companies that supply other video content. For example, a control device operated by a local government could be a third-party device 130. Such a control device may, of course, preferably be the video service device 120 according to an embodiment of the present invention, but this is not a strict limitation. Because the third-party device 130 can put video analysis to many uses, it is best understood as a provider of content, that is, of video.
FIG. 3 is a block diagram illustrating the structure of the first image processing device of FIG. 2.

As shown in FIG. 3, the first image processing device 121 according to an embodiment of the present invention includes some or all of a communication interface unit 300, a control unit 310, a forensic image execution unit 320, and a storage unit 330.

Here, "includes some or all" means that some components, such as the communication interface unit 300 or the storage unit 330, may be omitted, or that some components, such as the forensic image execution unit 320, may be integrated into another component such as the control unit 310. For a full understanding of the invention, the description assumes all components are included.

For instance, if the forensic image execution unit 320 is integrated with the control unit 310, the combination may be called a 'forensic image processing unit'. The forensic image processing unit can run a single piece of software that performs both the control operations and the forensic image processing operations, and it may equally well be implemented in hardware, in software, or in a combination of the two. Furthermore, the control unit 310 may include a CPU and memory. When configured in the form of an IC chip, the program for forensic image processing is stored in the memory and executed by the CPU, which sharply increases processing speed.

Further, as mentioned earlier, the forensic image execution unit 320 may perform object-image-based analysis while also carrying out object-tracking-based processing in parallel; the former can run in a first module and the latter in a second module, for example.

As described above, the configuration of the first image processing device 121 according to an embodiment of the present invention may differ depending on how the system is built. What is clear, above all, is that it performs object-image-based video analysis and stores the results as metadata so that search results can be returned quickly when a search is requested. Embodiments of the present invention are therefore not limited to any particular form.
The communication interface unit 300 can communicate with each of the user device 100 and the third-party device 130 of FIG. 1, and may include a communication module for this purpose. Because the communication interface unit 300 handles video, it may perform operations such as modulation/demodulation, encoding and decoding, and muxing and demuxing, though these may also be performed by the control unit 310.

When the third-party device 130 requests analysis of a video, or when footage is provided by a capture device such as a CCTV camera acting as the user device 100, the communication interface unit 300 passes it to the control unit 310.

The communication interface unit 300 may also, at the request of the control unit 310, carry out with the second image processing device 123 the setup required for forensic operation, and through this may request image processing from the second image processing device 123 and receive the analysis results.

The control unit 310 is responsible for the overall control of the communication interface unit 300, the forensic image execution unit 320, and the storage unit 330 that make up the first image processing device 121. For example, the control unit 310 may control the forensic image execution unit 320 to carry out the forensic operation with the second image processing device 123. In other words, after the first image processing device 121 and the second image processing device 123 complete the setup for interworking their analysis operations, the control unit 310 may, at the request of the forensic image execution unit 320, ask the second image processing device 123 to analyze a video and then supply a query to receive the search results.

For example, following the operation of the forensic image execution unit 320, the control unit 310 may allow a person's face, that is, a person category, to be set as an additional search term, so that videos can be searched by selecting it, and various further searches can be run based on scenario queries or event option information. In other words, the first image processing device 121 can analyze and search range A, while the second image processing device 123 can analyze and search range B; through the forensic operation, when the first image processing device 121 requests analysis and search from the second image processing device 123, the first image processing device 121 is able to view the second image processing device 123's analysis or search results covering range B. This can be regarded as being carried out by the control unit 310 and the forensic image execution unit 320.

As described above, the forensic image execution unit 320 runs the interworking program that lets the first image processing device 121 view the results of the second image processing device 123's video analysis. This may include various UX/UI programs: for example, configuring the forensic operation through a UI window, or, when the user supplies a query by a person attribute such as the 'face' category, having the second image processing device 123 return search results based on it.

The storage unit 330 can store, including temporarily, the various data processed by the first image processing device 121. If the first image processing device 121 interworks with the DB 120a, for instance, temporary data may be kept in the storage unit 330 and permanent data stored in the DB 120a. Data stored in the storage unit 330 is output at the request of the control unit 310.
FIG. 4 is a block diagram illustrating the structure of the second image processing device of FIG. 2.

As shown in FIG. 4, the second image processing device 123 according to an embodiment of the present invention includes some or all of a communication interface unit 400 and a high-speed image processing unit 410, where "includes some or all" has the same meaning as before.

The communication interface unit 400 can communicate with the first image processing device 121 of FIG. 2 according to an embodiment of the present invention. When the first image processing device 121 provides a video and requests analysis, the communication interface unit 400 passes the video and the analysis request to the high-speed image processing unit 410.

The communication interface unit 400 may also, at the request of the high-speed image processing unit 410, store the analysis results for a video in the DB 120a in the form of metadata, for example person attribute information, event information, correlation information, and the footage matching that information.

And when the first image processing device 121 provides a query, whether attribute-based, scenario-based, or event-option-based, the communication interface unit 400 passes it to the high-speed image processing unit 410 so that the corresponding search results are returned.

The high-speed image processing unit 410 performs object-image-based analysis when an analysis request arrives for a received video. As this has been fully described above, further explanation is omitted.

The high-speed image processing unit 410 may also store the analysis results in the DB 120a so that the first image processing device 121 can access that DB 120a and perform the searches described above. In other words, the first image processing device 121 may search by accessing the DB 120a directly, or it may receive search results indirectly via the high-speed image processing unit 410; since many methods are possible and the choice depends on the system designer's intent, embodiments of the present invention are not limited to any particular one. The former, however, is preferable for speed of data processing.
FIG. 5 is a block diagram illustrating another structure of the second image processing device of FIG. 2, and FIG. 6 is a block diagram illustrating the detailed structure of the high-speed analysis image execution unit of FIG. 5.

As shown in FIG. 5, a second image processing device 123' according to another embodiment of the present invention may include some or all of a communication interface unit 500, a control unit 510, a high-speed image analysis unit 520, and a storage unit 530.

Here, "includes some or all" means that some components, such as the storage unit 530, may be omitted from the second image processing device 123', or that some components, such as the high-speed image analysis unit 520, may be integrated into another component such as the control unit 510. For a full understanding of the invention, the description assumes all components are included.

Compared with the second image processing device 123 of FIG. 4, the second image processing device 123' of FIG. 5 can be said to split control operations and the high-speed analysis (and search) of video into two separate paths. To this end, they may be separated by hardware, software, or a combination of the two so as to perform different operations. That is, the control unit 510 handles only control, while high-speed analysis is carried out by the high-speed image analysis unit 520. The high-speed image analysis unit 520, of course, performs object-image-based analysis: roughly, it captures objects as images, analyzes the pixel values of each captured object image to determine its attributes, and finds feature points.

Apart from these points, the communication interface unit 500, control unit 510, high-speed image analysis unit 520, and storage unit 530 of FIG. 5 do not differ significantly from the video service device 120 of FIG. 1 or from the communication interface unit 400 and high-speed image processing unit 410 of FIG. 4, so those descriptions apply here.

Meanwhile, the high-speed image analysis unit 520 of FIG. 5 may have the same structure as the high-speed image analysis unit 520' of FIG. 6. As shown in FIG. 6, it may include some or all of a video processing unit 600, a video search unit 610, a scheduling unit 620, a manual processing unit 630, and a bookmark processing unit 640, where "includes some or all" has the same meaning as before.
Beyond the high-speed analysis described earlier, the high-speed image analysis unit 520 of FIG. 5 can perform various further operations. The video processing unit 600 can process video supplied in diverse formats, either handling the data in its native format or converting it to a designated format for processing, and can convert it back to the same format for output. In embodiments of the present invention, this may take the form of converting video data from various formats into a designated format, performing analysis, and then converting it back to the original format for export.
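The normalize-analyze-export flow can be sketched as below; the format names, the stand-in analysis step, and the container representation are all placeholders, not a real codec API.

```python
# Illustrative pipeline: normalize input video of any container format to an
# internal working format, analyze it, then export in the original format.

def analyze(frames):
    """Stand-in for the object-image-based analysis stage."""
    return {"objects_detected": len(frames)}

def process_video(video):
    original_fmt = video["format"]
    normalized = {"format": "internal", "frames": video["frames"]}
    report = analyze(normalized["frames"])
    exported = {"format": original_fmt, "frames": normalized["frames"]}
    return exported, report

video = {"format": "avi", "frames": ["f1", "f2", "f3"]}
out, report = process_video(video)
print(out["format"], report["objects_detected"])  # avi 3
```

The point of the design is that every downstream component sees one internal format, while callers always get their own format back.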
The video search unit 610 can run attribute-based searches, such as by the 'face' category, searches based on event option information, and, further, scenario-based searches. Scenario-based here means that a query supplied as a sentence, or more precisely a short sentence, is parsed and the search results are produced from it. This resembles keyword-based search, but differs in that a keyword is a word while a scenario is a sentence. A scenario sentence may describe something complex, such as 'a black taxi among vehicles making an illegal U-turn' or 'a person in a hat running with a bag'.

The scheduling unit 620 performs schedule management: it can register a video analysis to run automatically once, or periodically (daily, weekly, or monthly) at a designated time.
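The once/daily/weekly/monthly registration described for the scheduling unit can be illustrated by computing a job's next run time; the period names and date arithmetic below are a minimal sketch, not the disclosed scheduler.

```python
import datetime

# Minimal sketch: next run time for an analysis job registered once or on a
# daily/weekly/monthly period (period names are illustrative).

def next_run(last_run, period):
    if period == "once":
        return None  # one-shot jobs are not rescheduled
    if period == "daily":
        return last_run + datetime.timedelta(days=1)
    if period == "weekly":
        return last_run + datetime.timedelta(weeks=1)
    if period == "monthly":
        month = last_run.month % 12 + 1
        year = last_run.year + (1 if last_run.month == 12 else 0)
        return last_run.replace(year=year, month=month)
    raise ValueError(f"unknown period: {period}")

last = datetime.datetime(2018, 10, 26, 3, 0)
print(next_run(last, "daily"))   # 2018-10-27 03:00:00
print(next_run(last, "weekly"))  # 2018-11-02 03:00:00
```

Note the monthly case keeps the same day-of-month, which would need extra handling for dates like the 31st; a production scheduler would address that.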
The manual processing unit 630 can perform manual functions, such as providing help, making the manual available for use directly within i-forensic.

The bookmark processing unit 640 can perform the various operations involved in bookmarking videos of interest: designating bookmarks (a watch list), deleting bookmarks, exporting the bookmark list, managing multiple bookmarks, protection against bookmark deletion, and so on.
Figure PCTKR2018013184-appb-T000001
In addition, the high-speed image analysis unit 520 of FIG. 5 may further include components for performing the various operations shown in [Table 1] and [Table 2]. That is, since FIG. 6 is merely one example, components based on the contents of [Table 1] and [Table 2] may be added to the configuration of FIG. 6 as SW modules, HW modules, or combinations thereof. FIG. 6 illustrates only the representative operations in the form of modules.
Figure PCTKR2018013184-appb-T000002
FIG. 7 is a diagram illustrating a high-speed analysis video service process according to an embodiment of the present invention.
Referring to FIG. 7, the first image processing apparatus 121 of FIG. 2 may include a forensic manager 121a and a search client 121b, for example in the form of SW modules. The forensic manager 121a may be responsible for managing or controlling the first image processing apparatus 121, and the search client 121b may perform search-related operations. FIG. 7 shows the operations among the DB 120a or the third-party apparatus 130, the manager 121a and the client 121b in the first image processing apparatus 121, and further the second image processing apparatus 123.
As shown in FIG. 7, the first image processing apparatus 121 requests and receives a video image from the DB 120a, such as a VMS (Video Management System), or from the third-party apparatus 130, and then queries the second image processing apparatus 123 to complete the analysis of the video image (S701 to S712). Here, steps S705 to S707 are the process of sending the query, that is, the analysis request, to the second image processing apparatus 123; steps S708 to S710 are the process of performing the high-speed analysis operation; and steps S711 and S712 are the process of completing the analysis.
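The S701 to S712 exchange can be mirrored as a toy sequence; the message names and the object count below stand in for a real protocol, which the description does not specify.

```python
def run_flow(video_store: dict, video_id: str) -> dict:
    """Simulate fetch -> analysis request -> high-speed analysis -> completion."""
    log = []
    video = video_store[video_id]                 # S701-S704: fetch the video
    log.append("video_received")
    log.append("analysis_requested")              # S705-S707: query second device
    result = {"video_id": video_id,               # S708-S710: high-speed analysis
              "objects": len(video["frames"])}
    log.append("analysis_done")                   # S711-S712: completion
    return {"result": result, "log": log}

out = run_flow({"v1": {"frames": [0, 1, 2]}}, "v1")
```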
FIG. 8 is a diagram illustrating the operation process between the forensic manager and the search client constituting the first image processing apparatus of FIG. 2.
Referring to FIG. 8, the forensic manager 121a and the client 121b may operate by processing a video list as shown in FIG. 8 (S800 to S804). For example, when a user such as a control-room operator enters a specific search term through the first image processing apparatus 121, a list of the various video images corresponding to that search term may first be provided to the user. Through this list, the user can select the video he or she is looking for, receive the selected video image, and play it. Alternatively, the analysis results may be provided and displayed on the screen.
FIG. 9 is a diagram illustrating the operation process between the first image processing apparatus and the third-party apparatus of FIG. 2.
Referring to FIG. 9, the first image processing apparatus 121 may request the metadata stored in the DB 120a, receive it as a stream together with the video image, and display it on the screen (S900 to S905). In this process, through a collaborative operation between the manager 121a and the client 121b, the metadata, for example a video image corresponding to a specific search term and the various information matched to it, may be received from the DB 120a and displayed on the screen.
FIG. 10 is a diagram illustrating the search main screen, and FIG. 11 is a diagram illustrating the FRS setup process.
For convenience of description, referring to FIGS. 10 and 11 together with FIG. 2, the first image processing apparatus 121 according to an embodiment of the present invention may include a control computer and a control monitor or display board connected to the control computer. To allow the i-forensic operation according to an embodiment of the present invention to be performed on the control computer, an FRS connection setup operation may be performed as shown in FIGS. 10 and 11. On the screen that appears when the settings button 1000 at the upper right of FIG. 10 is selected, clicking the FRS connection setting item 1010 as shown in FIG. 11 opens a detailed setup screen as a pop-up. The IP address of the second image processing apparatus 123 is then entered in the pop-up window so that the two apparatuses can interwork. At the bottom of the pop-up window, status information 1100 indicating whether the connection has been established may be displayed, as shown in FIG. 11. This completes the i-forensic interworking process between the first image processing apparatus 121 and the second image processing apparatus 123.
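The connection setup above amounts to validating an entered address and reporting a status for the pop-up's status line. The sketch below assumes a set of reachable hosts in place of a real network handshake, and the status strings are illustrative only.

```python
import ipaddress

def frs_connect(address: str, reachable_hosts: set) -> str:
    """Validate the entered IP address and report a connection status string."""
    try:
        ipaddress.ip_address(address)             # reject malformed input early
    except ValueError:
        return "invalid address"
    # A real client would attempt a handshake with the second image
    # processing apparatus here; membership in a set stands in for that.
    return "connected" if address in reachable_hosts else "disconnected"
```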
FIGS. 12 to 20 are diagrams for explaining the offline analysis screens.
For convenience of description, referring to FIGS. 12 to 20 together with FIG. 2, the first image processing apparatus 121 of FIG. 2 can load a desired file by clicking the file import button 1200 displayed on the screen as shown in FIG. 12. The imported files are registered in the analysis channel list area 1210 on the left side of the screen, that is, the first area. If a file has already been analyzed, it is still placed in the list area 1210 during the import process, but analysis of it may not be possible.
Next, as shown in FIG. 13, when a specific video file is selected in the analysis channel list area 1210 and the offline analysis button 1300 is selected, the video image for which analysis was requested may be played in the video display area, that is, the second area. At the bottom of the second area where the video image is played, the video is reproduced in a small thumbnail-like form, and additional information such as the time may be displayed on it.
In addition, when performing offline analysis, the user who requests analysis of the video image that is the analysis target 1410 in the analysis channel list area 1210, as shown in FIG. 14, can open a pop-up window 1400 for setting the analysis type. It can be seen that a 'face' category has been added to the pop-up window 1400. This can be regarded as an item for checking the attribute information analyzed on the basis of person object images according to an embodiment of the present invention.
According to an embodiment of the present invention, the video image of the selected analysis target 1410 may undergo metadata analysis of three types per file. Since this follows a method designated for the embodiment, it may be changed as needed. However, because the embodiment of the present invention provides data for video images of various formats, it is desirable to support more types than that. In addition, the system may be designed so that an analysis target cannot be re-analyzed, and the analysis results may be retained unless a deletion request is made.
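The per-file rules described above (results persist until an explicit deletion request, and a finished file is not re-analyzed) can be sketched as follows; the analysis type names are assumptions for illustration.

```python
ANALYSIS_TYPES = ("general_object", "face", "event")   # illustrative names

class AnalysisRegistry:
    """Toy registry enforcing no re-analysis and deletion-only removal."""

    def __init__(self):
        self._results = {}

    def analyze(self, file_id: str, analysis_type: str) -> bool:
        """Register an analysis result; refuse if the file was already analyzed."""
        if analysis_type not in ANALYSIS_TYPES:
            raise ValueError(f"unknown analysis type: {analysis_type}")
        if file_id in self._results:       # re-analysis is not permitted
            return False
        self._results[file_id] = {"type": analysis_type}
        return True

    def delete(self, file_id: str) -> None:
        """Only an explicit deletion request removes stored results."""
        self._results.pop(file_id, None)
```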
When the analysis of the video images selected through the process of FIGS. 14 and 15 is completed, those video images disappear from the analysis channel list area, and the analyzed video images are moved to the analysis standby/progress list area 1600 as shown in FIG. 15. This area may be the third area. The third area may be divided into sub-areas: an area containing the analysis standby/progress list, an area containing the analysis completed list, and an area containing the analysis failed list.
The various video images contained in the third area may be brought back into the analysis channel list area, as shown in FIG. 17, and analyzed again. As shown in FIG. 18, a video is classified in the third area as analysis completed when an object matching the desired analysis result exists; when the analysis finished but no object was found, or when the analysis could not be completed due to a network error or a problem with the file, the video is placed in the analysis failed list area.
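The triage into completed and failed lists reduces to a simple rule: a finished analysis with at least one detected object is completed, and everything else (no objects, or an aborted run) is failed. The record fields below are illustrative assumptions.

```python
def triage(jobs: list) -> dict:
    """Split analysis jobs into 'completed' and 'failed' lists by result."""
    lists = {"completed": [], "failed": []}
    for job in jobs:
        # Completed only when the run finished AND at least one object was found;
        # unfinished runs (network/file errors) and object-less runs both fail.
        ok = job.get("finished") and job.get("objects", 0) > 0
        lists["completed" if ok else "failed"].append(job["id"])
    return lists
```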
In the case of a video image whose analysis failed as in FIG. 18, the analysis can be retried under different conditions as in FIG. 19. For example, if an analysis attempted with the general object item as in FIG. 19 failed, the analysis is retried as in FIG. 20 on the basis of a person, that is, the person's attribute information, according to an embodiment of the present invention.
As a result, the results of the person search will be received.
FIGS. 21 to 30 are diagrams for explaining the offline search screens.
For convenience of description, referring to FIGS. 21 to 30 together with FIG. 2, the first image processing apparatus 121 of FIG. 2 according to an embodiment of the present invention may display a search screen as shown in FIG. 21 on the monitor. That is, the search item 2100 is selected on the main screen. Then, by selecting the video list item 2110 to search, the retrieved list can be loaded as shown in FIG. 22. This list is provided from the analysis-completed list. Of course, the user can delete specific items from the list.
After the video images of that list are placed in the video list area to be searched, that is, the fourth area, no playing video appears on the screen while no specific video image is selected, as shown in FIG. 23.
Next, as shown in FIG. 24, the user selects a specific video image and then sets various search expressions. For video images of different formats, search conditions (expressions) customized to the corresponding video image are displayed on the screen. For example, the search window 2500 may display different items for a video image of a first format and a video image of a second format.
When playing a video image, a playback option such as continuous playback of object sections is available in addition to normal playback, as shown in FIG. 25, so the playback method can be determined by selecting one of them.
In addition, when setting the search conditions, the search type item of the search window 2500 can be selected to search for the persons, vehicles, and other unidentified objects that constitute general objects, and furthermore a person-centered, that is, face-based, search can be performed.
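An illustrative filter over such search types might look as follows; the record fields 'category' and 'face_attrs' are assumed metadata keys, not the patent's schema.

```python
def search(records: list, search_type: str) -> list:
    """Filter metadata records by search type (person/vehicle/unidentified/face)."""
    if search_type == "face":
        # Face-based search: keep only records carrying face attribute data.
        return [r for r in records if r.get("face_attrs")]
    # General object search: match the object category directly.
    return [r for r in records if r.get("category") == search_type]
```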
After the search is performed, when a video is played on the screen, the period to be played can be designated through the play bar as shown in FIG. 27, and the playback time can also be set. In the case of continuous playback of object sections, as shown in FIG. 28, playback proceeds continuously.
FIG. 29 shows that multiple thumbnails can be selected and played back continuously, and FIG. 30 shows that a specific video image, such as a clip, can be set as a favorite by selecting the favorite button 3000.
FIG. 31 is a flowchart illustrating the operation process of a high-speed analysis image processing apparatus according to an embodiment of the present invention.
For convenience of description, referring to FIG. 31 together with FIGS. 1 and 2, the video service apparatus 120 or the first image processing apparatus 121 according to an embodiment of the present invention (hereinafter described as the first image processing apparatus), serving as the high-speed analysis image processing apparatus, receives video images (S3100). The received video images include images of different formats.
Next, the first image processing apparatus 121 extracts and analyzes the face images of persons in the received video image to generate face attribute information, analyzes the video image on the basis of the generated face attribute information, and generates the analysis results as metadata (S3110).
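Step S3110 can be summarized, at a very high level, as the following sketch; the face detector and attribute model are stand-ins for trained components that the description does not detail.

```python
def build_metadata(frames, detect_faces, attributes_of):
    """Extract face crops per frame and bundle their attributes as metadata."""
    metadata = []
    for idx, frame in enumerate(frames):
        for face in detect_faces(frame):           # face extraction step
            metadata.append({"frame": idx,
                             "attrs": attributes_of(face)})  # attribute step
    return metadata

# Toy stand-ins for demonstration only: one face in frame 1, none in frame 0.
meta = build_metadata(
    frames=["f0", "f1"],
    detect_faces=lambda f: [f + "-face"] if f == "f1" else [],
    attributes_of=lambda face: {"hat": True},
)
# meta: [{"frame": 1, "attrs": {"hat": True}}]
```

In a real pipeline the callables would wrap trained detection and attribute networks, and the metadata would be persisted to the DB for later search.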
The metadata stored as this analysis result is stored in the DB 120a of FIG. 1, after which the first image processing apparatus 121 provides video analysis results according to the user's various search expressions. Above all, in the embodiment of the present invention a person-centered analysis is additionally performed; the analysis is of course carried out by analyzing object images in the video, but analysis results for the video image are further provided on the basis of the attribute information of the corresponding person.
At this time, the search may be performed with the event option information of the person added, and above all a scenario-based search may be performed.
Since these matters have been sufficiently described above, further description is omitted.
Meanwhile, although all the components constituting the embodiments of the present invention have been described as combined into one or operating in combination, the present invention is not necessarily limited to these embodiments. That is, within the scope of the object of the present invention, all of the components may operate selectively combined with one or more others. In addition, although each of the components may be implemented as independent hardware, some or all of the components may be selectively combined and implemented as a computer program having program modules that perform some or all of the functions combined in one or more pieces of hardware. The codes and code segments constituting such a computer program may be easily deduced by those skilled in the art. Such a computer program may be stored in a non-transitory computer readable medium and read and executed by a computer, thereby implementing an embodiment of the present invention.
Here, a non-transitory readable recording medium means not a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and can be read by a device. Specifically, the above-described programs may be provided stored in a non-transitory readable recording medium such as a CD, DVD, hard disk, Blu-ray disc, USB, memory card, or ROM.
Although preferred embodiments of the present invention have been shown and described above, the present invention is not limited to the specific embodiments described; various modifications may of course be made by those of ordinary skill in the art to which the invention pertains without departing from the gist of the invention as claimed in the claims, and such modifications should not be understood separately from the technical idea or outlook of the present invention.
** Description of Reference Numerals **
100: user device 101: removable storage medium
110: communication network 120: video service apparatus
121: first image processing apparatus 123, 123': second image processing apparatus
130: third-party apparatus 300, 400, 500: communication interface unit
310, 510: control unit 320: forensic image execution unit
330, 530: storage unit 410: high-speed image processing unit
520, 520': high-speed image analysis unit 600: video processing unit
610: video search unit 620: scheduling unit
630: manual processing unit 640: bookmark processing unit

Claims (14)

  1. A high-speed analysis image processing apparatus comprising:
    a communication interface unit configured to receive video images; and
    a control unit configured to extract an analysis-target object from the received video images, analyze it to generate attribute information of the object, and analyze the video images on the basis of the generated attribute information of the object to generate an analysis result as metadata.
  2. The high-speed analysis image processing apparatus of claim 1,
    wherein the communication interface unit interworks with an external apparatus that performs object-tracking-based image processing, and the control unit performs designated-object-centered high-speed analysis image processing using the received video images at the request of the external apparatus.
  3. The high-speed analysis image processing apparatus of claim 1,
    wherein the control unit analyzes an event related to the analysis-target object to further generate event information, and stores the generated event information included in the metadata.
  4. The high-speed analysis image processing apparatus of claim 1,
    wherein the control unit further generates deep-learning-based metadata using the attribute information of the object and the metadata.
  5. The high-speed analysis image processing apparatus of claim 1,
    wherein the control unit includes a video processing unit that processes video images of different formats received through the communication interface unit.
  6. The high-speed analysis image processing apparatus of claim 1,
    wherein the control unit searches for and provides the generated metadata matching a scenario-based search command received through the communication interface unit.
  7. The high-speed analysis image processing apparatus of claim 1,
    wherein the communication interface unit selectively receives video images from a photographing apparatus at a designated place, a removable storage medium (USB), and a third-party apparatus.
  8. A method of driving a high-speed analysis image processing apparatus including a communication interface unit and a control unit, the method comprising:
    receiving video images through the communication interface unit; and
    extracting, by the control unit, an analysis-target object from the received video images, analyzing it to generate attribute information of the object, and analyzing the video images on the basis of the generated attribute information of the object to generate an analysis result as metadata.
  9. The method of claim 8,
    wherein the communication interface unit interworks with an external apparatus that performs object-tracking-based image processing, the method further comprising performing designated-object-centered high-speed analysis image processing using the received video images at the request of the external apparatus.
  10. The method of claim 8, further comprising:
    analyzing an event related to the analysis-target object to further generate event information; and
    storing the generated event information included in the metadata.
  11. The method of claim 8,
    further comprising generating deep-learning-based metadata using the attribute information of the object and the metadata.
  12. The method of claim 8,
    further comprising processing video images of different formats received through the communication interface unit.
  13. The method of claim 8,
    further comprising searching for and providing the generated metadata matching a scenario-based search command received through the communication interface unit.
  14. The method of claim 8,
    further comprising selectively receiving video images from a photographing apparatus at a designated place, a removable storage medium, and a third-party apparatus.
PCT/KR2018/013184 2018-10-22 2018-11-01 High-speed analysis image processing apparatus and driving method for apparatus WO2020085558A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020180125702A KR101954717B1 (en) 2018-10-22 2018-10-22 Apparatus for Processing Image by High Speed Analysis and Driving Method Thereof
KR10-2018-0125702 2018-10-22

Publications (1)

Publication Number Publication Date
WO2020085558A1 true WO2020085558A1 (en) 2020-04-30

Family

ID=65760982

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/013184 WO2020085558A1 (en) 2018-10-22 2018-11-01 High-speed analysis image processing apparatus and driving method for apparatus

Country Status (2)

Country Link
KR (1) KR101954717B1 (en)
WO (1) WO2020085558A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102247359B1 (en) * 2019-07-31 2021-05-04 (주)유디피 Image analysis system and method for remote monitoring
KR102152237B1 (en) * 2020-05-27 2020-09-04 주식회사 와치캠 Cctv central control system and method based on situation analysis
KR102246617B1 (en) * 2021-03-16 2021-04-30 넷마블 주식회사 Method to analyze scene

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110305394A1 (en) * 2010-06-15 2011-12-15 David William Singer Object Detection Metadata
US20160065906A1 (en) * 2010-07-19 2016-03-03 Ipsotek Ltd Video Analytics Configuration
US20170109582A1 (en) * 2015-10-19 2017-04-20 Disney Enterprises, Inc. Incremental learning framework for object detection in videos
KR20170084657A (en) * 2016-01-12 2017-07-20 소프트온넷(주) System and method for generating narrative report based on video recognition and event trancking
KR20180019874A (en) * 2016-08-17 2018-02-27 한화테크윈 주식회사 The Apparatus And System For Searching

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101492473B1 (en) * 2014-04-04 2015-02-11 주식회사 사라다 Context-aware cctv intergrated managment system with user-based
KR20160061856A (en) 2014-11-24 2016-06-01 삼성전자주식회사 Method and apparatus for recognizing object, and method and apparatus for learning recognizer
KR102147361B1 (en) 2015-09-18 2020-08-24 삼성전자주식회사 Method and apparatus of object recognition, Method and apparatus of learning for object recognition
KR101925907B1 (en) 2016-06-03 2019-02-26 (주)싸이언테크 Apparatus and method for studying pattern of moving objects using adversarial deep generative model
KR101696801B1 (en) * 2016-10-21 2017-01-16 이형각 integrated image monitoring system based on IoT camera

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110305394A1 (en) * 2010-06-15 2011-12-15 David William Singer Object Detection Metadata
US20160065906A1 (en) * 2010-07-19 2016-03-03 Ipsotek Ltd Video Analytics Configuration
US20170109582A1 (en) * 2015-10-19 2017-04-20 Disney Enterprises, Inc. Incremental learning framework for object detection in videos
KR20170084657A (en) * 2016-01-12 2017-07-20 소프트온넷(주) System and method for generating narrative report based on video recognition and event trancking
KR20180019874A (en) * 2016-08-17 2018-02-27 한화테크윈 주식회사 The Apparatus And System For Searching

Also Published As

Publication number Publication date
KR101954717B1 (en) 2019-03-06

Similar Documents

Publication Publication Date Title
WO2020085558A1 (en) High-speed analysis image processing apparatus and driving method for apparatus
WO2014069943A1 (en) Method of providing information-of-users' interest when video call is made, and electronic apparatus thereof
WO2014193065A1 (en) Video search apparatus and method
WO2021167374A1 (en) Video search device and network surveillance camera system including same
WO2017138766A1 (en) Hybrid-based image clustering method and server for operating same
WO2013165083A1 (en) System and method for providing image-based video service
WO2015147437A1 (en) Mobile service system, and method and device for generating location-based album in same system
WO2021145565A1 (en) Method, apparatus, and system for managing image captured by drone
CN110543584B (en) Method, device, processing server and storage medium for establishing face index
CN105072478A (en) Life recording system and method based on wearable equipment
WO2018164532A1 (en) System and method for enhancing augmented reality (ar) experience on user equipment (ue) based on in-device contents
WO2022186426A1 (en) Image processing device for automatic segment classification, and method for driving same device
US20120147179A1 (en) Method and system for providing intelligent access monitoring, intelligent access monitoring apparatus
WO2020067615A1 (en) Method for controlling video anonymization device for improving anonymization performance, and device therefor
WO2019103443A1 (en) Method, apparatus and system for managing electronic fingerprint of electronic file
WO2019194569A1 (en) Image searching method, device, and computer program
WO2019083073A1 (en) Traffic information providing method and device, and computer program stored in medium in order to execute method
WO2016036049A1 (en) Search service providing apparatus, system, method, and computer program
WO2023113158A1 (en) Criminal profiling method, device performing same, and computer program
WO2016129804A1 (en) Method for generating webpage on basis of consumer behavior patterns and method for utilizing webpage
KR102254037B1 (en) Apparatus for Image Analysis and Driving Method Thereof
WO2019231089A1 (en) System for performing bi-directional inquiry, comparison and tracking on security policies and audit logs, and method therefor
WO2013089390A1 (en) System for providing personal information based on the creation and consumption of content
KR20200093264A (en) Method and system for managing traffic violation using video/image device
WO2015129987A1 (en) Service apparatus for providing object recognition-based advertisement, user equipment for receiving object recognition-based advertisement, system for providing object recognition-based advertisement, method therefor and recording medium therefor in which computer program is recorded

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937663

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18937663

Country of ref document: EP

Kind code of ref document: A1