WO2020085558A1 - High-speed analysis image processing apparatus and control method therefor - Google Patents


Info

Publication number
WO2020085558A1
WO2020085558A1 (PCT/KR2018/013184)
Authority
WO
WIPO (PCT)
Prior art keywords
image processing
image
video
analysis
communication interface
Prior art date
Application number
PCT/KR2018/013184
Other languages
English (en)
Korean (ko)
Inventor
고현준
장정훈
최준호
전창원
Original Assignee
주식회사 인텔리빅스
Priority date
Filing date
Publication date
Application filed by 주식회사 인텔리빅스
Publication of WO2020085558A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/60 - Memory management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Definitions

  • The present invention relates to a high-speed analysis image processing apparatus and a method of driving the apparatus, and more specifically to an apparatus and driving method that quickly process image analysis and search for video images of various formats based on, for example, a person's face or person-related event option information.
  • Demand is increasing for intelligent video surveillance systems in which CCTV cameras capture public places and the images are automatically analyzed to extract a large number of unspecified objects and analyze their motion, so that an administrator is automatically alerted when abnormal motion is detected, or the information is delivered to other connected automation systems.
  • An embodiment of the present invention aims to provide a high-speed analysis image processing apparatus, and a method for driving the apparatus, that quickly process image analysis and search for video images of various formats based on, for example, a person's face or person-related event option information.
  • According to an embodiment, the high-speed analysis image processing apparatus includes a communication interface unit that receives a video image, and a control unit that extracts and analyzes an object to be analyzed from the received video image to generate attribute information of the object, analyzes the video image based on the attribute information, and generates the analysis result as metadata.
  • the communication interface unit interlocks with an external device that performs object tracking-based image processing, and the control unit may perform high-speed analysis image processing based on a designated object using the received video image at the request of the external device.
  • the controller analyzes an event related to the object to be analyzed to further generate event information, and may include and store the generated event information in the metadata.
  • the controller may further generate deep learning-based metadata using attribute information of the object and the metadata.
  • the control unit may include a video processing unit that processes video images of different formats received through the communication interface unit.
  • the controller may search for and provide the generated metadata matching the search command based on a scenario-based search command received through the communication interface unit.
  • Further, the communication interface unit may selectively receive video images from a photographing device at a designated place, a removable storage medium (e.g., USB), or a third party device.
  • According to an embodiment, a method of driving a high-speed analysis image processing apparatus that includes a communication interface unit and a control unit includes: receiving, by the communication interface unit, a video image; extracting and analyzing, by the control unit, an object to be analyzed from the received video image to generate object attribute information; and analyzing the video image based on the generated object attribute information to generate the analysis result as metadata.
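The claimed receive, extract, analyze, and store-as-metadata flow can be sketched as follows. This is a minimal illustration only: the patent does not specify a detector or a metadata schema, so the detection step is stubbed out and all field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ObjectAttributes:
    """Attribute information extracted for one detected object (illustrative)."""
    object_id: int
    category: str   # e.g. "person", "vehicle"
    color: str      # a coarse attribute such as dominant color
    feature: tuple  # placeholder for a feature-point vector

def extract_objects(frame):
    """Stand-in for the control unit's object extraction step.

    A real system would run a detector on pixel data; here a 'frame'
    is simply a list of pre-labelled detections.
    """
    return [ObjectAttributes(i, d["category"], d["color"], tuple(d["feature"]))
            for i, d in enumerate(frame)]

def analyze_video(frames):
    """Generate the analysis result as metadata, mirroring the claimed steps."""
    metadata = []
    for frame_no, frame in enumerate(frames):
        for obj in extract_objects(frame):
            metadata.append({
                "frame": frame_no,
                "object_id": obj.object_id,
                "category": obj.category,
                "color": obj.color,
            })
    return metadata

frames = [
    [{"category": "person", "color": "black", "feature": [0.1, 0.2]}],
    [{"category": "vehicle", "color": "white", "feature": [0.7, 0.3]}],
]
meta = analyze_video(frames)
```

The metadata records, not the raw video, are what the later search steps operate on, which is what makes the subsequent searches fast.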
  • Here, the communication interface unit interlocks with an external device that performs object tracking-based image processing, and the method may further include performing, at the request of the external device, high-speed analysis image processing centered on a designated object using the received video image.
  • the driving method of the high-speed analysis image processing apparatus may further include analyzing an event related to the object to be analyzed, generating event information, and storing the generated event information in the metadata.
  • the driving method of the high-speed analysis image processing apparatus may further include generating deep learning-based metadata using attribute information of the object and the metadata.
  • the method for driving the high-speed analysis image processing apparatus may further include processing video images of different formats received through the communication interface unit.
  • the driving method of the high-speed analysis image processing apparatus may further include searching and providing the generated metadata matching the search command based on a scenario-based search command received through the communication interface unit.
  • Further, the method for driving the high-speed analysis image processing apparatus may further include selectively receiving video images from a photographing apparatus at a designated place, a removable storage medium, or a third party apparatus.
  • According to an embodiment of the present invention, as search categories are added (for example, events such as loitering, stopping, or vehicle stopping), the amount of data to be searched decreases, so the search speed increases.
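The effect described above, where each added category shrinks the set that must be searched, can be illustrated with a toy metadata filter; the record fields and event names are invented for the example.

```python
# Toy metadata records as the apparatus might store them after analysis.
metadata = [
    {"id": 1, "category": "person", "event": "loitering"},
    {"id": 2, "category": "person", "event": "none"},
    {"id": 3, "category": "vehicle", "event": "stopped"},
    {"id": 4, "category": "person", "event": "loitering"},
]

def search(records, **criteria):
    """Return only records matching every given criterion.

    Each extra predicate filters the candidate set before any further,
    more expensive matching would run.
    """
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

broad = search(metadata, category="person")                      # 3 candidates
narrow = search(metadata, category="person", event="loitering")  # 2 candidates
```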
  • In addition, a search window (e.g., a UI) for entering a person, a person-related event, or a scenario-based search term can be made simple and intuitive, improving the user's search convenience.
  • Furthermore, deep learning may be performed using object properties, person attribute information, event information, and the like stored as metadata, increasing search accuracy and generating additional information that allows flexible responses to events and accidents.
  • FIG. 1 is a view showing a high-speed analysis video service system according to an embodiment of the present invention
  • FIG. 2 is an exemplary view schematically showing FIG. 1,
  • FIG. 3 is a block diagram illustrating the structure of the first image processing apparatus of FIG. 2,
  • FIG. 4 is a block diagram illustrating the structure of the second image processing apparatus of FIG. 2,
  • FIG. 5 is a block diagram illustrating another structure of the second image processing apparatus of FIG. 2,
  • FIG. 6 is a block diagram illustrating the detailed structure of the image high-speed analysis unit of FIG. 5,
  • FIG. 7 is a view showing a high-speed analysis video service process according to an embodiment of the present invention.
  • FIG. 8 is a view for explaining an operation process between a forensic manager and a search client constituting the first image processing apparatus of FIG. 2;
  • FIG. 9 is a view for explaining an operation process of the first image processing apparatus and the third party apparatus of FIG. 2,
  • FIG. 10 is a diagram illustrating a search main screen
  • FIG. 11 is a diagram illustrating an FRS setup process
  • FIGS. 21 to 30 are views for explaining an offline search screen, and
  • FIG. 31 is a flowchart illustrating an operation process of a high-speed analysis image processing apparatus according to an embodiment of the present invention.
  • FIG. 1 is a diagram illustrating a high-speed analysis image service system according to an embodiment of the present invention
  • FIG. 2 is an exemplary diagram schematically showing FIG. 1.
  • As shown in FIG. 1, the high-speed analysis image service system 90 includes a user device 100, a communication network 110, an image service device 120, and a third party device 130.
  • Some of these elements may be integrated into a network device (for example, a switching device) in the communication network 110; all of them are described here to aid a sufficient understanding of the invention.
  • The user device 100 includes imaging devices such as CCTVs installed in designated places to monitor events and accidents, as well as desktop computers, laptop computers, mobile phones (e.g., smartphones), tablet PCs, and smart TVs owned by users.
  • A removable storage medium (e.g., USB) 101 may be further included.
  • such a removable storage medium may include a memory provided in a black box of a vehicle.
  • the removable storage medium 101 may be directly connected to the control computer constituting the video service device 120.
  • the user device 100 may store an image captured through a camera (including a temporary storage site), and provide a captured image to the image service device 120 to request image analysis.
  • The captured video may be provided to the video service device 120 in real time or periodically, so that monitoring can be performed through analysis.
  • the communication network 110 includes both wired and wireless communication networks.
  • a wired / wireless Internet network may be used or interlocked as the communication network 110.
  • Here, the wired network includes Internet networks such as cable networks and the public switched telephone network (PSTN), and the wireless communication networks include CDMA, WCDMA, GSM, Evolved Packet Core (EPC), Long Term Evolution (LTE), WiBro networks, and the like.
  • Of course, the communication network 110 according to an embodiment of the present invention is not limited thereto, and may be a cloud computing network in a cloud computing environment, a 5G network, or another access network of a next-generation mobile communication system to be implemented in the future.
  • For example, in a wired communication network, an access point in the communication network 110 can connect to a telephone exchange or the like, whereas in a wireless communication network it can process data by connecting to an SGSN or a Gateway GPRS Support Node (GGSN) operated by a carrier, or to various repeaters such as a BTS (Base Transceiver Station), NodeB, or e-NodeB.
  • the communication network 110 may include an access point.
  • Access points include small base stations, such as femto or pico base stations, which are often installed in buildings.
  • Femto and pico base stations are classified according to the maximum number of user devices 100 that can connect, following the small-base-station classification.
  • The access point includes a short-range communication module for performing short-range communication, such as Zigbee or Wi-Fi, with the user device 100.
  • the access point can use TCP / IP or RTSP (Real-Time Streaming Protocol) for wireless communication.
  • Here, short-range communication may be performed according to various standards besides Wi-Fi, such as Bluetooth, Zigbee, infrared (IrDA), radio frequency (RF) including ultra high frequency (UHF) and very high frequency (VHF), and ultra-wideband (UWB) communication.
  • the access point can extract the location of the data packet, designate the best communication path for the extracted location, and transfer the data packet to the next device, for example, the video service device 120 along the designated communication path.
  • In addition, the access point can share multiple lines in a typical network environment and may include, for example, routers and repeaters.
  • The video service device 120 may serve as a control device that monitors an area through captured images provided by the user device 100 installed in a designated area, for example a CCTV.
  • It may also include the server of a company that provides services by performing high-speed image analysis operations.
  • the video service device 120 includes a DB 120a for storing a large amount of video data, and may further include a server and a control computer.
  • the video service device 120 may be constructed in a variety of forms. For example, it can operate in the form of a single server, and multiple servers can work together.
  • the image service device 120 may include, for example, a first image processing device 121 and a second image processing device 123 as shown in FIG. 2. Through this, the image service device 120 of FIG. 1 can rapidly increase the image processing speed by collaborating or distributing the image processing operation.
  • the second image processing apparatus 123 performs an object image-based image processing operation.
  • Here, object tracking is a method of extracting an object from the first unit image (frame) of a video and predicting the motion of the extracted object; motion tracking is usually performed in vector form, i.e., by calculating direction and distance.
  • In contrast, object image-based image processing extracts a designated object to be analyzed (e.g., a person or a vehicle) from each unit image (for example, the first to Nth unit images), compares the attribute information of the objects extracted from each unit image, and determines whether they are the same object.
  • the feature point information becomes attribute information.
  • shape or color may be attribute information.
  • The final determination of the object may be made through deep learning: for example, similar objects are classified into a candidate group, and the properties of the object are finally determined through deep learning.
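The attribute-comparison and candidate-group step can be sketched as below. Cosine similarity over a feature-point vector stands in for whatever comparison the apparatus actually uses, and the threshold is arbitrary; the patent leaves the final determination to a deep-learning model, which is not reproduced here.

```python
import math

def similarity(a, b):
    """Cosine similarity between two attribute (feature-point) vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def candidate_group(query, objects, threshold=0.9):
    """Collect objects similar enough to plausibly be the same object.

    In the patent's scheme, this candidate group would then be passed
    to a deep-learning classifier for the final determination.
    """
    return [o for o in objects
            if similarity(query["feature"], o["feature"]) >= threshold]

query = {"id": "q", "feature": (1.0, 0.0)}
objects = [
    {"id": "a", "feature": (0.99, 0.05)},
    {"id": "b", "feature": (0.0, 1.0)},
    {"id": "c", "feature": (0.95, 0.1)},
]
group = candidate_group(query, objects)
```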
  • In addition, the second image processing apparatus 123 determines an object based on the attribute information and analyzes its correlation with surrounding objects, deriving an event from this analysis. Part or all of the attribute information and event information, together with the matching captured image, are then generated as metadata.
  • the second image processing device 123 may refer to a predetermined rule (or policy).
  • the second image processing apparatus 123 may further generate new information by performing a deep learning operation using the stored metadata, and may also generate prediction information in the process.
  • Metadata generated in this way for a specific video image facilitates searching.
  • The first image processing device 121 is a control device and may receive, for example, a video analysis request from a third party device 130 such as a police station or a government office.
  • the first image processing device 121 may perform image analysis based on object tracking on its own, or may request the second image processing device 123 to perform object image based image analysis. .
  • In addition, the second image processing device 123 can search the image analysis results using attribute information (e.g., a face), and rather than detecting only a designated event as the first image processing device 121 does, it can extend the search by adding various event options.
  • Here, the event option information may include loitering, stopping, vehicle stopping, and the like; the search is therefore fast, and because search terms are added, the accuracy of the search increases.
  • Furthermore, the first image processing device 121 may provide a scenario-based search term for searching by sentence, i.e., natural language, rather than by word, and may receive prediction information together with results whose accuracy has been increased by deep learning.
  • A scenario-based search term may describe a series of events, such as 'a black taxi among illegally turning vehicles' or 'a person wearing a hat running with a bag'.
  • In an embodiment of the present invention, the term i-forensic is used in the sense that video images of different formats obtained through various paths are processed; this implies that, for example, picture-based image analysis is performed, yet analysis and search can be completed in a short time even when images recorded in various formats are received.
  • A video image means a video signal. Generally, a video signal is divided into an image signal and an audio signal, and further includes additional information such as date and time information.
  • In an embodiment of the present invention, the term video image is mainly used, but it does not necessarily mean only the image signal; video image may therefore be used interchangeably with video data and image data.
  • An image processing operation such as the object image-based image analysis performed by the second image processing apparatus 123 analyzes pixel values in a picture (or macroblock) to delineate object boundaries, and analyzes the pixel values of the segmented object to determine its type. For example, a person has black hair and a flesh-colored face, and the two regions are connected, so human objects can be classified by extracting black and flesh-colored areas and analyzing the corresponding parts for feature points. Since people and human faces vary widely, classification may be based on data stored in template form. Once the feature points of a specific person are found, the person can be judged to be the same even after a change in appearance, such as putting on glasses.
  • For example, the shape of the nose, the shape of the ears, or the shape of the jawline may be a person's feature points, and on this basis it is determined whether the person in subsequent unit images is the same person.
  • When the same person is detected in this way, an event is derived through the correlation between the detected person and surrounding objects. For example, suppose a specific person is found in the first unit image and is analyzed to be holding garbage; if in the Nth unit image the garbage is gone from the person's hand and is found at a dump site in the image, it is judged that the person dumped the garbage in an inappropriate place. Since such events can be derived in various forms, the embodiment of the present invention is not limited to any one form.
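The garbage-dumping example above can be expressed as a simple rule over per-frame observations; the observation fields and zone label are hypothetical stand-ins for the analyzer's actual outputs.

```python
def derive_littering_event(observations):
    """Derive an 'illegal dumping' event from per-frame observations.

    Each observation records whether the tracked person is holding the
    item and which zone the item is seen in. The rule fires when the
    person was earlier seen holding the item and the item later appears,
    no longer held, inside a dump-prohibited zone.
    """
    was_holding = False
    for obs in observations:
        if obs["holding"]:
            was_holding = True
        elif was_holding and obs["item_zone"] == "no-dump-area":
            return {"event": "illegal_dumping", "frame": obs["frame"]}
    return None

observations = [
    {"frame": 1, "holding": True,  "item_zone": "street"},
    {"frame": 2, "holding": True,  "item_zone": "street"},
    {"frame": 3, "holding": False, "item_zone": "no-dump-area"},
]
event = derive_littering_event(observations)
```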
  • In FIG. 2, the first image processing apparatus 121 and the second image processing apparatus 123 are shown interworking as the image service apparatus 120, but they may instead be configured as a first module and a second module within, for example, a single server.
  • the image service device 120 may further interwork a third image processing device for high-speed analysis and search operations.
  • For example, the third image processing apparatus may perform an object image-based analysis operation only for vehicle analysis on the received video image and provide the analysis result. Since various system designs are possible, the embodiment of the present invention is not limited to any one form.
  • the third party device 130 includes a server operated by a government office such as a police station, and a server of a company providing other content videos.
  • a control device operated by a local government may be a third party device 130.
  • Such a control device may preferably be the video service device 120 according to an embodiment of the present invention.
  • In short, as long as the third party device 130 can use the image analysis for various purposes, it should be understood as a provider that supplies content, that is, video images.
  • FIG. 3 is a block diagram illustrating the structure of the first image processing apparatus of FIG. 2.
  • As shown in FIG. 3, the first image processing apparatus 121 according to an embodiment includes some or all of a communication interface unit 300, a control unit 310, a forensic image execution unit 320, and a storage unit 330.
  • Here, 'including some or all' means that some components may be omitted or integrated with other components such as the control unit 310; all components are described to aid a sufficient understanding of the invention.
  • For example, when the forensic image execution unit 320 is integrated with the control unit 310, the combination may be referred to as a 'forensic image processing unit'.
  • the forensic image processing unit may execute one software to perform a control operation and a forensic image processing operation together.
  • the forensic image processing unit may be configured to operate in hardware, software, or a combination thereof.
  • More specifically, the control unit 310 may include a CPU and a memory. When configured as an IC chip, a program for forensic image processing is stored in the memory and executed by the CPU, which sharply increases processing speed.
  • the forensic image execution unit 320 may perform image analysis based on an object image, but may also perform an image processing operation based on object tracking.
  • the former may be executed in the first module, and the latter may be executed in the second module.
  • In short, the first image processing apparatus 121 may have different configurations depending on how the system is built; what matters is that it performs object image-based image analysis and stores the results as metadata so that search results can be provided quickly on request. Therefore, the embodiment of the present invention is not particularly limited to any one form.
  • The communication interface 300 may communicate with the user device 100 and the third party device 130 of FIG. 1, and may include a communication module for this purpose. Since the communication interface unit 300 processes video images, it may perform operations such as modulation and demodulation, encoding and decoding, muxing, and demuxing, though these may also be performed by the controller 310.
  • For example, when the third party device 130 requests analysis of a video image, or when an image is provided from a photographing device such as a CCTV serving as the user device 100, the communication interface 300 transfers it to the control unit 310.
  • In addition, the communication interface 300 may, at the request of the control unit 310, perform a setup operation for carrying out forensic operations with the second image processing device 123, through which it can request image processing from the second image processing device 123 and receive the analysis results.
  • the control unit 310 is responsible for the overall control operation of the communication interface unit 300, the forensic image execution unit 320, and the storage unit 330 constituting the first image processing unit 121.
  • the control unit 310 may control the forensic image execution unit 320 to perform an operation for performing a forensic operation with the second image processing apparatus 123.
  • Through the forensic image execution unit 320, the control unit 310 may request the second image processing apparatus 123 to analyze the video image, and may then provide a search term to receive the searched result.
  • For example, the control unit 310 may, through the operation of the forensic image execution unit 320, search a video image by selecting a person's face or a person category as the search term, and additionally perform various types of searches based on scenario-based search terms or event option information.
  • For example, the first image processing device 121 may analyze and search range A, while the second image processing device 123 analyzes and searches range B; when the first image processing apparatus 121 requests analysis and search from the second image processing apparatus 123 through the forensic operation, the first image processing apparatus 121 operates to view the range-B analysis or search results of the second image processing apparatus 123.
  • the control unit 310 and the forensic image execution unit 320 perform this.
  • the forensic image execution unit 320 executes an interlocking program to allow the first image processing unit 121 to view the analysis results of the video image analysis of the second image processing unit 123.
  • This may include various UX/UI programs. For example, when a user sets up a forensic operation through a UI window, or provides a search term with a person attribute such as the 'face' category, the second image processing device 123 can be made to provide various search results.
  • the storage unit 330 may store various data processed by the first image processing device 121 and may temporarily store the data. For example, when the first image processing device 121 is interlocked with the DB 120a, temporary data may be stored in the storage unit 330 and permanent data may be stored in the DB 120a. The data stored in the storage unit 330 is output when requested by the control unit 310.
  • FIG. 4 is a block diagram illustrating the structure of the second image processing apparatus of FIG. 2.
  • As shown in FIG. 4, the second image processing apparatus 123 according to an embodiment includes some or all of a communication interface 400 and an image high-speed processing unit 410, where 'including some or all' has the same meaning as above.
  • the communication interface 400 may communicate with the first image processing apparatus 121 of FIG. 2 according to an embodiment of the present invention.
  • For example, when the first image processing device 121 provides a video image and requests analysis, the video image and the analysis request are transferred to the image high-speed processing unit 410.
  • In addition, at the request of the image high-speed processing unit 410, the communication interface unit 400 may store in the DB 120a, in the form of metadata, the results of analyzing the video image, such as person attribute information, event information, and correlation information, together with the matching video image.
  • Further, when a search term such as an attribute-based search term, a scenario-based search term, or an event option-based search term is provided from the first image processing device 121, the communication interface unit 400 transfers it to the image high-speed processing unit 410 and provides the search results accordingly.
  • the image high-speed processing unit 410 performs an object image-based analysis operation when an analysis request is made for the received video image.
  • Since this has been sufficiently described above, further explanation is omitted.
  • In addition, the image high-speed processing unit 410 may store the analysis results in the DB 120a so that the first image processing device 121 can access the DB 120a to perform the above search. That is, the search can be performed by the first image processing device 121 accessing the DB 120a directly, but other methods are possible, such as receiving the search results indirectly via the image high-speed processing unit 410. Since this may vary according to the system designer's intention, the embodiment of the present invention is not particularly limited to any one form; however, the former would be desirable in view of data processing speed.
  • FIG. 5 is a block diagram illustrating another structure of the second image processing apparatus of FIG. 2
  • FIG. 6 is a block diagram illustrating a detailed structure of the image high-speed analysis unit of FIG. 5.
  • As shown in FIG. 5, the second image processing apparatus 123' according to another embodiment includes some or all of a communication interface unit 500, a control unit 510, an image high-speed analysis unit 520, and a storage unit 530.
  • Here, 'including some or all' means that some components, such as the storage unit 530, may be omitted, or that some components, such as the image high-speed analysis unit 520, may be integrated with other components such as the control unit 510; all components are described to aid understanding of the invention.
  • Compared with the second image processing apparatus 123 of FIG. 4, the second image processing apparatus 123' of FIG. 5 separates the control operation from the high-speed analysis (and search) operation. To this end, the two may be separated in hardware, software, or a combination thereof to perform different operations: the control unit 510 performs only the control operation, while the high-speed analysis of the image is performed by the image high-speed analysis unit 520.
  • Here, the image high-speed analysis unit 520 performs object image-based analysis; for example, it can be regarded as a method of capturing objects in image form and analyzing attributes by examining the pixel values of the objects in the captured images.
  • Other details of the communication interface unit 500, the control unit 510, the image high-speed analysis unit 520, and the storage unit 530 of FIG. 5 are not significantly different from those of the image service device 120 of FIG. 1 or the communication interface unit 400 and the image high-speed processing unit 410 of FIG. 4, so those descriptions apply here.
  • the image high-speed analysis unit 520 of FIG. 5 may have the same structure as the image high-speed analysis unit 520 'of FIG. 6.
  • As shown in FIG. 6, the image high-speed analysis unit 520 of FIG. 5 may include some or all of a video processing unit 600, a video search unit 610, a scheduling unit 620, a manual processing unit 630, and a bookmark processing unit 640.
  • "including some or all" has the same meaning as above.
  • the image high-speed analysis unit 520 of FIG. 5 may further perform various operations in addition to the high-speed analysis described above.
  • the video processing unit 600 may handle video data provided in various formats, either processing the data in its native format or converting it into a specified format.
  • in other words, video data provided in various formats may be converted into a specified internal format, analyzed, and then converted back into the same (original) format before being exported.
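The convert-analyze-reexport flow above can be sketched as a small pipeline. The format names, the internal representation, and the function names here are illustrative assumptions, not the apparatus's actual codecs.

```python
# Hedged sketch of "convert to a specified format, analyze, convert back, export".
# SUPPORTED_FORMATS and the dict-based video records are illustrative only.

SUPPORTED_FORMATS = {"avi", "mp4", "mkv"}
INTERNAL_FORMAT = "raw-frames"

def normalize(video):
    """Convert any supported container into the internal analysis format."""
    if video["format"] not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported format: {video['format']}")
    return {"format": INTERNAL_FORMAT,
            "frames": video["payload"],
            "source_format": video["format"]}

def analyze(normalized):
    """Stand-in for the high-speed analysis step; marks the data as analyzed."""
    normalized["analyzed"] = True
    return normalized

def export(normalized):
    """Re-encode the analyzed result back into the original container format."""
    return {"format": normalized["source_format"],
            "payload": normalized["frames"],
            "analyzed": True}

result = export(analyze(normalize({"format": "avi", "payload": [1, 2, 3]})))
```

The point of the design is that the analysis stage only ever sees one internal format, regardless of what the DB or third-party VMS delivers.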
  • the video search unit 610 may support attribute-based search (for example, a search performed within a 'face' category), search based on event option information, and scenario-based search.
  • scenario-based search provides a search result based on the analysis of a query given in the form of a sentence or, more precisely, a short sentence. It is similar to keyword-based search, but differs in that a keyword is a word while a scenario is a sentence.
  • the scenario sentence may express a complex condition such as "a black vehicle making an illegal turn, in the case of a taxi" or "a person wearing a hat running with a bag".
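One minimal way to realize such a scenario search is to reduce the sentence to attribute conditions and match them against stored object metadata. The vocabulary, the metadata schema, and every function name below are assumptions made for illustration, not the disclosed implementation.

```python
# Hedged sketch of scenario-based search: a short sentence is broken into
# (attribute, value) conditions, which are matched against object metadata.

VOCAB = {"hat": ("accessory", "hat"), "bag": ("accessory", "bag"),
         "running": ("action", "running"), "black": ("color", "black"),
         "taxi": ("vehicle_type", "taxi")}

def parse_scenario(sentence):
    """Extract (attribute, value) conditions from a scenario sentence."""
    tokens = sentence.lower().replace(",", " ").split()
    return [VOCAB[w] for w in tokens if w in VOCAB]

def search(objects, sentence):
    """Return every object whose metadata satisfies all parsed conditions."""
    conds = parse_scenario(sentence)
    return [o for o in objects if all(o.get(attr) == val for attr, val in conds)]

objects = [
    {"id": 1, "accessory": "hat", "action": "running"},
    {"id": 2, "accessory": "bag", "action": "walking"},
]
hits = search(objects, "person with a hat running")
```

A production system would use real language analysis rather than a word table, but the keyword-versus-sentence distinction drawn above survives: one sentence yields several joined conditions instead of a single keyword match.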
  • the scheduling unit 620 performs a schedule management operation; it may be in charge of an operation in which a video analysis is registered once or periodically (daily, weekly, monthly) and is then executed automatically at the specified time.
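The once/daily/weekly/monthly registration described above can be sketched as a schedule table that advances each periodic job to its next slot. The class name, the record fields, and the "monthly = 30 days" simplification are assumptions for the example.

```python
# Illustrative schedule manager for registered analyses (names are assumptions).
from datetime import datetime, timedelta

PERIODS = {"once": None, "daily": timedelta(days=1),
           "weekly": timedelta(weeks=1), "monthly": timedelta(days=30)}

class AnalysisSchedule:
    def __init__(self):
        self.jobs = []

    def register(self, video_id, start, period="once"):
        """Register an analysis to run at `start`, once or periodically."""
        self.jobs.append({"video": video_id, "next_run": start, "period": period})

    def due_jobs(self, now):
        """Return videos whose analysis is due; advance periodic jobs to the next slot."""
        due = []
        for job in self.jobs:
            if job["next_run"] is not None and job["next_run"] <= now:
                due.append(job["video"])
                step = PERIODS[job["period"]]
                job["next_run"] = job["next_run"] + step if step else None
        return due

sched = AnalysisSchedule()
sched.register("cam-01", datetime(2018, 10, 22, 2, 0), "daily")
ran = sched.due_jobs(datetime(2018, 10, 22, 3, 0))
```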
  • the manual processing unit 630 may perform manual-related functions, such as an operation of providing help, so that the manual can be consulted directly within i-Forensics.
  • the bookmark processing unit 640 may perform various operations related to bookmarking a favorite image, such as designating a bookmark (interest list), deleting a bookmark, exporting a bookmark list, managing multiple bookmarks, and controlling bookmark deletion (protection).
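The bookmark operations listed above — designation, deletion, export, and deletion protection — can be sketched as a small manager. The class and method names are illustrative assumptions.

```python
# Hedged sketch of the bookmark (interest list) operations: add, protect,
# delete (refused when protected), and export. The API is an assumption.

class BookmarkManager:
    def __init__(self):
        self._marks = {}  # clip_id -> {"protected": bool}

    def add(self, clip_id):
        self._marks.setdefault(clip_id, {"protected": False})

    def protect(self, clip_id):
        """Mark a bookmark as protected so it cannot be deleted."""
        self._marks[clip_id]["protected"] = True

    def delete(self, clip_id):
        """Delete a bookmark; deletion is refused for protected entries."""
        if self._marks.get(clip_id, {}).get("protected"):
            return False
        self._marks.pop(clip_id, None)
        return True

    def export_list(self):
        """Export the bookmark list, e.g. for multi-bookmark management."""
        return sorted(self._marks)

bm = BookmarkManager()
bm.add("clip-7")
bm.add("clip-9")
bm.protect("clip-7")
deleted = bm.delete("clip-7")  # refused: the bookmark is protected
```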
  • the image high-speed analysis unit 520 of FIG. 5 may further include configurations for performing the various operations shown in [Table 1] and [Table 2]. That is, since FIG. 6 is only an example, configurations may be added to that of FIG. 6 as SW modules, HW modules, or combinations thereof, based on the contents of [Table 1] and [Table 2]; FIG. 6 describes only representative operations in module form.
  • FIG. 7 is a view showing a high-speed analysis video service process according to an embodiment of the present invention.
  • the first image processing apparatus 121 of FIG. 2 may include a forensic manager 121a and a search client 121b. This may be in the form of a SW module, for example.
  • the forensic manager 121a may be in charge of management or control of the first image processing device 121, and the search client 121b may perform a search-related operation.
  • FIG. 7 shows operations among the DB 120a, the third party device 130, the forensic manager 121a and search client 121b of the first image processing device 121, and further the second image processing device 123.
  • the first image processing device 121 requests and receives a video image from the DB 120a or from a third party device 130 such as a VMS (Video Management System), and then completes the analysis of the video image by sending inquiries or requests to the second image processing device 123 (S701–S712).
  • steps S705 to S707 are the process of requesting an inquiry or analysis from the second image processing apparatus 123,
  • steps S708 to S710 are the process of performing the high-speed analysis operation, and
  • steps S711 and S712 are the process of completing the analysis.
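The three phases above (request, high-speed analysis, completion) can be sketched as a simple message exchange between the two devices. The step numbers follow the figure, but the class, queue, and result structures are assumptions made for illustration.

```python
# Hedged sketch of the S705–S712 exchange between the first and second devices.

def request_analysis(second_device, video):
    """S705–S707: the first device submits a video for analysis."""
    return second_device.enqueue(video)

class SecondImageProcessingDevice:
    def __init__(self):
        self.queue, self.results = [], {}

    def enqueue(self, video):
        self.queue.append(video)
        return len(self.queue) - 1  # a job handle for later inquiry

    def run_high_speed_analysis(self):
        """S708–S710: drain the queue and record a result per video."""
        while self.queue:
            video = self.queue.pop(0)
            self.results[video["id"]] = {"objects": len(video["frames"])}

    def fetch_result(self, video_id):
        """S711–S712: the first device retrieves the completed analysis."""
        return self.results.get(video_id)

dev = SecondImageProcessingDevice()
request_analysis(dev, {"id": "v1", "frames": [0, 1, 2]})
dev.run_high_speed_analysis()
result = dev.fetch_result("v1")
```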
  • FIG. 8 is a diagram illustrating an operation process between a forensic manager and a search client constituting the first image processing apparatus of FIG. 2.
  • the forensic manager 121a and the client 121b may specifically operate in a form of processing a video list as in FIG. 8 (S800 to S804).
  • a user such as a control agent inputs a specific search word through the first image processing device 121
  • a list of various video images corresponding to the search word may first be provided to the user. Through this list, the user may select the desired video, and the selected video is then received and played.
  • the analyzed result may be provided and displayed on the screen.
  • FIG. 9 is a view for explaining an operation process of the first image processing apparatus and the third party apparatus of FIG. 2.
  • the first image processing device 121 may request the meta data stored in the DB 120a and receive it as a stream together with a video image to display it on the screen (S900 to S905). In this process, through a collaborative operation between the manager 121a and the client 121b, the device may receive from the DB 120a, for example, a video image matching a specific search term together with various matching information, and display them on the screen.
  • FIG. 10 is a diagram illustrating a search main screen
  • FIG. 11 is a diagram illustrating an FRS setting process.
  • the first image processing apparatus 121 may include a control computer, a control monitor or a display board connected to the control computer.
  • an FRS connection establishment operation may be performed as shown in FIGS. 10 and 11.
  • a specific setting screen pops up.
  • an IP address of the second image processing device 123 may be input to the pop-up window to interlock with each other.
  • status information 1100 indicating whether or not a connection is made may be displayed.
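The FRS connection setup above — entering the second device's IP address and displaying a connection status — can be sketched as follows. Address validation uses the standard library; the status values and the helper name are assumptions, and a real client would additionally attempt an actual network connection.

```python
# Illustrative FRS connection-setup check; the returned dict stands in for the
# on-screen status indicator 1100. Status strings are assumptions.
import ipaddress

def configure_frs(ip_text):
    """Validate the entered address and report a connection status."""
    try:
        ipaddress.ip_address(ip_text)
    except ValueError:
        return {"connected": False, "status": "invalid address"}
    # A real client would now open a socket to the second image processing device.
    return {"connected": True, "status": f"linked to {ip_text}"}

ok = configure_frs("192.168.0.10")
bad = configure_frs("not-an-ip")
```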
  • FIGS. 12 to 20 are views for explaining an offline analysis screen.
  • in the first image processing device 121 of FIG. 2, a user may click the file import button 1200 displayed on the screen as shown in FIG. 12 to find and load a desired file.
  • the imported files are registered in the analysis channel list area 1210 on the left side of the screen, that is, the first area.
  • at this stage the file is merely stored in the list area 1210, and analysis may not yet be possible.
  • the video image for which analysis is requested may be played in the video display area.
  • the image is played back at a small size in the form of a thumbnail image, and additional information such as time may be displayed on the image.
  • a pop-up window 1400 for setting the analysis type may be called up and used to set the analysis type. A 'face' category has been added to the pop-up window 1400; this can be regarded as an item for checking attribute information analyzed based on a person's object image according to an embodiment of the present invention.
  • the selected video image of the analysis target 1410 may be analyzed in up to three types per file. This follows the designated method according to the embodiment of the present invention and may of course be changed. However, since the embodiment of the present invention handles video images of various formats, it is desirable to support more types than this. In addition, the system may be designed so that re-analysis of an already analyzed target is not possible, or so that the analysis result value is maintained as long as there is no deletion request.
  • the area may be a third area.
  • the third area may be subdivided into smaller areas: an area containing the analysis standby/progress list, an area containing the analysis completion list, and an area containing the analysis failure list.
  • various video images contained in the third area may be brought back to the analysis channel list area as shown in FIG. 17 to perform analysis again.
  • an item in the third area is placed in the analysis completion area when, as shown in FIG. 18, an object with the desired analysis result exists; when analysis is completed but no object is found, or when analysis cannot be completed due to network errors or file problems, the item is included in the analysis failure list area.
  • in this case, the analysis may be retried under different conditions, as in FIGS. 19 and 20. For example, if a search attempted with the general object item fails as shown in FIG. 19, the analysis is retried based on the attribute information of the person according to the embodiment of the present invention, as shown in FIG. 20.
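The sorting rule for the third area can be written out directly. The record fields (`done`, `objects`, `error`) are illustrative assumptions standing in for whatever state the analysis engine actually reports.

```python
# Hedged sketch of the third-area classification rule described above.

def classify(record):
    """Place an analysis record into the completion, failure, or pending list."""
    if record.get("error") in ("network", "file"):
        return "failed"                      # could not complete the analysis
    if record.get("done") and record.get("objects", 0) > 0:
        return "completed"                   # desired object was found
    if record.get("done"):
        return "failed"                      # finished, but no object detected
    return "pending"                         # still in the standby/progress list

records = [
    {"id": 1, "done": True, "objects": 4},
    {"id": 2, "done": True, "objects": 0},
    {"id": 3, "error": "network"},
    {"id": 4},
]
buckets = [classify(r) for r in records]
```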
  • FIGS. 21 to 30 are views for explaining an offline search screen.
  • the first image processing device 121 of FIG. 2 may display a search screen as shown in FIG. 21 on the monitor screen. That is, the search item 2100 is selected on the main screen. Subsequently, by selecting the video list item 2110 to be searched, the searched list may be retrieved as shown in FIG. 22. This list is drawn from the completed analyses. Of course, the user may delete specific items from the list.
  • the user selects a specific video image and sets various search expressions.
  • search conditions (expression) customized to the video image are displayed on the screen.
  • the search window 2500 for the first format video image and the second format video image may display different items.
  • a playback method may be determined by selecting it.
  • a search type item in the search window 2500 may be selected to perform a search for people, vehicles, and other unidentified objects constituting the general object category.
  • a play period can be designated through a play bar as shown in FIG. 27, and a play time can also be set. When continuous playback of object sections is selected as shown in FIG. 28, the object sections are played back continuously.
  • FIG. 29 shows that multiple selection of thumbnails and continuous playback thereof can be performed
  • FIG. 30 shows that a specific video image, such as a clip image, can be set as a favorite by selecting the favorite button 3000.
  • FIG. 31 is a flowchart illustrating an operation process of a high-speed analysis image processing apparatus according to an embodiment of the present invention.
  • an image service apparatus 120 or a first image processing apparatus 121 according to an embodiment of the present invention (hereinafter, the first image processing device) receives a video image (S3100).
  • the received video image includes images of different formats.
  • the first image processing device 121 extracts and analyzes a person's face image from the received video image to generate face attribute information, analyzes the video image based on the generated face attribute information, and generates the analysis result as meta data (S3110).
  • after the meta data generated as the analysis result is stored in the DB 120a as shown in FIG. 1, the first image processing device 121 provides video analysis results according to the user's various search expressions.
  • in this process, person-oriented analysis is additionally performed; the analysis is of course carried out by a method of analyzing object images within the image, but the video image analysis result is further provided based on the attribute information of the corresponding person.
  • the search may be performed by adding event option information of the person, and above all, a scenario-based search may be performed.
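The S3100–S3110 flow above can be sketched end to end: receive a video, extract face attributes per detected person, and emit the result as metadata. The detector and attribute extractor below are stubs standing in for the trained models the device would actually use; all names and fields are assumptions.

```python
# Hedged sketch of the receive → face-attribute analysis → metadata flow.

def detect_faces(frame):
    """Stub detector: return every object in the frame marked as a face."""
    return [obj for obj in frame if obj.get("kind") == "face"]

def extract_attributes(face):
    """Stub attribute extractor (a trained network in the actual device)."""
    return {"gender": face.get("gender", "unknown"), "hat": face.get("hat", False)}

def analyze_video(video):
    """Produce the analysis result as metadata, one entry per detected face."""
    metadata = {"video_id": video["id"], "faces": []}
    for t, frame in enumerate(video["frames"]):
        for face in detect_faces(frame):
            metadata["faces"].append({"time": t, **extract_attributes(face)})
    return metadata

video = {"id": "v1", "frames": [
    [{"kind": "face", "gender": "m", "hat": True}, {"kind": "car"}],
    [{"kind": "face", "gender": "f"}],
]}
meta = analyze_video(video)
```

Once such metadata is stored in the DB, both the attribute search and the scenario-based search described earlier can run against it without touching the video again.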
  • the non-transitory readable recording medium means a medium that stores data semi-permanently and that can be read by a device, as opposed to a medium that stores data for a short time, such as a register, cache, or memory.
  • the above-described programs may be stored and provided on a non-transitory readable recording medium such as a CD, DVD, hard disk, Blu-ray disk, USB, memory card, ROM, and the like.
  • 121: first image processing device
  • 123, 123': second image processing device
  • control unit
  • 320: forensic image execution unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a high-speed analysis image processing apparatus and a control method for said apparatus. According to one embodiment of the invention, the high-speed analysis image processing apparatus may comprise: a communication interface unit for receiving a video image; and a control unit which extracts an object to be analyzed from the received video image and analyzes it to generate attribute information of the object, then analyzes the video image on the basis of the generated attribute information of the object in order to generate the analysis result in the form of metadata.
PCT/KR2018/013184 2018-10-22 2018-11-01 High-speed analysis image processing apparatus and control method therefor WO2020085558A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020180125702A 2018-10-22 2018-10-22 High-speed analysis image processing apparatus and driving method thereof
KR10-2018-0125702 2018-10-22

Publications (1)

Publication Number Publication Date
WO2020085558A1 (fr) 2020-04-30

Family

ID=65760982

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/013184 WO2020085558A1 (fr) 2018-10-22 2018-11-01 High-speed analysis image processing apparatus and control method therefor

Country Status (2)

Country Link
KR (1) KR101954717B1 (fr)
WO (1) WO2020085558A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102247359B1 * 2019-07-31 2021-05-04 (주)유디피 Image analysis system and method for remote monitoring
KR102152237B1 * 2020-05-27 2020-09-04 주식회사 와치캠 CCTV control method and system based on situation analysis
KR102246617B1 * 2021-03-16 2021-04-30 넷마블 주식회사 Screen analysis method
KR20240074636 2022-11-18 2024-05-28 주식회사 위트콘 AI-based forensic image processing system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110305394A1 (en) * 2010-06-15 2011-12-15 David William Singer Object Detection Metadata
US20160065906A1 (en) * 2010-07-19 2016-03-03 Ipsotek Ltd Video Analytics Configuration
US20170109582A1 (en) * 2015-10-19 2017-04-20 Disney Enterprises, Inc. Incremental learning framework for object detection in videos
KR20170084657A * 2016-01-12 2017-07-20 소프트온넷(주) System and method for creating narrative reports for recognition, tracking, search, and prediction of vehicles and of appearance-described objects and events
KR20180019874A * 2016-08-17 2018-02-27 한화테크윈 주식회사 Event search apparatus and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101492473B1 * 2014-04-04 2015-02-11 주식회사 사라다 User-based context-aware CCTV integrated control system
KR20160061856A 2014-11-24 2016-06-01 삼성전자주식회사 Object recognition method and apparatus, and recognizer learning method and apparatus
KR102147361B1 2015-09-18 2020-08-24 삼성전자주식회사 Object recognition apparatus and method, and object recognition model learning apparatus and method
KR101925907B1 2016-06-03 2019-02-26 (주)싸이언테크 Apparatus and method for learning object movement patterns using a neural network generative model
KR101696801B1 * 2016-10-21 2017-01-16 이형각 Integrated video surveillance system based on Internet of Things (IoT) cameras


Also Published As

Publication number Publication date
KR101954717B1 (ko) 2019-03-06

Similar Documents

Publication Publication Date Title
WO2020085558A1 (fr) High-speed analysis image processing apparatus and control method therefor
WO2014069943A1 (fr) Method for providing information of interest to users during a video call, and electronic apparatus therefor
WO2017138766A1 (fr) Hybrid-based image clustering method and server for operating same
WO2014193065A1 (fr) Video search method and apparatus
WO2013165083A1 (fr) System and method for providing an image-based video service
WO2019156543A2 (fr) Method for determining a representative image of a video, and electronic device implementing the method
WO2021167374A1 (fr) Video search device and network surveillance camera system comprising same
WO2015147437A1 (fr) Mobile service system, and method and device for producing a location-based album in the same system
WO2021145565A1 (fr) Method, apparatus, and system for managing an image captured by a drone
WO2019231089A1 (fr) System for performing bidirectional querying, comparison, and tracking of security policies and audit logs, and method therefor
CN105072478A (zh) A life recording system based on a wearable device, and method therefor
KR102254037B1 (ko) Image analysis apparatus and driving method thereof
WO2022186426A1 (fr) Image processing device for automatic segment classification, and control method therefor
US20120147179A1 (en) Method and system for providing intelligent access monitoring, intelligent access monitoring apparatus
WO2020067615A1 (fr) Method for controlling a video anonymization device to improve anonymization performance, and device therefor
WO2019103443A1 (fr) Method, apparatus, and system for managing the electronic fingerprint of an electronic file
JP7307887B2 (ja) Information processing apparatus, information processing method, and program
WO2019194569A1 (fr) Computer program, device, and method for image search
WO2014148784A1 (fr) Language model database for language recognition, and language recognition device, method, and system
WO2019083073A1 (fr) Method and device for providing traffic information, and computer program stored in a medium for executing the method
WO2023113158A1 (fr) Method for profiling a criminal, device executing the method, and computer program
WO2016129804A1 (fr) Method for generating a web page on the basis of consumer behavior patterns, and method for using the web page
WO2015129987A1 (fr) Object-recognition-based advertisement providing service apparatus, object-recognition-based advertisement receiving user equipment, object-recognition-based advertisement providing system, method therefor, and recording medium on which a computer program is recorded
WO2021045414A1 (fr) Image recommendation device and operating method of the image processing device
WO2019164056A1 (fr) Server, method, and wearable device for supporting maintenance of military equipment on the basis of a binary search tree in general object recognition based on augmented reality, virtual reality, or mixed reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937663

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18937663

Country of ref document: EP

Kind code of ref document: A1