WO2022186426A1 - Image processing device for automatic segment classification, and control method therefor - Google Patents

Image processing device for automatic segment classification, and control method therefor

Info

Publication number
WO2022186426A1
WO2022186426A1 · PCT/KR2021/005986
Authority
WO
WIPO (PCT)
Prior art keywords
segment
event
image processing
image
processing apparatus
Prior art date
Application number
PCT/KR2021/005986
Other languages
English (en)
Korean (ko)
Inventor
임수응
나종근
Original Assignee
주식회사 스누아이랩
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 스누아이랩 filed Critical 주식회사 스누아이랩
Publication of WO2022186426A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition

Definitions

  • the present invention relates to an image processing apparatus for automatic segment classification and a driving method of the apparatus, and more particularly to an image processing apparatus and driving method that can determine an event related to each segment through segmentation of designated objects included, for example, in the background of a captured image.
  • image information captured in real time by such a large number of surveillance cameras is displayed on a control monitor provided inside a control room.
  • the screen of the control monitor is divided, in hardware or software, to correspond to the number of surveillance cameras, so that the image from each camera is displayed separately.
  • An embodiment of the present invention provides an image processing apparatus for automatic segment classification capable of determining an event related to each segment through segmentation of a specified object included in a background in a captured image, and a method of driving the apparatus, for example. There is a purpose.
  • an image processing apparatus for automatic segment classification includes a communication interface unit that receives a captured image of an arbitrary space, and a control unit that automatically classifies, into respective regions, a plurality of regions containing designated objects of different types, including places, in the received image, and processes an event for each segment based on the automatically classified segments.
  • the control unit may determine an event for the designated object in the arbitrary space based on the captured image, segment the background image excluding the designated object into respective regions, and re-determine the determined event for each segment.
  • the control unit may perform the segmentation by detecting at least one of a housing complex, a factory, a livestock farm, a construction site, a shopping mall, an office, a road, and a station as the different types of objects.
  • the control unit may re-determine, as a complex situation, the event determined in relation to the designated object, based on the automatically classified segments.
  • the control unit may present selectable event types for each automatically classified segment to the user, receive the event the user wants to detect and the event occurrence condition for each segment, and perform an event determination operation for each segment.
  • the control unit may receive information on the type, size, and time of the object as the event occurrence condition.
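  • the patent leaves the concrete schema open; as a minimal sketch in Python (all names hypothetical), an event occurrence condition holding object type, size, and holding time might be represented and checked like this:

```python
from dataclasses import dataclass

@dataclass
class EventCondition:
    """Hypothetical per-segment event occurrence condition."""
    object_type: str     # class label the event applies to, e.g. "worker"
    min_size: int        # minimum bounding-box area in pixels
    min_duration: float  # minimum holding time in seconds

def matches(cond: EventCondition, obj_type: str, box_area: int, duration: float) -> bool:
    """Return True when a tracked detection satisfies every part of the condition."""
    return (obj_type == cond.object_type
            and box_area >= cond.min_size
            and duration >= cond.min_duration)

cond = EventCondition(object_type="worker", min_size=400, min_duration=3.0)
print(matches(cond, "worker", 900, 5.0))      # True: all three criteria met
print(matches(cond, "supervisor", 900, 5.0))  # False: wrong object type
```

An event determination unit would evaluate such a condition only against detections falling inside the corresponding segment.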
  • the control unit may adjust the priority of the events occurring for each segment by using metadata, accumulated as big data, generated in relation to arbitrary objects in the captured image.
  • a method of driving an image processing apparatus for automatic segment classification includes the steps of: receiving, by a communication interface unit, a captured image of an arbitrary space; automatically classifying, into respective regions, a plurality of regions containing designated objects of different types, including places; and processing an event for each segment based on the automatically classified segments.
  • the processing may include determining an event for the designated object in the arbitrary space based on the captured image, segmenting the background image excluding the designated object into respective regions, and re-judging the determined event for each segment.
  • the segmentation may be performed by detecting at least one of a housing complex, a factory, a livestock farm, a construction site, a shopping mall, an office, a road, and a station as the different types of objects.
  • the processing may include re-judging, as a complex situation, the event determined in relation to the designated object, based on the automatically classified segments.
  • the processing may include presenting selectable event types for each automatically classified segment to the user, setting the event the user wants to detect and the event occurrence condition for each segment, and performing an event determination operation for each segment.
  • information on the type, size, and time of the object may be set as the event occurrence condition.
  • the priority of events occurring for each segment may be adjusted using metadata, accumulated as big data, generated in relation to arbitrary objects in the captured image.
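  • how the priorities are adjusted from the accumulated metadata is not specified; one plausible reading, sketched in Python with hypothetical names, is to re-rank each segment's events by observed frequency:

```python
from collections import Counter

def adjust_priorities(event_log):
    """Re-rank events per segment by how often they actually occurred.

    event_log is a list of (segment_id, event_name) records accumulated
    from per-object metadata over time (the "big data" of the text).
    Returns each segment's event names ordered most-frequent first.
    """
    per_segment = {}
    for seg, ev in event_log:
        per_segment.setdefault(seg, Counter())[ev] += 1
    return {seg: [ev for ev, _ in cnt.most_common()]
            for seg, cnt in per_segment.items()}

log = [("construction", "fall"), ("construction", "fall"),
       ("construction", "fire"), ("road", "collision")]
print(adjust_priorities(log))
# {'construction': ['fall', 'fire'], 'road': ['collision']}
```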
  • an event suitable for a segment, that is, an object or space of interest, may be presented, and the event may be determined based on the presented types.
  • the embodiment of the present invention may reduce false positives and missed detections when determining an event by converting metadata related to a tracked object into big data.
  • FIG. 1 is a view showing an automatic segment classification system according to an embodiment of the present invention;
  • FIG. 2 is a block diagram illustrating a detailed structure of the image processing apparatus of FIG. 1;
  • FIG. 3 is a block diagram illustrating a detailed structure of the segment processing unit of FIG. 2;
  • FIG. 4 is a flowchart illustrating a driving process of an image processing apparatus according to an embodiment of the present invention;
  • FIG. 5 is a flowchart illustrating a driving process of an image processing apparatus according to another embodiment of the present invention.
  • FIG. 1 is a diagram illustrating an automatic segment classification system according to an embodiment of the present invention.
  • the automatic segment classification system 90 according to the embodiment of the present invention includes some or all of a photographing apparatus 100, a communication network 110, and an image processing apparatus 120.
  • here, "includes some or all" means that the communication network 110 of FIG. 1 may be omitted so that the photographing apparatus 100 and the image processing apparatus 120 communicate directly (e.g., P2P communication), or that some or all of the components constituting the image processing apparatus 120 may be integrated into a network device constituting the communication network 110 (e.g., a wireless switching device); to help a sufficient understanding of the invention, the description proceeds as if all components are included.
  • the photographing apparatus 100 includes a Closed Circuit Television (CCTV) or Internet Protocol (IP) camera installed at an arbitrary place requiring monitoring. Also, the photographing apparatus 100 may include a Pan-Tilt-Zoom (PTZ) camera capable of pan, tilt, and zoom operations as well as a fixed camera.
  • the photographing device 100 is installed in various places for social safety, crime prevention, and public surveillance of social issues, and transmits the captured image to the image processing device 120.
  • the photographing device 100 may be installed in a public place such as a subway or bus stop to monitor various situations such as incidents and accidents, and may also monitor acts of violence occurring on the railing of a bridge or in a remote place.
  • monitoring may also be performed through CCTV installed in daycare centers and the like.
  • the photographing device 100 may operate under the control of the control device.
  • the communication network 110 includes both wired and wireless communication networks.
  • a wired/wireless Internet network may be used as, or interworked with, the communication network 110.
  • the wired network includes an Internet network such as a cable network, and a public switched telephone network (PSTN),
  • and the wireless communication network is meant to include CDMA, WCDMA, GSM, Evolved Packet Core (EPC), Long Term Evolution (LTE), WiBro networks, and the like.
  • the communication network 110 according to the embodiment of the present invention is not limited thereto, and may be used, for example, in a cloud computing network under a cloud computing environment, a 5G network, and the like.
  • the access point in the communication network 110 can connect to an exchange of a telephone company or the like; in the case of a wireless communication network, it connects to an SGSN (Serving GPRS Support Node) or GGSN (Gateway GPRS Support Node) operated by a carrier to process data, or accesses various repeaters such as a Base Transceiver Station (BTS), NodeB, and e-NodeB to process data.
  • the communication network 110 may include an access point (AP).
  • the access point includes a small base station, such as a femto or pico base station, of the kind often installed in buildings. In the classification of small base stations, femto and pico base stations are distinguished by the maximum number of devices, such as the photographing device 100, that can connect to them.
  • the access point may include a short-distance communication module for performing short-distance communication such as Zigbee and Wi-Fi with the photographing device 100 and the like.
  • the access point may use TCP/IP or Real-Time Streaming Protocol (RTSP) for wireless communication.
  • short-range communication may be performed in various standards such as Bluetooth, Zigbee, infrared, radio frequency (RF) such as ultra high frequency (UHF) and very high frequency (VHF), and ultra-wideband communication (UWB) in addition to Wi-Fi.
  • the access point may extract the location of the data packet, designate the best communication path for the extracted location, and transmit the data packet to the next device, for example, the image processing apparatus 120 along the designated communication path.
  • the access point may share several lines in a general network environment and includes, for example, a router and a repeater.
  • the image processing apparatus 120 may be, for example, an image analysis apparatus as a server, and may perform an operation for automatic segment classification according to an embodiment of the present invention.
  • the image processing apparatus 120 may be variously named, such as an automatic segment classification apparatus or an image apparatus for automatic segment classification.
  • the image processing apparatus 120 according to an embodiment of the present invention may perform an image analysis operation in addition to the automatic image segment classification operation or operate as a control apparatus so that a controller can control the image.
  • the image processing apparatus 120 may analyze a captured image received from the photographing apparatus 100 to detect and classify a designated object, track the object, and determine an event based on the tracking result.
  • the designated object may be a person object or an object of interest in which the user is interested.
  • the image processing apparatus 120 may generate metadata for these designated objects and events and manage them in the DB 120a of FIG. 1.
  • the metadata includes attribute information (eg, shape, color, etc.) related to the designated object.
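  • as an illustration only (the patent does not fix a record layout), such metadata could be kept as a small record per tracked object:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectMetadata:
    """Hypothetical metadata record for a tracked object, as might be kept in the DB."""
    object_id: int
    object_class: str              # e.g. "person"
    shape: str                     # attribute information from image analysis
    color: str
    timestamps: list = field(default_factory=list)  # when the object was observed

m = ObjectMetadata(object_id=7, object_class="person", shape="standing", color="blue")
m.timestamps.append(12.5)
print(m.object_class, len(m.timestamps))  # person 1
```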
  • the image processing apparatus 120 does not simply determine an event based on the object tracking result as in the prior art; rather, it classifies the entire captured image, or the objects other than the designated object, into segments, i.e., fragments, by region, sets events for each classified segment, and then makes a comprehensive judgment of the event.
  • an event set by the user for each segment may be detected.
  • the image processing apparatus 120 may exclude the designated object from the captured image, process the remaining background, and perform segmentation on specific objects in that background image. After classifying the segments, it can set an event for each segment; for example, segment classification of a background region may be performed to define a non-region-of-interest.
  • segments for housing complexes, factories, livestock sites, construction sites, shopping malls, offices, roads, and stations may be classified as objects other than designated objects.
  • objects in these segments may also correspond to thing objects.
  • this is distinct from determining an event for an existing object, such as detecting a specific tool or an object possessed by a person object (e.g., a cigarette, garbage, etc.); the objects in these segments can be seen as background objects that were not of particular interest at the time.
  • the image processing apparatus 120 presents the event types that can be used (or selected) for each classified segment. These may be preset as data in advance, and the user may select the plurality of events to be detected from the suggested event types.
  • when a captured image is received after the photographing apparatus 100 is installed in a specific place, the image processing apparatus 120 may determine an event based on a user-designated object and may also perform the automatic segment classification operation. If the received image contains a background area such as a factory or a construction site, that area is segmented and classified, and the types of events that can be set in relation to each segment are presented to users, such as control personnel, for setting. For example, there may be many types of events that can occur in relation to a designated object at a construction site, and the user can select only the events to be detected among them.
  • a fall accident of a worker may be detected as an event, and a fire may also be detected as an event. The user, however, may not be very interested in fire detection and may only be interested in fall accidents. In that case, the image processing apparatus 120 finally detects only events related to fall accidents of the designated object at the construction site.
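  • the selection step above can be sketched as filtering a suggested per-segment catalogue down to the user's choices (the catalogue contents are illustrative, not from the patent):

```python
# Hypothetical catalogue of event types suggested per segment class.
SUGGESTED_EVENTS = {
    "construction_site": ["fall_accident", "fire", "intrusion"],
    "road": ["traffic_accident", "jaywalking"],
}

def configure_segment(segment_class, selected):
    """Keep only the events the user chose from the suggested list."""
    offered = SUGGESTED_EVENTS.get(segment_class, [])
    return [ev for ev in selected if ev in offered]

# The user ignores fire detection and keeps only fall accidents.
print(configure_segment("construction_site", ["fall_accident"]))  # ['fall_accident']
```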
  • a different segment in the same image may include a road, and for the road the apparatus may operate to detect other events in relation to the designated object.
  • for example, a traffic accident involving the designated object may be detected as an event; in this way, the image processing apparatus 120 can detect different types of events according to the types of the segmented surrounding objects around the designated object.
  • a received captured image is segmented with respect to specific targets, an event type and event occurrence condition (e.g., object type, etc.) are set for each segment, and events are detected based on these settings.
  • the image processing apparatus 120 allows the user to select N events to be detected in the image and to set the object type, size, holding time (or time information), and the like as event occurrence conditions.
  • according to these settings, the image processing apparatus 120 may start or perform an event detection operation.
  • the embodiment of the present invention is not particularly limited to any one form; in other words, among the many events that can be set for workers and construction sites, only specific events may be monitored.
  • events that can be set in relation to the surrounding objects may also be set automatically.
  • the events that can occur at a housing construction site and at a large-scale construction site are different, so events can be set automatically based on these characteristics: in the former case fire accidents are emphasized, while in the latter case fall accidents of workers are emphasized.
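  • that automatic assignment might look like a per-site-type default table (the defaults below merely restate the example in the text and are otherwise assumptions):

```python
# Hypothetical defaults: the dominant risk differs by site type.
DEFAULT_EVENTS = {
    "housing_construction_site": ["fire"],
    "large_scale_construction_site": ["fall_accident"],
}

def auto_set_events(segment_class):
    """Return the events assigned automatically to a classified segment."""
    return DEFAULT_EVENTS.get(segment_class, [])

print(auto_set_events("housing_construction_site"))      # ['fire']
print(auto_set_events("large_scale_construction_site"))  # ['fall_accident']
```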
  • when segment classification and event setting for each segment are completed, whether manually or automatically, the image processing apparatus 120 may perform various additional operations, such as re-judging the event determined from the tracking result of the designated object by comprehensively considering the situation of the surrounding objects, or re-selecting or adjusting the priorities of the events occurring for each segment. For example, after generating big data by collecting metadata related to the designated object, an additional operation such as adjusting priorities based on that big data may be performed.
  • an event of interest can be set by presenting events suitable for a segment, such as a specific place, in the background image, so that the setting operation remains simple even when there are many event types, without setting a region of interest (ROI) as in the past.
  • alternatively, the image processing apparatus 120 may set an ROI, determine an event based on it, segment the non-interest regions other than the ROI, and finally generate an event.
  • that is, the method of generating an event by additionally considering the segmented surrounding objects may be applied.
  • a surrounding object, that is, a segmented area such as a place, can be additionally considered.
  • the embodiment is not particularly limited to any one method, such as whether a region of interest is set or a designated object is specified.
  • FIG. 2 is a block diagram illustrating a detailed structure of the image processing apparatus of FIG. 1
  • FIG. 3 is a block diagram illustrating a detailed structure of the segment processing unit of FIG. 2 .
  • the image processing apparatus 120 of FIG. 1 includes some or all of a communication interface unit 200, a control unit 210, a segment processing unit 220, and a storage unit 230.
  • here, "including some or all" means that some components, such as the storage unit 230, may be omitted from the image processing apparatus 120, or that some components, such as the segment processing unit 220, may be integrated with other components, such as the control unit 210; the description proceeds as including all of them in order to help a sufficient understanding of the invention.
  • the communication interface 200 communicates with the photographing apparatus 100 via the communication network 110 of FIG. 1 .
  • the communication interface unit 200 may perform operations such as modulation/demodulation, and may perform operations such as encoding/decoding and muxing/demuxing.
  • the photographing apparatus 100 may generate metadata related to a photographed object in an image when photographing an arbitrary space or an arbitrary place, and such metadata may be transmitted together with the photographed image.
  • the embodiment of the present invention will not be limited to any one form.
  • the control unit 210 is in charge of overall control operations of the communication interface unit 200, the segment processing unit 220, and the storage unit 230 of FIG. 2 constituting the image processing apparatus 120 of FIG. 1 .
  • the control unit 210 may store the captured image provided from the communication interface unit 200 in the storage unit 230 , and then retrieve it and provide it to the segment processing unit 220 .
  • the control unit 210 systematically classifies the analysis results and stores them in the DB 120a of FIG. 1.
  • the segment processing unit 220 may perform, for example, an analysis operation of the captured image.
  • the segment processing unit 220 handles a plurality of regions segmented around the designated object or object of interest according to an embodiment of the present invention; such a region may, of course, include a place related to an event or a building in that place, and the unit can determine the event set for each segment. For example, when a designated object for object tracking is set, the segment processing unit 220 automatically classifies the segments including the surrounding objects based on it, presents the event types usable for each classified segment to a user or administrator, and thereby lets the user select the N events to be detected in the image, as already described above. Accordingly, the segment processing unit 220 may determine an event by comprehensively considering the designated object and the segmented surrounding objects.
  • the segment processing unit 220 may operate on the entire image, but when it operates on the background image, i.e., the background region of the designated object, it segments that region, fragmenting or splitting a plurality of areas into respective regions, classifies them, and executes an event setting operation for each segment. For example, if two segments are identified in the captured image, an event may be set for each of them. As described above, the segment processing unit 220 presents the usable event types for each segment so that N events to be detected can be selected, and then the event occurrence conditions are additionally set. The event occurrence condition may include object type, size, and holding time. Once an event type and occurrence condition reflecting the segment are set, event detection can be performed based on them.
  • the image processing apparatus 120 can present events suitable for a segment and can keep the setting operation simple even when there are many event types, without setting an ROI as in the past; by analyzing the entire region, it may also be possible to recognize the complex circumstances in which an event occurred.
  • since metadata is accumulated into big data through the above process and then used, it is possible to reduce false positives and false negatives, i.e., missed events.
  • since the metadata reflects the properties of a designated object, the big data can be seen as generated by collecting metadata over time.
  • the segment processing unit 220 of FIG. 2 may have, for example, the configuration of the segment processing unit 220 ′ as shown in FIG. 3 .
  • the configuration may be implemented in various ways, as a software module, a hardware module, or a combination thereof; the embodiment of the present invention is not particularly limited to any one such form.
  • the image analysis unit 300 may analyze the received captured image and output an analysis result. For example, it may detect and classify a designated object in a video frame and track the object. An object detector and an object classifier may be included for this purpose, and DCNN-based object detection and classification may be performed, which may reduce erroneous detection of objects. Object tracking can be viewed as tracking the movement of a specific object across multiple video frames. Of course, the image analysis unit 300 may also perform automatic segment classification on the background image in addition to object detection during image analysis.
  • the image excluding the designated object may be divided into a plurality of regions in a matrix form, that is, segmented, and each region may be classified by determining whether it contains a predetermined specific object.
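  • a matrix-form division of the image plane, as mentioned above, can be sketched as follows (cell counts and sizes are arbitrary; a real implementation would then run a classifier per cell):

```python
def grid_segments(width, height, rows, cols):
    """Split an image plane into a rows x cols matrix of (x0, y0, x1, y1) regions."""
    cell_w, cell_h = width // cols, height // rows
    boxes = []
    for r in range(rows):
        for c in range(cols):
            boxes.append((c * cell_w, r * cell_h,
                          (c + 1) * cell_w, (r + 1) * cell_h))
    return boxes

cells = grid_segments(1920, 1080, rows=2, cols=3)
print(len(cells))  # 6
print(cells[0])    # (0, 0, 640, 540)
```

Each cell would then be labeled (construction site, road, parking lot, and so on) by checking whether it contains one of the preset specific objects.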
  • the target of the segmentation may be limited to preset places such as a construction site, a road, or a parking lot.
  • the object of segmentation may be a place.
  • segmentation can be used to ensure that events that fit the segment are detected when determining the events of a given object.
  • the segment setting unit 310 handles the setting operations performed in relation to segments by, for example, a user, an administrator, or a programmer.
  • by presenting the event types for the automatically classified segments to the user, the N events to be detected in the image may be set.
  • event generation conditions such as object type or size may be set.
  • the event determination unit 320 may determine an event based on the segment setting information set by the segment setting unit 310 and on the analysis result provided by the image analysis unit 300.
  • the event determination unit 320 can be seen as determining an event for each segment as set by the user in the segment setting unit 310. In other words, if the segment area is a construction site, the events set for construction sites are detected; if it is a parking lot, the events set for parking lots are detected. For example, since the object type is set together with the event type, it is possible to detect an event only for workers, not supervisors, at the construction site.
  • the event determination unit 320 determines an event based on rules for the designated object and may also reconfirm the determined event based on the setting information of the segment setting unit 310. In this process, the event determination unit 320 may perform operations such as adjusting or reselecting the priorities of the events occurring for each segment. In short, the event determination unit 320 may determine event A based on the tracking result of the designated object, but may re-determine it as event B in consideration of the segment, that is, by comprehensively considering the surrounding situation. Above all, since the embodiment of the present invention determines, or comprehensively determines, an event for a designated object through segmentation of parts such as places in the entire image or background area, it is not limited to any one form.
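  • the A-to-B re-determination can be illustrated with a toy rule (the mapping itself is an assumption; the patent leaves the policy open):

```python
def rejudge(event, segment_class):
    """Re-determine an object-tracking event in light of its surrounding segment."""
    # Example rule: a person detected lying down inside a construction-site
    # segment is escalated to a fall accident; elsewhere the event stands.
    if event == "person_lying" and segment_class == "construction_site":
        return "fall_accident"
    return event

print(rejudge("person_lying", "construction_site"))  # fall_accident
print(rejudge("person_lying", "park"))               # person_lying
```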
  • the storage unit 230 of FIG. 2 may sequentially store the captured images provided under the control of the control unit 210 in a memory and then retrieve them and provide them to the segment processing unit 220 .
  • under the control of the control unit 210, the storage unit 230 may temporarily store the image analysis result in a specified format, including the segment data processed by the segment processing unit 220, and output it to be systematically classified and saved in the DB 120a of FIG. 1.
  • the control unit 210 may include a CPU and a memory and may be formed as a single chip.
  • the CPU includes a control circuit, an arithmetic logic unit (ALU), an instruction interpreter, and registers,
  • and the memory may include a RAM.
  • the control circuit performs control operations,
  • the arithmetic logic unit performs arithmetic operations on binary bit information,
  • the instruction interpreter, which may include an interpreter or a compiler, converts a high-level language into machine language and/or machine language into a high-level language,
  • and the registers are involved in storing software data.
  • a program stored in the segment processing unit 220 may be copied and loaded into memory, that is, RAM, and then executed, which can greatly increase the data processing speed.
  • FIG. 4 is a flowchart illustrating a driving process of an image processing apparatus according to an embodiment of the present invention.
  • the image processing apparatus 120 receives a captured image in an arbitrary space (S400).
  • the captured image is not necessarily an image provided directly by a camera; it may be an image stored in and provided from a storage medium such as a USB drive, or from a server such as a DVR.
  • the image processing device 120 automatically classifies, into respective regions, a plurality of regions containing designated objects of different types (e.g., construction sites, parking lots, etc.), including places, in the received captured image, and may process an event for each segment based on the automatically classified segments (S410).
  • in one embodiment, the image processing apparatus 120 determines an event for the designated object on a rule basis or using artificial-intelligence deep learning, and then performs operations such as readjusting the determined event in consideration of the segment or based on the setting information related to the segment. Since the event is thus determined in consideration of the surroundings of the designated object, the accuracy of event determination may be increased.
  • in another embodiment, as described above, the image processing apparatus 120 performs segmentation on the entire image or the background and then detects events based on the event type and event occurrence condition set for each segment, so the embodiment of the present invention is not particularly limited to any one form of operation. In other words, since the user has set which events to check for in a housing complex and which events to check for in relation to a station, together with the event occurrence conditions, the image processing device 120 performs image analysis and segment processing accordingly and notifies the user when a corresponding event is detected.
  • in addition, the image processing apparatus 120 can perform various other operations; since the other details have been sufficiently described above, the description here is replaced by those contents.
  • FIG. 5 is a flowchart illustrating a driving process of an image processing apparatus according to another embodiment of the present invention.
  • the image processing apparatus 120 performs operations for setting an event for each segment based on the captured image received from the photographing apparatus 100 (S500 to S525).
  • the image processing apparatus 120 may perform segmentation of the background region (or the entire image), automatically classify the segments, and set an event for each segment. For example, when a segment is a construction site, the system programmatically presents the event types available for that segment, and the user can select N events to detect and set the event occurrence conditions for the selected events. Likewise, in the case of an office, the system, that is, a pre-installed program, presents the selectable event types related to an office to the user, the user selects the events of interest, and the event occurrence conditions may be set as described above.
  • the image processing apparatus 120 may detect an event based on the setting information ( S530 to S560 ). In other words, if there are a plurality of segments in which events are set in the received captured image, the image processing apparatus 120 according to an embodiment of the present invention may simultaneously detect and notify the plurality of events.
  • Here, the image processing apparatus 120 determines an event on a rule basis or through deep learning of artificial intelligence, by way of object detection, classification, and tracking of the object of interest specified by the user, and then, as above, finally determines the event based on the information the user has set for each segment. For example, if an event is detected and about to be notified but, according to the setting information the user has set for the surrounding segment, it is not an event of interest, the notification of that event may be canceled.
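The final determination step, in which detected events are kept or canceled according to the per-segment settings, might look like the following sketch; the event names and the settings layout are hypothetical:

```python
def finalize_events(detected, segment_settings):
    """Keep only detections the user configured as events of interest.

    detected: list of (segment_id, event_type) proposed by the detector;
    segment_settings: segment_id -> set of event types of interest.
    """
    notified = []
    for segment_id, event_type in detected:
        interests = segment_settings.get(segment_id, set())
        if event_type in interests:
            notified.append((segment_id, event_type))
        # otherwise the notification for this event is canceled
    return notified

settings = {1: {"intrusion"}, 2: {"loitering", "fall"}}
detected = [(1, "intrusion"), (1, "loitering"), (2, "fall")]
print(finalize_events(detected, settings))  # [(1, 'intrusion'), (2, 'fall')]
```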
  • Furthermore, the image processing apparatus 120 may generate big data by collecting, as time changes, metadata related to the objects in which events have occurred. The big data may accordingly be used for event determination (S570); after processing the metadata into big data, operations such as re-selecting the priorities of the events occurring in each segment may be performed. For example, a list of events may be ranked based on statistics at a specific point in time derived from the big data, and as time changes, the ranking may be adjusted on the basis of the big data.
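One way to realize the re-ranking described above is to aggregate the collected event metadata per segment within a time window and order event types by frequency; the record layout and window boundaries below are assumptions for illustration:

```python
from collections import Counter, defaultdict

def rank_events(metadata, window):
    """Rank event types per segment by frequency within [start, end).

    metadata: list of dicts with 'time', 'segment', 'event' keys;
    returns segment -> event types sorted most frequent first.
    """
    start, end = window
    counts = defaultdict(Counter)
    for record in metadata:
        if start <= record["time"] < end:
            counts[record["segment"]][record["event"]] += 1
    return {seg: [e for e, _ in c.most_common()] for seg, c in counts.items()}

meta = [
    {"time": 1, "segment": "station", "event": "loitering"},
    {"time": 2, "segment": "station", "event": "intrusion"},
    {"time": 3, "segment": "station", "event": "loitering"},
    {"time": 9, "segment": "station", "event": "intrusion"},
]
print(rank_events(meta, (0, 5)))   # {'station': ['loitering', 'intrusion']}
print(rank_events(meta, (5, 10)))  # {'station': ['intrusion']}
```

Sliding the window over time re-orders the list, matching the idea that the priority of per-segment events is re-selected as time changes.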
  • Besides the above, the image processing apparatus 120 can perform various other operations; since those details have been sufficiently described above, the foregoing description is relied on in place of repetition.
  • Meanwhile, the present invention is not necessarily limited to this embodiment. That is, within the scope of the object of the present invention, all of the components may operate by selectively combining one or more of them.
  • In addition, although each of the components may be implemented as independent hardware, some or all of the components may be selectively combined and implemented as a computer program having program modules that perform some or all of the combined functions in one or more pieces of hardware. The codes and code segments constituting such a computer program can be easily deduced by those skilled in the art of the present invention.
  • Such a computer program may be stored in a non-transitory computer-readable medium and read and executed by a computer, thereby implementing an embodiment of the present invention.
  • Here, the non-transitory readable recording medium is not a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and can be read by a device.
  • Specifically, the above-described programs may be provided stored in a non-transitory readable recording medium such as a CD, DVD, hard disk, Blu-ray disc, USB, memory card, or ROM.
  • 100: photographing device, 110: communication network

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention relates to an image processing apparatus for automatic segment classification and a control method of the apparatus. The image processing apparatus for automatic segment classification according to an embodiment of the present invention may comprise: a communication interface unit for receiving an image obtained by photographing a predetermined space; and a control unit for automatically classifying, into segments of the respective zones, multiple zones comprising different types of designated objects, including places, in the received photographed image, and for processing a segment-specific event based on the automatically classified segments.
PCT/KR2021/005986 2021-03-04 2021-05-13 Image processing apparatus for automatic segment classification, and control method thereof WO2022186426A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2021-0028606 2021-03-04
KR1020210028606A KR102336480B1 (ko) 2021-03-04 Image processing apparatus for automatic segment classification, and driving method of the apparatus

Publications (1)

Publication Number Publication Date
WO2022186426A1 true WO2022186426A1 (fr) 2022-09-09

Family

ID=78868012

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/005986 WO2022186426A1 (fr) 2021-03-04 2021-05-13 Image processing apparatus for automatic segment classification, and control method thereof

Country Status (2)

Country Link
KR (1) KR102336480B1 (fr)
WO (1) WO2022186426A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102446832B1 (ko) * 2021-12-20 2022-09-22 김승모 System for detecting objects in video, and method therefor
WO2024080543A1 (fr) * 2022-10-11 2024-04-18 삼성전자 주식회사 Electronic device for generating video highlights, and operation method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150084567A (ko) * 2014-01-14 2015-07-22 한화테크윈 주식회사 요약 영상 브라우징 시스템 및 방법
KR20170137747A (ko) * 2015-03-17 2017-12-13 리리컬 랩스 비디오 컴프레션 테크놀로지, 엘엘씨 프랙탈 차원 측정을 이용한 전경 검출
KR20180017097A (ko) * 2015-07-21 2018-02-20 소니 주식회사 반자동 이미지 세그먼트화
KR102033903B1 (ko) * 2018-06-25 2019-10-18 주식회사 인텔리빅스 안전관제시스템 및 그 시스템의 구동방법
KR102139582B1 (ko) * 2019-12-05 2020-07-29 주식회사 인텔리빅스 다중 roi 및 객체 검출 dcnn 기반의 cctv 영상분석장치 및 그 장치의 구동방법

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101394242B1 (ko) * 2011-09-23 2014-05-27 광주과학기술원 Video surveillance apparatus and video surveillance method
KR101668303B1 (ko) 2015-05-15 2016-10-24 (주)클로버 Integrated management system for event videos
KR102423760B1 (ko) 2017-02-02 2022-07-22 한국전자통신연구원 Video event-unit segmentation system and method therefor
KR101856733B1 (ko) 2017-05-19 2018-05-10 주식회사 미디어테크 System for displaying control video according to events
KR102028147B1 (ko) 2018-10-12 2019-10-04 (주)파슨텍 Integrated control system, video analysis apparatus, and local control server for monitoring event situations
KR20200061747A (ko) 2018-11-26 2020-06-03 한국전자통신연구원 Apparatus and method for recognizing events in sports game video
KR102154610B1 (ko) 2020-04-10 2020-09-10 주식회사 넥스트케이 Video apparatus capable of calculating event occurrence location, and driving method thereof


Also Published As

Publication number Publication date
KR102336480B1 (ko) 2021-12-07

Similar Documents

Publication Publication Date Title
KR101964683B1 (ko) Smart image processing apparatus and driving method thereof
WO2022186426A1 (fr) Image processing apparatus for automatic segment classification, and control method thereof
KR102139582B1 (ko) CCTV video analysis apparatus based on multiple ROIs and object-detection DCNN, and driving method thereof
KR102119721B1 (ko) Intelligent edge apparatus and driving method thereof
KR101688218B1 (ko) Traffic flow and incident management system using object-recognition-based real-time video detection technology, and processing method thereof
WO2014051337A1 (fr) Apparatus and method for detecting an event from a plurality of photographed images
CN106790515B (zh) Abnormal event processing system and application method thereof
KR101462855B1 (ko) Unmanned automatic enforcement system for illegally parked vehicles, and processing method thereof
CN106454253A (zh) Method and system for detecting loitering in an area
WO2016099084A1 (fr) System and method for providing a security service using a beacon signal
KR102306854B1 (ko) Traffic situation management system and method
WO2020085558A1 (fr) High-speed analysis image processing apparatus and control method thereof
KR20210072285A (ko) CCTV video information analysis system for real-time monitoring of room occupancy, and method therefor
KR20190092227A (ko) System for searching live video and recorded video through intelligent video analysis
WO2022055023A1 (fr) Integrated IoT intelligent video analysis platform system capable of recognizing intelligent objects
WO2012137994A1 (fr) Image recognition device and image monitoring method thereof
CN115428499A (zh) Wireless IP camera detection system and method
KR102107957B1 (ko) CCTV monitoring system and method for detecting intrusion on building exterior walls
US20190147734A1 (en) Collaborative media collection analysis
KR101954719B1 (ko) Event detection apparatus and driving method thereof
CN116246416A (zh) Intelligent analysis and early-warning platform and method for security
WO2017034309A1 (fr) Method and apparatus for classifying multimedia data
WO2018097384A1 (fr) Apparatus and method for attendance notification
KR101921868B1 (ko) Intelligent video surveillance system using high-resolution cameras, and method therefor
KR101547255B1 (ko) Object-based search method for an intelligent surveillance system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21929288

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21929288

Country of ref document: EP

Kind code of ref document: A1