WO2023171981A1 - Surveillance camera management device - Google Patents

Surveillance camera management device

Info

Publication number
WO2023171981A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
surveillance camera
neural network
learning
partial
Prior art date
Application number
PCT/KR2023/002898
Other languages
French (fr)
Korean (ko)
Inventor
허앤드류
노창훈
Original Assignee
주식회사 코아아띠즈
주식회사 글로벌비엠아이
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 코아아띠즈, 주식회사 글로벌비엠아이
Publication of WO2023171981A1 publication Critical patent/WO2023171981A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/91 Television signal processing therefor
    • H04N 5/92 Transformation of the television signal for recording, e.g. modulation, frequency changing; inverse transformation for playback
    • H04N 5/926 Transformation of the television signal for recording by pulse code modulation
    • H04N 7/00 Television systems
    • H04N 7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a surveillance camera management device that supplies power to at least one connected surveillance camera and compresses and provides images acquired by the at least one surveillance camera.
  • Conventionally, the power supply infrastructure that supplies power to surveillance cameras is installed separately from the network line used for the cameras' data transmission and reception, so as the number of installed cameras increases, power supply devices and the like must be expanded in proportion.
  • the present invention seeks to further simplify the configuration of a surveillance system including a plurality of surveillance cameras.
  • the present invention seeks to efficiently compress and transmit images without deteriorating image quality.
  • A surveillance camera management device that supplies power to at least one surveillance camera and compresses and provides images acquired by the at least one surveillance camera according to an embodiment of the present invention includes: a surveillance camera connection unit connected to at least one surveillance camera; a network connection unit connected to a network; a power supply unit that receives power from a power source and generates power to be supplied to the at least one surveillance camera connected through the surveillance camera connection unit; a control unit that compresses video data received from the at least one surveillance camera to generate compressed video data; and a communication unit that receives the video data from the at least one surveillance camera, provides the video data to the control unit, and transmits the compressed video data generated by the control unit to at least one external device connected to the network.
  • The control unit uses the learned first artificial neural network to divide the first image into at least one first partial image that requires compression according to a first compression condition and at least one second partial image that requires compression according to a second compression condition, generates first image data by compressing the first partial image according to the first compression condition, generates second image data by compressing the second partial image according to the second compression condition, and may transmit the first image data and the second image data to the at least one external device.
  • the first artificial neural network may be a neural network that learns the correlation between a first training image, a first partial training image generated from the first training image, and a second partial training image generated from the first training image.
  • The first partial learning image may be an image that includes only areas corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the first learning image, and the second partial learning image may be an image that includes only areas corresponding to objects classified as having a degree of movement less than the predetermined threshold within the first learning image.
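As an illustration of the partition described above, the split into a "moving" first partial image and a "static" second partial image can be sketched with a simple per-pixel motion threshold. In the invention this classification is performed by the learned first artificial neural network; the function name and threshold below are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def split_by_motion(prev_frame, curr_frame, threshold=25):
    """Split curr_frame into a 'moving' partial image and a 'static'
    partial image by thresholding the per-pixel frame difference.
    Illustrative stand-in for the first artificial neural network."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving_mask = diff > threshold                         # degree of movement above threshold
    first_partial = np.where(moving_mask, curr_frame, 0)   # moving regions only
    second_partial = np.where(moving_mask, 0, curr_frame)  # static regions only
    return first_partial.astype(np.uint8), second_partial.astype(np.uint8)
```

The two partial images are complementary: every pixel of the original frame appears in exactly one of them, so they can be compressed under different conditions and recombined.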
  • The first image consists of a first resolution and a first frame rate, and the first compression condition is a condition for compressing the first partial image to a second resolution and a second frame rate.
  • The second resolution may be greater than or equal to the first resolution, and the second frame rate may be higher than the first frame rate.
  • The first image consists of a first resolution and a first frame rate, and the second compression condition is a condition for compressing the second partial image to a third resolution and a third frame rate.
  • The third resolution may be less than or equal to the first resolution, and the third frame rate may be lower than the first frame rate.
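A compression condition as described above pairs a resolution with a frame rate per partial image. The sketch below models this as a resolution scale factor and a frame-dropping step; a real system would use a video codec (e.g. H.264), and the specific scale factors and steps are illustrative assumptions.

```python
import numpy as np

def compress_partial(frames, scale=1.0, frame_step=1):
    """Apply a compression condition (resolution scale, frame-rate
    reduction) to a list of partial images. Illustrative only:
    scale < 1 lowers resolution; frame_step > 1 drops frames."""
    kept = frames[::frame_step]            # reduce frame rate
    out = []
    for f in kept:
        h, w = f.shape[:2]
        nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
        ys = np.arange(nh) * h // nh       # nearest-neighbour resize
        xs = np.arange(nw) * w // nw
        out.append(f[ys][:, xs])
    return out

# Example: keep the moving partial images at full quality (first
# compression condition) while the static partial images are reduced
# to quarter resolution at a quarter of the frame rate (second condition).
frames = [np.full((8, 8), i, dtype=np.uint8) for i in range(8)]
first_data = compress_partial(frames, scale=1.0, frame_step=1)
second_data = compress_partial(frames, scale=0.5, frame_step=4)
```

Because the static regions carry little information between frames, reducing their resolution and frame rate shrinks the transmitted data while leaving the moving regions at full quality.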
  • The control unit generates identification information for each of at least one object included in the first image using a learned second artificial neural network and, referring to the identification information for each of the at least one object, divides the first image into at least one third partial image that requires compression according to the first compression condition and at least one fourth partial image that requires compression according to the second compression condition; third image data may be generated by compressing the third partial image according to the first compression condition, fourth image data may be generated by compressing the fourth partial image according to the second compression condition, and the third image data and the fourth image data may be transmitted to the at least one external device.
  • The second artificial neural network may be a neural network that learns the correlation between a second learning image, a region corresponding to an object within the second learning image, and identification information of the object.
  • the control unit uses identification information of each of the at least one object to generate the third partial image composed of an area corresponding to an object classified as having a degree of movement greater than a predetermined threshold in the first image, Using the identification information of each of the at least one object, the fourth partial image consisting of an area corresponding to an object classified as having a degree of movement less than a predetermined threshold in the first image may be generated.
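A minimal sketch of this object-based partition follows, assuming each object's identification information carries a bounding box and an estimated degree of movement (both hypothetical field names; in the invention this information comes from the learned second artificial neural network):

```python
import numpy as np

def partition_by_objects(image, objects, threshold=0.5):
    """Build the third/fourth partial images from per-object
    identification info. Each object is a hypothetical dict with a
    bounding box (y0, y1, x0, x1) and a 'movement' score."""
    third = np.zeros_like(image)   # moving objects -> first compression condition
    fourth = np.zeros_like(image)  # static objects -> second compression condition
    for obj in objects:
        y0, y1, x0, x1 = obj["box"]
        target = third if obj["movement"] >= threshold else fourth
        target[y0:y1, x0:x1] = image[y0:y1, x0:x1]
    return third, fourth
```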
  • The surveillance camera connection unit and the at least one surveillance camera are connected through a first interface; the power generated by the power supply unit is supplied to the at least one surveillance camera through the first interface, and the image data generated by the at least one surveillance camera may be transmitted to the communication unit through the first interface.
  • the control unit may provide an interface for managing the at least one surveillance camera to the at least one external device according to a surveillance camera management interface request received from the at least one external device.
  • The interface may include: an interface for setting the environment in which the at least one surveillance camera is installed; an interface for setting at least one condition used to compress images acquired by the at least one surveillance camera; and an interface for setting items related to the power supplied to the at least one surveillance camera.
  • the configuration of a surveillance system including a plurality of surveillance cameras can be simplified.
  • the present invention can simplify the configuration of the system by allowing power supply to the surveillance camera and compression of images acquired by the surveillance camera to be performed in one device.
  • the present invention can compress images without deteriorating image quality by varying compression conditions for each part of the image using a learned artificial neural network.
  • FIG. 1 is a diagram schematically showing the configuration of a monitoring system according to an embodiment of the present invention.
  • FIGS. 2 and 3 are diagrams schematically showing the configuration of a surveillance camera management device 100 according to an embodiment of the present invention.
  • Figures 4 and 5 are diagrams for explaining an exemplary structure of a learned artificial neural network used by the surveillance camera management device 100 according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a method in which an artificial neural network learning device (not shown) learns a first artificial neural network 420 using a plurality of learning data 410 according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a process of outputting a first partial image 441 and a second partial image 442 from the first image 430 using the first artificial neural network 420 learned by an artificial neural network learning device (not shown) according to an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating a method in which an artificial neural network learning device (not shown) learns a second artificial neural network 520 using a plurality of learning data 510 according to an embodiment of the present invention.
  • FIG. 9 is a diagram illustrating a process of outputting a region 541 corresponding to an object recognized from the first image 530 and identification information 542 of the object, using the second artificial neural network 520 learned by an artificial neural network learning device (not shown) according to an embodiment of the present invention.
  • FIG. 10 is a diagram illustrating a process in which the control unit 110 generates first image data 622 and second image data 632 from the first image 610 according to an embodiment of the present invention.
  • FIG. 11 is a diagram showing image data transmitted by the control unit 110 over time.
  • Figure 12 is a flowchart to explain a surveillance camera management method performed by the control unit 110 according to an embodiment of the present invention.
  • A surveillance camera management device that supplies power to at least one surveillance camera and compresses and provides images acquired by the at least one surveillance camera according to an embodiment of the present invention includes: a surveillance camera connection unit connected to at least one surveillance camera; a network connection unit connected to a network; a power supply unit that receives power from a power source and generates power to be supplied to the at least one surveillance camera connected through the surveillance camera connection unit; a control unit that compresses video data received from the at least one surveillance camera to generate compressed video data; and a communication unit that receives the video data from the at least one surveillance camera, provides the video data to the control unit, and transmits the compressed video data generated by the control unit to at least one external device connected to the network.
  • FIG. 1 is a diagram schematically showing the configuration of a monitoring system according to an embodiment of the present invention.
  • the surveillance camera management device can supply power to at least one surveillance camera, compress the video acquired by at least one surveillance camera, and supply it to one or more other devices.
  • the surveillance camera management device supplies power to each surveillance camera through a single interface (for example, a network connection cable with an RJ45 type plug), and collects the information obtained by the surveillance camera. Video can be obtained.
  • a surveillance camera management device can compress images acquired by a surveillance camera using a learned artificial neural network. For example, the surveillance camera management device divides the image acquired from the surveillance camera into at least one first partial image that requires compression according to a first compression condition and at least one second partial image that requires compression according to a second compression condition, A compressed image can be generated from each partial image. A detailed description of this will be provided later.
  • 'artificial neural network' is a neural network learned according to a predetermined purpose, and may refer to an artificial neural network learned by machine learning or deep learning techniques. Such a neural network will be described with reference to FIGS. 4 to 9.
  • the surveillance system includes a surveillance camera management device 100, at least one surveillance camera 200, a network 300, an image storage device 310, and a user terminal ( 320) may be included.
  • the surveillance camera management device 100 supplies power to at least one surveillance camera 200 and compresses the video acquired by the at least one surveillance camera 200 into one It can be supplied to other devices.
  • The surveillance camera management device 100 can manage the at least one connected surveillance camera 200 by assigning an identification number to each.
  • For example, the surveillance camera management device 100 may manage the at least one surveillance camera 200 by assigning an IP address to each.
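Such bookkeeping can be sketched as follows; the management subnet and function name are illustrative assumptions, not part of the disclosure.

```python
import ipaddress

def assign_camera_addresses(camera_ids, subnet="192.168.10.0/24"):
    """Assign each connected surveillance camera an identification
    number and an IP address drawn from a management subnet."""
    hosts = ipaddress.ip_network(subnet).hosts()  # usable host addresses
    return {cam_id: (idx, str(next(hosts)))
            for idx, cam_id in enumerate(camera_ids, start=1)}
```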
  • this method is illustrative and the scope of the present invention is not limited thereto.
  • FIGS. 2 and 3 are diagrams schematically showing the configuration of a surveillance camera management device 100 according to an embodiment of the present invention. Hereinafter, it will be described with reference to FIGS. 2 and 3 together.
  • the surveillance camera management device 100 includes a control unit 110, a power unit 120, a surveillance camera connection unit 130, a network connection unit 140, and a communication unit 150. may include.
  • The control unit 110 may be a device that controls the series of processes by which the surveillance camera management device 100 supplies power to at least one surveillance camera 200, compresses the video acquired by the at least one surveillance camera 200, and supplies it to one or more other devices.
  • control unit 110 may mean, for example, a data processing device built into hardware that has a physically structured circuit to perform a function expressed as a code or command included in a program.
  • Data processing devices built into hardware include a microprocessor, a central processing unit (CPU), a processor core, a multiprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a graphics processing unit (GPU), etc., but the scope of the present invention is not limited thereto.
  • Although the control unit 110 is shown as a single unit in FIG. 1, it may be composed of a plurality of hardware components (e.g., a plurality of chips) to control the series of processes described above. However, this is an example and the spirit of the present invention is not limited thereto.
  • The power supply unit 120 can receive power from a power source (or an external power source) and generate power to be supplied to at least one surveillance camera 200 connected through the surveillance camera connection unit 130.
  • the power unit 120 may generate power used by the surveillance camera management device 100 from power supplied from the power source.
  • the power supply unit 120 may receive power from a power source and generate power to be supplied to at least one surveillance camera 200 and/or the surveillance camera management device 100. At this time, the power supply unit 120 may generate a first type of power supplied to at least one surveillance camera 200 and a second type of power used in the surveillance camera management device 100.
  • this is an example and the spirit of the present invention is not limited thereto.
  • Power generated by the power supply unit 120 may be supplied to at least one surveillance camera 200 through an interface connected to the surveillance camera connection unit 130, for example, a network connection cable having an RJ45 type plug.
  • power supply to at least one surveillance camera 200 and data transmission and reception to and from at least one surveillance camera 200 can be accomplished through one interface.
  • the power supply unit 120 may include a power port 121 connected to an external power source.
  • the surveillance camera connection unit 130 can connect at least one surveillance camera 200 and the surveillance camera management device 100.
  • ‘connecting’ two devices may mean not only physically connecting but also electrically connecting.
  • The surveillance camera connection unit 130 may include one or more connection ports 131 to 136 connected to at least one surveillance camera 200 through a predetermined interface, as shown in FIG. 3.
  • For example, the first surveillance camera may be connected to the first port 131 of the surveillance camera connection unit 130, and the second surveillance camera may be connected to the second port 132.
  • this is an example and the spirit of the present invention is not limited thereto.
  • one or more ports 131 may be for connection according to a predetermined interface.
  • one or more ports 131 may be for connecting a network connection cable with an RJ45 type plug. Please note that this is an example and the spirit of the present invention is not limited thereto.
  • the network connection unit 140 can connect the network 300 and the surveillance camera management device 100.
  • The network connection unit 140 may include a first port 141 connected to the network 300 according to a first interface and a second port 142 connected to the network 300 according to a second interface.
  • the first interface may be, for example, a network connection cable with an RJ45 type plug
  • the second interface may be an optical cable.
  • this is an example and the spirit of the present invention is not limited thereto.
  • the communication unit 150 receives image data from at least one surveillance camera 200 and provides it to the control unit 110, and transmits the compressed image data generated by the control unit 110 to the network 300. Can be transmitted to at least one external device connected to.
  • The communication unit 150 may be a device containing the hardware and software necessary for the surveillance camera management device 100 to transmit and receive signals, such as control signals or data signals, through wired or wireless connections with other network devices such as the video storage device 310 and/or the user terminal 320.
  • The communication unit 150 receives video data from at least one surveillance camera 200 through the surveillance camera connection unit 130, and transmits the compressed video data generated by the control unit 110 to at least one external device through the network connection unit 140. At this time, the surveillance camera connection unit 130 and at least one surveillance camera 200 may be connected according to a predetermined interface, through which not only can video data be transmitted and received, but power can also be supplied to the at least one surveillance camera 200.
  • At least one surveillance camera 200 may be a device that acquires images of the surrounding environment and transmits them to another device (for example, the surveillance camera management device 100).
  • At least one surveillance camera 200 may be of a type capable of panning, tilting, and zooming, or may be of a type with a fixed angle of view. However, this is an example, and any device that acquires images of the environment and transmits them may correspond to 'at least one surveillance camera 200' described in the present invention.
  • Each of at least one surveillance camera 200 may be connected to the surveillance camera management device 100 according to a predetermined interface.
  • each of at least one surveillance camera 200 may be connected to the surveillance camera management device 100 through a network connection cable having an RJ45 type plug.
  • at least one surveillance camera 200 may receive power through one interface and transmit acquired image data at the same time.
  • the network 300 may refer to a communication network that mediates data transmission and reception between the surveillance camera management device 100 and at least one external device 310 and 320.
  • The network 300 may be a wired network such as Local Area Networks (LANs), Wide Area Networks (WANs), Metropolitan Area Networks (MANs), or Integrated Services Digital Networks (ISDNs), or a wireless network such as wireless LANs, CDMA, Bluetooth, or satellite communication, but the scope of the present invention is not limited thereto.
  • the network 300 may provide a path through which the surveillance camera management device 100 transmits compressed video data to the video storage device 310 and/or the user terminal 320.
  • this is an example and the spirit of the present invention is not limited thereto.
  • the video storage device 310 may be a device that receives, stores, and/or manages compressed video data transmitted by the surveillance camera management device 100.
  • the video storage device 310 may be a Network Video Recorder (NVR) or a device including an NVR.
  • the image storage device 310 can generate an image from compressed image data received through the network 300 and store it in storage. Additionally, the image storage device 310 may provide the image stored in the storage to another device, such as the user terminal 320, upon request from the other device.
  • the image storage device 310 may provide an interface for searching and/or managing stored images.
  • The image storage device 310 may provide stored images in the form of a web page to the user terminal 320 connected through the network 300.
  • this is an example and the spirit of the present invention is not limited thereto.
  • the user terminal 320 can receive and display images from the surveillance camera management device 100 and/or the image storage device 310.
  • The user terminal 320 may be, for example, a mobile phone as shown in FIG. 1.
  • However, any general-purpose information processing device with a networking function, such as a tablet PC or a PC, may correspond to the user terminal 320 of the present invention.
  • Figures 4 and 5 are diagrams for explaining an exemplary structure of a learned artificial neural network used by the surveillance camera management device 100 according to an embodiment of the present invention.
  • the artificial neural network is learned by a separate artificial neural network learning device (not shown), and the surveillance camera management device 100 receives and uses the learned artificial neural network from the artificial neural network learning device.
  • the artificial neural network learning device may be implemented in the form of a general-purpose information processing device such as a computer or server.
  • the artificial neural network may be an artificial neural network based on a convolutional neural network (CNN) model as shown in FIG. 4.
  • The CNN model may be a hierarchical model that ultimately extracts features of input data by alternately applying a plurality of computational layers (convolutional layers and pooling layers).
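The alternation of convolutional and pooling layers described above can be sketched in plain numpy; this is a minimal, untrained illustration of the two layer types, not the patented network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slides the kernel over the image and
    extracts a feature value at each position (a convolutional layer)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """2x2 max pooling: combines extracted feature values into a
    coarser feature map (a pooling layer)."""
    h, w = feature_map.shape[0] // size, feature_map.shape[1] // size
    return feature_map[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))
```

Stacking `conv2d` and `max_pool` repeatedly shrinks the spatial dimensions while building up increasingly abstract features, which is the hierarchical extraction the CNN model relies on.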
  • An artificial neural network learning device (not shown) according to an embodiment of the present invention can build or learn an artificial neural network model by processing learning data according to a supervised learning technique. A detailed description of how the artificial neural network learning device (not shown) trains the artificial neural network will be described later.
  • An artificial neural network learning device (not shown) according to an embodiment of the present invention may train the artificial neural network by using a plurality of learning data and repeatedly updating the weight of each layer and/or each node, so that the output value generated by inputting any one input data to the artificial neural network approaches the value labeled in the corresponding learning data.
  • the artificial neural network learning device may update the weight (or coefficient) of each layer and/or each node according to a back propagation algorithm.
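The repeated weight update described above can be sketched for a single linear node with squared-error loss; a full network applies the same chain-rule gradient layer by layer (backpropagation). The learning rate and iteration count here are illustrative.

```python
import numpy as np

# Toy supervised update: one linear node y = w . x, squared-error loss.
# Each step nudges the weights so the output approaches the label.
rng = np.random.default_rng(0)
w = rng.normal(size=2)                 # initial weights
x, label = np.array([1.0, 2.0]), 3.0   # one (input, labeled value) pair
lr = 0.05                              # learning rate (illustrative)
for _ in range(200):
    y = w @ x
    grad = 2 * (y - label) * x         # dLoss/dw via the chain rule
    w -= lr * grad                     # gradient-descent weight update
```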
  • An artificial neural network learning device (not shown) according to an embodiment of the present invention may generate a convolutional layer for extracting feature values of input data and a pooling layer that combines the extracted feature values to form a feature map.
  • The artificial neural network learning device (not shown) according to an embodiment of the present invention may generate a fully connected layer that combines the generated feature maps to determine the probability that the input data corresponds to each of a plurality of items.
  • An artificial neural network learning device (not shown) according to an embodiment of the present invention can calculate an output layer including output corresponding to input data.
  • the input data is divided into blocks of 5
  • The artificial neural network may be stored in the memory of the artificial neural network learning device (not shown) in the form of the type of the artificial neural network model, the coefficients of at least one node constituting the artificial neural network, the weights of the nodes, and the coefficients of the functions defining the relationships between the plurality of layers constituting the artificial neural network.
  • the structure of the artificial neural network can also be stored in memory in the form of source code and/or program.
  • the artificial neural network may be an artificial neural network based on a Recurrent Neural Network (RNN) model as shown in FIG. 5.
  • The artificial neural network according to this recurrent neural network (RNN) model may include an input layer (L1) including at least one input node (N1), a hidden layer (L2) including a plurality of hidden nodes (N2), and an output layer (L3) including at least one output node (N3).
  • the hidden layer L2 may include one or more layers that are fully connected as shown.
  • the artificial neural network may include a function (not shown) that defines the relationship between each hidden layer.
  • At least one output node N3 of the output layer L3 may include an output value generated by the artificial neural network from input values of the input layer L1 under the control of an artificial neural network learning device (not shown).
  • each node of each layer may be a vector. Additionally, each node may include a weight corresponding to the importance of the node.
  • The artificial neural network may include a first function (F1) that defines the relationship between the input layer (L1) and the hidden layer (L2) and a second function (F2) that defines the relationship between the hidden layer (L2) and the output layer (L3).
  • the first function (F1) may define a connection relationship between the input node (N1) included in the input layer (L1) and the hidden node (N2) included in the hidden layer (L2).
  • the second function (F2) may define a connection relationship between the hidden node (N2) included in the hidden layer (L2) and the output node (N3) included in the output layer (L3).
  • The first function (F1), the second function (F2), and the functions between the hidden layers may follow a recurrent neural network model in which a result is output based on the input of the previous node.
  • the first function (F1) and the second function (F2) may be learned based on a plurality of learning data.
  • functions between a plurality of hidden layers in addition to the above-described first function (F1) and second function (F2) may also be learned.
  • the artificial neural network according to an embodiment of the present invention can be learned using a supervised learning method based on labeled learning data.
  • An artificial neural network learning device (not shown) according to an embodiment of the present invention may train the artificial neural network by using a plurality of learning data and repeatedly updating the above-described functions (F1, F2, the functions between the hidden layers, etc.), so that the output value generated by inputting any one input data to the artificial neural network approaches the value labeled in the corresponding learning data.
  • the artificial neural network learning device (not shown) according to an embodiment of the present invention can update the above-described functions (F1, F2, functions between hidden layers, etc.) according to the back propagation algorithm.
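The roles of F1, F2, and the recurrent connection can be sketched as a minimal forward pass; the weight matrices below stand in for the learned functions and are illustrative, untrained values.

```python
import numpy as np

def rnn_forward(inputs, W_in, W_hh, W_out):
    """Forward pass of a minimal recurrent network: W_in plays the
    role of F1 (input layer -> hidden layer), W_hh is the recurrent
    hidden-to-hidden function that reuses the previous node's state,
    and W_out plays the role of F2 (hidden layer -> output layer)."""
    h = np.zeros(W_hh.shape[0])            # hidden state starts at zero
    outputs = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_hh @ h)   # F1 plus recurrent connection
        outputs.append(W_out @ h)          # F2
    return outputs
```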
  • this is an example and the spirit of the present invention is not limited thereto.
  • The types and/or structures of the artificial neural networks described in FIGS. 4 and 5 are exemplary and the scope of the present invention is not limited thereto. Therefore, artificial neural networks of various types of models may correspond to the 'artificial neural network' described throughout the specification.
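  • As an illustration of the structure described above, the following pure-Python sketch builds a minimal network in which F1 connects the input layer to the hidden layer and F2 connects the hidden layer to the output layer, and repeatedly updates both functions by backpropagation against labeled learning data. The layer sizes, biases, learning rate, iteration count, and toy OR-function data are illustrative assumptions, not values from this specification.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# F1: input layer L1 (2 nodes) -> hidden layer L2 (2 nodes), with biases.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
# F2: hidden layer L2 (2 nodes) -> output layer L3 (1 node), with a bias.
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def forward(x):
    h = [sigmoid(b1[j] + sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
    y = sigmoid(b2 + sum(W2[j] * h[j] for j in range(2)))
    return h, y

# Labeled learning data (toy OR function): supervised learning pushes the
# output toward the value labeled in each learning datum.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
before = loss()
for _ in range(2000):  # repeatedly update F1 and F2 (backpropagation)
    for x, t in data:
        h, y = forward(x)
        d_y = (y - t) * y * (1 - y)          # output-layer error signal
        for j in range(2):
            d_h = d_y * W2[j] * h[j] * (1 - h[j])
            W2[j] -= lr * d_y * h[j]         # update F2
            for i in range(2):
                W1[j][i] -= lr * d_h * x[i]  # update F1
            b1[j] -= lr * d_h
        b2 -= lr * d_y
```

After the loop, the squared-error loss is far smaller than at initialization, which is all the supervised-learning description above requires.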
  • FIG. 6 is a diagram illustrating a method in which an artificial neural network learning device (not shown) learns a first artificial neural network 420 using a plurality of learning data 410 according to an embodiment of the present invention.
  • FIG. 7 is a diagram for explaining the process of outputting a first partial image 441 and a second partial image 442 from the first image 430 using the first artificial neural network 420 learned by an artificial neural network learning device (not shown) according to an embodiment of the present invention.
  • the first artificial neural network 420 may be a neural network that has learned the correlation between the first learning image included in each of the plurality of learning data 410, the first partial learning image generated from the first learning image, and the second partial learning image generated from the first learning image.
  • the first artificial neural network 420 may refer to a neural network that has been trained (or is being trained) to output the first partial image 441 and the second partial image 442 according to the input of the first image 430, as shown in FIG. 7.
  • the first partial image (or first partial training image) may be an image that includes only the areas corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the original image (the first image or the first training image).
  • For example, if the original image is an image containing a scene in which a pedestrian is walking, the first partial image may be an image in which only the area corresponding to the pedestrian is extracted from the original image.
  • this is an example and the spirit of the present invention is not limited thereto.
  • the second partial image (or second partial training image) may be an image that includes only the areas corresponding to objects classified as having a degree of movement less than the predetermined threshold within the original image (the first image or the first training image).
  • For example, if the original image is an image containing a scene in which a pedestrian is walking, the second partial image may be an image in which only the remaining area excluding the area corresponding to the pedestrian is extracted from the original image.
  • this is an example and the spirit of the present invention is not limited thereto.
  • the second partial image (or second partial training image) may be generated by excluding the first partial image (or first partial training image) from the original image (the first image or the first training image). In other words, an image obtained by combining the first partial image (or first partial training image) and the second partial image (or second partial training image) may correspond to the original image (the first image or the first training image).
  • this is an example and the spirit of the present invention is not limited thereto.
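  • The relationship described above (the second partial image is the original minus the first partial image, and combining the two partial images reproduces the original) can be sketched with a simple pixel mask. In the embodiments above this division is produced by the first artificial neural network 420; here the frame and the mask are hard-coded stand-ins for illustration.

```python
# Hypothetical 4x4 grayscale frame; 1s in the mask mark pixels belonging to a
# moving object (a stand-in for the first artificial neural network's output).
frame = [[10 * (r + 1) + c for c in range(4)] for r in range(4)]
moving = [[1 if 1 <= r <= 2 and 1 <= c <= 2 else 0 for c in range(4)] for r in range(4)]

def split(img, mask):
    """Return (first partial image, second partial image)."""
    first = [[p if m else None for p, m in zip(row, mrow)]
             for row, mrow in zip(img, mask)]
    second = [[None if m else p for p, m in zip(row, mrow)]
              for row, mrow in zip(img, mask)]
    return first, second

def combine(first, second):
    """Combining the two partial images reproduces the original image."""
    return [[a if a is not None else b for a, b in zip(r1, r2)]
            for r1, r2 in zip(first, second)]

first, second = split(frame, moving)
assert combine(first, second) == frame
```

Each pixel belongs to exactly one partial image, which is why the combination recovers the original losslessly.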
  • Each of the plurality of learning data 410 may include a first learning image, a first partial learning image generated from the first learning image, and a second partial learning image generated from the first learning image.
  • For example, the first learning data 411 may include a first learning image 411A, a first partial learning image 411B generated from the first learning image 411A, and a second partial learning image 411C generated from the first learning image 411A.
  • Likewise, the second learning data 412 and the third learning data 413 may each include a first learning image, a first partial learning image generated from the first learning image, and a second partial learning image generated from the first learning image.
  • FIG. 8 is a diagram illustrating a method in which an artificial neural network learning device (not shown) learns a second artificial neural network 520 using a plurality of learning data 510 according to an embodiment of the present invention.
  • FIG. 9 is a diagram for explaining the process of outputting a region 541 corresponding to an object recognized from the first image 530 and identification information 542 of the object, using the second artificial neural network 520 learned by an artificial neural network learning device (not shown) according to an embodiment of the present invention.
  • the second artificial neural network 520 may be a neural network that has learned the correlation between the second learning image included in each of the plurality of learning data 510, the area corresponding to the object in the second learning image, and the identification information of the object.
  • the second artificial neural network 520 may refer to a neural network that has been trained (or is being trained) to output, according to the input of the first image 530, a region 541 corresponding to an object within the first image 530 and identification information 542 of the corresponding object, as shown in FIG. 9.
  • Each of the plurality of learning data 510 may include a second learning image, an area corresponding to an object within the second learning image, and identification information of the object.
  • the first learning data 511 may include a second learning image 511A, an area 511B corresponding to an object within the second learning image 511A, and identification information 511C of the object.
  • the second learning data 512 and the third learning data 513 may also include a second learning image, an area corresponding to an object within the second learning image, and identification information of the object, respectively.
  • the area corresponding to the object generated by the second artificial neural network and the object's identification information can be used to divide the original image into two partial images, which will be described in detail later.
  • Hereinafter, the operation of the control unit 110 will be described on the premise that the first artificial neural network 420 and the second artificial neural network 520 have been learned and stored in the surveillance camera management device 100 according to the above-described process.
  • the control unit 110 may acquire the first image from at least one surveillance camera 200.
  • the first image may be an image acquired by at least one surveillance camera 200 and may be an image composed of a first resolution and a first frame rate.
  • the first image may be an image with a resolution of 1920x1080 and a frame rate of 30 Fps.
  • this is an example and the spirit of the present invention is not limited thereto.
  • the control unit 110 may generate first image data and second image data using a learned artificial neural network.
  • FIG. 10 is a diagram illustrating a process in which the control unit 110 generates first image data 622 and second image data 632 from the first image 610 according to an embodiment of the present invention.
  • First, a case in which the control unit 110 uses the first artificial neural network 420 will be described.
  • the control unit 110 may use the learned first artificial neural network 420 to divide the first image 610 into at least one first partial image 621 that requires compression according to the first compression condition and at least one second partial image 631 that requires compression according to the second compression condition.
  • Since the first artificial neural network 420 is a neural network trained (or being trained) to output the first partial image 441 and the second partial image 442 according to the input of the first image 430, it can output a first partial image 621 and a second partial image 631 from the first image 610.
  • the control unit 110 may compress the first partial image 621 according to the first compression condition to generate first image data 622, and may similarly compress the second partial image 631 according to the second compression condition to generate second image data 632.
  • the first compression condition is a condition for compressing the first partial image 621 with a second resolution and a second frame rate.
  • the second resolution may be higher than the first resolution
  • the second frame rate may be higher than the first frame rate.
  • the first compression condition may be a compression condition in which the minimum quality is the quality of the original image. For example, if the first image 610 is an image composed of a resolution of 1920x1080 and a frame rate of 30 Fps, the first compression condition may be a condition for compressing the first partial image 621 with a resolution of 1920x1080 and a frame rate of 30 Fps.
  • the second compression condition may be a condition for compressing the second partial image 631 to a third resolution and a third frame rate.
  • the third resolution may be less than the first resolution
  • the third frame rate may be less than the first frame rate.
  • the second compression condition may be a compression condition in which the maximum quality is the quality of the original image. For example, if the first image 610 is an image composed of a resolution of 1920x1080 and a frame rate of 30 Fps, the second compression condition may be a condition for compressing the second partial image 631 with a resolution of 1920x1080 and a frame rate of 10 Fps.
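  • A minimal sketch of the two compression conditions as frame-rate decimation, reusing the 1920x1080 / 30 Fps and 10 Fps figures from the examples above. Real compression would of course involve a video codec rather than simple frame dropping; this only illustrates how the two conditions differ.

```python
# Compression conditions from the example above: the first condition keeps the
# original 30 fps for the moving region, the second drops the rest to 10 fps.
FIRST_CONDITION = {"resolution": (1920, 1080), "fps": 30}
SECOND_CONDITION = {"resolution": (1920, 1080), "fps": 10}

def decimate(frame_indices, src_fps, dst_fps):
    """Keep every (src_fps // dst_fps)-th frame to reach the target frame rate."""
    step = src_fps // dst_fps
    return [i for i in frame_indices if i % step == 0]

frames = list(range(30))  # one second of 30 fps input
kept_first = decimate(frames, 30, FIRST_CONDITION["fps"])    # all 30 frames kept
kept_second = decimate(frames, 30, SECOND_CONDITION["fps"])  # every 3rd frame kept
```

The second partial image thus contributes a third as many frames per second as the first, which is the source of the bandwidth saving described later.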
  • Next, a case in which the control unit 110 uses the second artificial neural network 520 will be described.
  • the control unit 110 may generate identification information for each of at least one object included in the first image using the learned second artificial neural network 520.
  • the second artificial neural network 520 generates an area 541 corresponding to the object within the first image 530 and identification information 542 of the object according to the input of the first image 530.
  • the control unit 110 may divide the first image into at least one third partial image that requires compression according to the first compression condition and a fourth partial image that requires compression according to the second compression condition, by referring to the identification information of each of the at least one object that the second artificial neural network 520 outputs from the first image.
  • At this time, the third partial image may be an image corresponding to the first partial image in the case of using the first artificial neural network 420, and the fourth partial image may likewise be an image corresponding to the second partial image in the case of using the first artificial neural network 420.
  • the control unit 110 may use the identification information of each of the at least one object to generate a third partial image composed of the areas corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the first image.
  • the controller 110 may generate a fourth partial image comprised of an area corresponding to an object classified as having a movement degree less than a predetermined threshold within the first image.
  • For example, if the first image is an image of a road on which cars pass, the control unit 110 may generate a third partial image including areas whose identification information corresponds to objects with a large degree of movement, such as vehicles and pedestrians, and a fourth partial image including areas whose identification information corresponds to objects with a small degree of movement, such as street trees, the road surface, and buildings near the road.
  • this is an example and the spirit of the present invention is not limited thereto.
  • The standards by which the control unit 110 determines/classifies the movement of objects may be stored in advance.
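  • The pre-stored standards for determining/classifying an object's degree of movement can be sketched as a lookup table from identification information to a movement score that is compared against the predetermined threshold. The labels, scores, threshold, and region tuples below are illustrative assumptions, not values from this specification.

```python
# Hypothetical pre-stored standards: movement score per identification label,
# and the predetermined threshold separating "moving" from "static" objects.
MOVEMENT_SCORE = {"vehicle": 0.9, "pedestrian": 0.8,
                  "street tree": 0.2, "road surface": 0.0, "building": 0.0}
THRESHOLD = 0.5

def split_regions(detections):
    """detections: (region, identification info) pairs from the second network.

    Regions of objects at or above the threshold form the third partial image;
    the remaining regions form the fourth partial image.
    """
    third = [r for r, label in detections if MOVEMENT_SCORE.get(label, 0.0) >= THRESHOLD]
    fourth = [r for r, label in detections if MOVEMENT_SCORE.get(label, 0.0) < THRESHOLD]
    return third, fourth

third, fourth = split_regions([((10, 10, 50, 80), "pedestrian"),
                               ((0, 0, 320, 40), "road surface")])
```

The pedestrian region lands in the third (high-quality) partial image and the road surface in the fourth, mirroring the road example above.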
  • the control unit 110 may generate third image data by compressing the third partial image according to the first compression condition, and may generate fourth image data by compressing the fourth partial image according to the second compression condition. At this time, since the first compression condition and the second compression condition have each been described in detail, detailed description thereof will be omitted.
  • the control unit 110 may transmit the first image data 622 and the second image data 632 (or the third image data and the fourth image data) generated according to the above-described process to at least one external device connected to the network 300.
  • the control unit 110 may transmit the first image data 622 and the second image data 632 to the image storage device 310.
  • FIG. 11 is a diagram showing image data transmitted by the control unit 110 over time.
  • the control unit 110 may generate first image data (Image Data_1) compressed to at least the quality of the original image and second image data (Image Data_2) compressed to at most the quality of the original image.
  • Since compression to 'at most the quality of the original image' includes setting the frame rate lower than that of the original, the second image data (Image Data_2) may be transmitted at a lower frequency than the first image data (Image Data_1), as shown in FIG. 11. In other words, the first image data (Image Data_1) may be generated and transmitted more frequently than the second image data (Image Data_2).
  • the present invention enables more efficient use of network resources in video transmission.
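  • As a rough arithmetic illustration of this saving, compare the raw pixel rate of transmitting the whole 1920x1080 frame at 30 Fps with transmitting only the moving region at 30 Fps and the remainder at 10 Fps. The assumption that moving objects cover 25% of the frame is purely for the sake of the calculation and does not come from this specification.

```python
# Raw pixel-rate comparison (uncompressed proxy for network load). The 25%
# moving-area share is an assumption purely for the sake of the arithmetic.
W, H = 1920, 1080
full_rate = W * H * 30                         # whole frame at 30 Fps

moving_share = 0.25
moving_rate = W * H * moving_share * 30        # first image data: 30 Fps
static_rate = W * H * (1 - moving_share) * 10  # second image data: 10 Fps
saving = 1 - (moving_rate + static_rate) / full_rate
print(f"{saving:.0%} fewer pixels transmitted")  # prints "50% fewer pixels transmitted"
```

Under these assumptions the split halves the pixel rate while the moving region keeps full quality, which is the efficiency claim made above.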
  • the control unit 110 may provide an interface for managing the at least one surveillance camera 200 to at least one external device in response to a surveillance camera management interface request received from the at least one external device.
  • The interface provided at this time may include at least one of an interface for setting the environment in which the at least one surveillance camera 200 is installed, an interface for setting at least one condition used to compress images acquired by the at least one surveillance camera 200, and an interface for setting items related to the power supplied to the at least one surveillance camera 200.
  • the control unit 110 may provide an interface for setting conditions used to compress images acquired by at least one surveillance camera 200 in response to a request from the user terminal 320 to provide a surveillance camera management interface.
  • this is an example and the spirit of the present invention is not limited thereto.
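  • The three kinds of settings exposed by the management interface (installation environment, compression conditions, power items) might be represented as a configuration payload along the following lines; every field name and value here is a hypothetical illustration, not part of this specification.

```python
# Hypothetical settings payload behind the management interface described above.
camera_settings = {
    "installation": {"location": "roadside pole 7", "mounting_height_m": 3.5},
    "compression": {
        "first_condition":  {"resolution": (1920, 1080), "fps": 30},
        "second_condition": {"resolution": (1920, 1080), "fps": 10},
    },
    "power": {"poe_enabled": True, "budget_watts": 15.4},
}
```

A user terminal requesting the interface could read and update such a payload to reconfigure a connected camera.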
  • FIG. 12 is a flowchart illustrating a surveillance camera management method performed by the surveillance camera management device 100 according to an embodiment of the present invention.
  • the description will be made with reference to FIGS. 1 to 11, but overlapping description will be omitted.
  • the surveillance camera management device 100 may load an artificial neural network (S1210). At this time, 'loading' the artificial neural network may mean receiving a learned artificial neural network from another device (for example, an artificial neural network learning device (not shown)) and making it available for use. For example, the surveillance camera management device 100 may receive the weights constituting the artificial neural network from the artificial neural network learning device and use them.
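  • Receiving the weights that constitute the learned network from the learning device can be sketched as deserializing a weight payload; the JSON format, key names, and weight values below are illustrative assumptions.

```python
import json

# Hypothetical serialized weight payload as it might arrive from the
# artificial neural network learning device.
payload = json.dumps({"F1": [[0.1, -0.2], [0.3, 0.4]], "F2": [[0.5, -0.6]]})

def load_network(serialized):
    """Deserialize the learned functions F1 and F2 so they can be used locally."""
    weights = json.loads(serialized)
    return weights["F1"], weights["F2"]

F1, F2 = load_network(payload)
```

Once loaded, these weight matrices would parameterize the functions F1 and F2 described earlier.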
  • the surveillance camera management device 100 can supply power to the first surveillance camera 201 added to the system (S1220) and set up a network through a predetermined process (S1230).
  • For example, the surveillance camera management device 100 may supply power to the first surveillance camera 201 through a network connection cable with an RJ45 type plug, and may also assign an IP address to the first surveillance camera 201 using the same cable.
  • the first surveillance camera 201 can acquire the first image.
  • the surveillance camera management device 100 may receive the first image from the first surveillance camera 201 (S1250).
  • the first image may be an image acquired by the first surveillance camera 201 and may be an image composed of a first resolution and a first frame rate.
  • the first image may be an image with a resolution of 1920x1080 and a frame rate of 30 Fps.
  • this is an example and the spirit of the present invention is not limited thereto.
  • the surveillance camera management device 100 can generate first image data and second image data using a learned artificial neural network (S1260).
  • FIG. 10 is a diagram illustrating a process in which the surveillance camera management device 100 generates first image data 622 and second image data 632 from the first image 610 according to an embodiment of the present invention.
  • the surveillance camera management device 100 may use the learned first artificial neural network 420 to divide the first image 610 into at least one first partial image 621 that requires compression according to the first compression condition and at least one second partial image 631 that requires compression according to the second compression condition.
  • Since the first artificial neural network 420 is a neural network trained (or being trained) to output the first partial image 441 and the second partial image 442 according to the input of the first image 430, it can output a first partial image 621 and a second partial image 631 from the first image 610.
  • the surveillance camera management device 100 may compress the first partial image 621 according to the first compression condition to generate first image data 622, and may similarly compress the second partial image 631 according to the second compression condition to generate second image data 632.
  • the first compression condition is a condition for compressing the first partial image 621 with a second resolution and a second frame rate.
  • the second resolution may be higher than the first resolution
  • the second frame rate may be higher than the first frame rate.
  • the first compression condition may be a compression condition in which the minimum quality is the quality of the original image. For example, if the first image 610 is an image composed of a resolution of 1920x1080 and a frame rate of 30 Fps, the first compression condition may be a condition for compressing the first partial image 621 with a resolution of 1920x1080 and a frame rate of 30 Fps.
  • the second compression condition may be a condition for compressing the second partial image 631 to a third resolution and a third frame rate.
  • the third resolution may be less than the first resolution
  • the third frame rate may be less than the first frame rate.
  • the second compression condition may be a compression condition in which the maximum quality is the quality of the original image. For example, if the first image 610 is an image composed of a resolution of 1920x1080 and a frame rate of 30 Fps, the second compression condition may be a condition for compressing the second partial image 631 with a resolution of 1920x1080 and a frame rate of 10 Fps.
  • the surveillance camera management device 100 may generate identification information for each of at least one object included in the first image using the learned second artificial neural network 520.
  • the second artificial neural network 520 generates an area 541 corresponding to the object within the first image 530 and identification information 542 of the object according to the input of the first image 530.
  • the surveillance camera management device 100 may divide the first image into at least one third partial image that requires compression according to the first compression condition and a fourth partial image that requires compression according to the second compression condition, by referring to the identification information of each of the at least one object that the second artificial neural network 520 outputs from the first image. At this time, the third partial image may be an image corresponding to the first partial image in the case of using the first artificial neural network 420, and the fourth partial image may likewise be an image corresponding to the second partial image in the case of using the first artificial neural network 420.
  • the surveillance camera management device 100 may use the identification information of each of the at least one object to generate a third partial image composed of the areas corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the first image.
  • the surveillance camera management device 100 may generate a fourth partial image comprised of an area corresponding to an object classified as having a movement degree less than a predetermined threshold within the first image.
  • For example, if the first image is an image of a road on which cars pass, the surveillance camera management device 100 may generate a third partial image including areas whose identification information corresponds to objects with a large degree of movement, such as vehicles and pedestrians, and a fourth partial image including areas whose identification information corresponds to objects with a small degree of movement, such as street trees, the road surface, and buildings around the road.
  • this is an example and the spirit of the present invention is not limited thereto.
  • the standards by which the surveillance camera management device 100 determines/classifies the movement of objects may be stored in advance.
  • the surveillance camera management device 100 may generate third image data by compressing the third partial image according to the first compression condition, and may generate fourth image data by compressing the fourth partial image according to the second compression condition. At this time, since the first compression condition and the second compression condition have each been described in detail, detailed description thereof will be omitted.
  • the surveillance camera management device 100 may transmit the first image data 622 and the second image data 632 (or the third image data and the fourth image data) generated according to the above-described process to the user terminal 320 (S1270).
  • the surveillance camera management device 100 may repeatedly perform steps S1250, S1260, and S1270 on the second image acquired by the first surveillance camera 201. (S1280)
  • the surveillance camera management device 100 can repeatedly perform the same process for a plurality of third images acquired by the first surveillance camera 201 after the second image.
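  • Steps S1250 to S1280 (receive an image, generate the two kinds of image data, transmit them, and repeat for subsequent images) can be sketched as a simple per-frame loop; the camera frames and the compression and transmission stand-ins are hypothetical.

```python
def receive_images(camera_frames):
    """Stand-in for step S1250: frames arriving from the first surveillance camera."""
    yield from camera_frames

def generate_image_data(frame):
    """Stand-in for step S1260: one high-rate and one low-rate data per frame."""
    return (f"{frame}:moving@30fps", f"{frame}:static@10fps")

transmitted = []
for frame in receive_images(["img1", "img2", "img3"]):  # S1280: repeat per image
    first_data, second_data = generate_image_data(frame)
    transmitted.append((first_data, second_data))       # S1270: transmit
```

Each acquired image passes through the same generate-then-transmit pipeline, matching the repetition described for the second and subsequent images.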
  • FIG. 11 is a diagram showing video data transmitted by the surveillance camera management device 100 over time.
  • the surveillance camera management device 100 may generate first image data (Image Data_1) compressed to at least the quality of the original image and second image data (Image Data_2) compressed to at most the quality of the original image.
  • Since compression to 'at most the quality of the original image' includes setting the frame rate lower than that of the original, the second image data (Image Data_2) may be transmitted at a lower frequency than the first image data (Image Data_1), as shown in FIG. 11. In other words, the first image data (Image Data_1) may be generated and transmitted more frequently than the second image data (Image Data_2).
  • the present invention enables more efficient use of network resources in video transmission.
  • the embodiments according to the present invention described above may be implemented in the form of a computer program that can be executed through various components on a computer, and such a computer program may be recorded on a computer-readable medium.
  • The medium may be one that stores a program executable on a computer. Examples of the medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices configured to store program instructions, such as ROM, RAM, and flash memory.
  • the computer program may be designed and configured specifically for the present invention, or may be known and available to those skilled in the art of computer software.
  • Examples of computer programs may include not only machine language code such as that created by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • The connections or connection members of the lines between components shown in the drawings exemplify functional connections and/or physical or circuit connections; in an actual device, they may be represented as various replaceable or additional functional, physical, or circuit connections. Additionally, unless a component is specifically described with terms such as 'essential' or 'important,' it may not be a necessary component for the application of the present invention.

Abstract

According to an embodiment of the present invention, a surveillance camera management device for supplying power to at least one surveillance camera, and compressing and providing an image acquired by the at least one surveillance camera may comprise: a surveillance camera connection unit connected to at least one surveillance camera; a network connection unit connected to a network; a power supply unit for receiving power supplied from a power source, and generating power supplied to the at least one surveillance camera connected through the surveillance camera connection unit; a control unit for generating compression image data by compressing image data received from the at least one surveillance camera; and a communication unit for receiving the image data from the at least one surveillance camera to provide same to the control unit, and transmitting the compression image data generated by the control unit to at least one external device connected to the network.

Description

감시카메라 관리 장치Surveillance camera management device
본 발명은 연결된 적어도 하나의 감시카메라에 전원을 공급하고, 적어도 하나의 감시카메라가 획득한 영상을 압축하여 제공하는 감시카메라 관리 장치에 관한 것이다.The present invention relates to a surveillance camera management device that supplies power to at least one connected surveillance camera and compresses and provides images acquired by at least one surveillance camera.
오늘날 감시카메라의 설치 대수가 증가함에 따라 감시카메라와 관련된 인프라들에 부담이 증가하고 있는 추세이다. Today, as the number of surveillance cameras installed increases, the burden on infrastructure related to surveillance cameras is increasing.
예를 들어 감시카메라에 전원을 공급하는 전원 공급 인프라의 경우 감시카메라의 데이터 송수신을 위한 네트워크 회선과 별개로 설치되어, 설치 대수가 증가될수록 그에 비례하여 전원 공급 장치 등의 증설이 필요하다.For example, in the case of the power supply infrastructure that supplies power to surveillance cameras, it is installed separately from the network line for data transmission and reception of surveillance cameras, so as the number of installations increases, power supply devices, etc., need to be expanded in proportion.
또한 일반적인 감시 시스템에 있어서, 감시카메라가 획득한 영상은 모두 통신망을 통하여 관제실 등으로 전송되기에, 감시카메라의 증가에 따라 인터넷과 같은 통신망에서 영상 송수신에 사용되는 리소스와 영상의 저장에 필요한 리소스가 기하급수적으로 증가하고 있다. 따라서 보다 효율적인 영상 압축 및 전송 방법이 필요한 실정이다.In addition, in a general surveillance system, all images acquired by surveillance cameras are transmitted to the control room through a communication network. As the number of surveillance cameras increases, the resources used to transmit and receive images and the resources required to store images in communication networks such as the Internet are increasing. It is increasing exponentially. Therefore, a more efficient video compression and transmission method is needed.
본 발명은 복수의 감시카메라를 포함하는 감시 시스템의 구성을 보다 간소화 하고자 한다.The present invention seeks to further simplify the configuration of a surveillance system including a plurality of surveillance cameras.
또한 본 발명은 화질 열화 없이 영상을 효율적으로 압축하여 전송하고자 한다.Additionally, the present invention seeks to efficiently compress and transmit images without deteriorating image quality.
본 발명의 일 실시예에 따른 적어도 하나의 감시카메라에 전원을 공급하고 상기 적어도 하나의 감시카메라가 획득한 영상을 압축하여 제공하는 감시카메라 관리 장치는, 적어도 하나의 감시카메라와 연결되는 감시카메라 연결부; 네트워크와 연결되는 네트워크 연결부; 전원으로부터 전력을 공급받아 상기 감시카메라 연결부를 통하여 연결된 상기 적어도 하나의 감시카메라에 공급되는 전력을 생성하는 전원부; 상기 적어도 하나의 감시카메라로부터 수신된 영상 데이터를 압축하여 압축 영상 데이터를 생성하는 제어부; 및 상기 적어도 하나의 감시카메라로부터 상기 영상 데이터를 수신하여 상기 제어부에 제공하고, 상기 제어부가 생성한 상기 압축 영상 데이터를 상기 네트워크에 연결된 적어도 하나의 외부 장치에 전송하는 통신부;를 포함할 수 있다.A surveillance camera management device that supplies power to at least one surveillance camera and compresses and provides images acquired by the at least one surveillance camera according to an embodiment of the present invention includes a surveillance camera connector connected to at least one surveillance camera. ; A network connection unit connected to the network; a power supply unit that receives power from a power source and generates power to be supplied to the at least one surveillance camera connected through the surveillance camera connection unit; a control unit that compresses video data received from the at least one surveillance camera to generate compressed video data; and a communication unit that receives the video data from the at least one surveillance camera, provides the video data to the control unit, and transmits the compressed video data generated by the control unit to at least one external device connected to the network.
The control unit may use a trained first artificial neural network to divide a first image into at least one first partial image that requires compression according to a first compression condition and at least one second partial image that requires compression according to a second compression condition, generate first image data by compressing the first partial image according to the first compression condition, generate second image data by compressing the second partial image according to the second compression condition, and transmit the first image data and the second image data to the at least one external device.
The first artificial neural network may be a neural network that has learned the correlation among a first training image, a first partial training image generated from the first training image, and a second partial training image generated from the first training image.
The first partial training image may be an image that includes only regions corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the first training image, and the second partial training image may be an image that includes only regions corresponding to objects classified as having a degree of movement less than the predetermined threshold within the first training image.
The first image may have a first resolution and a first frame rate, and the first compression condition may be a condition for compressing the first partial image to a second resolution and a second frame rate, where the second resolution is greater than or equal to the first resolution and the second frame rate is greater than or equal to the first frame rate.
The first image may have a first resolution and a first frame rate, and the second compression condition may be a condition for compressing the second partial image to a third resolution and a third frame rate, where the third resolution is less than or equal to the first resolution and the third frame rate is less than the first frame rate.
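As an illustration only (not the patent's implementation), the two-condition scheme above can be sketched as follows: a motion mask stands in for the first neural network, moving regions keep the original resolution and frame rate, and static regions are compressed to a lower resolution and frame rate. All function names, the frame-difference mask, and the specific factors are assumptions.

```python
# Hypothetical sketch of region-wise compression under two conditions.
import numpy as np

def split_by_motion(frames, threshold=10.0):
    """Split each frame into a 'moving' part and a 'static' part using a
    frame-difference mask as a stand-in for the first neural network."""
    moving, static = [], []
    prev = frames[0].astype(np.float32)
    for f in frames:
        cur = f.astype(np.float32)
        mask = np.abs(cur - prev) > threshold    # motion above the threshold
        moving.append(np.where(mask, f, 0))      # first partial image
        static.append(np.where(mask, 0, f))      # second partial image
        prev = cur
    return moving, static

def compress(partials, keep_every_n, scale):
    """Apply a compression condition: keep 1 of every n frames (frame rate)
    and subsample pixels by 'scale' (resolution)."""
    return [p[::scale, ::scale] for p in partials[::keep_every_n]]

# First condition keeps full resolution and frame rate for moving regions;
# second condition halves both for static regions.
frames = [np.random.randint(0, 255, (64, 64), dtype=np.uint8) for _ in range(8)]
moving, static = split_by_motion(frames)
first_data = compress(moving, keep_every_n=1, scale=1)
second_data = compress(static, keep_every_n=2, scale=2)
assert len(first_data) == 8 and first_data[0].shape == (64, 64)
assert len(second_data) == 4 and second_data[0].shape == (32, 32)
```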
The control unit may use a trained second artificial neural network to generate identification information for each of at least one object included in the first image, divide the first image, with reference to the identification information of each of the at least one object, into at least one third partial image that requires compression according to the first compression condition and a fourth partial image that requires compression according to the second compression condition, generate third image data by compressing the third partial image according to the first compression condition, generate fourth image data by compressing the fourth partial image according to the second compression condition, and transmit the third image data and the fourth image data to the at least one external device.
The artificial neural network may be a neural network that has learned the correlation among a second training image, a region corresponding to an object within the second training image, and identification information of the object.
Using the identification information of each of the at least one object, the control unit may generate the third partial image composed of regions corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the first image, and generate the fourth partial image composed of regions corresponding to objects classified as having a degree of movement less than the predetermined threshold within the first image.
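A minimal sketch of the step above, under stated assumptions: per-object identification information is represented as a bounding box plus a motion degree, and objects are routed into the third or fourth partial image by comparing that degree against a threshold. The detection tuple format and the threshold value are hypothetical, not from the patent.

```python
# Hypothetical sketch of building the third/fourth partial images from
# per-object identification info (bounding box + motion degree).
import numpy as np

MOTION_THRESHOLD = 0.5  # assumed threshold for "degree of movement"

def build_partials(image, detections):
    """detections: list of (x, y, w, h, motion_degree) per object."""
    third = np.zeros_like(image)   # moving objects only
    fourth = np.zeros_like(image)  # static objects only
    for x, y, w, h, motion in detections:
        target = third if motion >= MOTION_THRESHOLD else fourth
        target[y:y + h, x:x + w] = image[y:y + h, x:x + w]
    return third, fourth

image = np.arange(100, dtype=np.uint8).reshape(10, 10)
detections = [(0, 0, 3, 3, 0.9),   # moving object -> third partial image
              (5, 5, 3, 3, 0.1)]   # static object -> fourth partial image
third, fourth = build_partials(image, detections)
assert third[1, 1] == image[1, 1] and third[5, 5] == 0
assert fourth[5, 5] == image[5, 5] and fourth[1, 1] == 0
```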
The surveillance camera connection unit and the at least one surveillance camera may be connected through a first interface, the power generated by the power supply unit may be supplied to the at least one surveillance camera through the first interface, and the image data generated by the at least one surveillance camera may be transmitted to the communication unit through the first interface.
According to a surveillance camera management interface request received from the at least one external device, the control unit may provide the at least one external device with an interface for managing the at least one surveillance camera.
The interface may include at least one of: an interface for setting the environment in which the at least one surveillance camera is installed; an interface for setting at least one condition used to compress images acquired by the at least one surveillance camera; and an interface for setting items related to the power supplied to the at least one surveillance camera.
According to the present invention, the configuration of a surveillance system including a plurality of surveillance cameras can be simplified. In particular, the present invention simplifies the system configuration by allowing both the power supply to the surveillance cameras and the compression of the images they acquire to be performed by a single device.
In addition, the present invention can compress an image without degrading image quality by using a trained artificial neural network to apply different compression conditions to different parts of the image.
FIG. 1 is a diagram schematically showing the configuration of a surveillance system according to an embodiment of the present invention.
FIGS. 2 and 3 are diagrams schematically showing the configuration of a surveillance camera management device 100 according to an embodiment of the present invention.
FIGS. 4 and 5 are diagrams for explaining an exemplary structure of a trained artificial neural network used by the surveillance camera management device 100 according to an embodiment of the present invention.
FIG. 6 is a diagram for explaining a method by which an artificial neural network learning device (not shown) according to an embodiment of the present invention trains a first artificial neural network 420 using a plurality of training data 410.
FIG. 7 is a diagram for explaining a process by which an artificial neural network learning device (not shown) according to an embodiment of the present invention outputs a first partial image 441 and a second partial image 442 from a first image 430 using the trained first artificial neural network 420.
FIG. 8 is a diagram for explaining a method by which an artificial neural network learning device (not shown) according to an embodiment of the present invention trains a second artificial neural network 520 using a plurality of training data 510.
FIG. 9 is a diagram for explaining a process by which an artificial neural network learning device (not shown) according to an embodiment of the present invention outputs a region 541 corresponding to a recognized object and identification information 542 of the object from a first image 530 using the trained second artificial neural network 520.
FIG. 10 is a diagram for explaining a process by which the control unit 110 according to an embodiment of the present invention generates first image data 622 and second image data 632 from a first image 610.
FIG. 11 is a diagram showing image data transmitted by the control unit 110 over time.
FIG. 12 is a flowchart for explaining a surveillance camera management method performed by the control unit 110 according to an embodiment of the present invention.
A surveillance camera management device according to an embodiment of the present invention, which supplies power to at least one surveillance camera and compresses and provides the images acquired by the at least one surveillance camera, may include: a surveillance camera connection unit connected to at least one surveillance camera; a network connection unit connected to a network; a power supply unit that receives power from a power source and generates the power supplied to the at least one surveillance camera connected through the surveillance camera connection unit; a control unit that compresses image data received from the at least one surveillance camera to generate compressed image data; and a communication unit that receives the image data from the at least one surveillance camera, provides it to the control unit, and transmits the compressed image data generated by the control unit to at least one external device connected to the network.
Since the present invention can be modified in various ways and can have various embodiments, specific embodiments are illustrated in the drawings and described in detail in the detailed description. The effects and features of the present invention, and the methods of achieving them, will become clear with reference to the embodiments described in detail below together with the drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various forms.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the description with reference to the drawings, identical or corresponding components are given the same reference numerals, and redundant descriptions thereof are omitted.
In the following embodiments, terms such as "first" and "second" are used not in a limiting sense but for the purpose of distinguishing one component from another. In the following embodiments, singular expressions include plural expressions unless the context clearly indicates otherwise. In the following embodiments, terms such as "include" or "have" mean that the features or components described in the specification are present, and do not exclude in advance the possibility that one or more other features or components may be added. In the drawings, the sizes of components may be exaggerated or reduced for convenience of description. For example, the size and shape of each component shown in the drawings are shown arbitrarily for convenience of description, and the present invention is not necessarily limited to what is shown.
FIG. 1 is a diagram schematically showing the configuration of a surveillance system according to an embodiment of the present invention.
A surveillance camera management device according to an embodiment of the present invention can supply power to at least one surveillance camera, and can compress the images acquired by the at least one surveillance camera and supply them to one or more other devices. In this case, the surveillance camera management device according to an embodiment of the present invention supplies power to each surveillance camera and receives the images the surveillance camera acquires over a single interface (for example, a network connection cable with an RJ45-type plug).
A surveillance camera management device according to an embodiment of the present invention can compress images acquired by a surveillance camera using a trained artificial neural network. For example, the surveillance camera management device may divide an image acquired from a surveillance camera into at least one first partial image that requires compression according to a first compression condition and at least one second partial image that requires compression according to a second compression condition, and generate a compressed image from each partial image. A detailed description of this is given later.
In the present invention, an "artificial neural network" is a neural network trained for a predetermined purpose, and may refer to an artificial neural network trained by machine learning or deep learning techniques. Such neural networks are described with reference to FIGS. 4 to 9.
As shown in FIG. 1, a surveillance system according to an embodiment of the present invention may include a surveillance camera management device 100, at least one surveillance camera 200, a network 300, an image storage device 310, and a user terminal 320.
As described above, the surveillance camera management device 100 according to an embodiment of the present invention can supply power to the at least one surveillance camera 200, and can compress the images acquired by the at least one surveillance camera 200 and supply them to one or more other devices.
In this case, the surveillance camera management device 100 according to an embodiment of the present invention may assign an identification number to each of the at least one connected surveillance camera 200 and manage the cameras accordingly. For example, when the at least one surveillance camera 200 is connected through a network connection cable with an RJ45-type plug, the surveillance camera management device 100 may assign an IP address to each of the at least one surveillance camera 200 and manage the cameras by those addresses. However, this is merely an example, and the spirit of the present invention is not limited thereto.
FIGS. 2 and 3 are diagrams schematically showing the configuration of a surveillance camera management device 100 according to an embodiment of the present invention. The following description refers to FIGS. 2 and 3 together.
As shown in FIG. 2, the surveillance camera management device 100 according to an embodiment of the present invention may include a control unit 110, a power supply unit 120, a surveillance camera connection unit 130, a network connection unit 140, and a communication unit 150.
The control unit 110 according to an embodiment of the present invention may be a device that controls the series of processes by which the surveillance camera management device 100 supplies power to the at least one surveillance camera 200, compresses the images acquired by the at least one surveillance camera 200, and supplies them to one or more other devices.
Here, the control unit 110 may refer to, for example, a data processing device embedded in hardware that has a physically structured circuit to perform functions expressed as code or instructions included in a program. Examples of such data processing devices embedded in hardware include a microprocessor, a central processing unit (CPU), a processor core, a multiprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and a graphics processing unit (GPU), but the scope of the present invention is not limited thereto.
Although the control unit 110 is shown as a single unit in FIG. 1, the control unit 110 may be composed of a plurality of hardware components (for example, a plurality of chips) to control the series of processes described above. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The power supply unit 120 according to an embodiment of the present invention can receive power from a power source (or an external power source) and generate the power supplied to the at least one surveillance camera 200 connected through the surveillance camera connection unit 130. Of course, the power supply unit 120 may also generate, from the power supplied by the power source, the power used by the surveillance camera management device 100 itself.
For example, the power supply unit 120 according to an embodiment of the present invention may receive power from a power source and generate the power supplied to the at least one surveillance camera 200 and/or the surveillance camera management device 100. In this case, the power supply unit 120 may generate a first type of power supplied to the at least one surveillance camera 200 and a second type of power used by the surveillance camera management device 100. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The power generated by the power supply unit 120 according to an embodiment of the present invention may be supplied to the at least one surveillance camera 200 through an interface connected to the surveillance camera connection unit 130. For example, when a network connection cable with an RJ45-type plug is connected to the surveillance camera connection unit 130, the power generated by the power supply unit 120 may be supplied to the at least one surveillance camera 200 through that cable. In other words, the supply of power to the at least one surveillance camera 200 and the transmission and reception of data with the at least one surveillance camera 200 can take place over a single interface.
The power supply unit 120 according to an embodiment of the present invention may include a power port 121 connected to an external power source.
The surveillance camera connection unit 130 according to an embodiment of the present invention can connect the at least one surveillance camera 200 and the surveillance camera management device 100. Here, "connecting" two devices may mean connecting them electrically as well as physically.
As shown in FIG. 3, the surveillance camera connection unit 130 according to an embodiment of the present invention may include one or more connection ports 131 to 136 connected to the at least one surveillance camera 200 through a predetermined interface. For example, a first surveillance camera may be connected to the first port 131 of the connection unit 130, and a second surveillance camera may be connected to the second port 132. However, this is merely an example, and the spirit of the present invention is not limited thereto.
Meanwhile, the one or more ports 131 may be for connection according to a predetermined interface. For example, the one or more ports 131 may be for connecting a network connection cable with an RJ45-type plug. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The network connection unit 140 according to an embodiment of the present invention can connect the network 300 and the surveillance camera management device 100.
The network connection unit 140 according to an embodiment of the present invention may include a first port 141 connected to the network 300 according to a first interface and a second port 142 connected to the network 300 according to a second interface. Here, the first interface may be, for example, a network connection cable with an RJ45-type plug, and the second interface may be an optical cable. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The communication unit 150 according to an embodiment of the present invention may receive image data from the at least one surveillance camera 200 and provide it to the control unit 110, and may transmit the compressed image data generated by the control unit 110 to at least one external device connected to the network 300.
Here, the communication unit 150 may be a device that includes the hardware and software necessary for the surveillance camera management device 100 to transmit and receive signals, such as control signals or data signals, to and from other network devices such as the image storage device 310 and/or the user terminal 320 through wired or wireless connections.
The communication unit 150 according to an embodiment of the present invention may receive image data from the at least one surveillance camera 200 through the surveillance camera connection unit 130, and may transmit the compressed image data generated by the control unit 110 to at least one external device through the network connection unit 140. Here, the surveillance camera connection unit 130 and the at least one surveillance camera 200 may be connected according to a predetermined interface. Moreover, through this interface, not only can image data be transmitted and received, but power can also be supplied to the at least one surveillance camera 200.
The at least one surveillance camera 200 according to an embodiment of the present invention may be a device that acquires images of its surrounding environment and transmits them to another device (for example, the surveillance camera management device 100).
As shown in FIG. 1, the at least one surveillance camera 200 may be of a type capable of panning, tilting, and zooming, or of a type with a fixed angle of view. However, this is merely an example; any device that acquires images of an environment and transmits them may correspond to the "at least one surveillance camera 200" described in the present invention.
Each of the at least one surveillance camera 200 according to an embodiment of the present invention may be connected to the surveillance camera management device 100 according to a predetermined interface. For example, each of the at least one surveillance camera 200 may be connected to the surveillance camera management device 100 through a network connection cable with an RJ45-type plug. In this case, the at least one surveillance camera 200 can receive power and transmit the acquired image data over the same single interface.
The network 300 according to an embodiment of the present invention may refer to a communication network that mediates data transmission and reception between the surveillance camera management device 100 and the at least one external device 310, 320. For example, the network 300 may encompass wired networks such as Local Area Networks (LANs), Wide Area Networks (WANs), Metropolitan Area Networks (MANs), and Integrated Services Digital Networks (ISDNs), as well as wireless networks such as wireless LANs, CDMA, Bluetooth, and satellite communication, but the scope of the present invention is not limited thereto. For example, the network 300 may provide a path through which the surveillance camera management device 100 transmits compressed image data to the image storage device 310 and/or the user terminal 320. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The image storage device 310 according to an embodiment of the present invention may be a device that receives, stores, and/or manages the compressed image data transmitted by the surveillance camera management device 100. For example, the image storage device 310 may be a Network Video Recorder (NVR) or a device including an NVR. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The image storage device 310 according to an embodiment of the present invention can generate images from the compressed image data received through the network 300 and store them in storage. In addition, the image storage device 310 may, at the request of another device such as the user terminal 320, provide the images stored in its storage to that device.
The image storage device 310 according to an embodiment of the present invention may provide an interface for searching and/or managing stored images. For example, the image storage device 310 may provide stored images in the form of a web page to a user terminal 320 connected through the network 300. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The user terminal 320 according to an embodiment of the present invention can receive and display images from the surveillance camera management device 100 and/or the image storage device 310. Such a user terminal 320 may be, for example, a mobile phone as shown in FIG. 1. However, this is merely an example; any general-purpose information processing device with networking capability, such as a tablet PC or a PC, may correspond to the user terminal 320 of the present invention.
이하에서는 감시카메라 관리 장치(100)가 영상을 압축하는 과정에서 사용되는 인공 신경망에 대해서 상세히 설명한다.Below, the artificial neural network used in the process of video compression by the surveillance camera management device 100 will be described in detail.
FIGS. 4 and 5 are diagrams for explaining exemplary structures of the trained artificial neural networks used by the surveillance camera management device 100 according to an embodiment of the present invention. The following description assumes that the artificial neural network is trained by a separate artificial neural network training device (not shown), and that the surveillance camera management device 100 receives the trained artificial neural network from the artificial neural network training device and uses it. The artificial neural network training device (not shown) may be implemented in the form of a general-purpose information processing device such as a computer or a server.
The artificial neural network according to an embodiment of the present invention may be an artificial neural network based on a convolutional neural network (CNN) model as shown in FIG. 4. The CNN model may be a hierarchical model that alternately applies a plurality of computational layers (convolutional layers and pooling layers) to ultimately extract features of the input data.
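The alternation of convolution and pooling described above can be illustrated with a minimal sketch. The input values and the kernel here are hypothetical stand-ins, not the network of FIG. 4; the sketch only shows how the two layer types successively reduce the input to a smaller feature representation, assuming NumPy is available.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation): slide the kernel
    over the image and sum the elementwise products at each position."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest response
    in each size x size block of the feature map."""
    h = feature_map.shape[0] // size
    w = feature_map.shape[1] // size
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = feature_map[i * size:(i + 1) * size,
                                    j * size:(j + 1) * size].max()
    return out

# Hypothetical 6x6 input and a 3x3 vertical-edge-like kernel.
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
features = conv2d(image, kernel)   # 6x6 input -> 4x4 feature map
pooled = max_pool(features)        # 4x4 feature map -> 2x2 after pooling
```

Stacking several such convolution/pooling pairs, followed by a fully connected layer, yields the hierarchical feature extraction the CNN model describes.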
The artificial neural network training device (not shown) according to an embodiment of the present invention may build or train an artificial neural network model by processing training data according to a supervised learning technique. A detailed description of how the artificial neural network training device (not shown) trains the artificial neural network is given later.
The artificial neural network training device (not shown) according to an embodiment of the present invention may train the artificial neural network using a plurality of training data, by repeatedly updating the weights of each layer and/or each node so that the output value generated by inputting any one piece of input data into the artificial neural network approaches the value labeled in the corresponding training data.
In this case, the artificial neural network training device (not shown) according to an embodiment of the present invention may update the weight (or coefficient) of each layer and/or each node according to a back propagation algorithm.
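The repeated update loop described above can be sketched for the simplest possible case: a single linear node trained by gradient descent on a squared-error loss, so that its output approaches the labeled values. The data, learning rate, and iteration count are hypothetical; a real network would apply the same idea layer by layer via back propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labeled training data: outputs follow y = 2x + 1 plus noise.
x = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * x + 1.0 + 0.01 * rng.normal(size=(100, 1))

w, b = 0.0, 0.0   # weight and bias to be updated
lr = 0.1          # learning rate

for _ in range(500):
    pred = w * x + b            # forward pass
    err = pred - y              # deviation from the labeled values
    # Gradients of the mean squared error (the "backward" step)
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w            # gradient-descent weight update
    b -= lr * grad_b
```

After training, `w` and `b` approach the values (2 and 1) implied by the labeled data, which is exactly the "output approaches the labeled value" criterion in the text.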
The artificial neural network training device (not shown) according to an embodiment of the present invention may generate a convolution layer for extracting feature values from the input data, and a pooling layer that combines the extracted feature values into a feature map.
In addition, the artificial neural network training device (not shown) according to an embodiment of the present invention may generate a fully connected layer that combines the generated feature maps and prepares to determine the probability that the input data corresponds to each of a plurality of items.
The artificial neural network training device (not shown) according to an embodiment of the present invention may calculate an output layer containing the output corresponding to the input data.
In the example shown in FIG. 4, the input data is divided into blocks of a 5X7 form, unit blocks of a 5X3 form are used to generate the convolutional layer, and unit blocks of a 1X4 or 1X2 form are used to generate the pooling layer; however, this is merely an example, and the spirit of the present invention is not limited thereto. Accordingly, the type of input data and/or the size of each block may be configured in various ways.
Meanwhile, such an artificial neural network may be stored in the memory of the artificial neural network training device (not shown) in the form of the type of the artificial neural network model, the coefficients of at least one node constituting the artificial neural network, the weights of the nodes, and the coefficients of the functions defining the relationships between the plurality of layers constituting the artificial neural network. Of course, the structure of the artificial neural network may also be stored in the memory in the form of source code and/or a program.
The artificial neural network according to an embodiment of the present invention may be an artificial neural network based on a recurrent neural network (RNN) model as shown in FIG. 5.
Referring to FIG. 5, the artificial neural network according to such a recurrent neural network (RNN) model may include an input layer L1 including at least one input node N1, a hidden layer L2 including a plurality of hidden nodes N2, and an output layer L3 including at least one output node N3.
The hidden layer L2 may include one or more fully connected layers, as shown. When the hidden layer L2 includes a plurality of layers, the artificial neural network may include functions (not shown) that define the relationships between the respective hidden layers.
At least one output node N3 of the output layer L3 may contain an output value that the artificial neural network generates from the input values of the input layer L1 under the control of the artificial neural network training device (not shown).
Meanwhile, the value contained in each node of each layer may be a vector. Additionally, each node may have a weight corresponding to the importance of that node.
Meanwhile, the artificial neural network may include a first function F1 defining the relationship between the input layer L1 and the hidden layer L2, and a second function F2 defining the relationship between the hidden layer L2 and the output layer L3.
The first function F1 may define the connection relationship between the input nodes N1 included in the input layer L1 and the hidden nodes N2 included in the hidden layer L2. Similarly, the second function F2 may define the connection relationship between the hidden nodes N2 included in the hidden layer L2 and the output nodes N3 included in the output layer L3.
The first function F1, the second function F2, and the functions between the hidden layers may follow a recurrent neural network model that produces an output based on the input of the previous node.
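One way to read the structure above is as the standard recurrent update, where the hidden state at each time step depends on the current input (through F1) and on the previous hidden state, and the output is produced from the hidden state (through F2). The weight matrices and dimensions below are hypothetical stand-ins for those functions, shown only to make the data flow concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

input_dim, hidden_dim, output_dim = 3, 4, 2

# Hypothetical weights standing in for F1 (input -> hidden),
# the hidden-to-hidden recurrence, and F2 (hidden -> output).
W_in = rng.normal(size=(hidden_dim, input_dim)) * 0.1
W_rec = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
W_out = rng.normal(size=(output_dim, hidden_dim)) * 0.1

def rnn_forward(inputs):
    """Run a sequence through the recurrent cell, producing one
    output vector per time step."""
    h = np.zeros(hidden_dim)          # initial hidden state (L2)
    outputs = []
    for x in inputs:
        # The new hidden state depends on the current input node
        # values AND the previous hidden state.
        h = np.tanh(W_in @ x + W_rec @ h)
        outputs.append(W_out @ h)     # F2: hidden layer -> output layer
    return outputs

sequence = [rng.normal(size=input_dim) for _ in range(5)]
outs = rnn_forward(sequence)          # five output vectors, one per step
```

Training then adjusts `W_in`, `W_rec`, and `W_out` (i.e., F1, F2, and the recurrence) rather than hand-setting them, as the following paragraphs describe.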
In the process of the artificial neural network being trained by the artificial neural network training device (not shown), the first function F1 and the second function F2 may be learned based on a plurality of training data. Of course, in the process of training the artificial neural network, the functions between the plurality of hidden layers may also be learned in addition to the above-described first function F1 and second function F2.
The artificial neural network according to an embodiment of the present invention may be trained in a supervised learning manner based on labeled training data.
The artificial neural network training device (not shown) according to an embodiment of the present invention may train the artificial neural network using a plurality of training data, by repeatedly updating the above-described functions (F1, F2, the functions between the hidden layers, and so on) so that the output value generated by inputting any one piece of input data into the artificial neural network approaches the value labeled in the corresponding training data.
In this case, the artificial neural network training device (not shown) according to an embodiment of the present invention may update the above-described functions (F1, F2, the functions between the hidden layers, and so on) according to a back propagation algorithm. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The types and/or structures of the artificial neural networks described in FIGS. 4 and 5 are exemplary, and the spirit of the present invention is not limited thereto. Accordingly, artificial neural networks of various types of models may correspond to the 'artificial neural network' described throughout this specification.
FIG. 6 is a diagram for explaining a method by which the artificial neural network training device (not shown) according to an embodiment of the present invention trains a first artificial neural network 420 using a plurality of training data 410. FIG. 7 is a diagram for explaining a process in which the artificial neural network training device (not shown) according to an embodiment of the present invention outputs a first partial image 441 and a second partial image 442 from a first image 430 using the trained first artificial neural network 420.
The first artificial neural network 420 according to an embodiment of the present invention may be a neural network that has learned the correlations among the first training image included in each of the plurality of training data 410, a first partial training image generated from the first training image, and a second partial training image generated from the first training image.
Accordingly, the first artificial neural network 420 according to an embodiment of the present invention may refer to a neural network that has been trained (or is being trained) to output the first partial image 441 and the second partial image 442 in response to the input of the first image 430, as shown in FIG. 7.
Meanwhile, in the present invention, the first partial image (or the first partial training image) may be an image containing only the regions corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the original image (the first image or the first training image). For example, if the original image captures a scene in which a pedestrian is walking, the first partial image may be an image in which only the region corresponding to the pedestrian is extracted from the original image. However, this is merely an example, and the spirit of the present invention is not limited thereto.
In addition, in the present invention, the second partial image (or the second partial training image) may be an image containing only the regions corresponding to objects classified as having a degree of movement less than the predetermined threshold within the original image (the first image or the first training image). As in the above example, if the original image captures a scene in which a pedestrian is walking, the second partial image may be an image in which only the remaining region, excluding the region corresponding to the pedestrian, is extracted from the original image. However, this is merely an example, and the spirit of the present invention is not limited thereto.
In an optional embodiment of the present invention, the second partial image (or the second partial training image) may be generated by excluding the first partial image (or the first partial training image) from the original image (the first image or the first training image). In other words, the image obtained by combining the first partial image (or the first partial training image) and the second partial image (or the second partial training image) may correspond to the original image (the first image or the first training image). However, this is merely an example, and the spirit of the present invention is not limited thereto.
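The complementary split just described can be sketched with a binary mask: pixels inside the moving-object region go to the first partial image, the remaining pixels go to the second, and the two recombine into the original. The frame values and mask here are hypothetical; in the described system the region would come from the first artificial neural network rather than being hand-set.

```python
import numpy as np

# Hypothetical 4x4 grayscale frame and a binary mask marking the
# region of a moving object (1 = moving region, 0 = static region).
frame = np.arange(16, dtype=float).reshape(4, 4)
moving_mask = np.zeros((4, 4))
moving_mask[1:3, 1:3] = 1.0

# First partial image: only the moving-object region survives.
first_partial = frame * moving_mask

# Second partial image: the original with the first partial image excluded.
second_partial = frame * (1.0 - moving_mask)

# Combining the two partial images reconstructs the original frame,
# matching the optional embodiment in the text.
reconstructed = first_partial + second_partial
```

Because the two regions are disjoint and exhaustive, no pixel is lost or duplicated, which is what allows the two streams to be compressed under different conditions and still represent the full original image.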
Each of the plurality of training data 410 according to an embodiment of the present invention may include a first training image, a first partial training image generated from the first training image, and a second partial training image generated from the first training image.
For example, the first training data 411 may include a first training image 411A, a first partial training image 411B generated from the first training image 411A, and a second partial training image 411C generated from the first training image. Similarly, the second training data 412 and the third training data 413 may each also include a first training image, a first partial training image generated from the first training image, and a second partial training image generated from the first training image.
FIG. 8 is a diagram for explaining a method by which the artificial neural network training device (not shown) according to an embodiment of the present invention trains a second artificial neural network 520 using a plurality of training data 510. FIG. 9 is a diagram for explaining a process in which the artificial neural network training device (not shown) according to an embodiment of the present invention outputs a region 541 corresponding to a recognized object and identification information 542 of the object from a first image 530 using the trained second artificial neural network 520.
The second artificial neural network 520 according to an embodiment of the present invention may be a neural network that has learned the correlations among the second training image included in each of the plurality of training data 510, the region corresponding to an object within the second training image, and the identification information of the object.
Accordingly, the second artificial neural network 520 according to an embodiment of the present invention may refer to a neural network that has been trained (or is being trained) to output the region 541 corresponding to an object within the first image 530 and the identification information 542 of that object in response to the input of the first image 530, as shown in FIG. 9.
Each of the plurality of training data 510 according to an embodiment of the present invention may include a second training image, the region corresponding to an object within the second training image, and the identification information of the object.
For example, the first training data 511 may include a second training image 511A, a region 511B corresponding to an object within the second training image 511A, and identification information 511C of the object. Similarly, the second training data 512 and the third training data 513 may each also include a second training image, the region corresponding to an object within the second training image, and the identification information of the object.
Meanwhile, the region corresponding to the object and the identification information of the object generated by the second artificial neural network may be used to divide the original image into two partial images, which will be described in detail later.
The following description focuses on the operation of the control unit 110, on the premise that the first artificial neural network 420 and the second artificial neural network 520 have been trained according to the above-described process and stored in the surveillance camera management device 100.
The control unit 110 according to an embodiment of the present invention may acquire a first image from at least one surveillance camera 200. The first image is an image acquired by the at least one surveillance camera 200 and may be an image composed of a first resolution and a first frame rate. For example, the first image may be an image with a resolution of 1920x1080 and a frame rate of 30 fps. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The control unit 110 according to an embodiment of the present invention may generate first image data and second image data using the trained artificial neural network.
FIG. 10 is a diagram for explaining a process in which the control unit 110 according to an embodiment of the present invention generates first image data 622 and second image data 632 from a first image 610.
First, the case in which the control unit 110 according to an embodiment of the present invention uses the first artificial neural network 420 will be described.
The control unit 110 according to an embodiment of the present invention may use the trained first artificial neural network 420 to divide the first image 610 into at least one first partial image 621 that needs to be compressed according to a first compression condition and at least one second partial image 631 that needs to be compressed according to a second compression condition.
Since the first artificial neural network 420 is a neural network trained (or being trained) to output the first partial image 441 and the second partial image 442 in response to the input of the first image 430 as shown in FIG. 7, it can output the first partial image 621 and the second partial image 631 from the first image 610.
Subsequently, the control unit 110 according to an embodiment of the present invention may compress the first partial image 621 according to the first compression condition to generate the first image data 622, and likewise compress the second partial image 631 according to the second compression condition to generate the second image data 632.
As described above, when the first image 610 is composed of the first resolution and the first frame rate, the first compression condition may be a condition for compressing the first partial image 621 at a second resolution and a second frame rate. Here, the second resolution may be greater than or equal to the first resolution, and the second frame rate may be greater than or equal to the first frame rate. In other words, the first compression condition may be a compression condition whose minimum quality is the quality of the original image. For example, when the first image 610 is an image with a resolution of 1920x1080 and a frame rate of 30 fps, the first compression condition may be a condition for compressing the first partial image 621 at a resolution of 1920x1080 and a frame rate of 30 fps.
Meanwhile, the second compression condition may be a condition for compressing the second partial image 631 at a third resolution and a third frame rate. Here, the third resolution may be less than or equal to the first resolution, and the third frame rate may be less than the first frame rate. In other words, the second compression condition may be a compression condition whose maximum quality is the quality of the original image. For example, when the first image 610 is an image with a resolution of 1920x1080 and a frame rate of 30 fps, the second compression condition may be a condition for compressing the second partial image 631 at a resolution of 1920x1080 and a frame rate of 10 fps.
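The two compression conditions above can be sketched as simple parameter sets together with the constraints they must satisfy relative to the source stream. The type names, the pixel-count comparison, and the naive frame-dropping are illustrative assumptions, not the patented encoder; they only make the "at least original quality" vs. "at most original quality, lower frame rate" distinction concrete.

```python
from dataclasses import dataclass

@dataclass
class StreamSpec:
    width: int
    height: int
    fps: int

def meets_first_condition(source, target):
    """First condition: quality at least that of the original stream."""
    return (target.width * target.height >= source.width * source.height
            and target.fps >= source.fps)

def meets_second_condition(source, target):
    """Second condition: quality at most that of the original stream,
    with a strictly lower frame rate."""
    return (target.width * target.height <= source.width * source.height
            and target.fps < source.fps)

def drop_frames(frames, source_fps, target_fps):
    """Naive frame-rate reduction: keep every n-th frame."""
    step = source_fps // target_fps
    return frames[::step]

source = StreamSpec(1920, 1080, 30)
first = StreamSpec(1920, 1080, 30)   # example first condition from the text
second = StreamSpec(1920, 1080, 10)  # example second condition from the text
```

Under these example values, a one-second run of 30 source frames yields 30 frames for the first stream and 10 for the second, which is the bandwidth asymmetry exploited later in FIG. 11.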
Next, the case in which the control unit 110 according to an embodiment of the present invention uses the second artificial neural network 520 will be described.
The control unit 110 according to an embodiment of the present invention may generate identification information for each of at least one object included in the first image using the trained second artificial neural network 520. Here, the second artificial neural network 520 may refer to a neural network trained (or being trained) to output the region 541 corresponding to an object within the first image 530 and the identification information 542 of that object in response to the input of the first image 530, as shown in FIG. 9.
The control unit 110 according to an embodiment of the present invention may refer to the identification information of each of the at least one object that the second artificial neural network 520 outputs from the first image, and divide the first image into at least one third partial image that needs to be compressed according to the first compression condition and a fourth partial image that needs to be compressed according to the second compression condition. Here, the third partial image corresponds to the first partial image produced when the first artificial neural network 420 is used, and the fourth partial image likewise corresponds to the second partial image produced when the first artificial neural network 420 is used.
For example, the control unit 110 according to an embodiment of the present invention may use the identification information of each of the at least one object to generate a third partial image composed of the regions corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the first image. The control unit 110 may also generate a fourth partial image composed of the regions corresponding to objects classified as having a degree of movement less than the predetermined threshold within the first image.
For example, when the first image is an image of a road on which cars travel, the control unit 110 may generate a third partial image containing the regions assigned identification information as objects with a large degree of movement, such as vehicles and pedestrians, and a fourth partial image containing the regions assigned identification information as objects with a small degree of movement, such as street trees, the road surface, and buildings around the road. However, this is merely an example, and the spirit of the present invention is not limited thereto.
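A minimal sketch of this grouping step follows, assuming the detector output is a list of (region, identification label) pairs and that the pre-stored classification criteria take the form of a label-to-motion-score lookup table. The labels, scores, threshold, and bounding boxes are all hypothetical illustrations.

```python
# Hypothetical pre-stored criteria mapping identification labels to a
# motion score; in the described system such criteria are stored in
# advance in the management device.
MOTION_SCORE = {
    "vehicle": 0.9,
    "pedestrian": 0.8,
    "street_tree": 0.1,
    "road_surface": 0.0,
    "building": 0.0,
}
MOTION_THRESHOLD = 0.5

def split_regions(detections):
    """Split detector output into regions for the third partial image
    (high motion) and the fourth partial image (low motion)."""
    high_motion, low_motion = [], []
    for region, label in detections:
        if MOTION_SCORE.get(label, 0.0) >= MOTION_THRESHOLD:
            high_motion.append(region)
        else:
            low_motion.append(region)
    return high_motion, low_motion

# Hypothetical detections: (bounding box, identification label) pairs
# for the road scene described in the text.
detections = [
    ((10, 20, 50, 60), "vehicle"),
    ((70, 20, 90, 40), "pedestrian"),
    ((0, 0, 200, 10), "street_tree"),
    ((0, 80, 200, 120), "road_surface"),
]
high_motion, low_motion = split_regions(detections)
```

Unknown labels default to a low score here, so unclassified regions fall into the heavily compressed fourth partial image; a conservative system might instead default them to the high-quality stream.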
Meanwhile, the criteria by which the control unit 110 determines/classifies the movement of an object may be stored in advance.
The control unit 110 according to an embodiment of the present invention may generate third image data by compressing the third partial image according to the first compression condition, and generate fourth image data by compressing the fourth partial image according to the second compression condition. Since the first compression condition and the second compression condition have each been described above, a detailed description thereof is omitted.
The control unit 110 according to an embodiment of the present invention may transmit the first image data 622 and the second image data 632 (or the third image data and the fourth image data) generated according to the above-described process to at least one external device connected to the network 300. For example, the control unit 110 may transmit the first image data 622 and the second image data 632 to the image storage device 310.
FIG. 11 is a diagram showing the image data transmitted by the control unit 110 over time.
As described above, the control unit 110 may generate first image data (Image Data_1) compressed at, at minimum, the quality of the original image, and second image data (Image Data_2) compressed at, at most, the quality of the original image. Since compressing at 'at most the quality of the original image' includes setting the frame rate lower than that of the original, the second image data (Image Data_2) may be transmitted at a lower frequency than the first image data (Image Data_1), as shown in FIG. 11. In other words, the first image data (Image Data_1) may be generated and transmitted more often than the second image data (Image Data_2).
This enables the present invention to use network resources more efficiently in transmitting images.
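The frequency difference over time can be illustrated with a simple tick-based schedule, using the 30 fps / 10 fps example rates from the text. The one-second window and the scheduling function are illustrative assumptions; FIG. 11 does not prescribe a particular interleaving.

```python
def transmission_schedule(first_fps, second_fps, duration_s=1):
    """List which image data are sent at each tick of the faster
    stream over the given duration."""
    ticks = first_fps * duration_s
    schedule = []
    for t in range(ticks):
        sent = ["Image Data_1"]        # high-motion data: sent every tick
        # Low-motion data only on every (first_fps // second_fps)-th tick
        if t % (first_fps // second_fps) == 0:
            sent.append("Image Data_2")
        schedule.append(sent)
    return schedule

schedule = transmission_schedule(30, 10)
count_1 = sum("Image Data_1" in s for s in schedule)
count_2 = sum("Image Data_2" in s for s in schedule)
```

Over one second, Image Data_1 is sent 30 times and Image Data_2 only 10 times, so the static background consumes roughly a third of the frame transmissions it otherwise would.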
본 발명의 일 실시예에 따른 제어부(110)는 적어도 하나의 외부 장치로부터 수신된 감시카메라 관리 인터페이스 요청에 따라, 적어도 하나의 외부 장치에 적어도 하나의 감시카메라(200)의 관리를 위한 인터페이스를 제공할 수 있다. 이때 제공되는 인터페이스는 적어도 하나의 감시카메라(200)가 설치된 환경을 설정하는 인터페이스, 적어도 하나의 감시카메라가 획득한 영상을 압축하는데 사용되는 적어도 하나의 조건을 설정하는 인터페이스 및 적어도 하나의 감시카메라(200)에 공급되는 전원과 관련된 항목을 설정하는 인터페이스 중 적어도 하나를 포함할 수 있다. 가령 제어부(110)는 사용자 단말(320)의 감시카메라 관리 인터페이스 제공 요청에 따라, 적어도 하나의 감시카메라(200)가 획득한 영상을 압축하는데 사용되는 조건들을 설정하는 인터페이스를 제공할 수 있다. 다만 이는 예시적인것으로 본 발명의 사상이 이에 한정되는 것은 아니다.The control unit 110 according to an embodiment of the present invention provides an interface for management of at least one surveillance camera 200 to at least one external device in accordance with a surveillance camera management interface request received from at least one external device. can do. The interface provided at this time includes an interface for setting the environment in which at least one surveillance camera 200 is installed, an interface for setting at least one condition used to compress the image acquired by at least one surveillance camera, and at least one surveillance camera ( 200) may include at least one of the interfaces for setting items related to power supplied to the device. For example, the control unit 110 may provide an interface for setting conditions used to compress images acquired by at least one surveillance camera 200 in response to a request from the user terminal 320 to provide a surveillance camera management interface. However, this is an example and the spirit of the present invention is not limited thereto.
FIG. 12 is a flowchart illustrating a surveillance camera management method performed by the surveillance camera management device 100 according to an embodiment of the present invention. The following description refers to FIGS. 1 to 11 together, and redundant descriptions are omitted.
The surveillance camera management device 100 according to an embodiment of the present invention may load an artificial neural network (S1210). Here, 'loading' an artificial neural network may mean receiving a trained artificial neural network from another device (e.g., an artificial neural network training device (not shown)) so that it can be used. For example, the surveillance camera management device 100 may receive the weights constituting the artificial neural network from the artificial neural network training device and use them.
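A hypothetical sketch of this kind of 'loading': the training device serializes its learned weights, and the management device deserializes them into a local copy of the model. The helper names and the toy two-layer weight layout are illustrative assumptions, not taken from the patent.

```python
import json

def export_weights(model):
    # Training-device side: serialize the learned weights for transfer.
    return json.dumps(model)

def load_weights(payload):
    # Management-device side: deserialize the received weights for use.
    return json.loads(payload)

# Toy "trained" model: one weight matrix and one bias vector.
trained = {"layer1": {"w": [[0.2, -0.1], [0.4, 0.3]], "b": [0.0, 0.1]}}

payload = export_weights(trained)   # sent over the network
loaded = load_weights(payload)      # received by the management device

assert loaded == trained            # same weights, now usable locally
```

In practice the payload would be a framework-specific checkpoint rather than JSON; the point is only that loading amounts to receiving the weight values and installing them locally.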
The surveillance camera management device 100 according to an embodiment of the present invention may supply power to a first surveillance camera 201 added to the system (S1220) and configure its network through a predetermined process (S1230). For example, the surveillance camera management device 100 may supply power to the first surveillance camera 201 through a network cable having an RJ45-type plug, and may assign an IP address to the first surveillance camera 201 over the same cable.
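Steps S1220 and S1230 can be sketched as follows, assuming the management device powers the port and then hands out the next free address from a local pool. The subnet, the allocation policy, and the class name are illustrative assumptions, not from the patent.

```python
from ipaddress import IPv4Network

class CameraNetworkManager:
    def __init__(self, subnet="192.168.10.0/28"):
        # Pool of usable host addresses in the management subnet.
        self._pool = (str(h) for h in IPv4Network(subnet).hosts())
        self.assignments = {}            # camera id -> assigned IP address

    def power_on_and_assign(self, camera_id):
        # S1220/S1230: power the port, then allocate the next free IP
        # over the same cable that carries the power.
        ip = next(self._pool)
        self.assignments[camera_id] = ip
        return ip

mgr = CameraNetworkManager()
ip = mgr.power_on_and_assign("camera_201")
print(ip)  # first host address in the pool, e.g. "192.168.10.1"
```

A real device would typically use DHCP or a vendor discovery protocol for this step; the sketch only shows the bookkeeping.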
The first surveillance camera 201 according to an embodiment of the present invention may acquire a first image (S1240), and the surveillance camera management device 100 according to an embodiment of the present invention may receive the first image from the first surveillance camera 201 (S1250). Here, the first image is an image acquired by the first surveillance camera 201 and may have a first resolution and a first frame rate. For example, the first image may have a resolution of 1920x1080 and a frame rate of 30 fps. However, this is merely an example, and the spirit of the present invention is not limited thereto.
The surveillance camera management device 100 according to an embodiment of the present invention may generate first image data and second image data using the trained artificial neural network (S1260).
FIG. 10 is a diagram illustrating a process in which the surveillance camera management device 100 according to an embodiment of the present invention generates first image data 622 and second image data 632 from a first image 610.
First, a case in which the surveillance camera management device 100 according to an embodiment of the present invention uses the first artificial neural network 420 will be described.
The surveillance camera management device 100 according to an embodiment of the present invention may use the trained first artificial neural network 420 to divide the first image 610 into at least one first partial image 621 to be compressed according to a first compression condition and at least one second partial image 631 to be compressed according to a second compression condition.
Since the first artificial neural network 420 is a neural network trained (or being trained) to output a first partial image 441 and a second partial image 442 in response to the input of a first image 430, as shown in FIG. 7, it can output the first partial image 621 and the second partial image 631 from the first image 610.
The surveillance camera management device 100 according to an embodiment of the present invention may then compress the first partial image 621 according to the first compression condition to generate the first image data 622, and likewise compress the second partial image 631 according to the second compression condition to generate the second image data 632.
As described above, when the first image 610 has a first resolution and a first frame rate, the first compression condition may be a condition for compressing the first partial image 621 at a second resolution and a second frame rate, where the second resolution is greater than or equal to the first resolution and the second frame rate is greater than or equal to the first frame rate. In other words, the first compression condition may be a compression condition whose minimum quality is the quality of the original image. For example, when the first image 610 has a resolution of 1920x1080 and a frame rate of 30 fps, the first compression condition may be a condition for compressing the first partial image 621 at a resolution of 1920x1080 and a frame rate of 30 fps.
Meanwhile, the second compression condition may be a condition for compressing the second partial image 631 at a third resolution and a third frame rate, where the third resolution is less than or equal to the first resolution and the third frame rate is less than the first frame rate. In other words, the second compression condition may be a compression condition whose maximum quality is the quality of the original image. For example, when the first image 610 has a resolution of 1920x1080 and a frame rate of 30 fps, the second compression condition may be a condition for compressing the second partial image 631 at a resolution of 1920x1080 and a frame rate of 10 fps.
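The two compression conditions can be expressed as simple checks, assuming each condition is a (width, height, frame rate) triple; the helper names are illustrative, not from the patent. Condition 1 may not reduce quality below the original, while condition 2 may not exceed it and must lower the frame rate.

```python
def valid_condition_1(orig, cond):
    # Resolution and frame rate must be >= the original's (minimum
    # quality is the original quality).
    (w, h, fps), (cw, ch, cfps) = orig, cond
    return cw >= w and ch >= h and cfps >= fps

def valid_condition_2(orig, cond):
    # Resolution at most the original's, frame rate strictly below it
    # (maximum quality is the original quality).
    (w, h, fps), (cw, ch, cfps) = orig, cond
    return cw <= w and ch <= h and cfps < fps

original = (1920, 1080, 30)
assert valid_condition_1(original, (1920, 1080, 30))      # keep original quality
assert valid_condition_2(original, (1920, 1080, 10))      # same size, 10 fps
assert not valid_condition_2(original, (1920, 1080, 30))  # frame rate not reduced
```

The asymmetry (>= for condition 1, strictly < for condition 2's frame rate) matches the resolution and frame-rate bounds stated above.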
Next, a case in which the surveillance camera management device 100 according to an embodiment of the present invention uses the second artificial neural network 520 will be described.
The surveillance camera management device 100 according to an embodiment of the present invention may generate identification information for each of at least one object included in the first image using the trained second artificial neural network 520. Here, as shown in FIG. 9, the second artificial neural network 520 may be a neural network trained (or being trained) to output, in response to the input of a first image 530, an area 541 corresponding to an object within the first image 530 and identification information 542 of that object.
The surveillance camera management device 100 according to an embodiment of the present invention may divide the first image, with reference to the identification information of each of the at least one object output by the second artificial neural network 520 from the first image, into at least one third partial image to be compressed according to the first compression condition and a fourth partial image to be compressed according to the second compression condition. Here, the third partial image corresponds to the first partial image obtained when the first artificial neural network 420 is used, and the fourth partial image corresponds to the second partial image obtained when the first artificial neural network 420 is used.
For example, the surveillance camera management device 100 according to an embodiment of the present invention may use the identification information of each of the at least one object to generate a third partial image composed of areas corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the first image. The surveillance camera management device 100 may also generate a fourth partial image composed of areas corresponding to objects classified as having a degree of movement less than the predetermined threshold within the first image.
For example, when the first image is an image of a road on which cars travel, the surveillance camera management device 100 may generate a third partial image including areas identified as objects with a large degree of movement, such as vehicles and pedestrians, and a fourth partial image including areas identified as objects with a small degree of movement, such as roadside trees, the road surface, and buildings around the road. However, this is merely an example, and the spirit of the present invention is not limited thereto.
Meanwhile, the criteria by which the surveillance camera management device 100 determines and classifies the movement of an object may be stored in advance.
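The partitioning described above can be sketched as follows, assuming the pre-stored criteria take the form of per-class motion scores; the scores, labels, and box coordinates are illustrative assumptions, not from the patent.

```python
# Pre-stored, hypothetical motion scores per object class.
MOTION_SCORE = {"vehicle": 0.9, "pedestrian": 0.8,
                "tree": 0.1, "road": 0.0, "building": 0.0}

def split_by_motion(detections, threshold=0.5):
    # detections: list of (object label, bounding-box region) pairs,
    # as produced by an object-detection network.
    high = [d for d in detections if MOTION_SCORE[d[0]] >= threshold]
    low = [d for d in detections if MOTION_SCORE[d[0]] < threshold]
    return high, low    # regions for the third and fourth partial images

detections = [("vehicle", (0, 0, 100, 50)), ("tree", (200, 0, 40, 120)),
              ("pedestrian", (120, 30, 20, 60))]
third, fourth = split_by_motion(detections)
assert [d[0] for d in third] == ["vehicle", "pedestrian"]
assert [d[0] for d in fourth] == ["tree"]
```

With the road example above, vehicles and pedestrians land in the third partial image and static scenery in the fourth, which is then compressed more aggressively.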
The surveillance camera management device 100 according to an embodiment of the present invention may generate third image data by compressing the third partial image according to the first compression condition, and fourth image data by compressing the fourth partial image according to the second compression condition. Since the first compression condition and the second compression condition have each been described above, detailed descriptions thereof are omitted.
The surveillance camera management device 100 according to an embodiment of the present invention may transmit the first image data 622 and the second image data 632 (or the third image data and the fourth image data) generated through the above-described process to the user terminal 320 (S1270).
The surveillance camera management device 100 according to an embodiment of the present invention may also repeat steps S1250, S1260, and S1270 for a second image acquired by the first surveillance camera 201 (S1280). Naturally, the surveillance camera management device 100 may repeatedly perform the same process for a plurality of third images acquired by the first surveillance camera 201 after the second image.
FIG. 11 is a diagram illustrating the image data transmitted by the surveillance camera management device 100 over time.
As described above, the surveillance camera management device 100 may generate first image data (Image Data_1), compressed such that its quality is at least that of the original image, and second image data (Image Data_2), compressed such that its quality is at most that of the original image. Since compression 'at most to the quality of the original image' includes setting the frame rate lower than that of the original, the second image data (Image Data_2) may be transmitted less frequently than the first image data (Image Data_1), as shown in FIG. 11. In other words, the first image data (Image Data_1) may be generated and transmitted more often than the second image data (Image Data_2).
This allows the present invention to use network resources more efficiently when transmitting images.
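At the example rates used above (30 fps for the first image data, 10 fps for the second), the lower transmission frequency of the second image data follows directly from the frame rates; a minimal sketch, assuming one transmission per generated frame:

```python
def transmissions(duration_s, fps):
    # Number of frames generated (and transmitted) over the given
    # duration at the given frame rate.
    return duration_s * fps

data_1 = transmissions(10, 30)   # first image data: 300 frames in 10 s
data_2 = transmissions(10, 10)   # second image data: 100 frames in 10 s
assert data_1 == 3 * data_2      # Data_2 is sent at one third the frequency
```

Regions compressed under the second condition thus consume roughly a third of the transmission slots that the same regions would at the original frame rate, which is the resource saving FIG. 11 depicts.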
The embodiments of the present invention described above may be implemented in the form of a computer program executable through various components on a computer, and such a computer program may be recorded on a computer-readable medium. Here, the medium may store a program executable by a computer. Examples of the medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and devices configured to store program instructions, such as ROM, RAM, and flash memory.
Meanwhile, the computer program may be specially designed and configured for the present invention, or may be known to and usable by those skilled in the computer software field. Examples of computer programs include not only machine code, such as that produced by a compiler, but also high-level language code executable by a computer using an interpreter or the like.
The specific implementations described in the present invention are embodiments and do not limit the scope of the present invention in any way. For brevity of the specification, descriptions of conventional electronic components, control systems, software, and other functional aspects of such systems may be omitted. Furthermore, the connections or connecting members between the components shown in the drawings illustrate functional connections and/or physical or circuit connections by way of example; in an actual device, they may be realized as various alternative or additional functional, physical, or circuit connections. In addition, unless specifically described with terms such as 'essential' or 'important', a component may not be strictly necessary for the application of the present invention.
Therefore, the spirit of the present invention should not be limited to the embodiments described above; not only the claims set forth below but also all scopes equivalent to or equivalently modified from those claims fall within the scope of the spirit of the present invention.

Claims (12)

  1. A surveillance camera management device that supplies power to at least one surveillance camera and compresses and provides images acquired by the at least one surveillance camera, the device comprising:
    a surveillance camera connection unit connected to the at least one surveillance camera;
    a network connection unit connected to a network;
    a power supply unit that receives power from a power source and generates power to be supplied to the at least one surveillance camera connected through the surveillance camera connection unit;
    a control unit that compresses image data received from the at least one surveillance camera to generate compressed image data; and
    a communication unit that receives the image data from the at least one surveillance camera, provides the image data to the control unit, and transmits the compressed image data generated by the control unit to at least one external device connected to the network.
  2. The surveillance camera management device of claim 1, wherein the control unit:
    divides a first image, using a trained first artificial neural network, into at least one first partial image to be compressed according to a first compression condition and at least one second partial image to be compressed according to a second compression condition;
    generates first image data by compressing the first partial image according to the first compression condition, and generates second image data by compressing the second partial image according to the second compression condition; and
    transmits the first image data and the second image data to the at least one external device.
  3. The surveillance camera management device of claim 2, wherein the first artificial neural network is
    a neural network that has learned the correlation between a first training image, a first partial training image generated from the first training image, and a second partial training image generated from the first training image.
  4. The surveillance camera management device of claim 3, wherein
    the first partial training image is an image that includes only areas corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the first training image, and
    the second partial training image is an image that includes only areas corresponding to objects classified as having a degree of movement less than the predetermined threshold within the first training image.
  5. The surveillance camera management device of claim 2, wherein
    the first image has a first resolution and a first frame rate,
    the first compression condition is a condition for compressing the first partial image at a second resolution and a second frame rate,
    the second resolution is greater than or equal to the first resolution, and
    the second frame rate is greater than or equal to the first frame rate.
  6. The surveillance camera management device of claim 2, wherein
    the first image has a first resolution and a first frame rate,
    the second compression condition is a condition for compressing the second partial image at a third resolution and a third frame rate,
    the third resolution is less than or equal to the first resolution, and
    the third frame rate is less than the first frame rate.
  7. The surveillance camera management device of claim 1, wherein the control unit:
    generates identification information for each of at least one object included in a first image using a trained second artificial neural network;
    divides the first image, with reference to the identification information of each of the at least one object, into at least one third partial image to be compressed according to a first compression condition and a fourth partial image to be compressed according to a second compression condition;
    generates third image data by compressing the third partial image according to the first compression condition, and generates fourth image data by compressing the fourth partial image according to the second compression condition; and
    transmits the third image data and the fourth image data to the at least one external device.
  8. The surveillance camera management device of claim 7, wherein the second artificial neural network is
    a neural network that has learned the correlation between a second training image, an area corresponding to an object within the second training image, and identification information of the object.
  9. The surveillance camera management device of claim 7, wherein the control unit:
    generates, using the identification information of each of the at least one object, the third partial image composed of areas corresponding to objects classified as having a degree of movement greater than or equal to a predetermined threshold within the first image; and
    generates, using the identification information of each of the at least one object, the fourth partial image composed of areas corresponding to objects classified as having a degree of movement less than the predetermined threshold within the first image.
  10. The surveillance camera management device of claim 1, wherein
    the surveillance camera connection unit and the at least one surveillance camera are connected through a first interface,
    the power generated by the power supply unit is supplied to the at least one surveillance camera through the first interface, and
    the image data generated by the at least one surveillance camera is transmitted to the communication unit through the first interface.
  11. The surveillance camera management device of claim 1, wherein the control unit
    provides, in response to a surveillance camera management interface request received from the at least one external device, an interface for managing the at least one surveillance camera to the at least one external device.
  12. The surveillance camera management device of claim 11, wherein the interface includes at least one of:
    an interface for setting an environment in which the at least one surveillance camera is installed;
    an interface for setting at least one condition used to compress images acquired by the at least one surveillance camera; and
    an interface for setting items related to power supplied to the at least one surveillance camera.
PCT/KR2023/002898 2022-03-11 2023-03-03 Surveillance camera management device WO2023171981A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0030951 2022-03-11
KR1020220030951A KR102471701B1 (en) 2022-03-11 2022-03-11 A surveillance camera management device

Publications (1)

Publication Number Publication Date
WO2023171981A1 true WO2023171981A1 (en) 2023-09-14

Family

ID=84237018

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/002898 WO2023171981A1 (en) 2022-03-11 2023-03-03 Surveillance camera management device

Country Status (2)

Country Link
KR (1) KR102471701B1 (en)
WO (1) WO2023171981A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102471701B1 (en) * 2022-03-11 2022-11-28 (주) 글로벌비엠아이 A surveillance camera management device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100480520B1 (en) * 2004-10-29 2005-04-07 (주)유디피 Method for transferring image signals and system using the method
KR20090032800A (en) * 2007-09-28 2009-04-01 주식회사 삼전 Wireless transmitting and receiving device for cctv
KR20100022447A (en) * 2008-08-19 2010-03-02 브로드콤 코포레이션 Method and system for motion-compensated frame-rate up-conversion for both compressed and decompressed video bitstreams
KR20200079697A (en) * 2018-12-26 2020-07-06 삼성전자주식회사 Image processing apparatus and image processing method thereof
KR20200119372A (en) * 2019-03-22 2020-10-20 주식회사 핀텔 Artificial Neural Network Based Object Region Detection Method, Device and Computer Program Thereof
KR102471701B1 (en) * 2022-03-11 2022-11-28 (주) 글로벌비엠아이 A surveillance camera management device

Also Published As

Publication number Publication date
KR102471701B1 (en) 2022-11-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23767082

Country of ref document: EP

Kind code of ref document: A1