KR101695655B1 - Method and apparatus for analyzing video and image - Google Patents


Publication number
KR101695655B1
KR101695655B1
Authority
KR
South Korea
Prior art keywords
image
similarity
face image
pixel
moving
Prior art date
Application number
KR1020160021333A
Other languages
Korean (ko)
Inventor
이정선
Original Assignee
이정선
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 이정선 filed Critical 이정선
Priority to KR1020160021333A
Application granted
Publication of KR101695655B1

Classifications

    • G06K9/00228
    • G06K9/00288
    • G06K9/6201
    • G06K9/64

Abstract

In the present embodiment, at least one face image is set as a reference image and stored in a database; a moving face image is extracted from a captured moving image; the reference images stored in the database are scanned in correspondence with the extracted face image; and the similarity between the scanned face image and each reference image is obtained based on a reference object having feature points, so that the closest face image is found in the database.
Thus, in the present embodiment, high-speed image scanning and multiprocessing are used to find, by comparison based on the similarity between the photographed image and the stored face images, a matching or closely similar face image.

Description

METHOD AND APPARATUS FOR ANALYZING VIDEO AND IMAGE

The present invention relates to an image analysis method and apparatus, and more particularly, to an image analysis method and apparatus for quickly identifying a photographed face image while preventing misuse and identity exposure.

With the development of Internet technology, various imaging devices for capturing images and reproducing the captured images have been developed: for example, closed-circuit television (CCTV) cameras, cameras, and smart phones for capturing images, and TVs and smart phones for reproducing them.

Because of such imaging devices, captured face images may be exposed to a large number of people or leaked, increasing the risk of identity theft or abuse.

On the other hand, there is a positive aspect in which images exposed from various video devices are used to arrest criminals.

For example, a face image obtained or exposed from various imaging devices can be searched against face images stored in a database. In the past, however, comparing images to detect a criminal took a great deal of time, and accurately identifying the right person was difficult.

For example, when face images acquired from various imaging devices are blurry or unclear, it is difficult to compare them accurately.

Korean Published Patent No. 2012-0035299 (published 2012.04.16); Korean Published Patent No. 2013-0047223 (published 2013.05.08); Korean Registered Patent No. 1215948 (registered 2012.12.20); Korean Registered Patent No. 1468407 (registered 2014.11.21)

It is an object of the present invention to provide an image analysis method and apparatus capable of instantly finding, by automatic high-speed scanning rather than manual comparison, stored image data identical to a photographed image among a plurality of stored image data.

It is another object of the present invention to provide an image analysis method and apparatus capable of masking a moving face image exposed by video devices.

It is another object of the present invention to provide an image analysis method and apparatus that can prevent a stored image from being leaked and abused.

According to one embodiment, there is provided an image analysis method for finding, through an image analysis apparatus, a face image close to an acquired face image, the method comprising: (a) setting at least one face image as a reference image and storing it in a database; (b) extracting a moving face image from a photographed moving image; and (c) scanning the reference images stored in the database in correspondence with the extracted face image and obtaining similarities between the scanned face image and the reference images based on a reference object having feature points, thereby finding the closest face image in the database.

The face image may be a still face image or a photographic image.

Step (b) may include extracting at least one first object identified by an identifier from each of the stored face images, and step (a) may include storing the extracted at least one first object in the database, and storing at least one reference object corresponding to the at least one first object in the database.

Step (c) may include obtaining a first similarity by comparing the stored at least one first object with the stored reference object.

Step (b) may further include extracting at least one second object identified by the identifier from the face image, and step (c) may further include obtaining a second similarity by comparing the extracted at least one second object with the stored reference object.

The method may further comprise: (d) displaying the obtained second similarities on a display screen; (e) when any second similarity among the displayed second similarities is selected, comparing the selected second similarity with the first similarities to extract the matching or closest face image from the database; and (f) displaying the extracted face image on the display screen.

The similarity may be pixel similarity.

The image analysis method may further comprise: comparing a first frame, recorded at a first time of the moving image, with a second frame, recorded at a second time after the first time, and calculating the pixel point or pixel region in the first frame and the pixel point or pixel region in the second frame that correspond to a moving object, thereby discriminating the moving object; estimating, based on the pixel point or pixel region in the first frame and the pixel point or pixel region in the second frame, the pixel point or pixel region where the moving object will be located on the screen at a third time, a predetermined time interval after the second time, in accordance with the movement of the moving object; and performing masking processing on the estimated pixel point or pixel region.

The estimating step may include estimating the pixel point or pixel region where the moving object will be located on the screen according to the following Equation (1), and the masking step may include applying masking processing to the area of the screen where the moving object is expected to be located at the third time, by changing the original pixel values through a weight applied to the pixel values corresponding to that area.

[x, y]_t3 = [x, y]_t1 + t * ([x, y]_t2 - [x, y]_t1) ... Equation (1)

Here, [x, y]_t1 is the pixel value of the pixel point in the first frame at the first time corresponding to the moving object, [x, y]_t2 is the pixel value of the pixel point in the second frame at the second time corresponding to the moving object, [x, y]_t3 is the pixel value at the pixel point or pixel region where the moving object will be located on the screen in accordance with its movement, and t may indicate the time interval.

The image analysis method may further comprise: generating image log information including event data in which abuse of a face image or moving image stored in the database is recorded; and generating abuse notification data when the image log information including the recorded event data matches abuse condition data, stored in advance in the database, for judging abuse.

The abuse condition data may include: an attempt to illegally copy the face image or moving image; an attempt to transmit the face image or moving image remotely; an attempt to delete the face image; a connection to the image analysis apparatus outside working hours; a network connection from an Internet protocol (IP) address outside the authenticated IP band; deletion of the image log information without reason; and access from an IP address that is not an authorized access IP.
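The abuse conditions above can be sketched as a simple predicate over each logged event. This is an illustrative sketch only: the event fields, condition names, IP band, and working hours below are assumptions, not values from the patent.

```python
from datetime import datetime

# Assumed abuse-condition data (illustrative values, not from the patent).
ABUSE_ACTIONS = {"illegal_copy", "remote_transmit", "delete_image"}
AUTHORIZED_IP_PREFIX = "10.0."      # assumed authenticated IP band
WORK_HOURS = range(9, 18)           # assumed working hours, 09:00-17:59

def is_abuse(event: dict) -> bool:
    """Return True when a logged event matches any stored abuse condition."""
    if event["action"] in ABUSE_ACTIONS:
        return True
    hour = datetime.fromisoformat(event["time"]).hour
    if hour not in WORK_HOURS:      # connection outside working hours
        return True
    if not event["ip"].startswith(AUTHORIZED_IP_PREFIX):
        return True                 # not an authorized access IP
    return False

def notify_on_abuse(log: list) -> list:
    """Generate abuse notification data for every matching log entry."""
    return [e for e in log if is_abuse(e)]
```

A monitoring loop would feed each new image-log entry through `notify_on_abuse` and raise the notification data it returns.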

According to another aspect of the present invention, there is provided an image analysis apparatus for finding a face image close to an acquired face image, the apparatus comprising: a storage management unit configured to set at least one face image as a reference image and store it in a database; an image extracting unit configured to extract a moving face image from a photographed moving image; and an image scanning unit configured to scan the reference images stored in the database in correspondence with the extracted face image and to obtain similarities between the scanned face image and the reference images based on a reference object having feature points, thereby finding the closest face image in the database.

The image analysis apparatus may comprise: a first object extracting unit configured to extract at least one first object identified by an identifier from each of the stored face images; a reference object extracting unit configured to extract at least one reference object corresponding to the extracted at least one first object; a first similarity acquiring unit configured to obtain a first similarity by comparing the stored at least one first object with the stored reference object; a second object extracting unit configured to extract at least one second object identified by the identifier from the face image; and a second similarity acquiring unit configured to obtain a second similarity by comparing the extracted at least one second object with the stored reference object.

The image analysis apparatus may further comprise: a proximity image extracting unit configured to display the obtained second similarities on a display screen and, when a second similarity is selected from among the displayed second similarities, to compare the selected second similarity with the first similarities and extract the matching face image from the database; and a proximity image display unit configured to display the extracted face image on the display screen.

As described above, according to the present embodiment, a matching or closely similar face image can be found by comparison based on the similarity between the photographed image and the stored face images through high-speed image scanning and multiprocessing, and such a system is expected to see very high utilization in commercial settings.

In addition, the present embodiment analyzes stored images on the basis of image log information, thereby preventing image leakage and abuse.

In addition, in the present embodiment, masking processing is applied along the face of a person in motion, so that the face image is not exposed in the moving picture and the person's identity is protected.

Other advantages not mentioned above will be apparent to those skilled in the art from the following description.

FIG. 1 is a flowchart illustrating an example of an image analysis method according to an embodiment.
FIG. 2 is a block diagram showing an example of an image analysis apparatus for realizing the image analysis method of FIG. 1.
FIG. 3 is a flowchart showing an example of a similarity algorithm according to an embodiment.
FIG. 4 is a diagram showing data states processed in the image analysis methods of FIGS. 1 and 3.
FIG. 5 is a flowchart showing another example of the image analysis method according to an embodiment.
FIGS. 6A to 6C are conceptual diagrams explaining a masking process for protecting the privacy information of a moving object included in a moving image.
FIG. 7 is a flowchart illustrating another example of the image analysis method according to an embodiment.
FIG. 8 is a block diagram illustrating an example of an image analysis apparatus according to an embodiment.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings, wherein like reference numerals are used to designate identical or similar elements, and redundant description thereof will be omitted.

Terms including ordinals such as 'first' and 'second' disclosed herein may be used to describe various elements, but the elements are not limited by these terms; the terms are used only to distinguish one component from another.

In the following description of the embodiments of the present invention, a detailed description of related art will be omitted when it is determined that it may obscure the gist of the embodiments disclosed herein.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and are intended to cover modifications, equivalents, and alternatives of the invention as claimed.

Terms such as "comprise" or "include" used in the following embodiments should be understood not to exclude other components; the described components may include components other than those recited.

<Example of image analysis method>

FIG. 1 is a flowchart illustrating an example of an image analysis method according to an embodiment, and FIG. 2 is a block diagram illustrating an example of an image analysis apparatus for realizing the image analysis method of FIG. 1.

The image analysis apparatus 100 shown in FIG. 2 can acquire, via a communication network, a moving image photographed by or input to a video apparatus 101 connected to the wired or wireless Internet.

The communication network mentioned above may communicate wirelessly with a network such as the Internet (also called the World Wide Web, WWW), a cellular telephone network, an intranet such as a metropolitan area network (MAN), and/or a wireless network.

The wireless network may be, for example, a cellular network such as Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), or another cellular network.

For example, if the network data access element(s) is part of a GSM network, it may include a base transceiver station (BTS), a base station controller (BSC), a mobile switching center (MSC), a Serving GPRS Support Node (SGSN), and the like. As another example, if the network data access element(s) is part of a LAN, it may include one or more network switches, routers, hubs, and/or gateways.

On the other hand, if the wireless network uses local area communication, the local area communication may be wireless LAN, Wi-Fi, Bluetooth, ZigBee, Wi-Fi Direct (WFD), Infrared Data Association (IrDA), Bluetooth Low Energy (BLE), or Near Field Communication (NFC).

However, the image analysis apparatus 100 is not limited to being connected to the communication network. For example, when the moving image is supplied by an input means kept offline, the image analysis apparatus 100 may acquire the moving image input through that offline input means.

The input means may be a storage medium such as a USB drive, DVD, or CD.

The video apparatus 101 mentioned above may be a non-portable CCTV or video apparatus, a highly portable smart phone or camera, a video camera, and the like. These are merely examples, and other photographing devices may be used.

The image analysis apparatus 100 described above is processed substantially by the processor 120 so as to quickly find, in the database 110, a face image similar to the acquired face image, as described above.

The processor 120 mentioned above processes the similarity algorithm for the acquired face image, is capable of multiprocessing while the similarity algorithm runs, and can search the database at high speed for the image produced by the similarity algorithm (an image close to the acquired face image).

For this, the processor 120 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), or an application-specific integrated circuit (ASIC). The general-purpose processor may be a microprocessor or any commercially available processor.

On the other hand, the above-mentioned database 110 is a concept that includes a computer-readable recording medium: not only a database in the narrow sense, but also a database in the broad sense including data records based on a file system. Any medium from which data can be retrieved and extracted falls within the scope of the database referred to in the present invention.

In addition, a server or a network device (e.g., a switch, a router, etc.) for image processing may be provided between the image analysis apparatus 100 and the video apparatus 101; however, the configuration is not limited thereto.

Hereinafter, an image analysis method realized by the above-described image analysis apparatus 100 will be described.

Referring to FIG. 1, an image analysis method 200 according to an exemplary embodiment may include steps 210 to 260 for finding a face image close to a face image acquired through the image analysis apparatus 100.

First, in an exemplary step 210, the image analysis apparatus 100 may set at least one face image as a reference image and store it in the database 110. The at least one face image may be stored in the database 110 for each user.

The at least one face image may be an image of a search target, a still face image, or a photograph image.

In an exemplary step 220, the image analysis apparatus 100 receives at least one moving image photographed by the offline- or online-connected video apparatus 101, acquires a plurality of moving images, and can extract each face image from them.

Here, since a human figure displayed in the moving image moves within the image, the face image must be extracted from a moving human image. To this end, the processor 120 of the image analysis apparatus 100 may scan a moving face image by multiprocessing and multithreading.

In the exemplary step 230, the image analysis apparatus 100 scans the reference images (face images) stored in the database 110 in correspondence with the extracted face image at high speed by multiprocessing and multithreading, and can obtain the degree of similarity.
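The parallel scan above can be sketched minimally as follows. This is an illustrative sketch only: the patent does not specify the similarity measure or data layout, so the in-memory "database" and the element-wise pixel comparison below are assumptions, and multithreading stands in for the multiprocessing/multithreading combination described.

```python
from concurrent.futures import ThreadPoolExecutor

def pixel_similarity(query, reference):
    """Fraction of positions where two equal-length pixel vectors match."""
    matches = sum(1 for q, r in zip(query, reference) if q == r)
    return matches / len(reference)

def scan_database(query, references, workers=4):
    """Score every stored reference image against the query in parallel
    and return (index of best match, its similarity score)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda r: pixel_similarity(query, r),
                               references))
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```

A real system would replace the flat pixel vectors with feature-point objects and the thread pool with worker processes, but the scan/score/select structure is the same.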

In order to obtain the degree of similarity between the face image and a reference image, the image analysis apparatus 100 divides the extracted face image into at least one feature point and compares each feature point with the reference object having a unique pixel value, thereby obtaining the similarity for the face image.

As a result, the similarity of the face image can be determined according to how closely the comparison of at least one feature point between the reference object and the face image matches the unique pixel value.
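A minimal sketch of this feature-point comparison, under assumptions: the feature-point names, the single pixel value per feature point, and the tolerance are illustrative stand-ins for the patent's unique reference pixel values.

```python
# Assumed feature points of a face image (illustrative names).
FEATURE_POINTS = ("eye", "nose", "mouth", "forehead", "jaw")

def feature_similarity(face, reference, tolerance=8):
    """Fraction of feature points whose pixel values agree with the
    reference object within the given tolerance."""
    agree = sum(
        1 for f in FEATURE_POINTS
        if abs(face[f] - reference[f]) <= tolerance
    )
    return agree / len(FEATURE_POINTS)
```

The returned fraction plays the role of the similarity degree: 1.0 means every feature point matched the reference object's unique pixel value within tolerance.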

Likewise, the image analysis apparatus 100 may divide each reference image into at least one feature point, compare the feature points with the reference object having the unique pixel value, and obtain the similarity for each reference image.

In this case, the similarity of the reference image can be stored in the database 120 before the similarity of each face image is determined.

In this state, the image analysis apparatus 100 can display the obtained similarity of each face image on the display screen. Then, if the user selects, from the similarities displayed on the display screen, an arbitrary similarity corresponding to the face image to be searched, the similarity of the reference image matching that arbitrary similarity is found in the database 120, and the face image matching that reference image can be quickly retrieved from the database 120.

Assuming these processing steps are collectively called a 'similarity algorithm', the image analysis apparatus 100 performs high-speed scan processing by multiprocessing and multithreading to execute the similarity algorithm, and by comparing the similarity of a reference image with the similarity of the corresponding face image, can quickly find a face image with high proximity.

Therefore, by applying a similarity algorithm that differs from existing face analysis algorithms, the present embodiment can detect a face image close to a captured face image more accurately and quickly than conventional approaches.

Hereinafter, the above-described similarity algorithm processing will be more specifically exemplified.

<Processing example of the similarity algorithm>

3 is a flowchart exemplarily showing an example of a similarity algorithm according to an embodiment.

Referring to FIG. 3, the similarity algorithm according to an exemplary embodiment may include steps 210 through 270. Steps 210 to 230 were described with reference to FIG. 1, and the remaining steps 240 to 270 are added in the present embodiment. Each step will be described in turn.

First, in step 220, the image analysis apparatus 100 extracts at least one first object having a feature point for each face image stored in the database 120 (step 221).

For example, the first object represents feature points such as eyes, nose, mouth, forehead and jaw in the face image, and each pixel has a pixel value.

In an exemplary step 210, the image analysis apparatus 100 stores the at least one first object extracted in step 220 in the database 120, and may store at least one reference object corresponding to the extracted at least one first object in the database 120 (211).

The reference object may have a unique reference pixel value for comparison with the first object.

Accordingly, in an exemplary step 230, the image analysis apparatus 100 compares the at least one first object stored in the database 120 with the reference object stored in the database 120, and may obtain a first similarity (231).

The first degree of similarity may be a result indicating the degree of pixel correspondence between the first object having the feature point and the reference object.

In an exemplary step 220, the image analysis apparatus 100 may extract at least one second object having feature points from each face image (222). The second object has a corresponding relationship with the first object described above.

For example, since the second object has a pixel value for each feature point such as the eyes, nose, mouth, forehead, and jaw in the face image, the image analysis apparatus 100 ultimately compares the second object by means of these feature-point pixel values.

Similarly, in an exemplary step 230, the image analysis apparatus 100 compares the extracted at least one second object with the reference object stored in the database 120 through high-speed scanning by multiprocessing and multithreading, and a second similarity for the second object may be obtained (232).

Needless to say, knowing the first similarity of the first object and the second similarity of the second object ultimately means knowing the similarity to an arbitrary reference image and to the face image, respectively.

Thus, in an exemplary step 240, the image analysis apparatus 100 may display a second similarity degree of the second object acquired for each face image on the display screen.

In an exemplary step 250, when a second similarity is selected, the image analysis apparatus 100 scans for a first object matching or close to the corresponding second object, and can compare the first similarity of that first object with the selected second similarity.

As a result, in step 260, the image analysis apparatus 100 can extract the corresponding face image when the comparison result matches or falls within a predetermined proximity. For example, face images whose first similarity (of the first object or reference image) matches or approximates the second similarity of the second object or face image can be extracted from the database 120 through high-speed scanning by multiprocessing and multithreading.
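Step 260 reduces to comparing one selected second similarity against the stored first similarities. The sketch below is illustrative: the tolerance value and the list representation of stored first similarities are assumptions, since the patent only states that matching or approximate images are extracted.

```python
def extract_matches(selected, first_similarities, tolerance=0.03):
    """Return the indices of stored face images whose first similarity
    matches or approximates the selected second similarity."""
    return [i for i, s in enumerate(first_similarities)
            if abs(s - selected) <= tolerance]
```

The returned indices identify the face images to fetch from the database and show on the display screen in step 270.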

In exemplary step 270, the image analysis apparatus 100 may display the extracted face image on the display screen in response to the user's selection (235). The user can thus automatically check the face image closest to the photographed face image.

<Example of Data Status>

FIG. 4 is a diagram showing data states processed in the image analysis method of FIGS. 1 and 3. FIG.

Referring to FIG. 4, in one embodiment, the database 120 may set and store, as reference images, a plurality of face images processed by the image analysis apparatus 100. For example, the database 120 stores face images P1, P2, P3, ..., Pn, and may further store the first objects F1 to F10 extracted from each of the face images P1, P2, P3, ..., Pn.

Furthermore, the database 120 may store a reference object having a unique pixel value for each feature point corresponding to the first objects, obtain the similarity of the first object matched to each face image on the basis of the stored reference object, and store it in the database 120.

The exemplary image analysis apparatus 100 can obtain not only the similarities of the first objects for the stored face images but also the similarities of the second objects for the acquired face image, and display them on the display screen.

If the user selects a certain similarity from among the similarities of the second objects displayed on the display screen, the image analysis apparatus 100 may compare the similarity of the selected second object with the similarities of the first objects stored in the database 120, attempting a fast scan of the database 120 based on multiprocessing and multithreading.

As a result of the high-speed scan, by comparing the similarity of the first object with the similarity of the second object, the image analysis apparatus 100 can immediately find, in the database 120, the face image matching or closest to the arbitrary face image being searched.

As described above, the present embodiment finds the same or a nearby face image through comparison based on the similarity between the photographed image and the stored face images, using high-speed image scanning and multiprocessing, and can thereby increase the utilization of such images.

The image analysis method described above may be implemented in the form of program instructions that can be executed through various computer components and recorded in a computer-readable medium.

The computer readable medium may be any medium accessible by the processor. Such media can include both volatile and nonvolatile media, removable and non-removable media, communication media, storage media, and computer storage media.

Communication media may include computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any other form of information delivery medium known in the art.

The storage medium may be any type of storage medium such as RAM, flash memory, ROM, EPROM, electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, or a compact disc read-only memory (CD-ROM).

Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data.

Such computer storage media may include RAM, ROM, EPROM, EEPROM, flash memory, other solid-state memory technology, CD-ROMs, digital versatile discs (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, and the like.

Examples of program instructions may include machine language code such as those produced by a compiler, as well as high-level language code that may be executed by a computer using an interpreter or the like.

<Other examples of image analysis method>

FIG. 5 is a flowchart showing another example of the image analysis method according to the embodiment, and FIGS. 6A to 6C are conceptual diagrams explaining a masking process for protecting the privacy information of a moving object included in a moving image.

Referring to FIG. 5, the image analysis method 200 according to an exemplary embodiment may further include steps 281 to 284 for performing a masking process on a moving object of a moving image.

First, in an exemplary step 281, the image analysis apparatus 100 receives a moving image of a moving object from the video apparatus 101 online or offline: it may receive a moving image photographed by a directly connected video apparatus 101, or have one input through a storage medium.

For example, the image analysis apparatus 100 may be connected to a video apparatus 101 such as a CCTV or camera to receive images of a moving object in real time, or may obtain the captured moving image in the form of a file or data.

At this time, the moving object may be a person, an animal, an automobile, a train, or another object passing through the region photographed by a video apparatus 101, such as a CCTV, smart phone, or camera, aimed at a fixed region.

Since the image including the moving object is input in real time, it may consist of a plurality of frames 300, 310, and 320, one per point in time.

For example, the plurality of frames 300, 310, and 320 correspond to the time sequence shown in FIGS. 6A to 6C: the first time t1, the second time t2, and the third time t3.

In an exemplary step 282, the image analysis apparatus 100 compares the first frame 300, recorded at the first time t1, with the second frame 310, recorded at the second time t2 after the first time t1, and can calculate the pixel point 301 or pixel region 302 in the first frame 300 corresponding to the moving object and the pixel point 311 or pixel region 312 in the second frame 310 corresponding to the moving object.

The pixel points 301 and 311 or the pixel regions 302 and 312 may be calculated through various techniques such as edge detection, difference calculation, and feature extraction.
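Of the techniques listed above, difference calculation is the simplest, and a minimal sketch of it follows. This is illustrative only: the patent does not specify thresholds or frame representation, so the grayscale nested-list frames and the threshold value are assumptions.

```python
def moving_region(frame1, frame2, threshold=10):
    """Return the (x, y) pixel points where the two frames differ by
    more than `threshold`, i.e. the moving object's pixel region.
    Frames are equal-sized nested lists of grayscale values."""
    points = []
    for y, (row1, row2) in enumerate(zip(frame1, frame2)):
        for x, (p1, p2) in enumerate(zip(row1, row2)):
            if abs(p1 - p2) > threshold:
                points.append((x, y))
    return points
```

A production system would instead apply edge detection or feature extraction, as the text notes, but the output is the same kind of pixel point or region used in the subsequent estimation step.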

In an exemplary step 283, based on the pixel point 301 or pixel region 302 in the first frame 300 and the pixel point 311 or pixel region 312 in the second frame 310, the image analysis apparatus 100 can estimate the pixel point 321 or pixel region 322 where the moving object will be positioned on the screen, in accordance with its movement, at a third time t3 a predetermined time interval after the second time t2.

The exemplary image analysis apparatus 100 may likewise estimate the pixel point 321 or pixel region 322 where a partial area of the moving object containing its privacy information will be positioned on the screen as the object moves.

The above-described pixel point 321 or pixel region 322 may be estimated according to the following equation (1).

[x, y]_t3 = [x, y]_t1 + t * ([x, y]_t2 - [x, y]_t1) ... Equation (1)

Here, [x, y]_t1 is the pixel value of the pixel point in the first frame at the first time corresponding to the moving object, [x, y]_t2 is the pixel value of the pixel point in the second frame at the second time corresponding to the moving object, [x, y]_t3 is the pixel value of the pixel point or pixel region where the moving object will be positioned on the screen in accordance with its movement, and t is the time interval.
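Equation (1) is a plain linear extrapolation of the motion observed between t1 and t2. A minimal sketch, under the assumption that t is expressed in units of the t1→t2 interval (so t = 2 places t3 one interval beyond t2 — the patent leaves the time scale implicit):

```python
def estimate_position(p_t1, p_t2, t):
    """Extrapolate the moving object's pixel coordinates to time t3 per
    equation (1): [x, y]_t3 = [x, y]_t1 + t * ([x, y]_t2 - [x, y]_t1).

    p_t1, p_t2 -- (x, y) coordinates observed at times t1 and t2
    t          -- time interval as a multiple of (t2 - t1)
    """
    x1, y1 = p_t1
    x2, y2 = p_t2
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
```

For example, a face that moved from (10, 20) to (14, 26) over one interval would be predicted at (18, 32) with t = 2.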

Accordingly, in an exemplary step 284, the image analysis apparatus 100 can perform a masking process on the pixel point 321 or the pixel region 322 according to the estimation.

For example, the image analysis apparatus 100 may apply a weight to the pixel value corresponding to the pixel point 321 or pixel region 322 and to the surrounding pixel values, changing the original pixel values so that the masking process is performed on the area of the screen where the moving object is expected to be positioned at the third time t3 as it moves.

At this time, effects such as blurring and mosaic may occur depending on the setting of the weight.
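The mosaic effect can be sketched as replacing each tile of the estimated region with its mean intensity; the block size, and the use of an unweighted mean rather than a particular weight setting, are illustrative assumptions:

```python
import numpy as np

def mosaic_mask(frame, region, block=4):
    """Return a copy of a grayscale frame with the rectangular region
    (x_min, y_min, x_max, y_max), inclusive, pixelated into
    block x block tiles. The original frame is left unchanged.
    """
    x0, y0, x1, y1 = region
    out = frame.copy()
    for y in range(y0, y1 + 1, block):
        for x in range(x0, x1 + 1, block):
            # Each tile is flattened to its mean intensity.
            tile = out[y:min(y1 + 1, y + block), x:min(x1 + 1, x + block)]
            tile[:] = int(tile.mean())
    return out
```

A Gaussian filter over the same region would give the blurring effect instead; in both cases only the estimated area is altered, not the rest of the frame.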

On the other hand, when some of the frames constituting the moving picture are missing because of the sampling rate or limited system resources, the image analysis apparatus 100 may re-estimate the pixel point 321 or pixel region 322 by changing the time interval t or the times at which the first frame 300 and the second frame 310 are selected.

As described above, the present embodiment determines the pixel values of the screen area requiring masking from the pixel values of previous frames, as part of image analysis processing that protects the privacy information of a moving object included in a moving image. It is therefore unnecessary to alter the original image separately, and masking can be performed on the area where the moving object is positioned even when some frames are missing.

Furthermore, as described above, the present embodiment can perform the masking process at the position where the movement is expected by tracking the movement of the moving object included in the moving image. Accordingly, the present embodiment has an advantage that masking processing can be effectively performed corresponding to a moving object moving at a high speed, and it is possible to prevent personal privacy invasion at the time of reproduction or exposure of a moving image.

<Another example of image analysis method>

FIG. 7 is a flowchart illustrating another example of the image analysis method according to an embodiment.

Referring to FIG. 7, the image analysis method 200 according to an exemplary embodiment may include steps 290 and 295 to prevent abuse of data stored in the database 120.

First, in step 290, the image analysis apparatus 100 may generate image log information including event data in which abuse is recorded for a face image (image) or a moving image stored in the database 120.

For example, when an unauthorized person attempts to delete a file, the video log information may include event data recording that deletion attempt.

The image analysis apparatus 100 may further check the hash value of each moving image or face image stored in the database 120 and generate tampering-confirmation data when a change occurs in the hash value.
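Such a hash check can be sketched with a standard cryptographic digest; SHA-256 is an assumption here, since the embodiment does not name a particular hash function:

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Digest recorded when the image or video is first stored."""
    return hashlib.sha256(data).hexdigest()

def tampered(data: bytes, stored_digest: str) -> bool:
    """True when the current digest no longer matches the one recorded
    at storage time, i.e. the stored face image or moving image changed."""
    return file_hash(data) != stored_digest
```

When tampered() returns True, the apparatus would generate the tampering-confirmation data described above.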

In step 295, the image analysis apparatus 100 compares the video log information, including the recorded event data, with the abuse condition data stored in the database 120 for judging abuse, and generates abuse notification data when an abuse condition is satisfied.

Here, abuse includes a change to a moving image or face image made by an unauthorized person or illegal user: because the hash value of each moving image or face image is checked, a changed (tampered) hash value indicates such a modification, and abuse notification data can be generated.

Here, the abuse condition data may cover cases such as: attempting to view images older than a set period after connecting to the database 120; attempting to damage, change, or delete an image even though its storage period has not yet expired; attempting to make an illegal copy; transmitting more images within a set period than the permitted number; a network connection from an IP outside the authenticated IP band; access to the database outside working hours; access from an IP other than the always-connected IP; access from other equipment; and deletion of the video audit log records.

In particular, regarding connection from an IP other than the always-connected IP: users mostly connect from their own computer, that is, the same IP, so a connection from a different computer, that is, a different IP, can be classified as abuse.
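The abuse conditions above amount to a rule check over access events. A minimal sketch, in which every field name and threshold is an illustrative assumption rather than the patent's actual schema:

```python
def classify_access(event, policy):
    """Return the names of abuse conditions matched by one access event.

    event and policy are plain dicts; the keys below are hypothetical.
    """
    hits = []
    if event["source_ip"] not in policy["allowed_ips"]:
        hits.append("unregistered_ip")          # not the always-connected IP
    if not policy["work_start"] <= event["hour"] < policy["work_end"]:
        hits.append("outside_working_hours")
    if event["images_sent"] > policy["max_images_per_window"]:
        hits.append("bulk_transfer")            # more images than permitted
    if event["action"] == "delete" and not event["authorized"]:
        hits.append("unauthorized_delete")
    return hits
```

A non-empty result would trigger generation of the abuse notification data in step 295.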

<Example of image analysis device>

FIG. 8 is a block diagram illustrating an example of an image analysis apparatus according to an embodiment.

Referring to FIG. 8, the image analysis apparatus according to an embodiment includes a storage management unit 410 that sets at least one face image as a reference image and stores it in a database in order to find a face image close to an acquired face image; an image extracting unit 420 that extracts a moving face image from a photographed moving image; and an image scanning unit 430 that scans the reference images stored in the database against the extracted face image, obtains the similarity between the extracted face image and each reference image based on a reference object having feature points, and finds the close face image in the database.

More specifically, in one embodiment, the image analysis apparatus includes a first object extracting unit 440 that extracts at least one first object having feature points from each stored face image; a reference object extracting unit 450 that extracts at least one reference object corresponding to the extracted first object; a first similarity obtaining unit 460 that compares the at least one first object with the stored reference object to obtain a first similarity; a second object extracting unit 470 that extracts at least one second object, identified by an identifier, from the face image; and a second similarity obtaining unit 480 that compares the extracted at least one second object with the stored reference object to obtain a second similarity.

Accordingly, in one embodiment, the image analysis apparatus further includes a proximity image extracting unit 490 that displays the obtained second similarities on the display screen and, when any of the displayed second similarities is selected, compares the selected second similarity with the first similarities to extract the matching or closest face image from the database; and a proximity image display unit 495 that displays the extracted close face image on the display screen.
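The flow through units 460 to 495 amounts to a nearest-match search: the selected second similarity is compared against the stored first similarities to pick the closest reference. A minimal sketch, in which the tolerance cut-off is an assumed parameter, not part of the patent:

```python
def nearest_reference(first_sims, second_sim, tolerance=0.05):
    """Return the reference id whose stored first similarity is closest
    to the selected second similarity, or None when nothing is within
    tolerance.

    first_sims -- {reference_id: first_similarity} from the database
    second_sim -- similarity of the extracted face object
    """
    best_id, best_gap = None, tolerance
    for ref_id, sim in first_sims.items():
        gap = abs(sim - second_sim)
        if gap <= best_gap:
            best_id, best_gap = ref_id, gap
    return best_id
```

The proximity image display unit 495 would then show the face image stored under the returned reference id.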

Thus, the present embodiment finds an identical or close face image through similarity-based comparison between the photographed image and the stored face images, using high-speed image scanning and multi-processing. Its utilization rate is therefore very high, which can increase sales profit.

While the present invention has been described in connection with what are presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements. The embodiments described above are therefore to be considered in all respects as illustrative and not restrictive.

100, 400: Image analysis apparatus 101: Imaging apparatus
110: Database 120: Processor
410: storage management unit 420: image extracting unit
430: image scanning unit 440: first object extracting unit
450: reference object extracting unit 460: first similarity obtaining unit
470: second object extracting unit 480: second similarity obtaining unit
490: proximity image extracting unit 495: proximity image displaying unit

Claims (14)

An image analysis method for finding a face image close to a face image acquired through an image analysis apparatus,
(a) storing at least one face image as a reference image in a database;
(b) extracting a moving face image from the photographed moving image; And
(c) scanning the reference images stored in the database against the extracted face image, obtaining the similarity between the extracted face image and each scanned reference image based on a reference object having feature points, and finding the close face image in the database by comparison,
Wherein the step (b) includes extracting at least one first object having feature points from each of the stored face images, and the step (c) includes comparing the extracted at least one first object with the reference object to obtain a first similarity for the at least one first object,
Wherein the step (b) further includes extracting at least one second object having feature points from the face image, and the step (c) includes comparing the extracted at least one second object with the reference object to obtain a second similarity for the at least one second object,
Wherein the step (c) further comprises:
Displaying the obtained second similarity on a display screen;
Comparing, when any second similarity among the displayed second similarities is selected, the selected second similarity with the first similarity to extract the matching or close face image from the database; And
Displaying the extracted neighboring face image on the display screen
And an image analyzing method.
The method according to claim 1,
Wherein the face image is a still face image or a photographic image.
The method according to claim 1,
The step (a)
Storing the extracted at least one first object in the database; And
Storing at least one reference object corresponding to the at least one first object in the database
And an image analyzing method.
delete
delete
delete
The method according to claim 1,
Wherein the similarity is pixel similarity.
The method according to claim 1,
Comparing a first frame, which is a recording of the moving image at a first time, with a second frame, which is a recording of the moving image at a second time after the first time, and calculating a pixel point or pixel region in the first frame and a pixel point or pixel region in the second frame corresponding to a moving object included in the moving image, to determine the moving object;
Based on the pixel point or the pixel region in the first frame and the pixel point or the pixel region in the second frame, in accordance with the movement of the moving object at the third time after the predetermined time interval from the second time, Estimating a pixel point or a pixel area where the moving object is located on the screen; And
Masking the pixel point or pixel region according to the estimation
Further comprising the steps of:
9. The method of claim 8,
Wherein the estimating step comprises:
Estimating a pixel point or a pixel area in which the moving object is located on the screen according to the following equation (1)
The masking process may include:
Applying a weight to the pixel value corresponding to the estimated pixel point or pixel region to change the original pixel values, so that masking processing is performed on the area of the screen where the moving object is expected to be positioned at the third time as it moves
And an image analyzing method.
[x, y]_t3 = [x, y]_t1 + t * ([x, y]_t2 - [x, y]_t1) ... Equation (1)
Here, [x, y]_t1 is the pixel value of the pixel point in the first frame at the first time corresponding to the moving object, [x, y]_t2 is the pixel value of the pixel point in the second frame at the second time corresponding to the moving object, [x, y]_t3 is the pixel value of the pixel point or pixel region where the moving object will be positioned on the screen in accordance with its movement, and t is the time interval.
The method according to claim 1,
Generating video log information including event data in which abuse is recorded for a face image or a moving image stored in the database; And
Comparing the video log information including the recorded event data with abuse condition data stored in the database for judging abuse, and generating abuse notification data
And an image analyzing method.
11. The method of claim 10,
Wherein the abuse condition data is judged on the basis of the video log information and covers cases such as: attempting to view past images again; attempting to damage, change, or delete an image even though its storage period remains; attempting to make an illegal copy; transmitting more moving images than the number set to be transmitted within a predetermined time; a network connection from an IP outside the authenticated IP band; connecting to the image analysis apparatus outside working hours; access from an IP other than the always-connected IP; and access by other equipment,
Image analysis method.
An image analyzing apparatus for finding a face image close to an acquired face image,
A storage management unit configured to store at least one face image as a reference image in a database;
An image extracting unit for extracting a moving face image from a photographed moving image;
An image scanning unit that scans the reference images stored in the database against the extracted face image, obtains the similarity between the scanned face image and each reference image based on the reference object having feature points, and finds the close face image in the database;
A first object extracting unit for extracting at least one first object having a feature point for each of the stored face images;
A reference object extracting unit extracting at least one reference object corresponding to the extracted at least one first object;
A first degree of similarity acquiring unit for acquiring a first degree of similarity by comparing the stored at least one first object with the stored reference object;
A second object extracting unit for extracting at least one second object identified by the identifier from the face image; And
A second similarity degree obtaining unit for comparing the extracted at least one second object with the stored reference object to obtain a second similarity degree,
And an image analyzer.
delete
13. The method of claim 12,
A proximity image extracting unit that displays the obtained second similarities on a display screen and, when any second similarity among the displayed second similarities is selected, compares the selected second similarity with the first similarity to extract the matching or close face image from the database; And
A proximity image display unit for displaying the extracted proximity face image on the display screen,
Further comprising an image analyzer.
KR1020160021333A 2016-02-23 2016-02-23 Method and apparatus for analyzing video and image KR101695655B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160021333A KR101695655B1 (en) 2016-02-23 2016-02-23 Method and apparatus for analyzing video and image


Publications (1)

Publication Number Publication Date
KR101695655B1 true KR101695655B1 (en) 2017-01-12

Family

ID=57811577

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160021333A KR101695655B1 (en) 2016-02-23 2016-02-23 Method and apparatus for analyzing video and image

Country Status (1)

Country Link
KR (1) KR101695655B1 (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101058592B1 (en) * 2010-12-14 2011-08-23 주식회사 포드림 System for auditing picture information abuse
KR20110114384A (en) * 2010-04-13 2011-10-19 주식회사 소프닉스 Automatic object processing method in movie and authoring apparatus for object service
KR20120035299A (en) 2010-10-05 2012-04-16 한국인터넷진흥원 Image protection processing apparatus for privacy protection, and image security system and method using the same
KR101215650B1 (en) * 2012-06-15 2012-12-26 (주)리얼허브 Apparatus and method for masking a moving object for protecting the privacy information included in moving picture
KR101215948B1 (en) 2012-04-02 2012-12-27 주식회사 뉴인테크 Image information masking method of monitoring system based on face recognition and body information
KR20130047223A (en) 2011-10-31 2013-05-08 한국전자통신연구원 Apparatus and method for masking privacy region based on monitoring video images
KR101468407B1 (en) 2013-05-24 2014-12-03 주식회사 보라시스템즈 Digital forensic photographing device and digital forensic photographing system installed in car for preventing abuse of personal image information using the device
KR20160011916A (en) * 2014-07-23 2016-02-02 삼성전자주식회사 Method and apparatus of identifying user using face recognition


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177481A (en) * 2021-04-29 2021-07-27 北京百度网讯科技有限公司 Target detection method and device, electronic equipment and storage medium
CN113177481B (en) * 2021-04-29 2023-09-29 北京百度网讯科技有限公司 Target detection method, target detection device, electronic equipment and storage medium
CN114444940A (en) * 2022-01-27 2022-05-06 黑龙江邮政易通信息网络有限责任公司 Enterprise data acquisition and analysis system based on big data
CN114444940B (en) * 2022-01-27 2023-12-26 黑龙江邮政易通信息网络有限责任公司 Enterprise data acquisition and analysis system based on big data

Similar Documents

Publication Publication Date Title
CN111046752B (en) Indoor positioning method, computer equipment and storage medium
Ba et al. ABC: Enabling smartphone authentication with built-in camera
JP6789601B2 (en) A learning video selection device, program, and method for selecting a captured video masking a predetermined image area as a learning video.
CN104333694B (en) A method of prevent shops from visiting fraud of taking pictures
CN105659279B (en) Information processing apparatus, information processing method, and computer program
JP2007158421A (en) Monitoring camera system and face image tracing recording method
US11050920B2 (en) Photographed object recognition method, apparatus, mobile terminal and camera
US20230260313A1 (en) Method for identifying potential associates of at least one target person, and an identification device
Korshunov et al. Framework for objective evaluation of privacy filters
US9742990B2 (en) Image file communication system with tag information in a communication network
CN110889314B (en) Image processing method, device, electronic equipment, server and system
US11520931B2 (en) Privacy masking method using format-preserving encryption in image security system and recording medium for performing same
KR101951605B1 (en) Cctv image security system to prevent image leakage
KR101695655B1 (en) Method and apparatus for analyzing video and image
JP2022177267A (en) Authentication system, authentication method, and program
US9025833B2 (en) System and method for video-assisted identification of mobile phone users
US10713498B2 (en) System and method for associating an identifier of a mobile communication terminal with a person-of-interest, using video tracking
KR20190047218A (en) Method and apparatus of providing traffic information, and computer program for executing the method.
KR101929212B1 (en) Apparatus and method for masking moving object
EP3751851A1 (en) Method of highlighting an object of interest in an image or video
WO2017157435A1 (en) A method and system for visual privacy protection for mobile and wearable devices
CN114387674A (en) Living body detection method, living body detection system, living body detection apparatus, storage medium, and program product
Sanjana et al. Real-Time Piracy Detection Based on Thermogram Analysis and Machine Learning Techniques
CN111078804A (en) Information association method, system and computer terminal
EP4296996A1 (en) Secure search method, secure search system, secure search device, encryption device, searcher terminal, and program

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant