KR101695655B1 - Method and apparatus for analyzing video and image - Google Patents
Method and apparatus for analyzing video and image Download PDFInfo
- Publication number
- KR101695655B1 (Application KR1020160021333A)
- Authority
- KR
- South Korea
- Prior art keywords
- image
- similarity
- face image
- pixel
- moving
- Prior art date
Links
Images
Classifications
- G06K9/00228
- G06K9/00288
- G06K9/6201
- G06K9/64
Abstract
The present embodiment is characterized by setting at least one face image as a reference image and storing it in a database, extracting a moving face image from a captured moving image, scanning the reference images stored in the database against the extracted face image, and determining the similarity between the scanned face image and each reference image based on a reference object having feature points, so that the closest face image is found in the database.
Thus, in the present embodiment, high-speed image scanning and multi-processing are performed to find a matching or nearby face image by comparing the similarity between the captured image and the stored face images, and the result can be utilized in various applications.
Description
The present invention relates to an image analysis method and apparatus, and more particularly, to an image analysis method and apparatus for quickly identifying a photographed face image and preventing its misuse and identity exposure.
With the development of Internet technology, various image devices for capturing images and reproducing the captured images have been developed: for example, closed-circuit television (CCTV) cameras, cameras, and smartphones for capturing images, and TVs and smartphones for reproducing them.
Because of such imaging devices, captured face images are exposed to a large number of people or leaked, increasing the risk of identity exposure or abuse.
On the other hand, there is a positive aspect in which images exposed from various video devices are used to arrest criminals.
For example, a face image obtained or exposed from various imaging devices can be searched against the face images stored in a database. In the past, however, comparing images to detect a criminal took a great deal of time, and accurately identifying the criminal was difficult.
For example, when face images acquired from various imaging devices are blurry or unclear, it is difficult to accurately compare the images.
It is an object of the present invention to provide an image analysis method and apparatus capable of instantly finding, by automatic high-speed scanning rather than manual search, stored image data that is identical or close to a captured image.
It is another object of the present invention to provide an image analysis method and apparatus capable of masking a moving face image exposed by video devices.
It is another object of the present invention to provide an image analysis method and apparatus that can prevent a stored image from being leaked and abused.
According to one embodiment, there is provided an image analysis method for finding a face image close to a face image acquired through an image analysis apparatus, the method comprising: (a) setting at least one face image as a reference image and storing it in a database; (b) extracting a moving face image from a photographed moving image; and (c) scanning the reference images stored in the database against the extracted face image and calculating similarities between the scanned face image and the reference images based on a reference object having feature points, thereby finding the closest face image in the database.
The face image may be a still face image or a photographic image.
Step (b) may include extracting at least one first object, identified by an identifier, from each of the stored face images, and step (a) may include storing the extracted at least one first object in the database and storing at least one reference object corresponding to the at least one first object in the database.
The step (c) may include obtaining the first degree of similarity by comparing the stored at least one first object with the stored reference object.
Step (b) may further include extracting at least one second object, identified by the identifier, from the face image, and step (c) may further include comparing the extracted at least one second object with the stored reference object to acquire the second similarity.
The method may further comprise: (d) displaying the obtained second similarities on a display screen; (e) when any of the displayed second similarities is selected, comparing the selected second similarity with the first similarity to extract the matching or nearby face image from the database; and (f) displaying the extracted nearby face image on the display screen.
The similarity may be pixel similarity.
The image analysis method may further comprise: comparing a first frame, which is a recording of the moving image at a first time, with a second frame, which is a recording of the moving image at a second time after the first time, to calculate the pixel points or pixel regions in the first and second frames corresponding to a moving object included in the moving image, thereby discriminating the moving object; estimating, based on the pixel point or pixel region in the first frame and the pixel point or pixel region in the second frame, the pixel point or pixel region where the moving object will be located on the screen at a third time, a predetermined time interval after the second time, in accordance with the movement of the moving object; and performing masking processing on the estimated pixel point or pixel region.
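The frame-comparison step can be sketched as simple frame differencing, a minimal illustration under assumptions: the function name, the grayscale list-of-lists frame format, and the change threshold are not from the patent.

```python
def find_moving_pixels(frame1, frame2, threshold=30):
    """Return the set of (row, col) pixel points whose value changed by
    more than `threshold` between two frames, i.e. the pixel region
    occupied by the moving object at the first and second times."""
    changed = set()
    for r, (row1, row2) in enumerate(zip(frame1, frame2)):
        for c, (p1, p2) in enumerate(zip(row1, row2)):
            if abs(p1 - p2) > threshold:
                changed.add((r, c))
    return changed

# Toy example: a bright 2x2 "object" moves one pixel to the right
# between the first frame (t1) and the second frame (t2).
f1 = [[0, 0, 0, 0],
      [255, 255, 0, 0],
      [255, 255, 0, 0],
      [0, 0, 0, 0]]
f2 = [[0, 0, 0, 0],
      [0, 255, 255, 0],
      [0, 255, 255, 0],
      [0, 0, 0, 0]]
moving = find_moving_pixels(f1, f2)  # pixels the object left or entered
```

A real system would operate on camera frames rather than toy arrays, but the principle of comparing corresponding pixel points across the two recordings is the same.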
The estimating step may include estimating the pixel point or pixel region where the moving object will be located on the screen according to equation (1) below, and the masking step may include applying a weight to the pixel value corresponding to the estimated area, changing the original pixel value, so that masking is applied to the area on the screen where the moving object is expected to be located at the third time as it moves.
[x, y]_t3 = [x, y]_t1 + t * ([x, y]_t2 - [x, y]_t1) ... Equation (1)
Here, [x, y]_t1 is the pixel point in the first frame at the first time corresponding to the moving object, [x, y]_t2 is the pixel point in the second frame at the second time corresponding to the moving object, [x, y]_t3 is the pixel point or pixel region where the moving object is expected to be located on the screen in accordance with its movement, and t indicates the time interval.
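Equation (1) is a straightforward linear extrapolation of the object's motion; a minimal sketch follows (the function and variable names are illustrative, not from the patent).

```python
def estimate_position(p_t1, p_t2, t=1.0):
    """Linearly extrapolate the moving object's pixel point per
    equation (1): [x, y]_t3 = [x, y]_t1 + t * ([x, y]_t2 - [x, y]_t1).

    p_t1, p_t2: (x, y) pixel points of the object in the first and
    second frames; t is the time-interval factor from the patent text.
    """
    x1, y1 = p_t1
    x2, y2 = p_t2
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# An object at (10, 20) at the first time and (14, 26) at the second,
# extrapolated one further equal interval ahead (t = 2).
predicted = estimate_position((10, 20), (14, 26), t=2.0)
```

Masking can then be applied at `predicted` before the third frame arrives, which is what lets the method keep up with fast-moving objects.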
The image analysis method may further comprise: generating image log information including event data in which abuse of a face image or moving image stored in the database is recorded; and generating abuse notification data when the image log information including the recorded event data matches abuse condition data, stored in the database, for judging abuse.
The abuse condition data may include cases such as: an attempt to illegally copy the face image or moving image; an attempt to transmit the face image or moving image remotely; an attempt to delete the face image; a connection to the image analysis apparatus outside working hours; a network connection from an IP address that is not in the authenticated IP band; deletion of the image log information without reason; and access from an IP address that is not an authorized access IP.
According to another aspect of the present invention, there is provided an image analysis apparatus for finding a face image close to an acquired face image, the apparatus comprising: a storage management unit configured to store at least one face image as a reference image in a database; an image extracting unit for extracting a moving face image from a photographed moving image; and an image scanning unit that scans the reference images stored in the database against the extracted face image and obtains similarities between the scanned face image and the reference images based on a reference object having feature points, thereby finding the nearby face image in the database.
The image analysis apparatus may comprise: a first object extracting unit for extracting at least one first object, identified by an identifier, from each of the stored face images; a reference object extracting unit for extracting at least one reference object corresponding to the extracted at least one first object; a first similarity acquiring unit for comparing the stored at least one first object with the stored reference object to acquire a first similarity; a second object extracting unit for extracting at least one second object, identified by the identifier, from the face image; and a second similarity acquiring unit for comparing the extracted at least one second object with the stored reference object to acquire a second similarity.
The image analysis apparatus may further comprise: a proximity image extracting unit that displays the obtained second similarities on a display screen and, when any of the displayed second similarities is selected, compares the selected second similarity with the first similarity to extract the matching face image from the database; and a proximity image display unit for displaying the extracted nearby face image on the display screen.
As described above, according to the present embodiment, a matching or nearby face image can be found by comparing, through high-speed image scanning and multi-processing, the similarity between a photographed image and the stored face images, so the method has a very high utilization rate in places where it is deployed and can increase sales profit.
In addition, the present embodiment analyzes stored images on the basis of image log information, thereby preventing image leakage and abuse.
In addition, in the present embodiment, masking is performed to follow the face of a moving person, so the face image is not exposed in the moving picture and the person's identity is protected.
Other advantages not mentioned above will become apparent to those skilled in the art from the following description.
FIG. 1 is a flowchart illustrating an example of an image analysis method according to an embodiment.
FIG. 2 is a block diagram showing an example of an image analysis apparatus for realizing the image analysis method of FIG. 1.
FIG. 3 is a flowchart showing an example of a similarity algorithm according to an embodiment.
FIG. 4 is a diagram showing data states processed in the image analysis method of FIGS. 1 and 3.
FIG. 5 is a flowchart showing another example of the image analysis method according to the embodiment.
FIGS. 6A to 6C are conceptual diagrams explaining a masking process for protecting privacy information of a moving object included in moving images.
FIG. 7 is a flowchart illustrating another example of the image analysis method according to an embodiment.
FIG. 8 is a block diagram illustrating an example of an image analysis apparatus according to an embodiment.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings, wherein like reference numerals are used to designate identical or similar elements, and redundant description thereof will be omitted.
The terms including ordinals such as 'first' and 'second' disclosed herein may be used to describe various elements, but the elements are not limited by the terms. The terms are used to distinguish one component from another.
In the following description of the embodiments of the present invention, a detailed description of related arts will be omitted when it is determined that the gist of the embodiments disclosed herein may be obscured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and are intended to cover the invention as claimed together with its modifications, equivalents, and alternatives.
Terms such as 'comprise' or 'include', as used in the following examples, should be understood not to exclude components other than those expressly mentioned; the described configurations may include other components.
<Example of image analysis method>
FIG. 1 is a flowchart illustrating an example of an image analysis method according to an embodiment. FIG. 2 is a block diagram illustrating an example of an image analysis apparatus for realizing the image analysis method of FIG. 1.
The
The communication network mentioned may communicate wirelessly with a network such as the Internet, also called the World Wide Web (WWW), a cellular telephone network, or an intranet such as a metropolitan area network (MAN).
The wireless network may be a cellular network such as Global System for Mobile Communications (GSM), Enhanced Data Rates for GSM Evolution (EDGE), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), or another cellular network.
For example, if the network data access element(s) is part of a GSM network, it may include a base transceiver station (BTS), a base station controller (BSC), a mobile switching center (MSC), a Serving GPRS Support Node (SGSN), and the like. As another example, if the network data access element(s) is part of a LAN, it may include one or more network switches, routers, hubs, and/or the like.
On the other hand, if the wireless network uses short-range communication, the short-range communication may be wireless LAN, Wi-Fi, Bluetooth, ZigBee, Wi-Fi Direct (WFD), Infrared Data Association (IrDA), Bluetooth Low Energy (BLE), or Near Field Communication (NFC).
However, the present invention is not limited to the
The input means may be a storage medium such as a USB drive, a DVD, or a CD.
On the other hand, the
The
The
For this, the
On the other hand, the above-mentioned
In addition, a server, a network device (e.g., a switch, a router, etc.) for image processing may be formed between the
Hereinafter, an image analysis method realized by the above-described
Referring to FIG. 1, an
First, in an
The at least one face image may be an image of a search target, a still face image, or a photograph image.
In an
Here, since the human image displayed in the moving image has mobility in the image, the face image can be extracted from the moving human image. To this end, the
In the
In order to obtain the degree of similarity between the facial image and the reference image, the
As a result, the similarity of the face image can be determined by how closely the pixel values of at least one feature point of the face image match the intrinsic pixel values of the reference object.
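A minimal sketch of such a feature-point, pixel-value similarity measure follows; the fraction-of-matching-points scoring scheme and the tolerance are illustrative assumptions, since the patent does not fix a particular formula.

```python
def pixel_similarity(face_points, reference_points, tolerance=10):
    """Score how closely the pixel values at corresponding feature
    points (e.g. eyes, nose, mouth, forehead, jaw) of a face image
    match the intrinsic pixel values of the reference object.

    Both arguments map feature-point names to pixel values; the result
    is the fraction of shared feature points whose values agree within
    `tolerance`, so 1.0 means a perfect match.
    """
    common = set(face_points) & set(reference_points)
    if not common:
        return 0.0
    matches = sum(
        1 for name in common
        if abs(face_points[name] - reference_points[name]) <= tolerance
    )
    return matches / len(common)

face = {"eye": 120, "nose": 95, "mouth": 60, "forehead": 200, "jaw": 80}
ref  = {"eye": 125, "nose": 90, "mouth": 30, "forehead": 205, "jaw": 82}
score = pixel_similarity(face, ref)  # 4 of the 5 points fall within tolerance
```

The same scoring can serve for both the first similarity (stored first object vs. reference object) and the second similarity (extracted second object vs. reference object) described in the text.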
Likewise, the
In this case, the similarity of the reference image can be stored in the
In this state, the
Assuming that these processing steps are collectively referred to as a 'similarity algorithm', the
Therefore, by applying a similarity algorithm that differs from existing face analysis algorithms, the present embodiment can detect a face image close to a captured face image more accurately and quickly than conventional approaches.
Hereinafter, the above-described similarity algorithm processing will be more specifically exemplified.
<Processing example of the similarity algorithm>
FIG. 3 is a flowchart showing an example of a similarity algorithm according to an embodiment.
Referring to FIG. 3, the similarity algorithm according to an exemplary embodiment may include
First, in
For example, the first object represents feature points such as eyes, nose, mouth, forehead and jaw in the face image, and each pixel has a pixel value.
In an
The reference object may have a unique reference pixel value for comparing the first object.
Accordingly, in an
The first degree of similarity may be a result indicating the degree of pixel correspondence between the first object having the feature point and the reference object.
In an
For example, since the second object has a pixel value for each feature point such as eyes, nose, mouth, forehead and jaw in the face image, the
Similarly, in an
It goes without saying that knowing the first similarity of the first object and the second similarity of the second object ultimately gives the similarity between any reference image and the face image.
Thus, in an
In an exemplary step 250, the
As a result, in
In
<Example of Data Status>
FIG. 4 is a diagram showing data states processed in the image analysis method of FIGS. 1 and 3.
Referring to FIG. 4, in one embodiment, the
Furthermore, the
The exemplary
If a certain degree of similarity among the similarities of the second objects displayed on the display screen is selected by the user, the
As a result of the high-speed scan, the
As described above, the present embodiment finds the same or a nearby face image through similarity-based comparison between the photographed image and the stored face images using high-speed image scanning and multi-processing, and can thereby increase its utilization.
The image analysis method described above may be implemented in the form of program instructions that can be executed through various computer components and recorded in a computer-readable medium.
The computer readable medium may be any medium accessible by the processor. Such media can include both volatile and nonvolatile media, removable and non-removable media, communication media, storage media, and computer storage media.
Communication media may include computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any other form of information delivery medium known in the art.
The storage medium may be any type of storage medium such as RAM, flash memory, ROM, EPROM, electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, or a compact disc read-only memory (CD-ROM).
Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data.
Such computer storage media include RAM, ROM, EPROM, EEPROM, flash memory, other solid-state memory technologies, CD-ROMs, digital versatile discs (DVDs) or other optical storage, magnetic cassettes, magnetic tape, and magnetic disk storage, and may store program instructions.
Examples of program instructions may include machine language code such as those produced by a compiler, as well as high-level language code that may be executed by a computer using an interpreter or the like.
<Other examples of image analysis method>
FIG. 5 is a flowchart showing another example of the image analysis method according to the embodiment, and FIGS. 6A to 6C are conceptual diagrams illustrating a masking process for protecting the privacy information of a moving object included in a moving image.
Referring to FIG. 5, the
First, in an
For example, the
At this time, the moving object may be a person, an animal, an automobile, a train, and other objects that pass through the region to be photographed in photographing a region fixed by the
Since the image including the moving object is inputted in real time, it can be composed of a plurality of
For example, FIGS. 6A to 6C show a plurality of frames (300, 310, 320) in time sequence, that is, at the first time t1, the second time t2, and the third time t3.
In an
The pixel points 301 and 311 or the
In the exemplary embodiment, the
The exemplary
The above-described
[x, y]_t3 = [x, y]_t1 + t * ([x, y]_t2 - [x, y]_t1) ... Equation (1)
Here, [x, y]_t1 is the pixel point in the first frame at the first time corresponding to the moving object, [x, y]_t2 is the pixel point in the second frame at the second time corresponding to the moving object, [x, y]_t3 is the pixel point or pixel region where the moving object is expected to be located on the screen as it moves, and t denotes the time interval.
Accordingly, in an
For example, the
At this time, effects such as blurring and mosaic may occur depending on the setting of the weight.
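The weight-based masking described above can be sketched as follows; this is a simplified illustration in which a constant weight darkens the estimated region toward a flat value, one of several possible effects, and the names used are assumptions.

```python
def mask_region(frame, region, weight=0.1):
    """Mask the estimated region by applying a weight to the original
    pixel values: each masked pixel keeps only `weight` of its original
    value, hiding detail much as a strong darkening effect would.

    frame: grayscale image as a list of rows; region: set of (row, col)
    pixel points predicted to contain the moving object at time t3.
    """
    out = [row[:] for row in frame]          # copy; input frame untouched
    for r, c in region:
        out[r][c] = int(out[r][c] * weight)
    return out

frame = [[100, 200], [150, 250]]
masked = mask_region(frame, {(0, 1), (1, 1)}, weight=0.1)
# masked == [[100, 20], [150, 25]]
```

Choosing a different weighting scheme, for instance averaging each masked pixel with its neighbors, yields a blur rather than a darkening, which matches the blurring and mosaic effects mentioned above.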
On the other hand, in the case where a part of a frame constituting the entire moving picture is missing due to a sampling rate of a moving picture or a system resource, the
As described above, according to the present embodiment, the pixel value of the screen area requiring masking is determined using pixel values from the previous frames in the image analysis processing that protects the privacy information of a moving object included in a moving image, so the original image need not be modified beyond the masked area, and masking can be applied to the area where the moving object is located even if some frames are missing.
Furthermore, as described above, the present embodiment can perform the masking process at the position where the movement is expected by tracking the movement of the moving object included in the moving image. Accordingly, the present embodiment has an advantage that masking processing can be effectively performed corresponding to a moving object moving at a high speed, and it is possible to prevent personal privacy invasion at the time of reproduction or exposure of a moving image.
<Another example of image analysis method>
FIG. 7 is a flowchart illustrating another example of the image analysis method according to an embodiment.
Referring to FIG. 7, the
First, in
For example, if an unauthorized person attempts to delete a file, event data on the deletion attempt may be generated in the image log information.
The
In
Here, abuse refers to a change (modulation) of a moving image or face image by an unauthorized or illegal user: the hash value of the moving image or face image is checked, and if the hash value has changed, abuse notification data can be generated.
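The hash check described here can be sketched with a standard cryptographic hash; SHA-256 is an assumption for illustration, since the patent does not specify a particular hash function, and the function names are hypothetical.

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Return the SHA-256 digest of stored image/video bytes."""
    return hashlib.sha256(data).hexdigest()

def is_tampered(current_bytes: bytes, stored_digest: str) -> bool:
    """An image is treated as modulated (abused) when its current hash
    no longer matches the digest recorded when it was stored."""
    return file_digest(current_bytes) != stored_digest

original = b"face-image-bytes"
digest = file_digest(original)          # recorded at storage time
assert not is_tampered(original, digest)          # untouched: no alarm
assert is_tampered(b"face-image-bytes!", digest)  # changed: raise abuse notification
```

Because any single-bit change to the file produces a different digest, comparing digests is a cheap way to detect unauthorized modification before generating the abuse notification data.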
Here, as the abuse condition data, when an attempt is made to view an image after a period of time when it is connected to the
In particular, since a user is normally connected from the same computer, that is, the same IP address, a connection from a different computer, that is, an IP address other than the usual one, can be classified as misuse.
<Example of image analysis device>
FIG. 8 is a block diagram illustrating an example of an image analysis apparatus according to an embodiment.
Referring to FIG. 8, the image analysis apparatus according to an embodiment includes a
More specifically, in one embodiment, the image analysis apparatus includes a first object extraction unit 440 for extracting at least one first object having a feature point from each stored face image, a reference object extraction unit 450 for extracting at least one reference object corresponding to the extracted first object, a first similarity acquisition unit 460 that compares the stored at least one first object with the stored reference object to obtain a first similarity, a second object extraction unit 470 for extracting at least one second object, identified by the identifier, from the face image, and a second similarity acquisition unit 480 that compares the extracted at least one second object with the stored reference object to obtain a second similarity, as shown in FIG. 8.
Accordingly, in one embodiment, the image analysis apparatus further includes a proximity image extracting unit 490 that displays the obtained second similarities on the display screen and, when any of the displayed second similarities is selected, compares the selected second similarity with the first similarity to extract the matching face image from the database, and a proximity image display unit 495 that displays the extracted nearby face image on the display screen.
Thus, the present embodiment finds the same or a nearby face image through similarity-based comparison between the captured image and the stored face images using high-speed image scanning and multi-processing; its utilization rate is therefore very high, which can increase sales profit.
While the present invention has been described in connection with what are presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements. The embodiments described above are therefore to be considered in all respects as illustrative and not restrictive.
100, 400: Image analysis apparatus 101: Imaging apparatus
110: Database 120: Processor
410: storage management unit 420: image extracting unit
430: image scanning unit 440: first object extracting unit
450: reference object extracting unit 460: first similarity obtaining unit
470: second object extracting unit 480: second similarity obtaining unit
490: proximity image extracting unit 495: proximity image displaying unit
Claims (14)
(a) storing at least one face image as a reference image in a database;
(b) extracting a moving face image from the photographed moving image; And
(c) scanning the reference images stored in the database against the extracted face image, obtaining similarities between the extracted face image and the scanned reference images based on a reference object having feature points, and finding the closest face image in the database,
Wherein step (b) includes extracting at least one first object having a feature point from each of the stored face images, and step (c) includes comparing the extracted at least one first object with the reference object to obtain a first similarity for the at least one first object,
Wherein step (b) further includes extracting at least one second object having a feature point from the face image, and step (c) includes comparing the extracted at least one second object with the reference object to obtain a second similarity for the at least one second object,
The step (c)
Displaying the obtained second similarity on a display screen;
When any of the displayed second similarities is selected, comparing the selected second similarity with the first similarity to extract the matching or nearby face image from the database; And
Displaying the extracted neighboring face image on the display screen
And an image analyzing method.
Wherein the face image is a still face image or a photographic image.
The step (a)
Storing the extracted at least one first object in the database; And
Storing at least one reference object corresponding to the at least one first object in the database
And an image analyzing method.
Wherein the similarity is pixel similarity.
Comparing a first frame, which is a recording of the moving image at a first time, with a second frame, which is a recording of the moving image at a second time after the first time, to calculate the pixel points or pixel regions in the first and second frames corresponding to a moving object included in the moving image, thereby discriminating the moving object;
Estimating, based on the pixel point or pixel region in the first frame and the pixel point or pixel region in the second frame, the pixel point or pixel region where the moving object will be located on the screen at a third time, a predetermined time interval after the second time, in accordance with the movement of the moving object; And
Performing masking processing on the pixel point or pixel region according to the estimation
Further comprising the steps of:
Wherein the estimating step comprises:
Estimating a pixel point or a pixel area in which the moving object is located on the screen according to the following equation (1)
The masking process may include:
Applying a weight to the original pixel value corresponding to the estimated pixel point or pixel region to change the original pixel value, so that masking processing is applied to the area on the screen where the moving object is expected to be located at the third time as it moves
And an image analyzing method.
[x, y]_t3 = [x, y]_t1 + t * ([x, y]_t2 - [x, y]_t1) ... Equation (1)
Here, [x, y]_t1 is the pixel point in the first frame at the first time corresponding to the moving object, [x, y]_t2 is the pixel point in the second frame at the second time corresponding to the moving object, [x, y]_t3 is the pixel point or pixel region where the moving object is expected to be located on the screen in accordance with its movement, and t is the time interval.
Generating video log information including event data in which abuse is recorded for a face image or a moving image stored in the database; And
Comparing the image log information including the recorded event data with abuse condition data, stored in the database, for judging abuse, and generating abuse notification data when they match
And an image analyzing method.
Wherein the abuse condition data includes a case where an attempt is made to view a past image again, a case where an attempt is made to restore, change, or delete an image although its storage period remains, a case where an attempt is made to copy an image illegally, a case where a connection to the image analysis apparatus is made outside working hours, a case where the network-connected IP address is not in the authenticated IP band, a case where more images than the number of moving images permitted within a predetermined time are transmitted, and a case of access by another device that is not based on the image log information,
Image analysis method.
A storage management unit configured to store at least one face image as a reference image in a database;
An image extracting unit for extracting a moving face image from a photographed moving image;
An image scanning unit that scans the reference images stored in the database against the extracted face image and obtains similarities between the scanned face image and the reference images based on a reference object having feature points, thereby finding the nearby face image in the database;
A first object extracting unit for extracting at least one first object having a feature point for each of the stored face images;
A reference object extracting unit extracting at least one reference object corresponding to the extracted at least one first object;
A first degree of similarity acquiring unit for acquiring a first degree of similarity by comparing the stored at least one first object with the stored reference object;
A second object extracting unit for extracting at least one second object identified by the identifier from the face image; And
A second similarity acquiring unit for acquiring a second similarity by comparing the extracted at least one second object with the stored reference object,
An image analysis apparatus comprising these units.
A proximity image extracting unit for displaying the obtained second similarities on a display screen and, when an arbitrary second similarity among the displayed second similarities is selected, comparing the selected second similarity with the first similarity and extracting a proximate face image from the database; and
A proximity image display unit for displaying the extracted proximate face image on the display screen,
The image analysis apparatus further comprising these units.
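The two similarity stages and the proximity lookup above could be sketched like this; the cosine measure and the toy feature vectors standing in for the extracted objects are assumptions for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_face(second_similarity, first_similarities):
    """Given the selected second similarity and the first similarities of
    the stored face images, return the index of the stored image whose
    first similarity is closest -- the 'proximate' face image."""
    diffs = [abs(second_similarity - s) for s in first_similarities]
    return int(np.argmin(diffs))

# Toy feature vectors standing in for the extracted first/second objects.
reference = [1.0, 0.0, 1.0]
stored_first_objects = [[1.0, 0.1, 0.9], [0.0, 1.0, 0.2], [0.9, 0.0, 1.1]]
second_object = [0.95, 0.05, 1.0]

# First similarities: stored first objects vs. the reference object.
first_sims = [cosine_similarity(o, reference) for o in stored_first_objects]
# Second similarity: the identified second object vs. the reference object.
second_sim = cosine_similarity(second_object, reference)

print(nearest_face(second_sim, first_sims))  # index 0 is the closest
```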
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160021333A KR101695655B1 (en) | 2016-02-23 | 2016-02-23 | Method and apparatus for analyzing video and image |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101695655B1 true KR101695655B1 (en) | 2017-01-12 |
Family
ID=57811577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160021333A KR101695655B1 (en) | 2016-02-23 | 2016-02-23 | Method and apparatus for analyzing video and image |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101695655B1 (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101058592B1 (en) * | 2010-12-14 | 2011-08-23 | 주식회사 포드림 | System for auditing picture information abuse |
KR20110114384A (en) * | 2010-04-13 | 2011-10-19 | 주식회사 소프닉스 | Automatic object processing method in movie and authoring apparatus for object service |
KR20120035299A (en) | 2010-10-05 | 2012-04-16 | 한국인터넷진흥원 | Image protection processing apparatus for privacy protection, and image security system and method using the same |
KR101215650B1 (en) * | 2012-06-15 | 2012-12-26 | (주)리얼허브 | Apparatus and method for masking a moving object for protecting the privacy information included in moving picture |
KR101215948B1 (en) | 2012-04-02 | 2012-12-27 | 주식회사 뉴인테크 | Image information masking method of monitoring system based on face recognition and body information |
KR20130047223A (en) | 2011-10-31 | 2013-05-08 | 한국전자통신연구원 | Apparatus and method for masking privacy region based on monitoring video images |
KR101468407B1 (en) | 2013-05-24 | 2014-12-03 | 주식회사 보라시스템즈 | Digital forensic photographing device and digital forensic photographing system installed in car for preventing abuse of personal image information using the device |
KR20160011916A (en) * | 2014-07-23 | 2016-02-02 | 삼성전자주식회사 | Method and apparatus of identifying user using face recognition |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113177481A (en) * | 2021-04-29 | 2021-07-27 | 北京百度网讯科技有限公司 | Target detection method and device, electronic equipment and storage medium |
CN113177481B (en) * | 2021-04-29 | 2023-09-29 | 北京百度网讯科技有限公司 | Target detection method, target detection device, electronic equipment and storage medium |
CN114444940A (en) * | 2022-01-27 | 2022-05-06 | 黑龙江邮政易通信息网络有限责任公司 | Enterprise data acquisition and analysis system based on big data |
CN114444940B (en) * | 2022-01-27 | 2023-12-26 | 黑龙江邮政易通信息网络有限责任公司 | Enterprise data acquisition and analysis system based on big data |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111046752B (en) | Indoor positioning method, computer equipment and storage medium | |
Ba et al. | ABC: Enabling smartphone authentication with built-in camera | |
JP6789601B2 (en) | A learning video selection device, program, and method for selecting a captured video masking a predetermined image area as a learning video. | |
CN104333694B (en) | A method of prevent shops from visiting fraud of taking pictures | |
CN105659279B (en) | Information processing apparatus, information processing method, and computer program | |
JP2007158421A (en) | Monitoring camera system and face image tracing recording method | |
US11050920B2 (en) | Photographed object recognition method, apparatus, mobile terminal and camera | |
US20230260313A1 (en) | Method for identifying potential associates of at least one target person, and an identification device | |
Korshunov et al. | Framework for objective evaluation of privacy filters | |
US9742990B2 (en) | Image file communication system with tag information in a communication network | |
CN110889314B (en) | Image processing method, device, electronic equipment, server and system | |
US11520931B2 (en) | Privacy masking method using format-preserving encryption in image security system and recording medium for performing same | |
KR101951605B1 (en) | Cctv image security system to prevent image leakage | |
KR101695655B1 (en) | Method and apparatus for analyzing video and image | |
JP2022177267A (en) | Authentication system, authentication method, and program | |
US9025833B2 (en) | System and method for video-assisted identification of mobile phone users | |
US10713498B2 (en) | System and method for associating an identifier of a mobile communication terminal with a person-of-interest, using video tracking | |
KR20190047218A (en) | Method and apparatus of providing traffic information, and computer program for executing the method. | |
KR101929212B1 (en) | Apparatus and method for masking moving object | |
EP3751851A1 (en) | Method of highlighting an object of interest in an image or video | |
WO2017157435A1 (en) | A method and system for visual privacy protection for mobile and wearable devices | |
CN114387674A (en) | Living body detection method, living body detection system, living body detection apparatus, storage medium, and program product | |
Sanjana et al. | Real-Time Piracy Detection Based on Thermogram Analysis and Machine Learning Techniques | |
CN111078804A (en) | Information association method, system and computer terminal | |
EP4296996A1 (en) | Secure search method, secure search system, secure search device, encryption device, searcher terminal, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |