US20180359449A1 - Monitoring device, monitoring system, and monitoring method - Google Patents

Monitoring device, monitoring system, and monitoring method Download PDF

Info

Publication number
US20180359449A1
US20180359449A1 (U.S. application Ser. No. 15/775,475)
Authority
US
United States
Prior art keywords
image
mask
monitoring
processing
identifiability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/775,475
Other languages
English (en)
Inventor
Yuichi Matsumoto
Yoshiyuki Kamino
Takeshi Watanabe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Intellectual Property Management Co Ltd
Original Assignee
Panasonic Intellectual Property Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Management Co Ltd filed Critical Panasonic Intellectual Property Management Co Ltd
Assigned to PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD. reassignment PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KAMINO, Yoshiyuki, MATSUMOTO, YUICHI, WATANABE, TAKESHI
Publication of US20180359449A1 publication Critical patent/US20180359449A1/en
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678User interface
    • G08B13/19686Interfaces masking personal details for privacy, e.g. blurring faces, vehicle license plates
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Definitions

  • the present disclosure relates to a monitoring device, a monitoring system, and a monitoring method for generating and outputting a monitoring image on which privacy mask processing is performed on a captured image obtained by imaging a target area.
  • A surveillance system that images the inside of a facility with cameras and monitors the situation in the facility with the camera images has been adopted. If the images captured by the cameras installed in the facility are distributed to general users over the Internet, the users can check the congested situation in the facility and the like without visiting the site, which enhances user convenience.
  • The monitoring device of the present disclosure generates and outputs a monitoring image on which privacy mask processing is performed on a captured image obtained by imaging a target area, and includes a processor that performs image processing for reducing the identifiability of an object appearing in the captured image, detects a moving object from the captured image to generate a mask image corresponding to the image area of the moving object, and generates and outputs the monitoring image in which the mask image is superimposed on the identifiability-reduced image.
  • The monitoring system of the present disclosure generates a monitoring image on which privacy mask processing is performed on a captured image obtained by imaging a target area and distributes the image to a user terminal device. The system includes:
  • a camera that images the target area
  • a server device that distributes the monitoring image to the user terminal device
  • a user terminal device. Either the camera or the server device performs image processing for reducing the identifiability of an object appearing in the captured image, detects a moving object from the captured image to generate a mask image corresponding to the image area of the moving object, and generates and outputs the monitoring image in which the mask image is superimposed on the identifiability-reduced image.
  • The monitoring method of the present disclosure is a monitoring method for causing an information processing device to generate and output a monitoring image on which privacy mask processing is performed on a captured image obtained by imaging a target area, including performing image processing for reducing the identifiability of an object appearing in the captured image to generate an identifiability-reduced image, detecting a moving object from the captured image to generate a mask image corresponding to the image area of the moving object, and generating and outputting the monitoring image in which the mask image is superimposed on the identifiability-reduced image.
  • Since a moving object such as a person can be clearly distinguished from the background and visually recognized through the mask image, it is possible to clearly grasp the state of the moving object. Therefore, it is possible to intuitively grasp the congested situation in the facility and the like.
  • A moving object for which moving object detection failed appears on the identifiability-reduced image, but because the moving object cannot be identified in this image, it is possible to reliably protect the privacy of individuals.
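As a rough, non-authoritative sketch of the pipeline summarized above (reduce identifiability over the whole frame, detect moving objects against a background, then overlay a mask), the following Python fragment uses plain nested lists of 0-255 integers as grayscale images; the function names, block size, and threshold here are illustrative assumptions, not part of the disclosure:

```python
def mosaic(img, block=4):
    """Reduce identifiability: replace each block with its average pixel value."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            avg = sum(img[y][x] for y in ys for x in xs) // (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out

def detect_moving(img, background, threshold=30):
    """Background subtraction: mark pixels differing from the background model."""
    return [[abs(p - b) > threshold for p, b in zip(ri, rb)]
            for ri, rb in zip(img, background)]

def compose_monitoring_image(img, background, mask_value=255):
    """Mosaic the whole frame, then superimpose the moving-object mask on it."""
    reduced = mosaic(img)
    moving = detect_moving(img, background)
    return [[mask_value if m else r for r, m in zip(rr, rm)]
            for rr, rm in zip(reduced, moving)]
```

A real implementation would operate on camera frames via an image library and keep the background model updated over a learning period, as the embodiments describe.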
  • FIG. 1 is an overall configuration view of a monitoring system according to a first embodiment.
  • FIG. 2 is a plan view of an inside of a station building showing an example of the installation state of camera 1 .
  • FIG. 3A is an explanatory view for explaining an overview of image processing performed by a camera 1 .
  • FIG. 3B is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 3C is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 4A is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 4B is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 4C is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 5A is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 5B is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 5C is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 6A is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 6B is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 6C is an explanatory view for explaining an overview of image processing performed by camera 1 .
  • FIG. 7 is a block view showing a hardware configuration of camera 1 and server device 3 .
  • FIG. 8 is a functional block view of camera 1 .
  • FIG. 9 is an explanatory view showing a monitoring screen displayed on user terminal device 4 .
  • FIG. 10 is an explanatory view showing an overview of image processing performed by camera 1 .
  • FIG. 11 is a functional block view of camera 101 and server device 102 according to a second embodiment.
  • FIG. 12 is an explanatory view showing a mask condition setting screen displayed on user terminal device 4 .
  • the main object of the present disclosure is to provide a monitoring device, a monitoring system, and a monitoring method that are capable of reliably protecting the privacy of individuals and displaying monitoring images from which a congested situation in a facility may be intuitively grasped.
  • A first disclosure made to solve the above problem is a monitoring device that generates and outputs a monitoring image on which privacy mask processing is performed on a captured image obtained by imaging a target area, and includes a processor that performs image processing for reducing the identifiability of an object appearing in the captured image, detects a moving object from the captured image to generate a mask image corresponding to the image area of the moving object, and generates and outputs the monitoring image in which the generated mask image is superimposed on the identifiability-reduced image.
  • Since a moving object such as a person can be clearly distinguished from the background and visually recognized through the mask image, it is possible to clearly grasp the state of the moving object. Therefore, it is possible to intuitively grasp the congested situation in the facility and the like.
  • A moving object for which motion detection failed appears on the identifiability-reduced image, but because the moving object cannot be identified in this image, it is possible to reliably protect the privacy of individuals.
  • Image processing for reducing identifiability may be performed on the entire captured image, but an area where it is clear that a moving object such as a person does not appear, such as the ceiling of a building, may be excluded from the targets of this image processing.
  • the processor executes any one of mosaic processing, blurring processing, and blending processing as image processing for reducing identifiability.
  • the processor generates a transparent mask image representing a contour shape of a moving object.
  • Since the mask image has transparency and the background image shows through the mask image portion in the monitoring image, it is easy to grasp the state of the moving object.
  • The processor generates a mask image in accordance with mask conditions set according to an operation input by a user, and under the mask conditions, at least one display element among the color, the transmittance, and the presence/absence of a contour line of the mask image may be changed.
  • the display elements of the mask image may be changed, it is possible to display a monitoring image that is easy for the user to see.
  • The processor generates a mask image in accordance with mask conditions set according to an operation input by a user, and as a mask condition, a congested state display mode may be set in which the mask image is generated with a color specified by the degree of congestion, or with contrasting density and transmittance of the same hue.
  • Since the color of the mask image and the like dynamically changes according to the congestion, it is possible to grasp the actual situation of the target area.
  • It is possible to compare the degree of congestion across target areas and instantaneously grasp the state of the plurality of target areas.
  • The monitoring system of this disclosure generates a monitoring image on which privacy mask processing is performed on a captured image obtained by imaging a target area and distributes the image to a user terminal device. The system includes:
  • a camera that images the target area
  • a server device that distributes the monitoring image to the user terminal device
  • a user terminal device. Either the camera or the server device performs image processing for reducing the identifiability of an object appearing in the captured image, detects a moving object from the captured image to generate a mask image corresponding to the image area of the moving object, and generates and outputs the monitoring image in which the mask image is superimposed on the identifiability-reduced image.
  • A monitoring method for causing an information processing device to generate and output a monitoring image on which privacy mask processing is performed on a captured image obtained by imaging a target area, including performing image processing for reducing the identifiability of an object appearing in the captured image to generate an identifiability-reduced image, detecting a moving object from the captured image to generate a mask image corresponding to the image area of the moving object, and generating and outputting the monitoring image in which the mask image is superimposed on the identifiability-reduced image.
  • FIG. 1 is an overall configuration diagram of a monitoring system according to the first embodiment.
  • This monitoring system is a system for an observer to monitor the situation of the premises of a railway station (facility) through images (moving images) of each area and to distribute the images of each area to general users. It includes camera (monitoring device) 1 , monitoring terminal device 2 , server device 3 , and user terminal device (browsing device) 4 .
  • Camera 1 is installed in each target area, such as a platform or a ticket gate in a station building, and images the target area.
  • Camera 1 is connected to a closed area network such as a virtual local area network (VLAN) via a local area network and router 6 .
  • In camera 1 , image processing for protecting the privacy of persons is performed, and a monitoring image (processed image), which is a moving image obtained by this image processing, and an unprocessed image are output from camera 1 .
  • Monitoring terminal device 2 is constituted with a PC and is installed in a monitoring room in the station building, connected to camera 1 via a local area network. This monitoring terminal device 2 is a device for the observer to view images of camera 1 for the purpose of monitoring for security or disaster prevention. Unprocessed images are transmitted from each camera 1 to monitoring terminal device 2 and displayed there, and by viewing the unprocessed images, the observer can monitor the situation in the station.
  • Server device 3 is connected to each camera 1 at each station building via the closed area network and receives a monitoring image transmitted from each camera 1 at each station building.
  • Server device 3 is also connected to user terminal device 4 via the Internet, generates a screen to be viewed by a user, distributes the screen to user terminal device 4 , and acquires information input by the user on the screen.
  • User terminal device 4 is constituted with a smartphone, a tablet terminal, or a PC.
  • On user terminal device 4 , a monitoring image distributed from server device 3 is displayed. By viewing this monitoring image, the user can grasp the congested situation in the station building and the running situation of the trains.
  • In server device 3 , it is possible to perform live distribution in which the current monitoring image transmitted from camera 1 is distributed as it is. In addition, server device 3 can accumulate the monitoring images transmitted from camera 1 and distribute the monitoring image of a date and time specified by user terminal device 4 .
  • Since server device 3 and user terminal device 4 are connected via the Internet, it is possible to access server device 3 from user terminal device 4 at any place.
  • FIG. 2 is a plan view of an inside of the station building showing an example of the installation state of camera 1 .
  • Camera 1 is installed on a platform in the station building, on a ceiling or a pole of the platform, and images persons present on the platform or a staircase.
  • A camera having a predetermined angle of view, a so-called box camera, is adopted as camera 1 , but it is also possible to use an omnidirectional camera having a 360-degree imaging range using a fish-eye lens.
  • FIG. 2 shows an example of the platform, but camera 1 is installed so as to image an appropriately set target area such as a ticket gate or an escalator in the station building.
  • FIGS. 3A to 3C , FIGS. 4A to 4C , FIGS. 5A to 5C , and FIGS. 6A to 6C are explanatory views for explaining the outline of image processing performed by camera 1 .
  • Each area in the station building is imaged by camera 1 , and the unprocessed captured image shown in FIG. 3A is obtained.
  • image processing for protecting the privacy of a person is performed.
  • As the privacy mask processing, as shown in FIG. 3B , it is conceivable to perform image processing for reducing the identifiability of objects on the entire captured image.
  • Alternatively, it is conceivable to perform moving object detection and person detection on the captured image, obtain position information of the image area of the person detected by this moving object detection and person detection, and perform image processing for reducing identifiability on the image area of the person (inside the contour of the person) as shown in FIG. 3C .
  • mosaic processing is performed as image processing for reducing identifiability.
  • First, a background image is generated by moving object removal processing (background image generation processing) for removing images of moving objects (foreground images) from a plurality of captured images.
  • a mask image covering the image area of a person is generated based on the detection results of moving object detection and person detection.
  • By superimposing this mask image on the background image, the masked image shown in FIG. 4C is generated. Since individuals cannot be identified in this masked image, the privacy of persons can be protected.
  • However, as shown in FIG. 5A , in a background image generated by moving object removal processing, a person with little movement may remain as is.
  • In this case, a mask image is generated only for the persons other than that remaining person.
  • The masked image shown in FIG. 5C is then obtained, in which the person who could not be removed by the moving object removal processing is displayed as is, so the privacy of that person may not be protected.
  • Therefore, as shown in FIG. 6A , an image (the same as FIG. 3B ) on which image processing for reducing identifiability has been performed is used as the background image, and the masked image shown in FIG. 6C is generated by superimposing the mask image shown in FIG. 6B (the same as FIG. 5B ) on this background image.
  • Since a moving object such as a person can be clearly distinguished from the background and visually recognized through the mask image, it is possible to clearly grasp the state of the moving object. Therefore, it is possible to intuitively grasp the congested situation in the facility and the like.
  • A person for whom motion detection or person detection failed appears on the background image, but because individuals cannot be identified after the image processing for reducing identifiability on this background image, the privacy of persons can be reliably protected.
  • a person frame representing the face or the upper body area of the person may be displayed on the masked image based on the detection results of the moving object detection and the person detection.
  • In a case where a mask image covers the image area of a plurality of people, it is difficult to distinguish individual persons from each other, and it is sometimes impossible to easily grasp how many people are present; in such a case, displaying person frames makes it easy to grasp the number of people.
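The idea of counting people from detected regions can be illustrated by counting connected foreground regions in a boolean moving-object mask. This flood-fill sketch is an illustrative stand-in, not the person-detection method of the disclosure:

```python
def count_person_frames(mask):
    """Count connected foreground regions (4-connectivity) in a boolean grid,
    as a rough proxy for the number of person frames."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1  # new region found; flood-fill to absorb it
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return count
```

In practice adjacent people merge into one region, which is exactly the ambiguity the person frames described above are meant to resolve.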
  • the color of the mask image may be changed according to the degree of congestion. For example, in a case where the degree of congestion is high, the mask image is displayed in red, and in a case where the degree of congestion is low, the mask image is displayed in blue.
  • the degree of congestion may be expressed by contrasting density and transmittance with the same hue.
  • Since the color of the mask image and the like dynamically changes according to the congestion, it is possible to grasp the actual situation of the target area.
  • The degree of congestion may be obtained based on the detection result of person detection (corresponding to the number of person frames).
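To illustrate the congested state display mode described above, the following sketch maps a detected person count to a mask color, blue for uncongested through red for congested; the thresholds and the linear interpolation between hues are assumptions, not values from the disclosure:

```python
def congestion_color(person_count, low=5, high=15):
    """Map a person count to an (R, G, B) mask color.
    low/high are illustrative congestion thresholds."""
    if person_count >= high:
        return (255, 0, 0)   # red: highly congested
    if person_count <= low:
        return (0, 0, 255)   # blue: uncongested
    # interpolate between blue and red for intermediate counts
    t = (person_count - low) / (high - low)
    return (int(255 * t), 0, int(255 * (1 - t)))
```

The same scheme could instead vary density or transmittance within a single hue, as the mask conditions allow.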
  • FIG. 7 is a block view showing a hardware configuration of camera 1 and server device 3 .
  • FIG. 8 is a functional block view of camera 1 .
  • camera 1 includes imaging unit 21 , processor 22 , storage device 23 , and communicator 24 .
  • Imaging unit 21 includes an image sensor and sequentially outputs captured images (frames) that are temporally continuous, that is, a so-called moving image.
  • Processor 22 performs image processing on the captured image and generates and outputs a monitoring image.
  • Storage device 23 stores a program executed by processor 22 or the captured image output from imaging unit 21 .
  • Communicator 24 transmits the monitoring image output from processor 22 to server device 3 via the network.
  • communicator 24 transmits the unprocessed image output from imaging unit 21 to monitoring terminal device 2 via the network.
  • Server device 3 includes processor 31 , storage device 32 , and communicator 33 .
  • Communicator 33 receives the monitoring image transmitted from each camera 1 .
  • communicator 33 distributes a screen including a monitoring image to be viewed by the user to user terminal device 4 .
  • storage device 32 the monitoring image for each camera 1 received by communicator 33 and a program executed by processor 31 are stored.
  • Processor 31 generates a screen to be distributed to user terminal device 4 .
  • camera 1 includes image acquisition unit 41 , first processing unit 42 , second processing unit 43 , and image output controller 44 .
  • Image acquisition unit 41 , first processing unit 42 , second processing unit 43 , and image output controller 44 are realized by causing processor 22 to execute a monitoring program (instructions) stored in storage device 23 .
  • In image acquisition unit 41 , the captured image captured by imaging unit 21 is acquired from imaging unit 21 or storage device (image storage) 23 .
  • First processing unit 42 includes first background image generator 51 .
  • In first background image generator 51 , a first background image (identifiability-reduced image) is generated by performing image processing for reducing the identifiability of objects captured in the captured image on the entire captured image.
  • image processing such as mosaic processing, blurring processing, and blending processing may be performed as image processing for reducing the identifiability of an object.
  • Alternatively, a first background image (identifiability-reduced image) may be generated by lowering the resolution of the image to such an extent that the identifiability of objects is lost, without performing such special image processing.
  • In this case, since it is unnecessary to mount a special image processing function, first background image generator 51 may be constituted inexpensively, and since the amount of image data is reduced, the communication load on the network may be reduced.
  • The mosaic processing divides a captured image into a plurality of blocks and replaces the pixel values of all the pixels in each block with a single pixel value, such as the pixel value of one pixel in the block or the average of the pixel values in the block.
  • The blurring processing is filtering by various kinds of filters such as a blur filter, a Gaussian filter, a median filter, and a bilateral filter. Furthermore, it is also possible to use various kinds of image processing such as negative/positive inversion, color tone correction (brightness change, RGB color balance change, contrast change, gamma correction, saturation adjustment, and the like), binarization, and an edge filter.
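Of the blurring filters listed, the simplest is a mean (box) filter. A minimal pure-Python sketch over a grayscale grid, in which out-of-range neighbors are simply skipped and the sum is normalized by the number of in-range neighbors, might look like:

```python
def box_blur(img):
    """3x3 mean (box) blur of a grayscale image (list of rows of ints)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:  # skip out-of-range neighbors
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc // n  # average of the in-range neighborhood
    return out
```

A Gaussian or bilateral filter replaces the uniform weights with distance- or intensity-dependent ones, trading more computation for better edge preservation.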
  • The blending processing synthesizes (blends) two images in a semi-transparent state, synthesizing an image prepared for synthesis with the captured image based on an α value indicating the degree of synthesis.
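The α blending described here reduces to a per-pixel weighted sum; a minimal grayscale sketch (the default α value is an assumption):

```python
def alpha_blend(fore, back, alpha=0.5):
    """Blend two equally sized grayscale images: alpha is the degree of
    synthesis (1.0 keeps only fore, 0.0 keeps only back)."""
    return [[int(alpha * f + (1 - alpha) * b) for f, b in zip(rf, rb)]
            for rf, rb in zip(fore, back)]
```

Blending the captured image with a flat or patterned synthesis image at a high α is what degrades identifiability while keeping the scene layout recognizable.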
  • Second processing unit 43 includes second background image generator 53 , position information acquisition unit 54 , and mask image generator 55 .
  • In second background image generator 53 , processing for generating a second background image, from which images (foreground images) of persons have been removed from the captured image, is performed.
  • a second background image is generated from a plurality of captured images (frames) in a nearest predetermined learning period, and the second background image is sequentially updated according to a new captured image to be acquired.
  • a publicly known technique may be used for the processing performed by second background image generator 53 .
  • In position information acquisition unit 54 , processing for detecting a person from the captured image and acquiring position information of the image area of the person present in the captured image is performed. This processing is performed based on the second background image generated by second background image generator 53 : the image area of a moving object is specified (moving object detection) from the difference between the captured image at the time of interest (the current time in real-time processing) and the second background image acquired in the preceding learning period.
  • The detected moving object is then determined to be a person (person detection).
  • a publicly known technique may be used for the processing performed by position information acquisition unit 54 .
  • the second background image in the present embodiment includes a so-called “background model”, and by building the background model from a plurality of captured images in the learning period in second background image generator 53 and by comparing the captured image at the time of interest with the background model in position information acquisition unit 54 , the image area (foreground area) of the moving object and the background area are divided to obtain the position information of the image area of the moving object.
  • Note that the second background image is sequentially updated as described above, but a captured image in which no person is present, for example, a captured image taken before the start of work, may instead be held in the camera in advance as the second background image.
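The background-model flow above (learn a background over recent frames, then mark pixels that differ from it as a moving object) can be sketched as follows. The disclosure relies on "publicly known techniques" without naming one, so this running-average model, the tiny 0-255 grayscale grids, and the threshold values are purely illustrative assumptions:

```python
# Illustrative sketch of background modeling and moving-object detection.
# Images are tiny grayscale grids (lists of lists, values 0-255).

def update_background(background, frame, alpha=0.1):
    """Blend a new frame into the background model (learning step)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def detect_foreground(background, frame, threshold=30):
    """Mark pixels that differ strongly from the background as moving-object pixels."""
    return [[abs(f - b) > threshold for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

# An empty scene is learned over several frames, then a bright "person" enters.
empty = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
background = empty
for _ in range(5):
    background = update_background(background, empty)

scene_with_person = [[10, 200, 10], [10, 200, 10], [10, 10, 10]]
mask = detect_foreground(background, scene_with_person)
# mask is True only in the column where the "person" appears
```

In a real deployment this role is typically filled by a library background subtractor rather than hand-rolled code; the point here is only the difference-against-learned-background structure the text describes.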
  • In mask image generator 55, based on the position information of the image area of the person acquired by position information acquisition unit 54, processing of generating a mask image having a contour corresponding to the image area of the person is performed.
  • information on the contour of the image area of the person is generated from the position information of the image area of the person, and a mask image representing the contour shape of the person is generated based on the information on the contour.
  • This mask image is obtained by filling the inside of the contour of the person with a predetermined color (for example, blue) and has transparency.
  • In image output controller 44, processing of generating a monitoring image is performed by superimposing the mask image generated by mask image generator 55 on the first background image generated by first background image generator 51.
  • Because the mask image has transparency, the first background image shows through the mask image portion in the monitoring image.
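The superimposition just described can be illustrated with a small sketch. The blue fill color, the 40% show-through factor, and the tuple-based (R, G, B) pixel format are assumptions made for illustration only; the disclosure fixes none of them:

```python
# Sketch of superimposing a semi-transparent mask on the first background image.

MASK_COLOR = (0, 0, 255)   # "predetermined color", e.g. blue (R, G, B) - assumed
TRANSPARENCY = 0.4         # fraction of the background showing through - assumed

def superimpose(background, person_mask):
    """Blend the mask color over background pixels inside the person contour."""
    out = []
    for brow, mrow in zip(background, person_mask):
        row = []
        for pixel, inside in zip(brow, mrow):
            if inside:
                # semi-transparent fill: mix background and mask color
                row.append(tuple(
                    round(TRANSPARENCY * p + (1 - TRANSPARENCY) * c)
                    for p, c in zip(pixel, MASK_COLOR)))
            else:
                # outside the contour the background is shown unchanged
                row.append(pixel)
        out.append(row)
    return out

background = [[(100, 100, 100), (100, 100, 100)]]
person_mask = [[False, True]]
monitoring = superimpose(background, person_mask)
# left pixel unchanged; right pixel blended toward blue
```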
  • FIG. 9 is an explanatory view showing the monitoring screen displayed on user terminal device 4 .
  • FIG. 9 shows an example of a smartphone as user terminal device 4 .
  • The monitoring screen displayed on user terminal device 4 may also be edited into content for digital signage and displayed on a signage terminal (large display) installed at a station, a commercial facility, or the like to indicate the current congested situation.
  • When the monitoring screen shown in FIG. 9 is displayed and viewed by the user, it is possible to grasp the congested situation in the station.
  • the monitoring screen has main menu display button 61 , station selection button 62 , date and time input unit 63 , reproduction operator 64 , and image list display 65 .
  • When main menu display button 61 is operated, the main menu is displayed. With this main menu, it is possible to select station monitoring, user settings, and the like. When monitoring in the station building is selected, the monitoring screen shown in FIG. 9 is displayed.
  • In image list display 65, monitoring images for each target area in the station building, such as a platform or a ticket gate, are displayed side by side.
  • With station selection button 62, it is possible to select the station whose monitoring images are to be displayed on image list display 65. The currently set station is displayed on station selection button 62, and when station selection button 62 is operated, a station selection menu is displayed so that the station may be changed.
  • Date and time input unit 63 is used to input the display date and time of the monitoring image to be displayed on image list display 65 .
  • Date and time input unit 63 is provided with NOW button 71, date change button 72, and time change button 73. When NOW button 71 is operated, the display date and time are changed to the current time. With date change button 72, the display date may be changed; the currently set display date is displayed on date change button 72. When date change button 72 is operated, a calendar screen (not shown) is displayed, and a date may be selected on this calendar screen.
  • With time change button 73, the display time may be changed; the currently set display time is displayed on time change button 73. When time change button 73 is operated, a time selection menu is displayed, and the display time may be changed with this menu. In the initial state, the monitoring image of the current time is displayed.
  • Reproduction operator 64 is used for operations related to the reproduction of the monitoring images displayed on image list display 65 and is provided with operation buttons for normal reproduction, fast-forward reproduction, rewind reproduction, and stop. By operating these buttons, it is possible to browse the monitoring images efficiently.
  • this monitoring screen may be enlarged and displayed by a pinch-out operation (operation of spreading two fingers touching the screen). Then, by moving the screen by performing a swipe operation (operation to shift the finger touching the screen) in an enlarged display state, the monitoring image of another area may also be viewed in the enlarged display.
  • When a monitoring image is tapped (an operation of touching with a single finger for a short time), a screen for enlarging and displaying that monitoring image may be displayed.
  • In this example, the monitoring images of each area of the station selected by the user are displayed side by side in image list display 65, but an area selection button may be provided so that the monitoring image of the area selected with this button may be displayed.
  • FIG. 10 is an explanatory view showing an overview of image processing performed by camera 1 .
  • In second background image generator 53, a second background image is generated from a plurality of captured images (frames) in a predetermined learning period preceding the display time (the current time in real-time display). This processing is repeated each time a new captured image is output from imaging unit 21, and the second background image is updated each time.
  • In position information acquisition unit 54, position information for each person is acquired from the captured image at the display time and the second background image. Then, in mask image generator 55, a mask image is generated from the position information for each person.
  • In first background image generator 51, image processing for reducing identifiability is performed on the captured image at the display time to generate a first background image. Then, in image output controller 44, a monitoring image in which the mask image is superimposed on the first background image is generated.
  • In this way, the second background image, the position information, the mask image, and the first background image are acquired at each time corresponding to the output timing of the captured images, and the monitoring images at the respective times are sequentially output from camera 1.
  • The first background image may be generated from the captured image at each time, but a first background image may also be generated from base captured images selected at a predetermined interval, thinning out the captured images.
  • an image obtained by performing image processing for reducing identifiability on a captured image is set as the first background image, but it is also possible to generate a first background image by performing image processing for reducing identifiability on the second background image generated for moving object detection.
  • a monitoring image in which a mask image is superimposed on the first background image (identifiability-reduced image) on which image processing for reducing identifiability is performed is generated and output.
  • Because a moving object such as a person may be clearly distinguished from the background and visually recognized by means of the mask image, it is possible to clearly grasp the state of the moving object, and therefore to intuitively grasp the congested situation in the station building and the like.
  • A person for whom moving object detection failed appears in the first background image, but because individuals cannot be identified in this first background image, the privacy of the person may still be reliably protected.
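The "image processing for reducing identifiability" is left open in the text; mosaic (block-averaging) processing is one common choice and is used below purely as an assumed example. Each block of the image is replaced by its average value, so fine detail that could identify a person is destroyed while the overall scene remains recognizable:

```python
# Assumed example of identifiability reduction: mosaic (block-averaging).
# The image is a grayscale grid (list of lists, values 0-255).

def mosaic(image, block=2):
    """Replace each block-by-block tile of the image with its average value."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            avg = round(sum(image[y][x] for y in ys for x in xs)
                        / (len(ys) * len(xs)))
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out

sharp = [[0, 100], [100, 0]]        # high-contrast detail
blurred = mosaic(sharp, block=2)    # the 2x2 tile collapses to its mean
```

Blurring, resolution reduction, or edge-only rendering would serve the same role; the disclosure only requires that individuals become unidentifiable while the scene stays interpretable.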
  • FIG. 11 is a functional block diagram showing a schematic configuration of camera 101 and server device 102 according to the second embodiment.
  • In the first embodiment, a first background image and a mask image are generated in camera 1, and a monitoring image in which the mask image is superimposed on the first background image is generated and output. In the second embodiment, by contrast, the first background image and the position information of the image area of the person are transmitted from camera 101 to server device 102 so that the display elements of the mask image may be changed for each user; in server device 102, a mask image is generated in accordance with the contents of the display elements specified by the user, and a monitoring image in which the mask image is superimposed on the first background image is generated.
  • Camera 101 includes image acquisition unit 41, first processing unit 42, and second processing unit 104; in second processing unit 104, mask image generator 55, which is provided in second processing unit 43 in the first embodiment (see FIG. 8), is omitted.
  • image output controller 44 provided in the first embodiment is also omitted.
  • Server device 102 includes mask condition setting unit 106 , mask image generator 107 , and image output controller 108 .
  • Mask condition setting unit 106 , mask image generator 107 , and image output controller 108 are realized by causing processor 31 to execute the monitoring program (instructions) stored in storage device 32 .
  • In mask condition setting unit 106, various conditions relating to the mask image are set according to the user's input operation at user terminal device 4.
  • In mask image generator 107, a mask image is generated based on the mask conditions set for each user in mask condition setting unit 106 and the position information acquired from camera 101.
  • In mask condition setting unit 106, mask conditions related to the display elements of the mask image are set for each user, and in mask image generator 107, a mask image is generated in accordance with the contents of the display elements specified by the user.
  • In image output controller 108, processing of generating a monitoring image (masked image) is performed by superimposing the mask image generated by mask image generator 107 on the first background image acquired from camera 101.
  • As a result, a monitoring image containing the mask image that reflects the display elements specified by the user is displayed on user terminal device 4.
  • a mask image is generated in server device 102 , but a mask image may be temporarily generated in camera 101 and the mask image may be adjusted by image editing in server device 102 in accordance with the contents of the display elements specified by the user.
  • FIG. 12 is an explanatory view showing a mask condition setting screen displayed on user terminal device 4 .
  • When user settings are selected from the main menu, a user setting menu is displayed, and when mask condition setting is selected in this user setting menu, the mask condition setting screen shown in FIG. 12 is displayed. With this mask condition setting screen, the user may change the display elements of the mask image.
  • On the mask condition setting screen, filling selector 111, transmittance selector 112, contour line drawing selector 113, and setting button 114 are provided.
  • With filling selector 111, the user selects the filling method (color, pattern, and the like) for the inside of the contour line in the mask image from a tile menu.
  • With transmittance selector 112, the user selects the transmittance of the mask image from a pull-down menu.
  • the transmittance may be selected in the range of 0% to 100%. That is, in a case where the transmittance is 0%, the first background image is completely invisible, and in a case where the transmittance is 100%, the first background image appears as it is.
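The transmittance semantics just described (0% hides the first background image entirely behind the mask, 100% shows it as-is) amount to a simple linear blend per pixel. The function below is a sketch of that relationship, not an implementation from the disclosure; the sample pixel values are arbitrary:

```python
# Sketch of the 0%-100% transmittance semantics as a linear per-channel blend.

def apply_transmittance(background_pixel, mask_pixel, transmittance_percent):
    """0% -> mask fully hides the background; 100% -> background shows as-is."""
    t = transmittance_percent / 100.0
    return round(t * background_pixel + (1 - t) * mask_pixel)

# With a bright background value (200) behind a dark mask value (0):
hidden  = apply_transmittance(200, 0, 0)    # background invisible
half    = apply_transmittance(200, 0, 50)   # half-and-half mix
visible = apply_transmittance(200, 0, 100)  # background as-is
```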
  • With contour line drawing selector 113, the user selects from a pull-down menu whether or not to draw a contour line in the mask image.
  • When the transmittance is 100% and no contour line is drawn, the monitoring image is displayed with the person erased.
  • When the filling method, the transmittance, and the presence/absence of contour line drawing have been selected with filling selector 111, transmittance selector 112, and contour line drawing selector 113, and setting button 114 is operated, the input contents are transmitted to server device 102, and processing of setting the mask conditions of the user is performed in mask condition setting unit 106.
  • a congested state display mode may be provided on the mask condition setting screen so that the user may select on/off of the mode.
  • In this way, since the user may change at least one of the display elements of the mask image (its color, its transmittance, and the presence/absence of its contour line), it is possible to display a monitoring image that is easy for the user to see.
  • mask condition setting unit 106 is provided in server device 102 so that the display elements of the mask image may be changed for each user, but a mask condition setting unit may be provided in camera 1 (see FIG. 8 ) of the first embodiment, and in this mask condition setting unit, mask conditions may be set according to the operation input by the user, and mask image generator 55 may generate a mask image based on the mask conditions.
  • a user such as an administrator may freely change the display element of the mask image for each camera 1 .
  • the embodiment has been described as an example of the technique disclosed in the present application.
  • the technique in the present disclosure is not limited thereto and may also be applied to embodiments in which change, replacement, addition, omission, and the like are performed.
  • For example, a rectangular mask image corresponding to a person frame may be used based on the detection results of moving object detection and person detection. In this case as well, the shape of the mask image changes in accordance with the image area of the person, and settings desired by the user, such as the mask conditions described in the above embodiment, may be applied.
  • In the above embodiment, a railway station has been described as an example, but the target is not limited to a railway station; the embodiment may be widely applied to various facilities such as theme parks, event venues, and the like.
  • a bus stop, a sidewalk, a road, and the like where a camera (monitoring device 1 ) is installed are also included in a target facility, and the technique according to the present disclosure may also be applied to these target facilities.
  • In the above embodiment, the moving object to be masked is a person, but a moving object other than a person, for example, a vehicle such as a car or a bicycle, may also be used as a target. Even for such a moving object, in a case where its owner or user may be identified, consideration not to infringe personal privacy is required.
  • In the above embodiment, image processing for reducing identifiability is performed on the entire captured image, but an area in which it is clear that no person appears, such as the ceiling of a building, may be excluded from the target of the image processing for reducing identifiability. In this way, it becomes easier to grasp the situation of the target area.
  • an administrator or the like may manually set an area to be excluded from image processing for reducing identifiability, but an area to be excluded from the image processing for reducing identifiability may be set based on the detection result of moving object detection. That is, an area in which a moving object is not detected for a certain period of time or more by moving object detection may be excluded from the target of image processing for reducing identifiability.
  • the effect of image processing for reducing identifiability may be gradually reduced as the time during which no moving object is detected continues.
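The gradual-reduction idea above can be sketched as a decay of processing strength with the time since the last detection in an area. The linear decay curve and the 60-second horizon below are illustrative assumptions; the disclosure specifies neither:

```python
# Sketch: strength of the identifiability-reducing processing for an area
# fades the longer no moving object has been detected there.

def processing_strength(seconds_without_motion, full_strength=1.0,
                        decay_period=60):
    """Full effect while motion is recent; fades linearly to zero afterwards."""
    remaining = max(0.0, 1.0 - seconds_without_motion / decay_period)
    return full_strength * remaining

# Just after motion the area is fully processed; after the decay period the
# processing is dropped entirely, matching the exclusion described above.
fresh = processing_strength(0)     # full effect
mid   = processing_strength(30)    # half effect
gone  = processing_strength(90)    # no effect
```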
  • In the above embodiment, first processing of generating a first background image in which the identifiability of an object is reduced, second processing of generating a mask image, and image output control of superimposing the mask image on the background image are performed in the camera, but all or a part of this necessary processing may instead be performed by a PC.
  • The necessary processing may also be performed by a recorder (image storage device) or an adapter (image output control device).
  • The monitoring device, the monitoring system, and the monitoring method according to the present disclosure have the effect of reliably protecting the privacy of a person while displaying a monitoring image from which the congested situation in a facility and the like may be intuitively grasped, and are useful as a monitoring device, a monitoring system, a monitoring method, and the like that generate and output a monitoring image in which privacy mask processing is performed on a captured image of a target area.

US15/775,475 2015-11-27 2016-11-11 Monitoring device, monitoring system, and monitoring method Abandoned US20180359449A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2015-231710 2015-11-27
JP2015231710A JP6504364B2 (ja) 2015-11-27 2015-11-27 モニタリング装置、モニタリングシステムおよびモニタリング方法
PCT/JP2016/004870 WO2017090238A1 (ja) 2015-11-27 2016-11-11 モニタリング装置、モニタリングシステムおよびモニタリング方法

Publications (1)

Publication Number Publication Date
US20180359449A1 true US20180359449A1 (en) 2018-12-13

Family

ID=58763305

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/775,475 Abandoned US20180359449A1 (en) 2015-11-27 2016-11-11 Monitoring device, monitoring system, and monitoring method

Country Status (7)

Country Link
US (1) US20180359449A1 (ja)
JP (1) JP6504364B2 (ja)
CN (1) CN108293105B (ja)
DE (1) DE112016005412T5 (ja)
GB (1) GB2557847A (ja)
SG (1) SG11201803937TA (ja)
WO (1) WO2017090238A1 (ja)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190012793A1 (en) * 2017-07-04 2019-01-10 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US20200126383A1 (en) * 2018-10-18 2020-04-23 Idemia Identity & Security Germany Ag Alarm dependent video surveillance
CN113159074A (zh) * 2021-04-26 2021-07-23 京东数科海益信息科技有限公司 图像处理方法、装置、电子设备和存储介质
US20210243360A1 (en) * 2018-04-27 2021-08-05 Sony Corporation Information processing device and information processing method
US11100655B2 (en) * 2018-01-30 2021-08-24 Canon Kabushiki Kaisha Image processing apparatus and image processing method for hiding a specific object in a captured image
US11277589B2 (en) * 2019-09-04 2022-03-15 Denso Ten Limited Image recording system
US11354786B2 (en) * 2017-10-10 2022-06-07 Robert Bosch Gmbh Method for masking an image of an image sequence with a mask, computer program, machine-readable storage medium and electronic control unit
US20220201253A1 (en) * 2020-12-22 2022-06-23 Axis Ab Camera and a method therein for facilitating installation of the camera
US11508077B2 (en) * 2020-05-18 2022-11-22 Samsung Electronics Co., Ltd. Method and apparatus with moving object detection
US11715047B2 (en) 2018-07-30 2023-08-01 Toyota Jidosha Kabushiki Kaisha Image processing apparatus, image processing method
US11869347B2 (en) 2018-04-04 2024-01-09 Panasonic Holdings Corporation Traffic monitoring system and traffic monitoring method
KR20240077189A (ko) 2022-11-24 2024-05-31 (주)피플앤드테크놀러지 객체 검출 및 분할 모델을 이용한 인공지능기반 마스킹 방법 및 이를 위한 시스템

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6274635B1 (ja) * 2017-05-18 2018-02-07 株式会社ドリームエンジン マグネシウム空気電池
JP6272531B1 (ja) * 2017-05-18 2018-01-31 株式会社ドリームエンジン マグネシウム空気電池
JP7278735B2 (ja) * 2017-10-06 2023-05-22 キヤノン株式会社 画像処理装置、画像処理方法、及びプログラム
JP7071086B2 (ja) * 2017-10-13 2022-05-18 キヤノン株式会社 画像処理装置、画像処理方法及びコンピュータプログラム
JP7122815B2 (ja) * 2017-11-15 2022-08-22 キヤノン株式会社 画像処理装置、画像処理方法、及びプログラム
JP7030534B2 (ja) 2018-01-16 2022-03-07 キヤノン株式会社 画像処理装置および画像処理方法
JP7102856B2 (ja) * 2018-03-29 2022-07-20 大日本印刷株式会社 コンテンツ出力システム、コンテンツ出力装置及びプログラム
JP7244979B2 (ja) * 2018-08-27 2023-03-23 日本信号株式会社 画像処理装置及び監視システム
WO2020054030A1 (ja) * 2018-09-13 2020-03-19 三菱電機株式会社 車内監視情報生成制御装置及び車内監視情報生成制御方法
JP7418074B2 (ja) * 2018-12-26 2024-01-19 キヤノン株式会社 画像処理装置、画像処理方法、及びプログラム
JP7297455B2 (ja) * 2019-01-31 2023-06-26 キヤノン株式会社 画像処理装置、画像処理方法、およびプログラム
JP2020141212A (ja) * 2019-02-27 2020-09-03 沖電気工業株式会社 画像処理システム、画像処理装置、画像処理プログラム、画像処理方法、及び表示装置
JP6796294B2 (ja) * 2019-04-10 2020-12-09 昌樹 加藤 監視カメラ
CN110443748A (zh) * 2019-07-31 2019-11-12 思百达物联网科技(北京)有限公司 人体屏蔽方法、装置以及存储介质
CN110996010A (zh) * 2019-12-20 2020-04-10 歌尔科技有限公司 一种摄像头及其图像处理方法、装置及计算机存储介质
EP4095809A4 (en) * 2020-01-20 2023-06-28 Sony Group Corporation Image generation device, image generation method, and program
WO2021220814A1 (ja) * 2020-04-28 2021-11-04 ソニーセミコンダクタソリューションズ株式会社 情報処理装置、情報処理方法、及び、プログラム
CN112887481B (zh) * 2021-01-26 2022-04-01 维沃移动通信有限公司 图像处理方法及装置
JP2023042661A (ja) * 2021-09-15 2023-03-28 キヤノン株式会社 表示装置、制御装置、制御方法、及び、プログラム

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030202697A1 (en) * 2002-04-25 2003-10-30 Simard Patrice Y. Segmented layered image system
US20040032906A1 (en) * 2002-08-19 2004-02-19 Lillig Thomas M. Foreground segmentation for digital video
US20050117023A1 (en) * 2003-11-20 2005-06-02 Lg Electronics Inc. Method for controlling masking block in monitoring camera
US20080193018A1 (en) * 2007-02-09 2008-08-14 Tomonori Masuda Image processing apparatus
US20090128632A1 (en) * 2007-11-19 2009-05-21 Hitachi, Ltd. Camera and image processor
US20090284799A1 (en) * 2008-05-14 2009-11-19 Seiko Epson Corporation Image processing device, method for image processing and program
US20100103193A1 (en) * 2007-07-25 2010-04-29 Fujitsu Limited Image monitor apparatus and computer-readable storage medium storing image monitor program
US20110096922A1 (en) * 2009-10-23 2011-04-28 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20110293180A1 (en) * 2010-05-28 2011-12-01 Microsoft Corporation Foreground and Background Image Segmentation
US20120027299A1 (en) * 2010-07-20 2012-02-02 SET Corporation Method and system for audience digital monitoring
US20120151601A1 (en) * 2010-07-06 2012-06-14 Satoshi Inami Image distribution apparatus
US20120293654A1 (en) * 2011-05-17 2012-11-22 Canon Kabushiki Kaisha Image transmission apparatus, image transmission method thereof, and storage medium
US20130004090A1 (en) * 2011-06-28 2013-01-03 Malay Kundu Image processing to prevent access to private information
US20130343650A1 (en) * 2012-06-22 2013-12-26 Sony Corporation Image processor, image processing method, and program
US20140023248A1 (en) * 2012-07-20 2014-01-23 Electronics And Telecommunications Research Institute Apparatus and method for protecting privacy information based on face recognition
US20140049655A1 (en) * 2012-05-21 2014-02-20 Canon Kabushiki Kaisha Image pickup apparatus, method for controlling the image pickup apparatus, and recording medium
US20160026875A1 (en) * 2014-07-28 2016-01-28 Panasonic Intellectual Property Management Co., Ltd. Monitoring device, monitoring system and monitoring method
US20160037087A1 (en) * 2014-08-01 2016-02-04 Adobe Systems Incorporated Image segmentation for a live camera feed
US20160065864A1 (en) * 2013-04-17 2016-03-03 Digital Makeup Ltd System and method for online processing of video images in real time
US20160125255A1 (en) * 2014-10-29 2016-05-05 Behavioral Recognition Systems, Inc. Dynamic absorption window for foreground background detector
US20170006211A1 (en) * 2015-07-01 2017-01-05 Sony Corporation Method and apparatus for autofocus area selection by detection of moving objects
US20170039387A1 (en) * 2015-08-03 2017-02-09 Agt International Gmbh Method and system for differentiated privacy protection

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS577562A (en) * 1980-06-17 1982-01-14 Mitsubishi Electric Corp Rotation detector
CN1767638B (zh) * 2005-11-30 2011-06-08 北京中星微电子有限公司 一种保护隐私权的可视图像监控方法及其系统
JP2008042595A (ja) * 2006-08-08 2008-02-21 Matsushita Electric Ind Co Ltd ネットワークカメラ装置及び受信端末装置
JP4672680B2 (ja) * 2007-02-05 2011-04-20 日本電信電話株式会社 画像処理方法、画像処理装置、画像処理プログラム及びそのプログラムを記録したコンピュータ読み取り可能な記録媒体
JP6157094B2 (ja) * 2012-11-21 2017-07-05 キヤノン株式会社 通信装置、設定装置、通信方法、設定方法、及び、プログラム
JP5834196B2 (ja) * 2014-02-05 2015-12-16 パナソニックIpマネジメント株式会社 モニタリング装置、モニタリングシステムおよびモニタリング方法
JP5707562B1 (ja) * 2014-05-23 2015-04-30 パナソニックIpマネジメント株式会社 モニタリング装置、モニタリングシステムおよびモニタリング方法


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11004214B2 (en) * 2017-07-04 2021-05-11 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US20190012793A1 (en) * 2017-07-04 2019-01-10 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US11354786B2 (en) * 2017-10-10 2022-06-07 Robert Bosch Gmbh Method for masking an image of an image sequence with a mask, computer program, machine-readable storage medium and electronic control unit
US11100655B2 (en) * 2018-01-30 2021-08-24 Canon Kabushiki Kaisha Image processing apparatus and image processing method for hiding a specific object in a captured image
US11869347B2 (en) 2018-04-04 2024-01-09 Panasonic Holdings Corporation Traffic monitoring system and traffic monitoring method
US20210243360A1 (en) * 2018-04-27 2021-08-05 Sony Corporation Information processing device and information processing method
US11715047B2 (en) 2018-07-30 2023-08-01 Toyota Jidosha Kabushiki Kaisha Image processing apparatus, image processing method
US11049377B2 (en) * 2018-10-18 2021-06-29 Idemia Identity & Security Germany Ag Alarm dependent video surveillance
US20200126383A1 (en) * 2018-10-18 2020-04-23 Idemia Identity & Security Germany Ag Alarm dependent video surveillance
US11277589B2 (en) * 2019-09-04 2022-03-15 Denso Ten Limited Image recording system
US11508077B2 (en) * 2020-05-18 2022-11-22 Samsung Electronics Co., Ltd. Method and apparatus with moving object detection
US20220201253A1 (en) * 2020-12-22 2022-06-23 Axis Ab Camera and a method therein for facilitating installation of the camera
US11825241B2 (en) * 2020-12-22 2023-11-21 Axis Ab Camera and a method therein for facilitating installation of the camera
CN113159074A (zh) * 2021-04-26 2021-07-23 京东数科海益信息科技有限公司 图像处理方法、装置、电子设备和存储介质
KR20240077189A (ko) 2022-11-24 2024-05-31 (주)피플앤드테크놀러지 객체 검출 및 분할 모델을 이용한 인공지능기반 마스킹 방법 및 이를 위한 시스템

Also Published As

Publication number Publication date
JP2017098879A (ja) 2017-06-01
CN108293105A (zh) 2018-07-17
WO2017090238A1 (ja) 2017-06-01
DE112016005412T5 (de) 2018-09-06
CN108293105B (zh) 2020-08-11
JP6504364B2 (ja) 2019-04-24
GB2557847A (en) 2018-06-27
SG11201803937TA (en) 2018-06-28
GB201806567D0 (en) 2018-06-06

Similar Documents

Publication Publication Date Title
US20180359449A1 (en) Monitoring device, monitoring system, and monitoring method
JP6924079B2 (ja) 情報処理装置及び方法及びプログラム
JP5866564B1 (ja) モニタリング装置、モニタリングシステムおよびモニタリング方法
JP6485709B2 (ja) 座席モニタリング装置、座席モニタリングシステムおよび座席モニタリング方法
JP6156665B1 (ja) 施設内活動分析装置、施設内活動分析システムおよび施設内活動分析方法
EP3043329A2 (en) Image processing apparatus, image processing method, and program
WO2017163955A1 (ja) 監視システム、画像処理装置、画像処理方法およびプログラム記録媒体
US20140340515A1 (en) Image processing method and system
JP6910772B2 (ja) 撮像装置、撮像装置の制御方法およびプログラム
JP2017201745A (ja) 画像処理装置、画像処理方法およびプログラム
CN110798590B (zh) 图像处理设备及其控制方法和计算机可读存储介质
EP3723049B1 (en) System and method for anonymizing content to protect privacy
JP2010206475A (ja) 監視支援装置、その方法、及びプログラム
JP2019200715A (ja) 画像処理装置、画像処理方法及びプログラム
KR101613762B1 (ko) 영상 제공 장치 및 방법
JP2019135810A (ja) 画像処理装置、画像処理方法およびプログラム
JP2010193227A (ja) 映像処理システム
CN101141629A (zh) 基于多级画面分割的快速数字视频控制方法
KR20110037486A (ko) 지능형 영상 감시 장치
JP2016144049A (ja) 画像処理装置、画像処理方法、およびプログラム
KR101874588B1 (ko) 고해상도 카메라를 이용한 다채널 관심영역 표출방법
CN108419045A (zh) 一种基于红外热成像技术的监控方法及装置
JP2005173879A (ja) 融合画像表示装置
JP6312464B2 (ja) 画像処理システム及び画像処理プログラム
JP2021056901A (ja) 情報処理装置、情報処理方法、およびプログラム

Legal Events

Date Code Title Description
AS Assignment

Owner name: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LT

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MATSUMOTO, YUICHI;KAMINO, YOSHIYUKI;WATANABE, TAKESHI;REEL/FRAME:046557/0794

Effective date: 20180419

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION