US20200175282A1 - Image processing device, image processing method, and image processing system - Google Patents
Image processing device, image processing method, and image processing system
- Publication number
- US20200175282A1 (U.S. application Ser. No. 16/784,814)
- Authority: US (United States)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N23/661—Transmitting camera control signals through networks, e.g. control via the Internet
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/49—Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
- G06K9/00765; G06K9/00771; G06K9/2054; G06K9/78; G06K2209/21; H04N5/225; H04N5/232
Definitions
- with the known smoothing technology that uses a plurality of past frame images, the cropped image is moreover affected by the detection position of the object in those past frame images. Therefore, for example, in the case where the object moves fast, a cropped image is generated in which the object appears to move more slowly than it actually does.
- the camera 10 according to the embodiment has been developed in view of the above-described circumstances.
- the camera 10 according to the embodiment is capable of deciding a position of a cropping region in a current frame image in accordance with whether a detection position of an object in the current frame image is within a dead zone region set in a previous frame image.
- the region setting unit 104 is one of the characteristic features of the camera 10 according to the embodiment. Next, with reference to FIG. 7, details of the configuration of the region setting unit 104 according to the embodiment will be described.
- the region setting unit 104 includes an object detection unit 120 , a dead zone region setting unit 122 , a cropping region deciding unit 124 , and a tracking target setting unit 126 .
- the object detection unit 120 detects objects in a frame image acquired by the image capturing unit 100 .
- the object detection unit 120 detects an object of a preset type in the frame image.
- the object detection unit 120 detects objects in the acquired frame image.
- the number of detected objects is smaller than or equal to the number of the video cropping units 106 in the camera 10.
- the types of the detection target objects may include a human and an automobile.
- the types of the detection target objects may further include a ship, an airplane, a motorcycle, a bicycle, and the like.
- the dead zone region setting unit 122 is an example of the first region setting unit according to the present disclosure.
- the dead zone region setting unit 122 sets a dead zone region on the basis of whether the frame image acquired by the image capturing unit 100 (hereinafter, referred to as a current frame image) is a frame image of an initial frame (hereinafter, referred to as an initial frame image). For example, in the case where the current frame image is the initial frame image, the dead zone region setting unit 122 sets a dead zone region in the current frame image such that a detection position of an object detected by the object detection unit 120 in the current frame image is at the center of the dead zone region.
- the initial frame image may be a frame image of an initial frame in which generation of a cropped image starts.
- the dead zone region is an example of the first region according to the present disclosure. Note that, the dead zone region is a region used for deciding a position of a cropping region in a current frame image on the basis of a positional relation with a detection position of an object in the current frame image. Details of the dead zone region will be described later.
- FIG. 8 is an explanatory diagram illustrating an example of setting a dead zone region 60 and a cropping region 40 in the case where the current frame image is the initial frame image.
- the dead zone region setting unit 122 sets the dead zone region 60 in the current frame image such that the detection position 302 of the person detected by the object detection unit 120 is at the center 600 of the dead zone region 60.
- the center of the cropped image is set such that the center of the cropped image is not the same as the object detection position 302 but is the same as the center 600 of the dead zone region 60.
- the center of the cropping region 40 is set at the same position as the center 600 of the dead zone region 60.
- in the case where the current frame image is a frame image other than the initial frame image, the dead zone region setting unit 122 sets a dead zone region on the basis of whether the dead zone region in the previous frame image includes the detection position of the object in the current frame image. For example, in the case where the dead zone region in the previous frame image includes the detection position of the object in the current frame image, the dead zone region setting unit 122 sets the position of the dead zone region in the current frame image at the same position as the dead zone region in the previous frame image. Note that, since the positions of the dead zone regions are the same, the position of the cropping region also becomes the same as in the previous frame image. Details thereof will be described later.
- FIG. 9 is an explanatory diagram illustrating an example of setting a dead zone region 60 and a cropping region 40 in the case where the current frame image is a frame image other than the initial frame image.
- the detection position 302 of the person is included in the dead zone region 60 in the previous frame image, as illustrated in FIG. 9. Therefore, the dead zone region setting unit 122 sets the position of the dead zone region in the current frame image at the same position as the dead zone region 60 in the previous frame image.
- the dead zone region setting unit 122 sets the dead zone region in the current frame image by moving the dead zone region in the previous frame image such that the detection position of the object in the current frame image is within the outline of the dead zone region.
- the dead zone region setting unit 122 sets the dead zone region in the current frame image by moving the dead zone region in the previous frame image in a direction of the detection position of the object in the current frame image by the minimum distance between the outline of the dead zone region in the previous frame image and the detection position of the object in the current frame image.
- FIG. 10 is an explanatory diagram illustrating another example of setting a dead zone region 60 and a cropping region 40 in the case where the current frame image is a frame image other than the initial frame image.
- the detection position 302 of the person is not included in the dead zone region in the previous frame image. Therefore, as illustrated in FIG. 10, the dead zone region setting unit 122 sets the dead zone region 60 in the current frame image 30 such that the detection position 302 of the person overlaps the outline of the dead zone region 60.
- in other words, the dead zone region approaches the detection position of the object by the distance by which the object has gone beyond the outline of the dead zone region in the previous frame image. Accordingly, it is possible to set the dead zone region in the current frame image such that the detection position of the object does not fall outside the region surrounded by the outline.
- the dead zone region setting unit 122 sets the dead zone region in a positional relation such that the dead zone region follows the object every time the position of the object goes out of the dead zone region.
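Viewed as an update rule, the dead zone region behaves like a clamp: its position is unchanged while the detection position stays inside it, and otherwise it slides just far enough that its outline touches the detection position. A minimal sketch of this update, assuming an axis-aligned rectangular dead zone described by its center and half-size (the names are illustrative, not from the disclosure):

```python
def update_dead_zone(center_x, center_y, half_w, half_h, det_x, det_y):
    """Return the dead zone center for the current frame.

    If the detection position (det_x, det_y) lies inside the dead zone,
    the center is unchanged; otherwise the dead zone is moved by the
    minimum distance that puts the detection position on its outline.
    """
    new_x = min(max(center_x, det_x - half_w), det_x + half_w)
    new_y = min(max(center_y, det_y - half_h), det_y + half_h)
    return new_x, new_y
```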
- the dead zone region may have a rectangular shape or a circular shape, for example.
- the dead zone region has a predetermined size.
- the width of the dead zone region may be in a range from a few pixels to half of the width of the cropping region.
- the height of the dead zone region may be in a range from a few pixels to half of the height of the cropping region.
- the size of the dead zone region is basically set in advance. However, it is also possible for an administrator to appropriately change the size of the dead zone region, for example.
- the dead zone region setting unit 122 sets a dead zone region on the basis of a detection position of an object set as a tracking target by the tracking target setting unit 126 (to be described later), for example.
- the dead zone region setting unit 122 sets a position of the dead zone region in the current frame image at the same position as the dead zone region in the previous frame image.
- the dead zone region setting unit 122 sets the dead zone region in the current frame image by moving the dead zone region in the previous frame image such that the detection position of the object set as the tracking target (in the current frame image) is within the outline of the dead zone region.
- the cropping region deciding unit 124 is an example of the cutout region deciding unit according to the present disclosure.
- the cropping region deciding unit 124 sets a cropping region in the current frame image in accordance with whether the current frame image is the initial frame image. For example, in the case where the current frame image is the initial frame image, the cropping region deciding unit 124 sets a cropping region in the current frame image such that a detection position of an object detected by the object detection unit 120 in the current frame image is at the center of the cropping region.
- the cropping region deciding unit 124 sets the cropping region 40 in the current frame image 30 such that the detection position 302 of the person detected by the object detection unit 120 is at the center of the cropping region 40.
- the cropping region deciding unit 124 sets a cropping region in the current frame image on the basis of a positional relation between the detection position of the object detected by the object detection unit 120 and the dead zone region in the previous frame image. For example, in the case where the dead zone region in the previous frame image includes the detection position of the object detected by the object detection unit 120 , the cropping region deciding unit 124 decides to make the position of the cropping region in the current frame image the same as the position of the cropping region in the previous frame image.
- the shape and the size of the cropping region in the current frame image are basically set to be the same as the shape and the size of the cropping region in the previous frame image.
- the cropping region deciding unit 124 decides to make the position of the cropping region 40 in the current frame image 30 the same as the position of the cropping region in the previous frame image.
- the position of the cropping region does not vary between frames in which the detection positions of the object are within the dead zone region set in the initial frame image. In other words, even when the object slightly vibrates, the position of the cropping region does not vary unless the object goes out of the dead zone region. Therefore, it is possible to improve visibility of the cropped images.
- the cropping region deciding unit 124 decides the cropping region in the current frame image such that the center of the dead zone region set by the dead zone region setting unit 122 (in the current frame image) is at the center of the cropping region.
- the cropping region deciding unit 124 decides the cropping region 40 in the current frame image 30 such that the center 600 of the dead zone region 60 set by the dead zone region setting unit 122 in the current frame image 30 is at the center of the cropping region 40.
- a dashed rectangle 44 in FIG. 10 represents the cropping region in the previous frame image.
- the position of the cropping region 44 in the previous frame image is different from the position of the cropping region 40 in the current frame image 30.
- the cropping region deciding unit 124 is capable of deciding a position of a cropping region in the current frame image on the basis of a positional relation between a detection position of an object set as a tracking target by the tracking target setting unit 126 (to be described later) and a dead zone region set with regard to the tracking target object.
- the cropping region deciding unit 124 decides to make the position of the cropping region in the current frame image the same as the position of the cropping region in the previous frame image.
- the cropping region deciding unit 124 decides the cropping region in the current frame image such that the center of the dead zone region set with regard to the object in the current frame image is at the center of the cropping region.
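Combining the two cases above, the per-frame decision can be sketched as follows: the cropping region is left where it was while the tracked detection position stays inside the dead zone region, and otherwise it is re-centered on the moved dead zone region. The rectangular dead zone and the tuple representation are assumptions carried over from the previous sketch.

```python
def decide_cropping_region(prev_crop_center, dz_center, dz_half, det_pos):
    """Decide the cropping region center for the current frame.

    prev_crop_center, dz_center, det_pos: (x, y) tuples.
    dz_half: (half_width, half_height) of the dead zone region.
    Returns (crop_center, dz_center) for the current frame.
    """
    inside = (abs(det_pos[0] - dz_center[0]) <= dz_half[0] and
              abs(det_pos[1] - dz_center[1]) <= dz_half[1])
    if inside:
        # Detection position within the dead zone: keep the previous
        # dead zone and the previous cropping region unchanged.
        return prev_crop_center, dz_center
    # Otherwise, move the dead zone so its outline touches the
    # detection position, and center the cropping region on it.
    new_dz = (min(max(dz_center[0], det_pos[0] - dz_half[0]), det_pos[0] + dz_half[0]),
              min(max(dz_center[1], det_pos[1] - dz_half[1]), det_pos[1] + dz_half[1]))
    return new_dz, new_dz
```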
- the tracking target setting unit 126 sets the tracking target object on the basis of a result of detection of the object performed by the object detection unit 120 .
- the tracking target setting unit 126 sets the tracking target object in the current frame image on the basis of a distance between a specific position set in an image capturing region such as a monitoring target position and a detection position of an object detected by the object detection unit 120 .
- the tracking target setting unit 126 sets the tracking target object in the current frame image to the tracking target object in the previous frame image in the case where the distance between the specific position set in the image capturing range and the detection position of the object detected by the object detection unit 120 is within a predetermined distance.
- the tracking target setting unit 126 sets the tracking target object in the current frame image to an object different from the tracking target object in the previous frame image in the case where the distance between the specific position set in the image capturing range and the detection position of the object detected by the object detection unit 120 exceeds the predetermined distance.
- the object different from the tracking target object in the previous frame image may be an object closest to the specific position.
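Read together, the selection rule can be sketched as follows. The distance is measured from the preset specific position (for example, a monitoring target position) to a detection position; the Euclidean metric, the dictionary representation of detected objects, and the reading that the threshold is applied to the previous tracking target are assumptions of this illustration.

```python
import math

def select_tracking_target(objects, prev_target_id, specific_pos, max_dist):
    """objects: dict mapping object id -> (x, y) detection position."""
    sx, sy = specific_pos

    def dist(oid):
        x, y = objects[oid]
        return math.hypot(x - sx, y - sy)

    # Keep the previous tracking target while it stays close enough
    # to the specific position.
    if prev_target_id in objects and dist(prev_target_id) <= max_dist:
        return prev_target_id
    # Otherwise switch to a different object, e.g. the one closest
    # to the specific position.
    candidates = [oid for oid in objects if oid != prev_target_id]
    return min(candidates, key=dist) if candidates else prev_target_id
```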
- FIG. 11 is a flowchart illustrating an operation example according to the embodiment. As illustrated in FIG. 11, first, the image capturing unit 100 of the camera 10 acquires a frame image by capturing video of an outside when a predetermined image capturing timing comes (S101).
- the video shrinking unit 102 generates a shrunken image by shrinking the frame image acquired in S101 (hereinafter, referred to as the current frame image) down to a predetermined size (S103).
- the camera 10 performs a “cropped image generation process” (to be described later) the same number of times as the number of the video cropping units 106 (in other words, four times) (S105 to S111).
- the communication unit 108 transmits the shrunken image generated in S103 and the four cropped images generated in S107 to the storage 20 (S113).
- the object detection unit 120 of the camera 10 first detects a detection target object in the current frame image (S151).
- the dead zone region setting unit 122 sets a dead zone region in the current frame image such that the detection position of the object detected in S151 is at the center of the dead zone region (S155).
- the cropping region deciding unit 124 decides the cropping region in the current frame image such that the detection position of the object detected in S151 is at the center of the cropping region (S157).
- the camera 10 then performs the operation in S167 to be described later.
- the dead zone region setting unit 122 first determines whether the detection position of the object detected in S151 is within the dead zone region in the previous frame image (S161).
- the dead zone region setting unit 122 sets a position of the dead zone region in the current frame image to the same position as the dead zone region in the previous frame image.
- the cropping region deciding unit 124 decides to make the position of the cropping region in the current frame image the same as the position of the cropping region in the previous frame image (S169).
- the camera 10 then performs the operation in S167 to be described later.
- the dead zone region setting unit 122 sets the dead zone region in the current frame image by moving the dead zone region in the previous frame image such that the detection position of the object in the current frame image is within the outline of the dead zone region (S163).
- the cropping region deciding unit 124 decides the cropping region in the current frame image such that the center of the dead zone region set in S163 (in the current frame image) is at the center of the cropping region (S165).
- the video cropping unit 106 generates a cropped image by cutting out the cropping region decided in S157, S165, or S169 from the current frame image (S167).
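Tying steps S151 to S169 together, one pass of the cropped image generation process for a single tracked object could look like the following sketch. The state carried between frames, the tuple formats, and the omission of frame-boundary clamping are assumptions made for brevity.

```python
def cropped_image_generation(frame, det_pos, state, crop_w, crop_h, dz_half):
    """One pass of the per-frame process (cf. S151-S169).

    state: None on the initial frame, else (dz_center, crop_center).
    det_pos: detection position of the target object in `frame` (S151).
    Returns (cropped_image, new_state).
    """
    if state is None:
        # Initial frame: dead zone and cropping region are both
        # centered on the detection position (S155, S157).
        dz_center = crop_center = det_pos
    else:
        dz_center, crop_center = state
        inside = (abs(det_pos[0] - dz_center[0]) <= dz_half[0] and
                  abs(det_pos[1] - dz_center[1]) <= dz_half[1])
        if not inside:
            # Move the dead zone so its outline touches the detection
            # position (S163) and re-center the crop on it (S165);
            # otherwise both stay where they were (S161 -> S169).
            dz_center = (
                min(max(dz_center[0], det_pos[0] - dz_half[0]), det_pos[0] + dz_half[0]),
                min(max(dz_center[1], det_pos[1] - dz_half[1]), det_pos[1] + dz_half[1]))
            crop_center = dz_center

    # Cut out the cropping region from the current frame (S167);
    # frame-boundary clamping is omitted here for brevity.
    x0, y0 = int(crop_center[0] - crop_w // 2), int(crop_center[1] - crop_h // 2)
    cropped = frame[y0:y0 + crop_h, x0:x0 + crop_w]
    return cropped, (dz_center, crop_center)
```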
- the camera 10 decides a cropping region in a current frame image on the basis of a positional relation between a detection position of an object in the current frame image and a dead zone region set in a previous frame image. Therefore, it is possible to decide a cutout region in accordance with a magnitude of change in the detection position of the object between the current frame image and the previous frame image.
- the camera 10 decides to make the position of the cropping region in the current frame image the same as the position of the cropping region in the previous frame image. Therefore, the position of the cropping region does not vary between frames in which the detection positions of the object are within the dead zone region set in the initial frame image. In other words, even when the object slightly vibrates, the position of the cropping region does not vary unless the object goes out of the dead zone region. Therefore, it is possible to improve visibility of the cropped images.
- the camera 10 moves the dead zone region closer to the detection position of the object by the distance by which the object has gone beyond the outline of the dead zone region in the previous frame image.
- the camera 10 decides the cropping region in the current frame image such that the center of the moved dead zone region is at the center of the cropping region. Therefore, it is possible to set the dead zone region at a position in contact with the object even in the case where the object moves fast. Accordingly, the user feels as if the object moves smoothly between the successive cropped images, and high visibility can be obtained.
- the camera 10 can generate cropped images in real time.
- the camera 10 does not have to transmit the frame image to another device such as a server to generate the shrunken image and the cropped images, and it is possible to reduce communication traffic.
- the monitoring terminal 22 may serve as the image processing device according to the present disclosure in the case where (the control unit 220 of) the monitoring terminal 22 includes all of the video shrinking unit 102, the region setting unit 104, and the plurality of video cropping units 106 in place of the camera 10.
- a separately provided server may serve as the image processing device according to the present disclosure in the case where the server is capable of communicating with the camera 10 via the communication network 24 and the server includes all of the video shrinking unit 102, the region setting unit 104, and the plurality of video cropping units 106 in place of the camera 10.
- the server may be integrated with the storage 20 .
- the present technology may also be configured as below.
- An image processing device including:
- a first region setting unit configured to set a first region including a detection position of an object in a cutout region in a first frame image; and
- a cutout region deciding unit configured to decide a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image.
- the cutout region deciding unit decides whether to make a position of the cutout region in the second frame image the same as the cutout region in the first frame image, on the basis of the positional relation between the first region and the detection position of the object in the second frame image.
- the cutout region deciding unit decides to make the position of the cutout region in the second frame image the same as the cutout region in the first frame image.
- the first region setting unit moves the position of the first region such that the detection position of the object in the second frame image overlaps an outline of the first region
- the cutout region deciding unit decides the cutout region in the second frame image by setting a center of the moved first region to a center of the cutout region in the second frame image.
- a shape and a size of the cutout region in the second frame image are the same as a shape and a size of the cutout region in the first frame image.
- the first region setting unit sets a first region including a detection position of each of a plurality of objects in the cutout region in the first frame image
- the image processing device further includes a tracking target setting unit configured to set any one of the plurality of objects as a tracking target, and
- the cutout region deciding unit decides whether to make the position of the cutout region in the second frame image the same as the cutout region in the first frame image, on the basis of a positional relation between a detection position of a tracking target object set by the tracking target setting unit and the first region set with respect to the tracking target object.
- a cutout image generation unit configured to generate a cutout image by cutting out the cutout region in the second frame image decided by the cutout region deciding unit, from the second frame image.
- the object is a human or an automobile.
- An image processing method including: setting a first region including a detection position of an object in a cutout region in a first frame image; and deciding a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image.
- An image processing system including:
- a first region setting unit configured to set a first region including a detection position of an object in a cutout region in a first frame image
- a cutout region deciding unit configured to decide a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image;
- a cutout image generation unit configured to generate a cutout image by cutting out the cutout region in the second frame image decided by the cutout region deciding unit, from the second frame image;
- a storage unit configured to store the generated cutout image.
Abstract
The present disclosure provides an image processing device, image processing method, and image processing system that are capable of deciding a cutout region in accordance with change in the detection position of the same object between frame images captured at different times. The image processing device includes: a first region setting unit configured to set a first region including a detection position of an object in a cutout region in a first frame image; and a cutout region deciding unit configured to decide a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image.
Description
- This application is a continuation of U.S. application Ser. No. 15/559,506, filed Sep. 19, 2017, which is a National Stage of PCT/JP2016/054525, filed Feb. 17, 2016, and claims the benefit of priority under 35 U.S.C. § 119 of Japanese Application No. 2015-082274, filed Apr. 14, 2015, the entire contents of which are incorporated herein by reference.
- The present disclosure relates to image processing devices, image processing methods, and image processing systems.
- Conventionally, various kinds of technologies for cutting out a region of an object such as a detection target person from a captured image have been developed.
- For example, Patent Literature 1 describes a technology for detecting moving objects in an image captured by a camera with a fisheye lens and cutting out a circumscribed quadrangle region of each of the detected moving objects. In addition, Patent Literature 2 describes a technology for treating detected people that are detected in a captured image and that have distances between each other that are less than a threshold value as the same group, and cutting out an image along a frame surrounding the same group.
- Patent Literature 1: JP 2001-333422A
- Patent Literature 2: JP 2012-253723A
- However, when using the technology described in Patent Literature 1 or Patent Literature 2, the position of the cutout region is limited. For example, according to the technology described in Patent Literature 1, every time the position of the person changes, the cutout region is always decided on the basis of the position of the person after moving, regardless of the magnitude of the change in the person's position.
- Accordingly, the present disclosure proposes a novel and improved image processing device, image processing method, and image processing system that are capable of deciding a cutout region in accordance with change in the detection position of the same object between frame images captured at different times.
- According to the present disclosure, there is provided an image processing device including: a first region setting unit configured to set a first region including a detection position of an object in a cutout region in a first frame image; and a cutout region deciding unit configured to decide a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image.
- In addition, according to the present disclosure, there is provided an image processing method including: setting a first region including a detection position of an object in a cutout region in a first frame image; and deciding a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image.
- In addition, according to the present disclosure, there is provided an image processing system including: a first region setting unit configured to set a first region including a detection position of an object in a cutout region in a first frame image; a cutout region deciding unit configured to decide a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image; a cutout image generation unit configured to generate a cutout image by cutting out the cutout region in the second frame image decided by the cutout region deciding unit, from the second frame image; and a storage unit configured to store the generated cutout image.
- As described above, according to the present disclosure, it is possible to decide a cutout region in accordance with change in the detection position of the same object between frame images captured at different times. Note that the effects described here are not necessarily limiting, and any of the effects described in the present disclosure may be exhibited.
- FIG. 1 is an explanatory diagram illustrating a configuration example of an image processing system according to an embodiment of the present disclosure.
- FIG. 2 is an explanatory diagram illustrating an example of a shrunken image 32 generated by a camera 10.
- FIG. 3 is an explanatory diagram illustrating an example of a plurality of cropped images 50 generated from a frame image 30.
- FIG. 4 is a functional block diagram illustrating a configuration of the camera 10 according to the embodiment.
- FIG. 5 is an explanatory diagram illustrating a relation between the frame image 30 and a cropping region 40.
- FIG. 6 is a functional block diagram illustrating a configuration of a monitoring terminal 22 according to the embodiment.
- FIG. 7 is a functional block diagram illustrating a configuration of a region setting unit 104 according to the embodiment.
- FIG. 8 is an explanatory diagram illustrating an example of setting a dead zone region 60 according to the embodiment.
- FIG. 9 is an explanatory diagram illustrating an example of deciding a cropping region in the case where a detection position of a person is changed according to the embodiment.
- FIG. 10 is an explanatory diagram illustrating another example of deciding a cropping region in the case where a detection position of a person is changed according to the embodiment.
- FIG. 11 is a flowchart illustrating operation according to the embodiment.
- FIG. 12 is a flowchart illustrating a part of operation of a cropped image generation process according to the embodiment.
- FIG. 13 is a flowchart illustrating a part of operation of the cropped image generation process according to the embodiment.
- Hereinafter, (a) preferred embodiment(s) of the present disclosure will be described in detail with reference to the appended drawings. In this specification and the appended drawings, structural elements that have substantially the same function and structure are denoted with the same reference numerals, and repeated explanation of these structural elements is omitted.
- Note that, in this specification and the drawings, structural elements that have substantially the same function and structure are sometimes distinguished from each other by different letters appended to the same reference numeral. For example, structural elements that have substantially the same function and structure are distinguished into a video cropping unit 106a and a video cropping unit 106b as necessary. However, when there is no need in particular to distinguish structural elements that have substantially the same function and structure, only the same reference numeral is attached. For example, in a case where it is not necessary to distinguish the video cropping unit 106a and the video cropping unit 106b from each other, they are simply referred to as the video cropping units 106.
- In addition, description proceeds in this section “Mode(s) for Carrying Out the Invention” in the following order.
- 1. Basic configuration of image processing system
- 2. Detailed description of embodiment
- As specifically described in “2. Detailed description of embodiment” as an example, the present disclosure may be executed in a variety of forms. First, with reference to FIG. 1, a basic configuration of the image processing system according to the embodiment will be described.
- As illustrated in FIG. 1, the image processing system according to the embodiment includes a camera 10, a storage 20, a monitoring terminal 22, and a communication network 24.
- The camera 10 is an example of the image processing device according to the present disclosure. The camera 10 is a device for capturing moving images of an external environment. The camera 10 may be installed in a place crowded with people and automobiles, a monitoring target place, or the like. For example, the camera 10 may be installed in a road, a station, an airport, a commercial building, an amusement park, a park, a parking lot, a restricted area, or the like.
- In addition, the camera 10 is capable of generating another image by using a captured frame image, and transmitting the generated image to another device via the communication network 24 to be described later. Here, the frame image is, for example, an image with the maximum resolution the camera 10 can capture. For example, the frame image may be a 4K image.
- For example, the camera 10 generates another image with a smaller data volume on the basis of the frame image. This is because it is not preferable to transmit the frame image itself to another device: transmission of the frame image, with its large data volume, takes a long time.
- Here, examples of such images generated by the camera 10 include a shrunken image obtained by simply reducing the resolution of the frame image, and a cropped image obtained by cropping (cutting out) a gaze target region. For example, the shrunken image may be a full HD image.
- FIG. 2 is an explanatory diagram illustrating an example of a shrunken image (shrunken image 32). The shrunken image 32 includes all regions included in the frame image. However, as illustrated in FIG. 2, a gaze target region such as a face of a person may be so small in the shrunken image 32 that it is difficult to see. Note that the regions 40 illustrated in FIG. 2 are regions corresponding to cropping regions to be described later. In general, a cropping region is set within a frame image; however, for convenience of description, the regions corresponding to the cropping regions in the shrunken image 32 in FIG. 2 are referred to as the regions 40.
- In addition, FIG. 3 is an explanatory diagram illustrating an example of a plurality of cropped images (a set 52 of the cropped images 50) generated from one frame image. Although the cropped images 50 have the same resolution as the frame image, each of them includes only a partial region of the frame image, as illustrated in FIG. 3. Accordingly, the camera 10 according to the embodiment basically generates one shrunken image and one or more cropped images from one frame image. With such generation, a user can check the entire scene captured by the camera 10 and can also check each gaze target region at high resolution, while the total data volume is reduced in comparison with the frame image.
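To make the two kinds of output concrete, the following is a minimal sketch of how one full-resolution frame could be turned into a shrunken overview image plus native-resolution crops. The use of OpenCV, the function name, and the region format are assumptions of this illustration, not the patent's implementation.

```python
import cv2  # assumed available; any resize library would do
import numpy as np

def make_outputs(frame: np.ndarray, crop_regions, shrink_size=(1920, 1080)):
    """Generate one shrunken overview image (e.g., full HD) and one
    native-resolution cropped image per region from a single frame.

    frame: full-resolution frame image, shape (H, W, 3), e.g. 4K.
    crop_regions: iterable of (x, y, crop_width, crop_height) tuples.
    """
    # Shrunken image: the whole scene, simply reduced in resolution.
    shrunken = cv2.resize(frame, shrink_size, interpolation=cv2.INTER_AREA)
    # Cropped images: full-resolution cutouts of the gaze target regions.
    cropped = [frame[y:y + h, x:x + w] for (x, y, w, h) in crop_regions]
    return shrunken, cropped
```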
- Next, with reference to FIG. 4, an internal configuration of the camera 10 will be described. As illustrated in FIG. 4, the camera 10 includes an image capturing unit 100, a video shrinking unit 102, a region setting unit 104, a plurality of video cropping units 106, and a communication unit 108. Note that, although FIG. 4 shows an example in which there are four video cropping units 106, the number of video cropping units 106 is not limited thereto; any number may be provided as long as there is at least one.
- The image capturing unit 100 has a function of acquiring the frame image by causing an image sensor such as a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor to form an image of video of the outside through a lens. For example, the image capturing unit 100 acquires a frame image by capturing video of the outside at a predetermined frame rate.
- The video shrinking unit 102 generates the shrunken image by shrinking the frame image acquired by the image capturing unit 100 down to a predetermined size.
- The region setting unit 104 sets cropping regions in the frame image acquired by the image capturing unit 100. A cropped image is generated on the basis of each cropping region. For example, the region setting unit 104 sets, in the frame image acquired by the image capturing unit 100, the same number of cropping regions as the number of the video cropping units 106 in the camera 10.
- FIG. 5 is an explanatory diagram illustrating an example in which the region setting unit 104 sets a cropping region. Note that, in FIG. 5, “crop_width” represents the width of the cropping region, and “crop_height” represents the height of the cropping region.
- As illustrated in FIG. 5, the region setting unit 104 detects a detection target object such as a person 300 in the frame image 30, and sets the cropping region 40 on the basis of a detection position 302 of the object, as sketched below.
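A cropping region of size crop_width × crop_height centered on the detection position can be derived as follows. The clamping to the frame bounds is an added assumption so the region stays inside the frame image; it is not spelled out in the text above.

```python
def center_crop_region(det_x, det_y, crop_width, crop_height, frame_w, frame_h):
    """Place a crop_width x crop_height cropping region so that the
    detection position (det_x, det_y) is at its center."""
    x = det_x - crop_width // 2
    y = det_y - crop_height // 2
    # Keep the cropping region inside the frame image (assumption).
    x = max(0, min(x, frame_w - crop_width))
    y = max(0, min(y, frame_h - crop_height))
    return x, y, crop_width, crop_height
```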
- The video cropping unit 106 is an example of the cutout image generation unit according to the present disclosure. The video cropping unit 106 generates a cropped image by cutting out the cropping region set by the region setting unit 104 from the frame image acquired by the image capturing unit 100.
- For example, FIG. 3 illustrates four cropped images 50 respectively generated by the four video cropping units 106. As illustrated in FIG. 3, for example, the video cropping unit 106a generates a cropped image 50a from the cropping region corresponding to the region 40a illustrated in FIG. 2 that is set by the region setting unit 104. Likewise, the video cropping unit 106b generates a cropped image 50b from the cropping region corresponding to the region 40b illustrated in FIG. 2 that is set by the region setting unit 104.
- Via the communication network 24 to be described later, the communication unit 108 exchanges various kinds of information with devices connected with the communication network 24. For example, the communication unit 108 transmits, to the storage 20, the shrunken image acquired by the video shrinking unit 102 and the plurality of cropped images generated by the plurality of video cropping units 106.
- The storage 20 is a storage device configured to store the shrunken images and cropped images received from the camera 10. For example, the storage 20 stores each received shrunken image and the plurality of received cropped images in association with identification information of the camera 10 and the image capturing date and time. Note that the storage 20 may be installed in a data center, a monitoring center where observers are working, or the like.
- The monitoring terminal 22 is an information processing terminal configured to display the shrunken image and the cropped images generated by the camera 10. For example, the monitoring terminal 22 may be installed in the monitoring center and used by observers.
- Next, details of the configuration of the monitoring terminal 22 will be described. FIG. 6 is a functional block diagram illustrating the configuration of the monitoring terminal 22 according to the embodiment. As illustrated in FIG. 6, the monitoring terminal 22 includes a control unit 220, a communication unit 222, a display unit 224, and an input unit 226.
- The control unit 220 controls the entire operation of the monitoring terminal 22 by using hardware such as a central processing unit (CPU), random access memory (RAM), and read only memory (ROM) embedded in the monitoring terminal 22.
- Via the communication network 24 to be described later, the communication unit 222 exchanges various kinds of information with devices connected to the communication network 24. For example, the communication unit 222 receives, from the storage 20, the shrunken image and the cropped images stored in the storage 20.
- Note that, it is also possible for the communication unit 222 to receive the shrunken image and the plurality of cropped images generated by the camera 10 directly from the camera 10.
- The display unit 224 is implemented by a display such as a liquid crystal display (LCD) or an organic light emitting diode (OLED) display. For example, the display unit 224 displays a monitoring screen including the shrunken image and the cropped images received from the storage 20.
- The input unit 226 includes an input device such as a mouse, a keyboard, a touchscreen, or a microphone. The input unit 226 receives various kinds of input performed by the user on the monitoring terminal 22.
- The communication network 24 is a wired or wireless communication channel through which information is transmitted between devices connected to the communication network 24. For example, the communication network 24 may include a public network, various kinds of local area networks (LANs), a wide area network (WAN), and the like. The public network includes the Internet, a satellite communication network, a telephone network, and the like, and the LANs include Ethernet (registered trademark). In addition, the communication network 24 may include a dedicated line network such as an Internet Protocol Virtual Private Network (IP-VPN).
- Note that, the image processing system according to the embodiment is not limited to the above-described configuration. For example, the storage 20 may be integrated with the monitoring terminal 22. Alternatively, the image processing system does not have to include the storage 20 or the monitoring terminal 22.
- As described above, the region setting unit 104 sets a cropping region on the basis of the detection position of the object detected in the frame image.
- One method for setting the cropping region is to set the cropping region such that the detection position of the detection target object is at its center. This setting method generates a cropped image in which the user can easily see the detection target object.
- However, this setting method has a problem: the cropped image vibrates slightly whenever the detection position of the object vibrates slightly in the captured moving image.
- To solve this problem, a known technology has proposed smoothing the detection positions of an object over a plurality of past frame images. However, with this known technology, the vibration cannot be removed completely when short smoothing time intervals are set, so the cropped image still vibrates slightly.
- In addition, with the known technology, the cropped image is affected by the detection positions of the object in past frame images. Therefore, when the object moves fast, for example, the generated cropped image makes the object seem to move more slowly than it actually does.
- Therefore, the camera 10 according to the embodiment has been developed in view of the above-described circumstances. The camera 10 according to the embodiment is capable of deciding the position of a cropping region in a current frame image in accordance with whether a detection position of an object in the current frame image is within a dead zone region set in a previous frame image.
- The region setting unit 104 is one of the characteristic features of the camera 10 according to the embodiment. Next, with reference to FIG. 7, details of the configuration of the region setting unit 104 according to the embodiment will be described.
- As illustrated in FIG. 7, the region setting unit 104 includes an object detection unit 120, a dead zone region setting unit 122, a cropping region deciding unit 124, and a tracking target setting unit 126.
- The object detection unit 120 detects objects in a frame image acquired by the image capturing unit 100. For example, the object detection unit 120 detects objects of preset types in the frame image, up to a number smaller than or equal to the number of video cropping units 106 in the camera 10. The types of detection target objects may include a human and an automobile, and may further include a ship, an airplane, a motorcycle, a bicycle, and the like.
- The dead zone region setting unit 122 is an example of the first region setting unit according to the present disclosure. The dead zone region setting unit 122 sets a dead zone region on the basis of whether the frame image acquired by the image capturing unit 100 (hereinafter referred to as a current frame image) is the frame image of an initial frame (hereinafter referred to as an initial frame image). For example, in the case where the current frame image is the initial frame image, the dead zone region setting unit 122 sets a dead zone region in the current frame image such that a detection position of an object detected by the object detection unit 120 in the current frame image is at the center of the dead zone region.
- Here, the initial frame image may be the frame image of the initial frame in which generation of a cropped image starts. The dead zone region is an example of the first region according to the present disclosure. Note that, the dead zone region is a region used for deciding the position of a cropping region in a current frame image on the basis of a positional relation with a detection position of an object in the current frame image. Details of the dead zone region will be described later.
- Next, with reference to FIG. 8, details of the function of the dead zone region setting unit 122 will be described. FIG. 8 is an explanatory diagram illustrating an example of setting a dead zone region 60 and a cropping region 40 in the case where the current frame image is the initial frame image. For example, in the case where the current frame image is the initial frame image and the person 300 illustrated in FIG. 8 is a detection target object, the dead zone region setting unit 122 sets the dead zone region 60 in the current frame image such that the detection position 302 of the person detected by the object detection unit 120 is at the center 600 of the dead zone region 60.
- Note that, although details will be described later, hereinafter the center of the cropping region is set to coincide not with the object detection position 302 but with the center 600 of the dead zone region 60. In other words, even if the object detection position 302 moves away from the center 600 of the dead zone region 60 in a subsequent frame, the center of the cropping region 40 is kept at the same position as the center 600 of the dead zone region 60.
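- In code, the initial-frame setup can be sketched as below; the region sizes are passed in as parameters because the specification leaves the concrete dead zone and cropping sizes to configuration.

```python
def init_regions(detection, dead_zone_size, crop_size):
    """Initial frame: center both the dead zone region and the cropping region
    on the detection position. Regions are (left, top, width, height) tuples."""
    x, y = detection
    dz_w, dz_h = dead_zone_size
    c_w, c_h = crop_size
    dead_zone = (x - dz_w // 2, y - dz_h // 2, dz_w, dz_h)
    crop = (x - c_w // 2, y - c_h // 2, c_w, c_h)
    return dead_zone, crop
```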
- Case 1 -
- In addition, in the case where the current frame image is the frame image of a frame subsequent to the initial frame, the dead zone region setting unit 122 sets a dead zone region on the basis of whether the dead zone region in the previous frame image includes the detection position of the object in the current frame image. For example, in the case where the dead zone region in the previous frame image includes the detection position of the object in the current frame image, the dead zone region setting unit 122 sets the position of the dead zone region in the current frame image at the same position as the dead zone region in the previous frame image. Note that, since the positions of the dead zone regions are the same, the position of the cropping region also remains the same as in the previous frame image. Details thereof will be described later.
- Next, with reference to FIG. 9, details of the above-described function will be described. FIG. 9 is an explanatory diagram illustrating an example of setting a dead zone region 60 and a cropping region 40 in the case where the current frame image is a frame image other than the initial frame image. In the case where the person 300 in the example illustrated in FIG. 9 is a detection target object, the detection position 302 of the person is included in the dead zone region 60 in the previous frame image as illustrated in FIG. 9. Therefore, the dead zone region setting unit 122 sets the position of the dead zone region in the current frame image at the same position as the dead zone region 60 in the previous frame image.
- Case 2 -
- In addition, in the case where the dead zone region in the previous frame image does not include the detection position of the object in the current frame image, the dead zone region setting unit 122 sets the dead zone region in the current frame image by moving the dead zone region in the previous frame image such that the detection position of the object in the current frame image is within the outline of the dead zone region. In this case, for example, the dead zone region setting unit 122 sets the dead zone region in the current frame image by moving the dead zone region in the previous frame image in the direction of the detection position of the object in the current frame image by the minimum distance between the outline of the dead zone region in the previous frame image and the detection position of the object in the current frame image.
- Next, with reference to FIG. 10, details of the above-described function will be described. FIG. 10 is an explanatory diagram illustrating another example of setting a dead zone region 60 and a cropping region 40 in the case where the current frame image is a frame image other than the initial frame image. In the case where the person 300 in the example illustrated in FIG. 10 is a detection target object, the detection position 302 of the person is not included in the dead zone region in the previous frame image. Therefore, as illustrated in FIG. 10, the dead zone region setting unit 122 sets the dead zone region 60 in the current frame image 30 such that the detection position 302 of the person overlaps the outline of the dead zone region 60. According to this setting example, in the case where the object leaves the dead zone region set in the previous frame image, such as when the object moves fast, the dead zone region approaches the detection position of the object by exactly the distance by which the object exceeded the outline of the dead zone region in the previous frame image. Accordingly, it is possible to set the dead zone region in the current frame image such that the detection position of the object is not outside the outline of the dead zone region.
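- A minimal sketch of Cases 1 and 2 together, assuming rectangular regions and per-axis clamping (which yields exactly the minimum-distance move described above):

```python
def update_dead_zone(dead_zone, detection):
    """Move the dead zone region only as far as needed so that the detection
    position lies on or inside its outline (Case 2); if the position is
    already inside, the region stays where it is (Case 1).
    Returns the updated region and whether it moved."""
    left, top, w, h = dead_zone
    x, y = detection
    dx = min(0, x - left) + max(0, x - (left + w))  # negative if left of region, positive if right
    dy = min(0, y - top) + max(0, y - (top + h))
    return (left + dx, top + dy, w, h), (dx != 0 or dy != 0)
```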
- Note that, even in the case where a detection position of the object detected in the next frame image is outside the dead zone region in the current frame image, the dead zone region setting unit 122 can likewise set the dead zone region in the next frame image such that the detection position of the object is not outside the outline of the dead zone region. In other words, the dead zone region setting unit 122 sets the dead zone region in a positional relation such that the dead zone region follows the object every time the position of the object goes out of the dead zone region.
- Note that, the dead zone region may have a rectangular shape or a circular shape, for example. In addition, the dead zone region has a predetermined size. For example, in the case where the dead zone region has a rectangular shape, the width of the dead zone region may range from a few pixels to half of the width of the cropping region, and the height of the dead zone region may range from a few pixels to half of the height of the cropping region.
- Note that, the size of the dead zone region is assumed to be set in advance; however, it is also possible for an administrator to change the size of the dead zone region as appropriate.
- Note that, in the case where the current frame image includes a plurality of objects, the dead zone region setting unit 122 sets a dead zone region on the basis of the detection position of the object set as the tracking target by the tracking target setting unit 126 (to be described later), for example.
- For example, in the case where the dead zone region in the previous frame image includes the detection position (in the current frame image) of the object set as the tracking target, the dead zone region setting unit 122 sets the position of the dead zone region in the current frame image at the same position as the dead zone region in the previous frame image. In addition, in the case where the dead zone region in the previous frame image does not include the detection position (in the current frame image) of the object set as the tracking target, the dead zone region setting unit 122 sets the dead zone region in the current frame image by moving the dead zone region in the previous frame image such that that detection position is within the outline of the dead zone region.
- The cropping region deciding unit 124 is an example of the cutout region deciding unit according to the present disclosure. The cropping region deciding unit 124 sets a cropping region in the current frame image in accordance with whether the current frame image is the initial frame image. For example, in the case where the current frame image is the initial frame image, the cropping region deciding unit 124 sets a cropping region in the current frame image such that a detection position of an object detected by the object detection unit 120 in the current frame image is at the center of the cropping region.
- For example, in the example illustrated in FIG. 8, the cropping region deciding unit 124 sets the cropping region 40 in the current frame image 30 such that the detection position 302 of the person detected by the object detection unit 120 is at the center of the cropping region 40.
- Case 1 -
- In the case where the current frame image is a frame image other than the initial frame image, the cropping region deciding unit 124 sets a cropping region in the current frame image on the basis of a positional relation between the detection position of the object detected by the object detection unit 120 and the dead zone region in the previous frame image. For example, in the case where the dead zone region in the previous frame image includes the detection position of the object detected by the object detection unit 120, the cropping region deciding unit 124 decides to make the position of the cropping region in the current frame image the same as the position of the cropping region in the previous frame image. Note that, the shape and the size of the cropping region in the current frame image are basically set to be the same as the shape and the size of the cropping region in the previous frame image.
- In the example illustrated in FIG. 9, the detection position 302 of the person is included in the dead zone region 60 in the previous frame image. Therefore, the cropping region deciding unit 124 decides to make the position of the cropping region 40 in the current frame image 30 the same as the position of the cropping region in the previous frame image.
- According to this decision example, the position of the cropping region does not vary between frames in which the detection positions of the object are within the dead zone region set in the initial frame image. In other words, even when the object vibrates slightly, the position of the cropping region does not vary unless the object goes out of the dead zone region. Therefore, it is possible to improve the visibility of the cropped images.
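- To make the stabilizing effect concrete, here is a small usage sketch of the update_dead_zone function above: a detection position that merely jitters inside the dead zone never moves it, so the cropping region stays fixed.

```python
dead_zone = (100, 100, 40, 40)  # (left, top, width, height), set in the initial frame
for jittered_detection in [(118, 121), (122, 118), (119, 120)]:  # slight vibration
    dead_zone, moved = update_dead_zone(dead_zone, jittered_detection)
    assert not moved  # Case 1 applies: the cropping region is left unchanged
```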
- Case 2 -
- In addition, in the case where the dead zone region in the previous frame image does not include the detection position of the object detected by the object detection unit 120, the cropping region deciding unit 124 decides the cropping region in the current frame image such that the center of the dead zone region set by the dead zone region setting unit 122 (in the current frame image) is at the center of the cropping region.
- In the example illustrated in FIG. 10, the cropping region deciding unit 124 decides the cropping region 40 in the current frame image 30 such that the center 600 of the dead zone region 60 set by the dead zone region setting unit 122 in the current frame image 30 is at the center of the cropping region 40. Note that, a dashed rectangle 44 in FIG. 10 represents the cropping region in the previous frame image. As illustrated in FIG. 10, the position of the cropping region 44 in the previous frame image is different from the position of the cropping region 40 in the current frame image 30.
- Note that, in the case where the current frame image includes a plurality of objects, for example, the cropping region deciding unit 124 is capable of deciding the position of a cropping region in the current frame image on the basis of a positional relation between the detection position of the object set as the tracking target by the tracking target setting unit 126 (to be described later) and the dead zone region set with regard to that object.
- For example, in the case where the dead zone region set with regard to the object includes the detection position of the object set as the tracking target by the tracking target setting unit 126, the cropping region deciding unit 124 decides to make the position of the cropping region in the current frame image the same as the position of the cropping region in the previous frame image. In addition, in the case where the dead zone region in the previous frame image set with regard to the object does not include the detection position of the object set as the tracking target by the tracking target setting unit 126, the cropping region deciding unit 124 decides the cropping region in the current frame image such that the center of the dead zone region set with regard to the object in the current frame image is at the center of the cropping region.
- The tracking target setting unit 126 sets the tracking target object on the basis of the result of object detection performed by the object detection unit 120. For example, the tracking target setting unit 126 sets the tracking target object in the current frame image on the basis of the distance between a specific position set in the image capturing range, such as a monitoring target position, and the detection position of an object detected by the object detection unit 120.
- For example, the tracking target setting unit 126 keeps the tracking target object in the current frame image the same as the tracking target object in the previous frame image in the case where the distance between the specific position set in the image capturing range and the detection position of the object detected by the object detection unit 120 is within a predetermined distance. On the other hand, the tracking target setting unit 126 sets the tracking target object in the current frame image to an object different from the tracking target object in the previous frame image in the case where that distance exceeds the predetermined distance. For example, the different object may be the object closest to the specific position.
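- This selection rule might be sketched as follows; the Euclidean distance metric is an assumption, since the specification only speaks of a distance to the specific position.

```python
import math

def select_tracking_target(detections, previous_target, specific_position, max_distance):
    """Keep the previous tracking target while its detection position stays
    within max_distance of the specific position; otherwise switch to the
    detection closest to that position. Positions are (x, y) tuples."""
    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    if previous_target is not None and distance(previous_target, specific_position) <= max_distance:
        return previous_target
    return min(detections, key=lambda d: distance(d, specific_position))
```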
- The configurations according to the embodiment have been described above. Next, with reference to FIG. 11 to FIG. 13, operation according to the embodiment will be described. An operation example will be described in which the camera 10 includes four video cropping units 106, and one shrunken image and four cropped images are generated from one frame image. Note that, this operation is repeated at a predetermined frame rate.
- FIG. 11 is a flowchart illustrating an operation example according to the embodiment. As illustrated in FIG. 11, first, the image capturing unit 100 of the camera 10 acquires a frame image by capturing video of an outside when a predetermined image capturing timing comes (S101).
- Next, the video shrinking unit 102 generates a shrunken image by shrinking the frame image acquired in S101 (hereinafter referred to as the current frame image) down to a predetermined size (S103).
- Subsequently, the camera 10 performs a "cropped image generation process" (to be described later) the same number of times as there are video cropping units 106 (in other words, four times) (S105 to S111).
- Next, the communication unit 108 transmits the shrunken image generated in S103 and the four cropped images generated in S107 to the storage 20 (S113).
- Next, with reference to FIG. 12 and FIG. 13, details of the operation in the "cropped image generation process" of S107 will be described. As illustrated in FIG. 12, the object detection unit 120 of the camera 10 first detects a detection target object in the current frame image (S151).
- Next, in the case where the current frame image is the initial frame image (Yes in S153), the dead zone region setting unit 122 sets a dead zone region in the current frame image such that the detection position of the object detected in S151 is at the center of the dead zone region (S155).
- Subsequently, the cropping region deciding unit 124 decides the cropping region in the current frame image such that the detection position of the object detected in S151 is at the center of the cropping region (S157). The camera 10 then performs the operation of S167 to be described later.
- Next, with reference to FIG. 13, an example of the operation performed in the case where the current frame image is not the initial frame image (No in S153) will be described. As illustrated in FIG. 13, the dead zone region setting unit 122 first determines whether the detection position of the object detected in S151 is within the dead zone region in the previous frame image (S161).
- In the case where the dead zone region in the previous frame image includes the detection position of the object (Yes in S161), the dead zone region setting unit 122 sets the position of the dead zone region in the current frame image to the same position as the dead zone region in the previous frame image. Next, the cropping region deciding unit 124 decides to make the position of the cropping region in the current frame image the same as the position of the cropping region in the previous frame image (S169). The camera 10 then performs the operation of S167 to be described later.
- On the other hand, in the case where the detection position of the object is out of the dead zone region in the previous frame image (No in S161), the dead zone region setting unit 122 sets the dead zone region in the current frame image by moving the dead zone region in the previous frame image such that the detection position of the object in the current frame image is within the outline of the dead zone region (S163).
- Next, the cropping region deciding unit 124 decides the cropping region in the current frame image such that the center of the dead zone region set in S163 (in the current frame image) is at the center of the cropping region (S165).
- Subsequently, the video cropping unit 106 generates a cropped image by cutting out the cropping region decided in S157, S165, or S169 from the current frame image (S167).
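- Putting the pieces together, one pass of this cropped image generation process (S151 to S167) can be sketched as follows, reusing the init_regions and update_dead_zone helpers above and assuming the frame is a NumPy array indexed as frame[y, x].

```python
def cropped_image_generation(frame, detection, state, dead_zone_size, crop_size):
    """One pass of S151-S167 for a single tracking target. `state` holds the
    previous frame's regions and is None on the initial frame."""
    if state is None:                               # initial frame: S155, S157
        dead_zone, crop = init_regions(detection, dead_zone_size, crop_size)
    else:                                           # subsequent frames: S161
        dead_zone, moved = update_dead_zone(state["dead_zone"], detection)
        if moved:                                   # No in S161: S163, S165
            cx = dead_zone[0] + dead_zone[2] // 2
            cy = dead_zone[1] + dead_zone[3] // 2
            crop = (cx - crop_size[0] // 2, cy - crop_size[1] // 2,
                    crop_size[0], crop_size[1])
        else:                                       # Yes in S161: S169
            crop = state["crop"]
    left, top, w, h = crop
    cropped = frame[top:top + h, left:left + w]     # S167: cut out the region
    return cropped, {"dead_zone": dead_zone, "crop": crop}
```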
- For example, as described with reference to FIG. 4, FIG. 7, and FIG. 11 to FIG. 13, the camera 10 according to the embodiment decides a cropping region in a current frame image on the basis of a positional relation between a detection position of an object in the current frame image and a dead zone region set in a previous frame image. Therefore, it is possible to decide a cutout region in accordance with the magnitude of the change in the detection position of the object between the current frame image and the previous frame image.
- For example, in the case where the dead zone region in the previous frame image includes the detection position of the object detected in the current frame image, the camera 10 decides to make the position of the cropping region in the current frame image the same as the position of the cropping region in the previous frame image. Therefore, the position of the cropping region does not vary between frames in which the detection positions of the object are within the dead zone region set in the initial frame image. In other words, even when the object vibrates slightly, the position of the cropping region does not vary unless the object goes out of the dead zone region. Therefore, it is possible to improve the visibility of the cropped images.
- In addition, in comparison with known technologies, it is possible to eliminate small vibrations of the cropped image without performing a smoothing process in the temporal direction.
- In addition, in the case where the detection position of the object is out of the dead zone region set in the previous frame image, such as when the object moves fast, the camera 10 moves the dead zone region closer to the detection position of the object by exactly the distance by which the object exceeded the outline of the dead zone region in the previous frame image. The camera 10 then decides the cropping region in the current frame image such that the center of the moved dead zone region is at the center of the cropping region. Therefore, it is possible to set the dead zone region at a position in contact with the object even in the case where the object moves fast. Accordingly, the user perceives the object as moving smoothly between successive cropped images, and high visibility can be obtained.
- In addition, since the method by which the cropping region deciding unit 124 decides a cropping region is simple, the camera 10 can generate cropped images in real time.
- In addition, according to the embodiment, a shrunken image and cropped images can be generated by the camera 10 alone. Accordingly, the camera 10 does not have to transmit the frame image to another device such as a server to generate the shrunken image and the cropped images, and it is possible to reduce communication traffic.
- The preferred embodiment(s) of the present disclosure has/have been described above with reference to the accompanying drawings, whilst the present disclosure is not limited to the above examples. A person skilled in the art may find various alterations and modifications within the scope of the appended claims, and it should be understood that they will naturally come under the technical scope of the present disclosure.
- In the above-described embodiment, the example in which the camera 10 serves as the image processing device according to the present disclosure has been described. However, the present disclosure is not limited thereto. For example, the monitoring terminal 22 may serve as the image processing device according to the present disclosure in the case where (the control unit 220 of) the monitoring terminal 22, instead of the camera 10, includes all of the video shrinking unit 102, the region setting unit 104, and the plurality of video cropping units 106.
- Alternatively, a separately provided server (not illustrated) may serve as the image processing device according to the present disclosure in the case where the server is capable of communicating with the camera 10 via the communication network 24 and the server, instead of the camera 10, includes all of the video shrinking unit 102, the region setting unit 104, and the plurality of video cropping units 106. In addition, the server may be integrated with the storage 20.
- In addition, according to the above-described embodiment, it is also possible to provide a computer program for causing hardware such as a CPU, ROM, and RAM to execute functions equivalent to those of the video shrinking unit 102, the region setting unit 104, and the video cropping units 106 described above. Moreover, a recording medium having the computer program stored therein may also be provided.
- Additionally, the present technology may also be configured as below.
- (1)
- An image processing device including:
- a first region setting unit configured to set a first region including a detection position of an object in a cutout region in a first frame image; and
- a cutout region deciding unit configured to decide a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image.
- (2)
- The image processing device according to (1),
- in which the cutout region deciding unit decides whether to make a position of the cutout region in the second frame image the same as the cutout region in the first frame image, on the basis of the positional relation between the first region and the detection position of the object in the second frame image.
- (3)
- The image processing device according to (2),
- in which, in the case where the first region includes the detection position of the object in the second frame image, the cutout region deciding unit decides to make the position of the cutout region in the second frame image the same as the cutout region in the first frame image.
- (4)
- The image processing device according to (2) or (3), in which
- in the case where the first region does not include the detection position of the object in the second frame image, the first region setting unit moves the position of the first region such that the detection position of the object in the second frame image overlaps an outline of the first region, and
- the cutout region deciding unit decides the cutout region in the second frame image by setting a center of the moved first region to a center of the cutout region in the second frame image.
- (5)
- The image processing device according to (4),
- in which a shape and a size of the cutout region in the second frame image are the same as a shape and a size of the cutout region in the first frame image.
- (6)
- The image processing device according to (5),
- in which a position of a center of the cutout region in the first frame image and a position of a center of the first region are the same as the detection position of the object in the first frame image.
- (7)
- The image processing device according to any one of (2) to (6), in which
- the first region setting unit sets a first region including a detection position of each of a plurality of objects in the cutout region in the first frame image,
- the image processing device further includes a tracking target setting unit configured to set any one of the plurality of objects as a tracking target, and
- the cutout region deciding unit decides whether to make the position of the cutout region in the second frame image the same as the cutout region in the first frame image, on the basis of a positional relation between a detection position of a tracking target object set by the tracking target setting unit and the first region set with respect to the tracking target object.
- (8)
- The image processing device according to any one of (2) to (7), further including
- a cutout image generation unit configured to generate a cutout image by cutting out the cutout region in the second frame image decided by the cutout region deciding unit, from the second frame image.
- (9)
- The image processing device according to any one of (2) to (8),
- in which the object is a human or an automobile.
- (10)
- An image processing method including:
- setting a first region including a detection position of an object in a cutout region in a first frame image; and
- deciding a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image.
- (11)
- An image processing system including:
- a first region setting unit configured to set a first region including a detection position of an object in a cutout region in a first frame image;
- a cutout region deciding unit configured to decide a cutout region in a second frame image subsequent to the first frame image, on the basis of a positional relation between the first region and a detection position of the object in the second frame image;
- a cutout image generation unit configured to generate a cutout image by cutting out the cutout region in the second frame image decided by the cutout region deciding unit, from the second frame image; and
- a storage unit configured to store the generated cutout image.
-
- 10 camera
- 20 storage
- 22 monitoring terminal
- 24 communication network
- 100 image capturing unit
- 102 video shrinking unit
- 104 region setting unit
- 106 video cropping unit
- 108 communication unit
- 120 object detection unit
- 122 dead zone region setting unit
- 124 cropping region deciding unit
- 126 tracking target setting unit
- 220 control unit
- 222 communication unit
- 224 display unit
- 226 input unit
Claims (20)
1. An image processing device comprising:
processing circuitry configured to:
set a first region in a first frame image based on a positional relation between the first region in a second frame image that is previous to the first frame image and a detection position of an object in the first frame image, and
set a cutout region in the first frame image based on the first region in the first frame image.
2. The image processing device according to claim 1 , wherein the processing circuitry is further configured to generate a cutout image by cutting out the cutout region in the first frame image.
3. The image processing device according to claim 1 , wherein the first region in the second frame image is a region used for deciding whether to use the detection position of the object in the first frame image for changing a position of the cutout region.
4. The image processing device according to claim 2 , wherein the first region in the second frame image is a region used for deciding not to use the detection position of the object in the first frame image for changing a position of the cutout region when the first region in the second frame image includes the detection position of the object in the first frame image.
5. The image processing device according to claim 1 , wherein the first region is a dead zone region.
6. The image processing device according to claim 1 , wherein the first region in the first frame image includes at least a part of the object in the first frame image.
7. The image processing device according to claim 1 , wherein the processing circuitry is further configured to determine whether to set a position of the first region in the first frame image the same as the first region in the second frame image based on the positional relation between the first region in the second frame image and the detection position of the object in the first frame image.
8. The image processing device according to claim 1 , wherein when the first region in the second frame image includes the detection position of the object in the first frame image, the processing circuitry is further configured to set a position of the first region in the first frame image the same as the first region in the second frame image.
9. The image processing device according to claim 1 , wherein a first position of the center of the cutout region in the first frame image is the same as a second position of the center of the first region in the first frame image.
10. The image processing device according to claim 1 , wherein the processing circuitry is further configured to:
set one of a plurality of objects as a tracking target,
set the first region in the first frame image based on the positional relation between the first region in the second frame image and the detection position of the tracking target in the first frame image, and
set the cutout region in the first frame image based on the first region in the first frame image.
11. The image processing device according to claim 1 , wherein the processing circuitry is further configured to set the first region in the first frame image at a position where the first region in the second frame image is moved in a direction of the detection position of the object in the first frame image by the minimum distance between an outline of the first region in the second frame image and the detection position in the first frame image when the first region in the second frame image does not include the detection position of the object in the first frame image.
12. The image processing device according to claim 1 , wherein the processing circuitry is further configured to set the first region in the first frame image such that the detection position of the object in the first frame image overlaps an outline of the first region in the first frame image when the first region in the second frame image does not include the detection position of the object in the first frame image.
13. The image processing device according to claim 1 , wherein the processing circuitry is further configured to set the first region in the first frame image such that the detection position of the object in the first frame image is not out of the first region in the first frame image surrounded by an outline of the first region in the first frame image when the first region in the second frame image does not include the detection position of the object in the first frame image.
14. The image processing device according to claim 12 , wherein the detection position of the object in the second frame image overlaps the outline of the first region in the second frame image.
15. The image processing device according to claim 12 , wherein the processing circuitry is further configured to set the first region in a third frame image subsequent to the first frame image such that the detection position of the object in the third frame image overlaps the outline of the first region in the third frame image when the first region in the first frame image does not include the detection position of the object in the third frame image.
16. A method for image processing, the method comprising:
setting, by processing circuitry, a first region in a first frame image based on a positional relation between the first region in a second frame image that is previous to the first frame image and a detection position of an object in the first frame image; and
setting a cutout region in the first frame image based on the first region in the first frame image.
17. The method according to claim 16 , wherein the first region in the second frame image is a region used for deciding whether to use the detection position of the object in the first frame image for changing a position of the cutout region.
18. The method according to claim 16 , further comprising:
setting the first region in the first frame image such that the detection position of the object in the first frame image overlaps an outline of the first region in the first frame image when the first region in the second frame image does not include the detection position of the object in the first frame image.
19. An image processing system comprising:
processing circuitry configured to:
set a first region in a first frame image based on a positional relation between the first region in a second frame image that is previous to the first frame image and a detection position of an object in the first frame image, and
set a cutout region in the first frame image based on the first region in the first frame image.
20. The image processing system according to claim 19 , wherein the first region in the second frame image is a region used for deciding whether to use the detection position of the object in the first frame image for changing a position of the cutout region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/784,814 US20200175282A1 (en) | 2015-04-14 | 2020-02-07 | Image processing device, image processing method, and image processing system |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-082274 | 2015-04-14 | ||
JP2015082274 | 2015-04-14 | ||
PCT/JP2016/054525 WO2016167016A1 (en) | 2015-04-14 | 2016-02-17 | Image processing device, image processing method, and image processing system |
US201715559506A | 2017-09-19 | 2017-09-19 | |
US16/784,814 US20200175282A1 (en) | 2015-04-14 | 2020-02-07 | Image processing device, image processing method, and image processing system |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/054525 Continuation WO2016167016A1 (en) | 2015-04-14 | 2016-02-17 | Image processing device, image processing method, and image processing system |
US15/559,506 Continuation US10607088B2 (en) | 2015-04-14 | 2016-02-17 | Image processing device, image processing method, and image processing system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200175282A1 true US20200175282A1 (en) | 2020-06-04 |
Family
ID=57125889
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/559,506 Active US10607088B2 (en) | 2015-04-14 | 2016-02-17 | Image processing device, image processing method, and image processing system |
US16/784,814 Abandoned US20200175282A1 (en) | 2015-04-14 | 2020-02-07 | Image processing device, image processing method, and image processing system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/559,506 Active US10607088B2 (en) | 2015-04-14 | 2016-02-17 | Image processing device, image processing method, and image processing system |
Country Status (5)
Country | Link |
---|---|
US (2) | US10607088B2 (en) |
EP (1) | EP3285476A4 (en) |
JP (1) | JP6693509B2 (en) |
CN (1) | CN107431761B (en) |
WO (1) | WO2016167016A1 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3285477B1 (en) * | 2015-04-14 | 2023-06-28 | Sony Group Corporation | Image processing device, image processing method, and image processing system |
CA3056269C (en) * | 2017-03-17 | 2021-07-13 | Unity IPR ApS | Method and system for automated camera collision and composition preservation |
CN108307113B (en) * | 2018-01-26 | 2020-10-09 | 北京图森智途科技有限公司 | Image acquisition method, image acquisition control method and related device |
JP6572500B1 (en) * | 2018-03-14 | 2019-09-11 | エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd | Image processing apparatus, imaging apparatus, moving object, image processing method, and program |
CN111010590B (en) * | 2018-10-08 | 2022-05-17 | 阿里巴巴(中国)有限公司 | Video clipping method and device |
TWI729322B (en) * | 2018-11-08 | 2021-06-01 | 財團法人工業技術研究院 | Information display system and information display method |
WO2020174911A1 (en) * | 2019-02-28 | 2020-09-03 | 富士フイルム株式会社 | Image display device, image display method, and program |
TWI749365B (en) * | 2019-09-06 | 2021-12-11 | 瑞昱半導體股份有限公司 | Motion image integration method and motion image integration system |
KR20220102765A (en) * | 2021-01-14 | 2022-07-21 | 현대두산인프라코어(주) | System and method of controlling construction machinery |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4516665B2 (en) | 2000-05-19 | 2010-08-04 | パナソニック株式会社 | Monitoring device |
JP4528212B2 (en) | 2005-06-23 | 2010-08-18 | 日本放送協会 | Trimming control device and trimming control program |
JP4320657B2 (en) * | 2005-12-26 | 2009-08-26 | ソニー株式会社 | Signal processing device |
JP4622883B2 (en) * | 2006-02-22 | 2011-02-02 | 富士通株式会社 | Video attribute automatic assigning device, video attribute automatic assigning program, and video attribute automatic assigning method |
JP4464360B2 (en) * | 2006-03-27 | 2010-05-19 | 富士フイルム株式会社 | Monitoring device, monitoring method, and program |
JP2008011497A (en) * | 2006-06-01 | 2008-01-17 | Canon Inc | Camera apparatus |
JP5111088B2 (en) | 2007-12-14 | 2012-12-26 | 三洋電機株式会社 | Imaging apparatus and image reproduction apparatus |
JP5237055B2 (en) * | 2008-11-07 | 2013-07-17 | キヤノン株式会社 | Video transmission apparatus, video transmission method, and computer program |
WO2011065960A1 (en) * | 2009-11-30 | 2011-06-03 | Hewlett-Packard Development Company, L.P. | Stabilizing a subject of interest in captured video |
CN102737384A (en) * | 2011-04-08 | 2012-10-17 | 慧友电子股份有限公司 | Automatic spherical camera tracing method |
JP5810296B2 (en) | 2011-06-07 | 2015-11-11 | パナソニックIpマネジメント株式会社 | Image display device and image display method |
JP2013165485A (en) * | 2012-01-11 | 2013-08-22 | Panasonic Corp | Image processing apparatus, image capturing apparatus, and computer program |
JP2013162221A (en) * | 2012-02-02 | 2013-08-19 | Sony Corp | Information processor, information processing method, and information processing program |
JP2013172446A (en) * | 2012-02-23 | 2013-09-02 | Sony Corp | Information processor, terminal, imaging apparatus, information processing method, and information provision method in imaging apparatus |
JP2015023294A (en) * | 2013-07-16 | 2015-02-02 | オリンパスイメージング株式会社 | Moving image processing apparatus and moving image processing method |
EP3285477B1 (en) * | 2015-04-14 | 2023-06-28 | Sony Group Corporation | Image processing device, image processing method, and image processing system |
Also Published As
Publication number | Publication date |
---|---|
JPWO2016167016A1 (en) | 2018-02-08 |
CN107431761A (en) | 2017-12-01 |
US20180121736A1 (en) | 2018-05-03 |
CN107431761B (en) | 2021-02-19 |
WO2016167016A1 (en) | 2016-10-20 |
EP3285476A1 (en) | 2018-02-21 |
EP3285476A4 (en) | 2018-09-19 |
US10607088B2 (en) | 2020-03-31 |
JP6693509B2 (en) | 2020-05-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200175282A1 (en) | Image processing device, image processing method, and image processing system | |
US10319099B2 (en) | Image processing apparatus, image processing method, and image processing system | |
US11263769B2 (en) | Image processing device, image processing method, and image processing system | |
EP3588456B1 (en) | Image processing apparatus, image processing method, and program | |
US9710923B2 (en) | Information processing system, information processing device, imaging device, and information processing method | |
US20180364801A1 (en) | Providing virtual reality experience service | |
CN111295872B (en) | Method, system and readable medium for obtaining image data of an object in a scene | |
JP2018513992A (en) | System and method for continuous autofocus | |
TWI672674B (en) | Depth processing system | |
JP2017211760A (en) | Image processing system, image processing method, and imaging device | |
US20230419505A1 (en) | Automatic exposure metering for regions of interest that tracks moving subjects using artificial intelligence | |
US10062006B2 (en) | Image sensing apparatus, object detecting method thereof and non-transitory computer readable recording medium | |
CN111788538A (en) | Head mounted display and method of reducing visually induced motion sickness in a connected remote display | |
KR20150019230A (en) | Method and apparatus for tracking object using multiple camera | |
US9953448B2 (en) | Method and system for image processing | |
JP2012212235A (en) | Object detection system, object detection method, and program | |
CN111219940A (en) | Method and device for controlling light in refrigerator and refrigerator | |
JP6996538B2 (en) | Image processing equipment, image processing methods, and image processing systems | |
JP2019033469A (en) | Imaging apparatus, control method, and program | |
KR100891793B1 (en) | Method for processing image of digital camera | |
WO2024148975A1 (en) | Photographing method and device | |
KR20240134722A (en) | Transmission of a collage of detected objects in a video | |
CN117425071A (en) | Image acquisition method, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE