CN105611230B - Image processing apparatus and image processing method - Google Patents
Image processing apparatus and image processing method
- Publication number
- CN105611230B (application CN201510789241.3A / CN201510789241A)
- Authority
- CN
- China
- Prior art keywords
- camera
- locations
- image
- photographic device
- image processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N23/661—Transmitting camera control signals through networks, e.g. control via the Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Closed-Circuit Television Systems (AREA)
- Studio Devices (AREA)
Abstract
The present invention provides an image processing apparatus and an image processing method. To make it easier to set up preset tour imaging for an image capturing apparatus, the image processing apparatus is configured as follows. The image processing apparatus includes: an imaging control unit configured to control the image capturing apparatus to capture images in a state where one or more imaging positions of a position set to be used in order by the image capturing apparatus have not yet been set; a determination unit configured to determine the imaging positions of the position set by using the images captured by the image capturing apparatus before the position set is set; and a control unit configured to control the image capturing apparatus to perform imaging by using at least one imaging position determined by the determination unit.
Description
Technical field
The present invention relates to an image processing apparatus, an image processing method, and an image processing system configured to perform imaging by sequentially positioning an image capturing apparatus such as a camera.
Background art
In recent years, among cameras that can be connected to a network and controlled by a computer, there are cameras that can change their camera angle to capture a wider range. For example, such cameras include pan-tilt-zoom (PTZ) cameras, fish-eye cameras, and multi-lens cameras. A PTZ camera includes a pan head and has a variable pan angle, tilt angle, and zoom ratio. A fish-eye camera includes a fish-eye lens and can cover a wide range with a single camera. A multi-lens camera includes a plurality of lenses and image sensors in a single device body, and can distribute an image of a desired view angle cut out from a panoramic image obtained by combining the images captured by the camera.
These cameras have a so-called preset tour function, in which preset imaging positions (view angles) are switched in order for imaging based on preset information set in advance by the user (imaging view angle, imaging period, imaging order, and the like). With this function, a single camera can capture and record a wider range, which facilitates more effective monitoring.
The technique discussed in Japanese Patent Application Laid-Open No. 2011-188258 is a camera system that performs preset tour imaging. In this technique, when a human face is detected during a preset tour, the view angle directed at that position is newly added to the preset imaging positions.
However, the technique discussed in Japanese Patent Application Laid-Open No. 2011-188258 described above requires the user to manually perform the initial setting for the preset tour. Thus, the setting depends on the user's guesswork and the like, and the preset tour may start with preset imaging positions at view angles unsuitable for monitoring or image analysis. For example, a view angle may capture a route with little traffic, or the camera angle may be too low to accurately identify a person.
In addition, the technique discussed in Japanese Patent Application Laid-Open No. 2011-188258 described above cannot register a view angle not covered in the preset tour as a new preset imaging position. Thus, when the preset imaging positions set in advance are inappropriate, the preset imaging positions are not updated appropriately, and the monitoring target in the monitoring target range cannot be monitored appropriately.
Summary of the invention
According to an aspect of the present invention, an image processing apparatus includes: an imaging control unit configured to control an image capturing apparatus to capture images in a state where one or more imaging positions of a position set to be used in order by the image capturing apparatus have not yet been set; a determination unit configured to determine imaging positions in the position set by using the images captured by the image capturing apparatus before the position set is set; and a control unit configured to control the image capturing apparatus to perform imaging by using at least one imaging position determined by the determination unit.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Brief description of the drawings
Fig. 1 illustrates an example network connection configuration representing an image processing system.
Fig. 2 is a functional block diagram of the apparatuses.
Fig. 3 illustrates examples of detection results and output results of movable-body detection processing and human detection processing.
Fig. 4 illustrates an example hardware configuration of a server device.
Fig. 5 illustrates a first camera direction.
Fig. 6 illustrates a second camera direction.
Fig. 7 illustrates a third camera direction.
Fig. 8 is a flowchart illustrating the procedure of the pre-analysis processing executed in the camera.
Fig. 9 illustrates an example of image analysis results.
Fig. 10 is a flowchart of the procedure of the image analysis processing executed in a server device.
Fig. 11 illustrates an example of score information calculated based on image analysis results obtained with the first camera direction.
Fig. 12 illustrates an example of score information calculated based on image analysis results obtained with the second camera direction.
Fig. 13 illustrates an example of score information calculated based on image analysis results obtained with the third camera direction.
Fig. 14 illustrates an example of a score tabulation result.
Fig. 15 illustrates an example network connection configuration representing a network camera system.
Fig. 16 illustrates an example of a camera arrangement.
Description of the embodiments
Exemplary embodiments of the present invention will be described in detail below with reference to the attached drawings.
The exemplary embodiments described below are examples of means for implementing the present invention, and are to be appropriately corrected or modified according to the configuration of the apparatus to which the present invention is applied and various conditions. The present invention is not limited to the exemplary embodiments described below.
Fig. 1 illustrates an example network connection configuration representing an operating environment of the image processing system according to the present exemplary embodiment. The image processing system according to the present exemplary embodiment is applied to a network camera system.
A network camera system 10 includes a network camera (hereinafter also referred to as a "camera") 20, a storage device 30, an image analysis server device (hereinafter also referred to as a "server device") 40, and an image display device 50. The camera 20, the storage device 30, the server device 40, and the image display device 50 are connected to one another via a local area network (LAN) 60 serving as a network line. The network line is not limited to a LAN, and may be the Internet, a wide area network (WAN), or the like.
The camera 20 is a PTZ camera having a pan function, a tilt function, and a zoom function. The camera 20 has a so-called preset tour function, in which imaging is performed in a monitoring target range at preset imaging positions switched in a predetermined order according to preset information.
The preset information is settings related to the preset tour function (preset tour setting items), and includes information related to the preset imaging positions and information related to the preset tour. More specifically, the information related to the preset imaging positions represents camera angles (pan, tilt, and zoom positions). The information related to the preset tour represents the imaging period at each preset imaging position and the transition order (imaging order) of the preset imaging positions. Thus, the information related to the preset imaging positions includes information related to the imaging direction of the camera 20.
The camera 20 captures images of objects while moving through the plurality of preset imaging positions, and transmits the resulting image data to the storage device 30 via the LAN 60. The camera 20 executes image analysis processing based on the captured image data, and can transmit the image analysis results to the storage device 30 via the LAN 60.
The camera 20 also has a function of changing camera imaging settings, such as the focus setting and the camera angle, according to instructions from outside.
The camera 20 is not limited to a PTZ camera, and may be a fish-eye camera, a multi-lens camera, or the like.
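The preset information described above (camera angles plus per-position imaging periods and an imaging order) can be sketched as a simple data structure. This is only an illustrative sketch; the class and field names below are assumptions, not names used in the patent.

```python
from dataclasses import dataclass

@dataclass
class PresetPosition:
    """One preset imaging position: a camera angle plus its imaging period."""
    pan: float       # degrees, positive = rightward
    tilt: float      # degrees, positive = downward
    zoom: float      # horizontal view angle in degrees
    dwell_sec: int   # imaging period at this position

def tour_order(presets, order):
    """Yield (pan, tilt, zoom, dwell) tuples in the configured imaging order."""
    for idx in order:
        p = presets[idx]
        yield (p.pan, p.tilt, p.zoom, p.dwell_sec)

# Example: the three camera directions used later in the description.
presets = [
    PresetPosition(45, 0, 50, 300),    # first (rightward) direction
    PresetPosition(0, 0, 50, 300),     # second (center) direction
    PresetPosition(-45, 0, 50, 300),   # third (leftward) direction
]
schedule = list(tour_order(presets, [0, 1, 2]))
```

A preset tour would then simply iterate over `schedule`, moving the camera to each angle and imaging for the corresponding period.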
The storage device 30 is a recording device and includes a writing area into which the image data and image analysis results transmitted from the camera 20 are written. The storage device 30 receives the image data and image analysis results from the camera 20 via the LAN 60.
The server device 40 reads the image data and image analysis results recorded in the storage device 30 via the LAN 60. Then, the server device 40 executes preset setting processing for automatically setting and updating the preset information by using the collected data. In the preset setting processing, preset information suitable for monitoring the monitoring target in the monitoring target range is determined. For example, a view angle likely to capture a person moving in the monitoring target range is set as a preset imaging position. The preset setting processing will be described below.
The image display device 50 is included in, for example, a personal computer (PC), and can be operated by a user (for example, a monitoring agent). The image display device 50 has a function of playing back and displaying the image data distributed from the camera 20. The image display device 50 also serves as an input unit for various operations, including camera setting operations such as setting the camera angle and the preset tour function.
The physical connection to the LAN 60 may be wired or wireless. The numbers of storage devices 30 and image display devices 50 connected to the LAN 60 are not limited to those illustrated in Fig. 1. Any number of storage devices 30 and image display devices 50 may be provided, as long as each device can be identified by an address or the like.
The monitoring target may be at least one of a movable body, a human body, and a face of a subject. Alternatively, the monitoring target may be, besides a moving person as described above, any moving object such as a vehicle.
Next, the specific configuration of each device forming the network camera system 10 will be described.
Fig. 2 is a functional block diagram of the devices described above.
The camera 20 includes an image acquisition unit 121, a coding unit 122, an image analyzing unit 123, a communication unit 124, a camera control unit 125, and a view angle control unit 126.
The image acquisition unit 121 converts, by photoelectric conversion, an optical image formed on the image sensing surface of a camera unit described below into a digital electric signal, and applies pixel interpolation and color conversion processing to the digital electric signal to generate an RGB or YUV digital image. Then, the image acquisition unit 121 executes predetermined calculation processing by using the digital image thus obtained and, based on the calculation results, executes image correction processing for correcting white balance, sharpness, and contrast and performing color conversion.
The image acquisition unit 121 outputs the digital image signal (image data) resulting from the above processing to the coding unit 122 and the image analyzing unit 123.
The coding unit 122 encodes the digital image signal input from the image acquisition unit 121 by setting a frame rate and compressing the signal for network distribution. For example, the compression for distribution is based on a predetermined standard such as Moving Picture Experts Group phase 4 (MPEG4), H.264, Joint Photographic Experts Group (JPEG), or Motion JPEG (MJPEG).
The image analyzing unit 123 detects a monitoring target in an image by executing image analysis processing such as detection of a movable body moving in the image (movable-body detection), detection of a human body moving in the image (human detection), and face recognition.
Fig. 3 illustrates examples of detection results and output results of the movable-body detection and the human detection.
Various methods can be used to detect a movable body. For example, in one method, sequential frame images are compared with each other, and a portion where a difference at or above a predetermined level is detected is determined to be a movable body. In another method, the current image is compared with a background image prepared in advance that corresponds to the image, and a portion where a difference at or above a predetermined level is detected is determined to be a movable body.
As a detection result of a movable body, in a case where the processing is executed by dividing an image into blocks of the same size (8 vertical × 10 horizontal blocks in the example in Fig. 3), a determination result indicating whether each block is a background area or a movable-body area is obtained. For example, as illustrated in Fig. 3, in a case where the image includes a moving person 100, the area involving motion is a movable-body area 101, and the area not involving motion is a background area 102. Rectangle information can be obtained by merging the results.
For example, as illustrated in Fig. 3, the output result of the movable-body detection is obtained in the form of binary information indicating whether each block belongs to the movable-body area 101 or the background area 102. In the present exemplary embodiment, "1" represents the movable-body area 101, and "0" represents the background area 102.
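The block-based binary output described above can be sketched with simple frame differencing. This is only a minimal sketch under stated assumptions: grayscale frames are given as 2-D lists, and the fixed threshold on the mean absolute pixel difference is an illustrative value, not one specified in the patent.

```python
def movable_body_blocks(prev, curr, rows=8, cols=10, threshold=20):
    """Compare two grayscale frames and mark each block 1 (movable body)
    or 0 (background) based on the mean absolute pixel difference."""
    h, w = len(curr), len(curr[0])
    bh, bw = h // rows, w // cols
    result = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            diff = 0
            for y in range(r * bh, (r + 1) * bh):
                for x in range(c * bw, (c + 1) * bw):
                    diff += abs(curr[y][x] - prev[y][x])
            if diff / (bh * bw) >= threshold:
                result[r][c] = 1
    return result
```

The background-image variant mentioned in the description would work the same way, with `prev` replaced by a background frame prepared in advance.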
As a method of detecting a human body, a method of checking whether an image includes a predetermined image pattern (template matching), a method of using a machine learning system for identifying statistical features in local image areas, or the like can be used.
As a result of the human detection, for example, as illustrated in Fig. 3, a rectangular area 103 including the human body 100 from head to toe (or the upper half of the human body 100) is obtained.
For example, as illustrated in Fig. 3, the output result of the human detection is obtained in the form of rectangle information represented by the image coordinates (x, y) of the center of the rectangular area 103 and the size (width, height) of the rectangular area 103.
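The merging of detection blocks into rectangle information mentioned for Fig. 3 might be sketched as taking the bounding box of the "1" blocks and expressing it in the same center-plus-size form as the human detection output. The block pixel size and the return format are illustrative assumptions.

```python
def blocks_to_rect(block_map, block_w, block_h):
    """Merge the '1' blocks of a binary block map into one rectangle given
    as (center_x, center_y, width, height) in pixel coordinates."""
    cells = [(r, c) for r, row in enumerate(block_map)
             for c, v in enumerate(row) if v == 1]
    if not cells:
        return None  # no movable body detected
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    left = min(cols) * block_w
    top = min(rows) * block_h
    width = (max(cols) - min(cols) + 1) * block_w
    height = (max(rows) - min(rows) + 1) * block_h
    return (left + width / 2, top + height / 2, width, height)
```

Expressing both detectors' outputs in this common rectangle form makes the later per-position tabulation uniform regardless of which analysis produced the detection.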
Face recognition can be performed by retrieving, from a plurality of images registered in advance, an image similar to the target face, together with the likelihood of the match.
These image analysis results are transmitted to the communication unit 124 in a state of being associated with information about the camera that performed the analysis, the camera direction during imaging (pan, tilt, zoom, installation position, and the like), the image frame number, and time information.
The communication unit 124 communicates via the LAN 60 with the communication unit 131 of the storage device 30, the communication unit 141 of the server device 40, and the communication unit 151 of the image display device 50. The communication unit 124 transmits the image data output from the coding unit 122 and the image analysis results output from the image analyzing unit 123 to the storage device 30.
The communication unit 124 receives, via the LAN 60, camera operation commands such as instructions for changing the captured image size, the frame rate, and the focus setting, and transmits the received camera operation commands to the camera control unit 125. The communication unit 124 also receives, via the LAN 60, view angle control commands such as instructions for updating the pan and tilt positions and, for a fish-eye camera, updating the display image area, and transmits the view angle control commands to the view angle control unit 126.
The camera control unit 125 controls the captured image size, the frame rate, and the focus setting according to the camera operation commands described above. The view angle control unit 126 controls the view angle and direction of the camera 20 according to the view angle control commands described above.
The storage device 30 includes a communication unit 131 and a recording unit 132.
The communication unit 131 receives the image data and image analysis results transmitted from the camera 20, and transmits the image data and image analysis results to the recording unit 132.
The recording unit 132 records the image data and image analysis results transmitted from the communication unit 131 in a memory or the like.
The server device 40 includes a communication unit 141, a coordinate transformation unit 142, a score calculation unit 143, and a preset setting determination unit 144.
The communication unit 141 receives, via the LAN 60, the image data files transmitted from the camera 20 and recorded in the storage device 30, and transmits the image data files to the coordinate transformation unit 142. The communication unit 141 transmits the preset information set by the preset setting processing described below to the camera 20 via the LAN 60.
The coordinate transformation unit 142 receives the image analysis results from the communication unit 141 and performs coordinate conversion for tabulation. More specifically, the coordinate information of the image analysis results is converted into position information in the entire imaging range of the camera 20.
The score calculation unit 143 calculates scores based on the information obtained by the conversion by the coordinate transformation unit 142. A score is information corresponding to the detection results (for example, detection frequency) of the monitoring target in each of a plurality of areas in the imaging range of the camera 20.
The preset setting determination unit 144 executes the preset setting processing of setting the preset information based on the scores calculated by the score calculation unit 143.
For example, in the present exemplary embodiment, the preset setting processing is executed in an initial setting performed as follows: when the user inputs a preset setting start instruction via the image display device 50, the camera 20 receives the preset setting start instruction and executes the image analysis processing described above for a predetermined period. Then, the server device 40 receives the image analysis results and executes the preset setting processing.
As described above, the camera 20 executes image analysis processing before the preset setting processing for automatically determining and updating the preset information starts. In this pre-analysis processing, the camera 20 repeats imaging and image analysis processing in the monitoring target range while switching the camera angle (camera direction). In a case where the camera 20 is a PTZ camera, the camera angle can be switched by updating the pan, tilt, and zoom positions. In a case where the camera 20 is a fish-eye camera or a multi-lens camera, the camera angle can be switched by changing the display image area.
The timing of the preset setting processing is not limited to that described above. The preset setting processing may be, for example, executed periodically during a normal preset tour (while the camera 20 is performing imaging and recording operations). The preset setting processing may also be executed not periodically but each time no monitoring target (movable body or human body) is detected for a predetermined period.
The image display device 50 includes a communication unit 151 and a display control unit 152.
The communication unit 151 receives, via the LAN 60, the image data files transmitted from the camera 20 and recorded in the storage device 30, and transmits the image data files to the display control unit 152.
The display control unit 152 displays the image data and image analysis results received from the communication unit 151 on a screen.
(Hardware configuration)
Fig. 4 illustrates an example hardware configuration of the server device 40.
The server device 40 includes a central processing unit (CPU) 41, a read-only memory (ROM) 42, a random access memory (RAM) 43, an external memory 44, a communication I/F 45, and a system bus 46.
The CPU 41 performs overall control of the operations in the server device 40 and controls the components (42 to 45) via the system bus 46.
The ROM 42 is a nonvolatile memory that stores control programs and the like necessary for the CPU 41 to execute processing. The programs may also be stored in the external memory 44 or a removable storage medium (not illustrated).
The RAM 43 functions as a main memory, a work area, and the like for the CPU 41. That is, the CPU 41 loads necessary programs and the like from the ROM 42 into the RAM 43 and executes the programs and the like to implement various functional operations.
The external memory 44 stores, for example, various types of data and various types of information necessary for the CPU 41 to execute processing using the programs. The external memory 44 also stores, for example, various types of data and information obtained by the CPU 41 executing processing using the programs and the like.
The communication I/F 45 is an interface for communicating with external devices (the storage device 30 and the image display device 50). For example, the communication I/F 45 is a LAN interface.
The system bus 46 connects the CPU 41, the ROM 42, the RAM 43, the external memory 44, and the communication I/F 45 to one another in such a manner that the components of the server device 40 can communicate with one another.
The functions of the components of the server device 40 illustrated in Fig. 2 are implemented by the CPU 41 executing programs stored in the ROM 42 or the external memory 44.
In addition to the configuration corresponding to the components illustrated in Fig. 2, the camera 20 includes hardware such as a camera unit. The camera unit for capturing images of objects includes an image sensor element such as a complementary metal oxide semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor.
In addition to the configuration corresponding to the components illustrated in Fig. 2, the image display device 50 includes hardware such as an input unit and a display unit. The input unit includes a keyboard and a pointing device such as a mouse. The user of the image display device 50 can issue instructions to the image display device 50 via the input unit. The display unit includes a monitor such as a liquid crystal display (LCD).
(Procedure of the pre-analysis processing in the camera 20)
Next, the pre-analysis processing executed in the camera 20 will be described in detail.
In the pre-analysis processing, a plurality of camera directions covering the entire monitoring target range are set, and image analysis is performed for each camera direction. Here, image analysis is performed while the camera direction is switched every predetermined period (for example, every five minutes). The pan, tilt, and zoom ranges covering the monitoring target range may be specified by the user, or may be determined based on the monitoring target range. In the present exemplary embodiment, image analysis is performed by capturing images of the entire monitoring target range in the pre-analysis processing. Alternatively, for example, image analysis may be performed on images of a partial area of the entire monitoring target range obtained by capturing the pan, tilt, and zoom ranges specified by the user.
An example will be described in which the monitoring target range is the plane ABCD illustrated in Fig. 5 and a human body 100 can move in the monitoring target range from a first point P1 via a second point P2 to a third point P3. Here, the rightward camera direction illustrated in Fig. 5 (PTZ = (45, 0, 50)) is set as the first camera direction. The camera direction toward the center illustrated in Fig. 6 (PTZ = (0, 0, 50)) is set as the second camera direction. The leftward camera direction illustrated in Fig. 7 (PTZ = (-45, 0, 50)) is set as the third camera direction.
The PTZ numerical values represent the pan position, the tilt position, and the zoom position, respectively, each expressed as an angle relative to the camera center. The pan and tilt values take positive values when the camera 20 faces rightward and downward, respectively. The zoom value is the horizontal view angle.
In the pre-analysis processing, the camera direction is switched from the first camera direction to the second camera direction and then to the third camera direction, and imaging is performed in the imaging ranges corresponding to the respective camera directions.
Fig. 8 is a flowchart illustrating the procedure of the pre-analysis processing.
The pre-analysis processing is realized by the CPU of the camera 20 executing the programs necessary for the processing corresponding to the flowchart illustrated in Fig. 8.
In step S1, the camera 20 switches the camera direction to the first camera direction set in advance, and the processing proceeds to step S2.
In step S2, the camera 20 captures an image in the imaging range, and the processing proceeds to step S3.
In step S3, the camera 20 analyzes the image captured in step S2, thereby obtaining output results as illustrated in Fig. 3.
Next, in step S4, the camera 20 determines whether imaging and image analysis have been performed with the same camera direction for a predetermined period (for example, 5 minutes) set in advance. In a case where the predetermined period has not yet elapsed (NO in step S4), the processing returns to step S2, so that imaging and image analysis continue with the same camera direction. On the other hand, in a case where the predetermined period has elapsed (YES in step S4), the processing proceeds to step S5.
In step S5, the camera 20 determines whether the entire monitoring target range has been imaged by switching the camera direction. In a case where there is an area not yet imaged in the monitoring target range, that is, in a case where there is any camera direction, among the plurality of camera directions set in advance, for which imaging and image analysis have not yet been completed (NO in step S5), the processing proceeds to step S6.
In step S6, the camera 20 switches the camera direction to a camera direction for which imaging and image analysis have not yet been performed, and the processing proceeds to step S2.
On the other hand, in a case where the camera 20 determines in step S5 as described above that the entire monitoring target range has been imaged (YES in step S5), the processing proceeds to step S7. In step S7, the camera 20 outputs the image analysis results obtained with the respective camera directions.
Fig. 9 illustrates an example of the image analysis results.
In the example illustrated in Fig. 9, frame number 0 corresponds to 10:00:00, ten images are analyzed per second, and one image analysis result (human body detection result) is recorded per second. The camera direction is switched every five minutes.
In the example illustrated in Fig. 9, few human body detection results are obtained with the rightward camera direction (PTZ = (45, 0, 50)) and the leftward camera direction (PTZ = (-45, 0, 50)), whereas human body detection results are obtained every second with the center-facing camera direction (PTZ = (0, 0, 50)). The results thus indicate that each of the rightward and leftward camera directions has an angle of view unsuitable for image analysis, whereas the center-facing camera direction has an angle of view and size suitable for image analysis and is also suitable for recording.
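The preanalysis loop of Fig. 8 (steps S1 to S7) can be sketched as follows. This is a minimal illustration, not the patented implementation: the `move_to`, `capture`, and `analyze` methods and the `FakeCamera` stand-in are hypothetical names introduced so the loop can run without hardware, and the dwell is shortened to a few frames instead of five minutes.

```python
# Sketch of the Fig. 8 preanalysis loop (steps S1-S7).
DIRECTIONS = [(45, 0, 50), (0, 0, 50), (-45, 0, 50)]  # first/second/third

def preanalysis(camera, dwell_frames=3):
    results = {}
    for ptz in DIRECTIONS:                            # S1 / S6: switch direction
        camera.move_to(ptz)
        detections = []
        for _ in range(dwell_frames):                 # S4: repeat for the dwell period
            image = camera.capture()                  # S2: capture an image
            detections.append(camera.analyze(image))  # S3: analyze it
        results[ptz] = detections
    return results                                    # S7: output all results

class FakeCamera:
    """Hypothetical stand-in so the loop can run without hardware."""
    def move_to(self, ptz): self.ptz = ptz
    def capture(self): return "frame"
    def analyze(self, image):
        # Mimic Fig. 9: detections occur only with the center direction.
        return 1 if self.ptz == (0, 0, 50) else 0

results = preanalysis(FakeCamera())
```

As in the Fig. 9 example, only the center direction accumulates detections here.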
In this way, based on the image analysis results obtained by the camera 20, the server apparatus 40 generates a human body detection frequency map of the monitoring target range (a map indicating the regions of the monitoring target range in which the human body detection frequency is high), and sets, based on the human body detection frequency map, preset information suitable for monitoring and image analysis.
(Flow of the preset setting processing in the server apparatus 40)
Fig. 10 is a flowchart illustrating the flow of the preset setting processing. The preset setting processing is implemented by the CPU 41 illustrated in Fig. 4 executing a program for performing the processing corresponding to the flowchart illustrated in Fig. 10.
In step S11, the server apparatus 40 reads the image analysis results obtained by the preanalysis processing of the camera 20, and the processing proceeds to step S12.
In step S12, the server apparatus 40 converts the coordinate information in the image analysis results read in step S11 into position information in the monitoring target range. In this way, coordinates in the image are converted into coordinates in the monitoring target range, which serve as second coordinates.
Various coordinate conversion methods can be used. A simple method is, for example, to store, for each camera direction, about four points selected at arbitrary positions in the image together with their actual coordinate information, so that the selected points and the actual coordinates are associated with each other, and to perform a projective transformation based on that information. A more accurate method is, for example, to calibrate the camera in advance and, based on information indicating the installation height, pan, tilt, and zoom position, to prepare a matrix for performing the conversion from the captured image to a world coordinate system.
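The simple four-point method above amounts to estimating a projective transformation (homography). One common way to do this is the direct linear transform (DLT), sketched below in plain NumPy; the pixel and floor-plane coordinates are illustrative values, not ones from the embodiment.

```python
import numpy as np

def homography_from_points(img_pts, world_pts):
    """Estimate the 3x3 homography mapping image coordinates to
    floor-plane coordinates from >= 4 point correspondences (DLT)."""
    A = []
    for (x, y), (u, v) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A: the right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def to_world(H, x, y):
    """Apply the homography to one image point (homogeneous divide)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Four image corners associated with known floor positions (illustrative).
img_pts = [(0, 0), (640, 0), (640, 480), (0, 480)]
world_pts = [(0, 0), (10, 0), (8, 6), (2, 6)]
H = homography_from_points(img_pts, world_pts)
```

With exactly four non-degenerate correspondences, the recovered mapping reproduces the stored points exactly; detection coordinates from each camera direction can then be projected onto the monitoring target range.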
Then, in step S13, the server apparatus 40 calculates a score for each camera direction based on the image analysis results after the conversion to the second coordinates in step S12. In the present exemplary embodiment, the score is the number of human body detections in each of a plurality of regions of the monitoring target range.
For example, as illustrated in Figs. 11 to 13, the score information can be represented by mapping numerical values (each representing a number of human body detections) onto the ABCD plane representing the monitoring target range. Fig. 11 is a diagram illustrating the score information calculated from the image analysis results obtained with the first camera direction (PTZ = (45, 0, 50)). Fig. 12 is a diagram illustrating the score information calculated from the image analysis results obtained with the second camera direction (PTZ = (0, 0, 50)). Fig. 13 is a diagram illustrating the score information calculated from the image analysis results obtained with the third camera direction (PTZ = (-45, 0, 50)).
In the present exemplary embodiment, the score is the number of human body detections. Alternatively, the score may be the number of moving body detections or the number of face recognitions.
Then, in step S14, the server apparatus 40 tabulates the scores calculated for each camera direction in step S13, and the processing proceeds to step S15. Fig. 14 is a diagram illustrating an example in which the scores illustrated in Figs. 11 to 13, corresponding to the different camera directions, are tabulated. By merging the scores obtained for each camera direction in this way, a human body detection frequency map of the monitoring target range can be generated. In this way, regions of the entire monitoring target range in which the number of human body detections is large (regions in which people frequently appear) can be identified.
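The tabulation of step S14 is, in essence, a cell-wise sum of the per-direction score grids. A minimal sketch, with made-up grid values standing in for Figs. 11 to 13:

```python
import numpy as np

# Hypothetical per-direction score grids over the ABCD plane: each cell
# holds the number of human body detections in that region (step S13).
scores_by_direction = {
    (45, 0, 50):  np.array([[0, 0], [1, 0]]),   # first (rightward) direction
    (0, 0, 50):   np.array([[2, 5], [4, 3]]),   # second (center) direction
    (-45, 0, 50): np.array([[0, 1], [0, 0]]),   # third (leftward) direction
}

# Step S14: merge the per-direction scores into one detection frequency map.
frequency_map = sum(scores_by_direction.values())
```

Cells with large sums correspond to the regions of the monitoring target range in which people frequently appear.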
In step S15, the server apparatus 40 determines an angle of view, which is one of the preset tour setting items, based on the result of the score tabulation in step S14. More specifically, a position (region) having a score higher than a predetermined threshold is selected, and the angle of view is selected so that the position is set at the center of the captured image. The angle of view may also be set in such a manner that the selected position falls within a preset range around the center of the captured image, rather than strictly matching the center of the captured image.
For example, if the threshold is 3, a position P4 at the fifth block to the right of, and the fourth block down from, the upper left corner (point A) is selected in Fig. 14. The angle of view in which the position P4 is set at the center of the captured image is then newly set as a preset camera position. When a plurality of candidates exists, all the candidates may be newly set as preset camera positions, or a predetermined number of candidates having the highest scores may be selected.
For example, the zoom magnification, which is one factor in selecting the angle of view, may be set to a zoom high enough to achieve a magnification for capturing only the image of the selected region. Alternatively, in a case where moving body detections or human body detections are obtained around the selected region as illustrated in Fig. 14, the zoom magnification may be set so that the surrounding region is also included. The zoom magnification may also be determined not based on the region but according to the size of the human bodies or moving bodies, for example the mean of the sizes, the median of the sizes, or the most frequently detected human body or moving body size (the mode).
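The threshold selection of step S15 and a size-based zoom choice can be sketched together. Both functions below are illustrations under stated assumptions: the grid layout, the 0.5 target framing ratio, and the function names are all hypothetical, not taken from the embodiment.

```python
import numpy as np

def select_preset_positions(frequency_map, threshold):
    """Pick every cell of the detection frequency map whose score exceeds
    the threshold; return cells sorted by score, highest first."""
    rows, cols = np.where(frequency_map > threshold)
    order = np.argsort(-frequency_map[rows, cols])
    return [(int(r), int(c)) for r, c in zip(rows[order], cols[order])]

def zoom_from_sizes(detected_heights_px, frame_height_px=480, target_ratio=0.5):
    """Choose a zoom factor so that the median detected human body height
    occupies roughly target_ratio of the frame (an assumed policy)."""
    median_h = float(np.median(detected_heights_px))
    return target_ratio * frame_height_px / median_h

fmap = np.array([[2, 6, 1],
                 [5, 3, 0]])
presets = select_preset_positions(fmap, threshold=3)
```

With a threshold of 3, the two cells scoring 6 and 5 become preset candidates, mirroring how position P4 is selected in Fig. 14.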
Then, in step S16, the server apparatus 40 determines an imaging time period, which is one of the preset tour setting items, based on the score tabulation result obtained in step S14. Based on the score information, the imaging time period is set longer as the human body detection frequency is higher. By setting a long imaging time period for a region with a high score (high human body detection frequency) in this way, images including people can be captured for the longest possible period. The imaging time period may also be fixed.
Then, in step S17, the server apparatus 40 determines an imaging order, which is one of the preset tour setting items, based on the score tabulation result obtained in step S14. The imaging order is set so that imaging is performed in descending order of score, starting from the region with the highest score. The imaging order may also be registered in advance.
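Steps S16 and S17 together yield a tour schedule: dwell times scaled by score and an order sorted by score. The sketch below assumes one simple policy (dwell time proportional to score around a base value); the function and preset names are hypothetical.

```python
def build_tour(preset_scores, base_seconds=30):
    """preset_scores: {preset_name: score}. Returns (name, dwell_seconds)
    pairs in imaging order: descending score (S17), dwell time
    proportional to score (S16)."""
    total = sum(preset_scores.values())
    ordered = sorted(preset_scores.items(), key=lambda kv: -kv[1])
    return [(name, base_seconds * len(preset_scores) * s / total)
            for name, s in ordered]

tour = build_tour({"P4": 6, "P5": 3, "P6": 1})
```

The highest-scoring preset is visited first and held longest, while the total tour time stays fixed at the base time multiplied by the number of presets.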
Then, in step S18, the server apparatus 40 outputs the preset tour setting items determined in steps S15 to S17 to the camera 20 as new preset tour setting values to be used directly.
The preset tour setting items may also be output to the image display apparatus 50. In this case, the image display apparatus 50 can present the candidates for the preset tour settings to the user and wait for a confirmation operation by the user.
In the score tabulation in step S14 described above, the scores obtained for each camera direction are simply merged. Alternatively, in the score tabulation, image analysis results smaller than a predetermined size and face recognition results whose likelihood is lower than a predetermined likelihood may be filtered out. In this way, the influence of erroneous image analysis results becomes significantly less critical. Information such as the position of the monitoring target in the captured image, the size of the monitoring target, and the detection likelihood may also be weighted, and such information may be taken into account in further calculating the scores.
The score calculation method may be switched according to the installation purpose of the surveillance camera. In a case where the purpose is people counting or congestion inspection, it is preferable to image regions in which many people are likely to appear. Accordingly, a score calculation method that gives a higher score to regions for which a large number of image analysis results have been obtained (regions with a high detection frequency) can be selected.
In a case where the purpose is store monitoring, it is preferable to image parts of the store in which theft may occur. Consequently, in this case, a score calculation method that gives a higher score to regions for which a small number of image analysis results have been obtained (regions with a low detection frequency) can be selected.
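A minimal sketch of this purpose-dependent switching, assuming a two-strategy interface (the purpose names and the "max minus frequency" inversion for quiet regions are assumptions, not the patented method):

```python
import numpy as np

def score_map(frequency_map, purpose):
    """Turn a detection frequency map into a score map according to the
    installation purpose of the camera."""
    if purpose == "counting":          # favour busy regions
        return frequency_map.copy()
    if purpose == "store_monitoring":  # favour quiet regions
        return frequency_map.max() - frequency_map
    raise ValueError(f"unknown purpose: {purpose}")

fmap = np.array([[2, 6], [5, 0]])
busy = score_map(fmap, "counting")
quiet = score_map(fmap, "store_monitoring")
```

The same frequency map then yields opposite preset candidates: the busiest cell for crowd counting, the quietest cell for theft monitoring.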
The score calculation method may be selected not only according to the purpose of installing the camera but also according to how many image analysis results are obtained in the monitoring target range. For example, in a case where a large number of image analysis results have been obtained in the monitoring target range, so that a large number of people can be assumed to appear throughout the entire range, a calculation method may be selected in which regions with a small number of people (regions with a low detection frequency) are given high scores and are thus likely to be selected as preset tour targets. In this way, the preset tour can be set so that people acting in less crowded regions of a crowded store or public place are actively imaged.
On the other hand, in a case where a small number of image analysis results have been obtained in the monitoring target range, so that a small number of people can be assumed to appear throughout the entire range, a calculation method may be selected in which regions with a large number of people (regions with a high detection frequency) are given high scores and are thus likely to be selected as preset tour targets. In this way, the preset tour can be set so that an intruder or suspicious person in a closed store or a parking lot, where people rarely appear, is actively imaged.
The preset information may be determined for each time period (for example, morning, noon, and evening) in which the images for image analysis are captured. In this case, the preset tour is performed while switching, according to the time period, the preset information used as a reference for the preset tour imaging. In this way, even if the likelihood of objects appearing in the monitoring target range differs between time periods, preset tour imaging can be performed appropriately by flexibly switching the angle of view. The preset information may also be determined according to external factors other than the time period described above, such as the weather, the date, whether the day is a working day, the season, and the temperature.
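Time-period switching can be as simple as keying preset tours by period and selecting one from the capture timestamp. The period boundaries and preset names below are hypothetical:

```python
from datetime import time

# Assumed structure: one preset tour per time period.
presets_by_period = {
    "morning": ["entrance", "parking"],
    "noon":    ["checkout", "aisle_3"],
    "evening": ["backdoor", "parking"],
}

def period_of(t):
    """Map a time of day to a period name (illustrative boundaries)."""
    if t < time(11):
        return "morning"
    if t < time(17):
        return "noon"
    return "evening"

current_presets = presets_by_period[period_of(time(12, 30))]
```

The same lookup could be extended with other external factors (weather, date, season) by enlarging the key.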
In Fig. 8, the processing in steps S1 and S6 corresponds to the processing executed by the angle-of-view control unit 126, the processing in step S2 corresponds to the processing executed by the image acquisition unit 121, the processing in steps S3 to S5 corresponds to the processing executed by the image analysis unit 123, and the processing in step S7 corresponds to the processing executed by the communication unit 124.
In Fig. 10, the processing in steps S11 and S18 corresponds to the processing executed by the communication unit 141, the processing in step S12 corresponds to the processing executed by the coordinate conversion unit 142, the processing in steps S13 and S14 corresponds to the processing executed by the score calculation unit 143, and the processing executed in steps S15 to S17 corresponds to the processing executed by the preset setting determination unit 144.
As described above, in the present exemplary embodiment, images of the entire monitoring target range are captured and analyzed, and the preset information is determined from the analysis results. Using the results of this preliminary image analysis, preset information suitable for monitoring the monitoring target range can be determined.
Conventionally, an installer or the user must configure the preset tour settings for each camera independently. Some users may not know how to configure the preset tour settings, or may not even be aware that the camera has a preset tour function in the first place. As a result, there are many cases in which a camera capable of imaging a wide range is installed but used as a fixed camera that images only the same range. In the present exemplary embodiment, the preset information can be determined and updated automatically. In this way, the preset tour can be configured without user operation, so that a wide range can be imaged using a single camera.
Whether the monitoring target is detected is analyzed for each of a plurality of regions of the monitoring target range. In this way, the preset information can be determined while taking into account the regions of the monitoring target range in which the monitoring target is likely to be detected (regions in which the monitoring target is likely to appear), the regions in which the monitoring target cannot be detected (regions in which the monitoring target cannot appear), and the like.
By setting the monitoring target to at least one of a moving body, a person, and a person's face, the target (for example, a moving body such as a vehicle, a moving person, or a specific person) can be monitored appropriately.
A score corresponding to the image analysis results is set for each of the plurality of regions of the monitoring target range, and the preset camera positions are determined so that a region having a score equal to or higher than a threshold is set at the center of the captured image. In this way, the desired preset information can be determined by appropriately using a score calculation method that gives a high score to regions for which active preset tour imaging may be needed.
For example, in a case where the score is set higher for regions with a higher monitoring target detection frequency, the preset information is determined so that tour imaging is performed on paths and the like through which the monitoring target (for example, people) frequently passes. In this way, congestion inspection, activity monitoring of suspicious persons, and the like can be performed appropriately. On the other hand, in a case where the score is set high for regions with a low monitoring target detection frequency, the preset information is determined so that preset tour imaging is performed on places and the like in which people rarely appear.
The preset information can be determined more appropriately by setting scores weighted based on factors such as the region of the captured image in which the monitoring target is detected, the size of the monitoring target, and the monitoring target detection likelihood.
If a predetermined number of the highest scores is selected and the preset camera positions are determined so that the regions corresponding to the selected scores are each set at the center of the captured image, preset tour imaging can be performed on the regions with high priority. By setting the predetermined number, the number of registered preset camera positions can be prevented from becoming too large for actual use. The zoom magnification is determined according to the size (mean, median, or mode) of the monitoring target, so that the captured image reliably includes the monitoring target.
The imaging time period during the preset tour is set longer for preset camera positions with higher scores. In this way, regions for which active tour imaging may be needed can be monitored intensively. The transition order (imaging order) is determined so that the preset camera positions are switched in descending order of score. In this way, regions for which active preset tour imaging may be needed can be monitored intensively.
As described above, the preset information (camera angle of view, imaging time period, and imaging order) is determined based on the results of the preliminary image analysis. By controlling the camera angle of view so that the camera tours the preset camera positions with reference to the preset information determined as described above, tour imaging of a monitoring target range larger than the imaging range of the camera can be performed effectively, and monitoring can thereby be performed appropriately.
By determining the preset information for each time period in which the images for image analysis are captured, the preset tour can be performed while switching the referenced preset information according to the time period. In this way, even in a case where the likelihood of objects being present in the monitoring target range differs between time periods, the angle of view can still be switched flexibly, and tour imaging can thereby be performed appropriately.
When the preset information determined based on the image analysis results is presented to the user (for example, an observer) to obtain a confirmation operation by the user, the registration of unnecessary preset information can be avoided, and monitoring can be performed more appropriately.
The exemplary embodiment described above essentially describes the case of using a single camera 20. Alternatively, the present invention is also applicable to a system including a plurality of cameras 20.
Fig. 15 is a diagram illustrating an example of the network connection structure representing the operating environment of a network camera system including a plurality of cameras 20. As illustrated in Fig. 15, a plurality of (three) cameras 20A to 20C is used.
The camera 20A has the same structure as the camera 20 described above, and thus has a PTZ change function and can switch its angle of view. On the other hand, the cameras 20B and 20C are fixed cameras without an angle-of-view switching function. More specifically, the cameras 20B and 20C have the same structure as the camera 20, except that the angle-of-view control unit 126 illustrated in Fig. 2 is not provided.
Fig. 16 is a diagram illustrating an example of the arrangement of the cameras 20A to 20C. As illustrated in Fig. 16, the cameras 20A to 20C monitor the same monitoring target range defined by the ABCD plane.
Here, not only the camera 20A but also the cameras 20B and 20C send image analysis results to the storage apparatus 30. In this way, in the preset setting processing executed by the server apparatus 40, the image analysis results obtained by the cameras 20B and 20C can be used in addition to the image analysis results obtained by the camera 20A.
As described above, a plurality of cameras 20A to 20C is used, and the preset information is determined by using the image analysis results from the cameras 20A to 20C. By using a plurality of cameras in combination in this way, the time needed for the initial setting and for updates can be shortened.
The number of cameras is not limited to the number in Fig. 15. It is only necessary to provide at least one camera having an angle-of-view change function, and any of the other cameras may or may not have an angle-of-view change function.
In the exemplary embodiment described above, the image analysis processing is executed on the camera (imaging apparatus) side. Alternatively, the image analysis processing may be executed by the server apparatus 40. More specifically, the server apparatus 40 may include the image analysis unit 123 illustrated in Fig. 2.
The preset setting processing executed by the server apparatus 40 according to the exemplary embodiment described above may instead be executed by the camera 20 or the image display apparatus 50. In the exemplary embodiment described above, the image data files from the camera 20 are stored in the storage apparatus 30. Alternatively, the image data files may be held by the camera 20 or the image display apparatus 50.
In the exemplary embodiment described above, the preset information is determined by analyzing images captured by the camera 20 that performs the preset tour. Alternatively, the preset information may be determined by analyzing images captured by a camera different from the camera 20 that performs the preset tour, as long as images obtained by imaging the entire monitoring target range can be analyzed.
With the structure according to the exemplary embodiment described above, the information for tour imaging performed by an imaging apparatus such as a camera can be set more easily.
Other embodiments
Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer-executable instructions (for example, one or more programs) recorded on a storage medium (which may also be referred to more fully as a "non-transitory computer-readable storage medium") to perform the functions of one or more of the above-described embodiments, and/or that includes one or more circuits (for example, an application-specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiments, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer-executable instructions from the storage medium to perform the functions of one or more of the above-described embodiments and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiments. The computer may comprise one or more processors (for example, a central processing unit (CPU) or a micro processing unit (MPU)), and may include a network of separate computers or separate processors to read out and execute the computer-executable instructions. The computer-executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read-only memory (ROM), a storage of distributed computing systems, an optical disc (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
Embodiments of the present invention can also be realized by the following method: software (a program) that performs the functions of the above-described embodiments is supplied to a system or apparatus through a network or various storage media, and a computer, central processing unit (CPU), or micro processing unit (MPU) of the system or apparatus reads out and executes the program.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Claims (16)
1. An image processing apparatus that sets, in response to an instruction from a user or preset information, one or more imaging positions in a position group to be used in order by an imaging apparatus, the image processing apparatus comprising:
an imaging control unit configured to control the imaging apparatus to capture images in a state in which one or more imaging positions in the position group to be used in order by the imaging apparatus have not yet been set;
an analysis unit configured to detect at least one of a motion of an object, a human body, and a face from the images captured by the imaging apparatus;
a determination unit configured to determine the imaging positions in the position group by using results of analysis, by the analysis unit, of the images captured before the position group is set for the imaging apparatus; and
a control unit configured to control the imaging apparatus to perform imaging by using, in order, at least one imaging position determined by the determination unit.
2. The image processing apparatus according to claim 1, wherein the imaging control unit performs control so that an imaging direction of the imaging apparatus is switched before the position group is set, and
wherein the determination unit determines the imaging positions in the position group by using a plurality of images captured with a plurality of imaging directions of the imaging apparatus before the position group is set.
3. The image processing apparatus according to claim 1, wherein the determination unit further determines, by using the images captured by the imaging apparatus before the position group is set, an imaging time period for each imaging position of the position group.
4. The image processing apparatus according to claim 1, wherein the determination unit further determines, by using the images captured by the imaging apparatus before the position group is set, an imaging order for the position group.
5. The image processing apparatus according to claim 1,
wherein the determination unit determines the imaging positions for the position group from the images captured by the imaging apparatus before the position group is set, based on positions of the motion, human body, and/or face detected by the analysis unit.
6. The image processing apparatus according to claim 1, wherein
the analysis unit detects object regions from the images captured by the imaging apparatus by comparing the images captured by the imaging apparatus with a predetermined image pattern,
wherein the determination unit determines the imaging positions in the position group based on positions of the object regions detected by the analysis unit from the images captured by the imaging apparatus before the position group is set.
7. The image processing apparatus according to claim 5, wherein the determination unit divides an imaging range of the imaging apparatus into a plurality of regions, sets a score for each divided region according to a detection status of at least any one of the motion, human body, and/or face detected in that divided region, and determines the imaging positions in the position group based on the score set for each divided region.
8. The image processing apparatus according to claim 7, wherein the determination unit determines the imaging positions in the position group so that, among the plurality of divided regions, divided regions having a detection frequency equal to or higher than a threshold are imaged in the position group, the detection frequency being a detection frequency of a monitoring target specified based on detection of the detected motion, human body, and/or face.
9. The image processing apparatus according to claim 7, wherein the determination unit determines the imaging positions in the position group so that, among the plurality of divided regions, divided regions having a detection frequency lower than a threshold are imaged in the position group, the detection frequency being a detection frequency of a monitoring target specified based on detection of the detected motion, human body, and/or face.
10. The image processing apparatus according to claim 7, wherein the determination unit sets a score weighted based on at least one of a position and a detection likelihood, the position being a position, in the captured image, of a monitoring target specified based on detection of the detected motion, human body, and/or face, and the detection likelihood being a detection likelihood of the monitoring target.
11. The image processing apparatus according to claim 1, wherein the determination unit determines, according to a mean, median, or mode of sizes of a monitoring target specified based on detection of at least any one of a motion, a human body, and a face, a zoom magnification of the imaging apparatus used for imaging the positions in the position group.
12. The image processing apparatus according to claim 1, further comprising:
a display control unit configured to display, on a display screen, the imaging positions determined by the determination unit.
13. The image processing apparatus according to claim 1, wherein the determination unit determines the imaging positions of the position group for each imaging time period.
14. A control method of an image processing apparatus that sets, in response to an instruction from a user or preset information, one or more imaging positions in a position group to be used in order by an imaging apparatus, the control method comprising the following steps:
initially controlling the imaging apparatus to capture images in a state in which one or more imaging positions in the position group to be used in order by the imaging apparatus have not yet been set;
detecting at least one of a motion of an object, a human body, and a face from the images captured by the imaging apparatus;
determining the imaging positions of the position group by using results of the detection on the images captured before the position group is set for the imaging apparatus; and
controlling the imaging apparatus to perform imaging by using, in order, at least one imaging position determined in the determining step.
15. The control method according to claim 14, wherein the initial controlling step performs control so that an imaging direction of the imaging apparatus is switched before the position group is set, and
wherein the determining step determines the imaging positions of the position group by using a plurality of images captured with a plurality of imaging directions of the imaging apparatus before the position group is set.
16. The control method according to claim 14, wherein the determining step further determines, by using the images captured by the imaging apparatus before the position group is set, an imaging time period for each imaging position of the position group.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014235000A JP6532217B2 (en) | 2014-11-19 | 2014-11-19 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING SYSTEM |
JP2014-235000 | 2014-11-19 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105611230A CN105611230A (en) | 2016-05-25 |
CN105611230B true CN105611230B (en) | 2019-07-19 |
Family
ID=54782420
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510789241.3A Active CN105611230B (en) | 2014-11-19 | 2015-11-17 | Image processing apparatus and image processing method |
Country Status (4)
Country | Link |
---|---|
US (1) | US20160142680A1 (en) |
EP (1) | EP3024227B1 (en) |
JP (1) | JP6532217B2 (en) |
CN (1) | CN105611230B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI537885B (en) * | 2015-01-07 | 2016-06-11 | 晶睿通訊股份有限公司 | Monitoring method and monitoring system |
US10192128B2 (en) * | 2015-03-27 | 2019-01-29 | Nec Corporation | Mobile surveillance apparatus, program, and control method |
RU2636745C1 (en) * | 2016-08-22 | 2017-11-28 | Общество С Ограниченной Ответственностью "Дисикон" | Method and system of monitoring territory using controlled video camera |
JP6765917B2 (en) * | 2016-09-21 | 2020-10-07 | キヤノン株式会社 | Search device, its imaging device, and search method |
KR102640281B1 (en) * | 2016-10-13 | 2024-02-26 | 한화비전 주식회사 | Method of controlling surveillance camera and surveillance system adopting the method |
JP2018107587A (en) * | 2016-12-26 | 2018-07-05 | 株式会社日立国際電気 | Monitoring system |
JP6819478B2 (en) * | 2017-06-21 | 2021-01-27 | 住友電気工業株式会社 | Computer program, preset information registration method and preset information registration device |
US20190027004A1 (en) * | 2017-07-20 | 2019-01-24 | Synology Incorporated | Method for performing multi-camera automatic patrol control with aid of statistics data in a surveillance system, and associated apparatus |
JP6703284B2 (en) * | 2017-09-27 | 2020-06-03 | キヤノンマーケティングジャパン株式会社 | Image processing system, image processing system control method, and program |
GB2570447A (en) * | 2018-01-23 | 2019-07-31 | Canon Kk | Method and system for improving construction of regions of interest |
CN108921098B (en) * | 2018-07-03 | 2020-08-18 | 百度在线网络技术(北京)有限公司 | Human motion analysis method, device, equipment and storage medium |
US11594060B2 (en) * | 2018-08-30 | 2023-02-28 | Panasonic Intellectual Property Management Co., Ltd. | Animal information management system and animal information management method |
JP7317495B2 (en) * | 2018-12-04 | 2023-07-31 | 株式会社東芝 | Surveillance system and surveillance camera device |
JP6721076B1 (en) * | 2019-03-25 | 2020-07-08 | 日本電気株式会社 | Information processing device, camera control method, program, camera device, and image processing system |
DE112019007278T5 (en) * | 2019-05-03 | 2022-05-12 | Toyota Motor Europe | Image capture device for finding an object |
DE112019007277T5 (en) * | 2019-05-03 | 2022-05-12 | Toyota Motor Europe | Image capture device for tracking an object |
JP2021032964A (en) * | 2019-08-20 | 2021-03-01 | エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd | Control device, imaging system, control method and program |
JPWO2021145071A1 (en) | 2020-01-14 | 2021-07-22 | ||
JP7358263B2 (en) * | 2020-02-12 | 2023-10-10 | 本田技研工業株式会社 | Vehicle control device, vehicle control method, and vehicle control program |
WO2023063088A1 (en) * | 2021-10-14 | 2023-04-20 | Nec Corporation | Method, apparatus, system and non-transitory computer readable medium for adaptively adjusting detection area |
US20230230379A1 (en) * | 2022-01-19 | 2023-07-20 | Target Brands, Inc. | Safety compliance system and method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1829319A (en) * | 2005-02-28 | 2006-09-06 | 索尼株式会社 | Information processing system, information processing apparatus and method, and program |
CN102577347A (en) * | 2009-06-29 | 2012-07-11 | 博世安防系统有限公司 | Omni-directional intelligent autotour and situational aware dome surveillance camera system and method |
EP2642747A1 (en) * | 2012-03-21 | 2013-09-25 | Axis AB | A movable monitoring device, a method therein and a monitoring system comprising the movable monitoring device. |
CN104144326A (en) * | 2014-07-15 | 2014-11-12 | 深圳奇沃智联科技有限公司 | Robot monitoring system with image recognition and automatic patrol route setting function |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4324030B2 (en) * | 2004-06-25 | 2009-09-02 | キヤノン株式会社 | Camera control apparatus, camera control method, and storage medium |
JP4558696B2 (en) * | 2006-09-25 | 2010-10-06 | パナソニック株式会社 | Automatic body tracking device |
US20100033577A1 (en) * | 2008-08-05 | 2010-02-11 | I2C Technologies, Ltd. | Video surveillance and remote monitoring |
JP5120199B2 (en) * | 2008-10-23 | 2013-01-16 | 株式会社Jvcケンウッド | Video patrol monitoring system |
JP5187275B2 (en) * | 2009-05-29 | 2013-04-24 | 株式会社Jvcケンウッド | Photographing method and photographing apparatus |
US8564667B2 (en) * | 2009-08-21 | 2013-10-22 | Empire Technology Development Llc | Surveillance system |
US8531525B2 (en) * | 2009-12-22 | 2013-09-10 | Utc Fire & Security Americas Corporation, Inc. | Surveillance system and method for operating same |
JP2011188258A (en) * | 2010-03-09 | 2011-09-22 | Canon Inc | Camera system |
AU2010201740B2 (en) * | 2010-04-30 | 2013-03-07 | Canon Kabushiki Kaisha | Method, apparatus and system for performing a zoom operation |
JP5820210B2 (en) * | 2011-09-15 | 2015-11-24 | キヤノン株式会社 | IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD |
US10931920B2 (en) * | 2013-03-14 | 2021-02-23 | Pelco, Inc. | Auto-learning smart tours for video surveillance |
- 2014
- 2014-11-19 JP JP2014235000A patent/JP6532217B2/en active Active
- 2015
- 2015-11-16 US US14/942,741 patent/US20160142680A1/en not_active Abandoned
- 2015-11-17 CN CN201510789241.3A patent/CN105611230B/en active Active
- 2015-11-18 EP EP15195239.7A patent/EP3024227B1/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN105611230A (en) | 2016-05-25 |
EP3024227A1 (en) | 2016-05-25 |
JP2016100696A (en) | 2016-05-30 |
JP6532217B2 (en) | 2019-06-19 |
EP3024227B1 (en) | 2019-09-25 |
US20160142680A1 (en) | 2016-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105611230B (en) | Image processing apparatus and image processing method | |
EP2549738B1 (en) | Method and camera for determining an image adjustment parameter | |
KR101337060B1 (en) | Imaging processing device and imaging processing method | |
CN1905629B (en) | Image capturing apparatus and image capturing method | |
JP4819001B2 (en) | Imaging apparatus and method, program, image processing apparatus and method, and program | |
EP1696398B1 (en) | Information processing system, information processing apparatus and information processing method , program, and recording medium | |
US20100321505A1 (en) | Target tracking apparatus, image tracking apparatus, methods of controlling operation of same, and digital camera | |
US9092659B2 (en) | Subject determination apparatus that determines whether or not subject is specific subject | |
JP6465600B2 (en) | Video processing apparatus and video processing method | |
CN105359502B (en) | Follow-up mechanism, method for tracing and the non-volatile memory medium for storing tracing program | |
US9779290B2 (en) | Detecting apparatus, detecting method and computer readable recording medium recording program for detecting state in predetermined area within images | |
CN108337429A (en) | Image processing equipment and image processing method | |
US10127424B2 (en) | Image processing apparatus, image processing method, and image processing system | |
JP2010114752A (en) | Device and method of imaging and program | |
KR20150078275A (en) | Digital Photographing Apparatus And Method For Capturing a Moving Subject | |
JP2017076288A (en) | Information processor, information processing method and program | |
CN109120844A (en) | Video camera controller, camera shooting control method and storage medium | |
CN112017210A (en) | Target object tracking method and device | |
CN103685876A (en) | Imaging apparatus and imaging method | |
KR101288881B1 (en) | Set up a number of areas of surveillance and monitoring of surveillance cameras in the area to shoot enlarged system | |
US20230148125A1 (en) | Image processing apparatus and method, and image capturing apparatus | |
JP2021071794A (en) | Main subject determination device, imaging device, main subject determination method, and program | |
KR102107137B1 (en) | Method and Apparatus for Detecting Event by Using Pan-Tilt-Zoom Camera | |
US11394877B2 (en) | Control apparatus, image pickup apparatus, and control method | |
JP5929774B2 (en) | Image acquisition method, apparatus, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||