WO2013094115A1 - Time synchronization information calculation device, time synchronization information calculation method, and time synchronization information calculation program - Google Patents
Time synchronization information calculation device, time synchronization information calculation method, and time synchronization information calculation program
- Publication number: WO2013094115A1 (application PCT/JP2012/007324)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- visual event
- detection
- information
- time
- event
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
- H04N5/06—Generation of synchronising signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6811—Motion detection based on the image signal
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19639—Details of the system layout
- G08B13/19645—Multiple cameras, each having view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over
Definitions
- The present invention relates to a time synchronization information calculation device, a time synchronization information calculation method, and a time synchronization information calculation program for synchronizing a plurality of videos.
- FIG. 10 is an explanatory diagram showing an example of the configuration of the synchronization device.
- The synchronization apparatus shown in FIG. 10 is an apparatus to which the method described in Patent Document 1 is applied.
- The synchronization device includes time function generation means 1, photographing means 2, feature amount calculation means 3, time correlation calculation means 4, and illumination means 5.
- The photographing means 2 captures video of the subject 8 and outputs the captured video to the feature amount calculation means 3.
- The feature amount calculation means 3 receives the video output by the photographing means 2 and outputs a feature amount time function b(t) to the time correlation calculation means 4.
- The time correlation calculation means 4 compares the time function a(t) output by the time function generation means 1 with the feature amount time function b(t) output by the feature amount calculation means 3, and generates and outputs time synchronization information.
- The time function generation means 1 outputs a time function a(t) that takes a value determined for each frame. Specifically, a rectangular wave with a 1:1 duty ratio, alternating between positive and negative values every frame, is used.
- The time function a(t) is output to the illumination means 5 and the time correlation calculation means 4.
- The illumination means 5 illuminates the subject 8, varying the light intensity according to the value of the input time function a(t).
- The photographing means 2 photographs the subject 8 illuminated by the illumination means 5, acquires video of the subject, and outputs it to the feature amount calculation means 3.
- The feature amount calculation means 3 calculates, as a feature amount, the sum of luminance values within each frame of the subject video, and computes the feature amount time function b(t), which represents the temporal change of the calculated feature amount.
- The calculated feature amount time function b(t) is output to the time correlation calculation means 4.
- The time correlation calculation means 4 calculates the time correlation between the time function a(t) and the feature amount time function b(t) by equation (1) below.
- The time correlation value q in equation (1) changes depending on the phase difference between the time function a(t) and the feature amount time function b(t). That is, the time correlation value q corresponds to the timing lag between the operation of the illumination means 5 and the photographing means 2. Time synchronization information can therefore be obtained from the time correlation value q.
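- Equation (1) itself is not reproduced in this text. A plausible form, assuming a standard cross-correlation of the two functions over the observation window (an assumption, not the patent's verbatim formula), is:

$$q = \sum_{t} a(t)\, b(t)$$

With a(t) a rectangular wave alternating between positive and negative values, the magnitude of q is largest when the illumination modulation and the captured luminance sum are in phase, which is why q reflects the timing lag between the illumination means 5 and the photographing means 2.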
- The method of Patent Document 1 thus needs to apply modulated illumination in order to synchronize images captured by a plurality of cameras.
- An object of the present invention is to provide a time synchronization information calculation device, a time synchronization information calculation method, and a time synchronization information calculation program capable of synchronizing time between a plurality of cameras without requiring a special device.
- A time synchronization information calculation apparatus according to the present invention includes: a plurality of video acquisition means for acquiring video; a plurality of visual event detection means, provided in correspondence with the plurality of video acquisition means, for analyzing the videos acquired by the plurality of video acquisition means to detect a visual event and generating visual event detection information including information indicating the detection time of the visual event; and visual event integration means for integrating the visual event detection information generated by the plurality of visual event detection means and generating time synchronization information for synchronizing the times of the videos acquired by the plurality of video acquisition means.
- A time synchronization information calculation method according to the present invention inputs a plurality of videos, analyzes the input videos to detect a visual event, generates visual event detection information including information indicating the detection time of the visual event, integrates the generated visual event detection information, and generates time synchronization information for synchronizing the times of the plurality of videos.
- A time synchronization information calculation program according to the present invention causes a computer to execute: a process of inputting a plurality of videos; a visual event detection process of analyzing the input videos to detect a visual event and generating visual event detection information including information indicating the detection time of the visual event; and a visual event integration process of integrating the generated visual event detection information and generating time synchronization information for synchronizing the times of the plurality of videos.
- According to the present invention, it is possible to synchronize time between a plurality of cameras, without requiring a special device, by detecting visual events that occur naturally during shooting.
- Embodiment 1. A first embodiment of the present invention will be described below with reference to the drawings.
- FIG. 1 is a block diagram showing a configuration of a time synchronization information calculation apparatus according to the present invention.
- The time synchronization information calculation apparatus includes video acquisition means 100-1 to 100-N, visual event detection means 101-1 to 101-N, and visual event integration means 102.
- The video acquisition means 100-1 to 100-N are devices capable of acquiring video, such as cameras.
- The video acquisition means 100-1 to 100-N output the acquired videos to the visual event detection means 101-1 to 101-N, respectively.
- The video acquisition means 100-1 to 100-N may instead be devices that read video captured by a camera or the like from a recording medium.
- For example, a device that reads a video tape may be used as the video acquisition means.
- The visual event detection means 101-1 to 101-N receive video from the video acquisition means 100-1 to 100-N, respectively. Each of the visual event detection means 101-1 to 101-N generates visual event detection information and outputs it to the visual event integration means 102.
- The visual event integration means 102 receives the visual event detection information from the visual event detection means 101-1 to 101-N.
- The visual event integration means 102 generates and outputs time synchronization information.
- The visual event detection means 101-1 to 101-N and the visual event integration means 102 are realized by a CPU provided in the time synchronization information calculation device.
- For example, the CPU of a computer may read the time synchronization information calculation program and operate as the visual event detection means 101-1 to 101-N and the visual event integration means 102 according to the program.
- The time synchronization information calculation program may be stored in a computer-readable recording medium.
- Alternatively, the visual event detection means 101-1 to 101-N and the visual event integration means 102 may each be realized by separate hardware.
- The video acquisition means 100-1 to 100-N acquire videos.
- The video acquisition means 100-1 to 100-N are arranged at positions where they can capture almost the same area and can detect the same visual event.
- The visual event need not be detectable from the videos acquired by all of the video acquisition means; it may be detectable only from the videos acquired by some of them.
- A visual event is an event that can be visually detected from video information.
- A visual event is, for example: a change in the brightness of the entire screen or of a partial area; a change in a person's posture or state, or a movement, such as squatting, falling down, running, or passing a specific position; a change in the state of a specific object, such as an automatic door or a showcase door; or the occurrence of a specific event, such as an object falling or breaking.
- The video acquisition means 100-1 to 100-N output the acquired videos to the visual event detection means 101-1 to 101-N, respectively.
- The visual event detection means 101-1 to 101-N generate frame images from the input video and detect a visual event from the generated frame images. Details of the visual event detection method are described later.
- When the input video is analog video, the visual event detection means 101-1 to 101-N generate frame images by capturing it with a video capture method.
- When the input video is encoded, for example with H.264, the visual event detection means 101-1 to 101-N decode the video with the corresponding decoding method to generate frame images.
- Depending on the type of visual event, the visual event detection means 101-1 to 101-N may be able to detect it without fully decoding the video.
- In that case, the visual event detection means 101-1 to 101-N may perform only the minimum necessary decoding, extract a feature amount, and detect the visual event from it. For example, a change in brightness can be detected from the difference between the average pixel values of successive frames, as described later. The visual event detection means 101-1 to 101-N may therefore calculate the average value of each frame by extracting only the DC component of each block of an H.264 or MPEG-2/4 stream and averaging the extracted DC components.
- The visual event detection means 101-1 to 101-N need not be able to detect all of the visual events described above; it is sufficient if at least one can be detected.
- The visual event detection means 101-1 to 101-N output the detection results to the visual event integration means 102 as visual event detection information.
- The visual event detection information includes information indicating the type of the detected event (hereinafter, event type information) and time information indicating when the occurrence of the event was detected (hereinafter, event detection time information).
- The event type is, for example, a change in brightness or a change in a person's state.
- The time information is the time information associated with the video input to the visual event detection means 101-1 to 101-N.
- For example, the event detection time information included in the visual event detection information output by the visual event detection means 101-1 is generated based on the time information attached to the video input to the visual event detection means 101-1.
- The visual event detection information may also include information indicating the reliability of the event detection.
- Based on the visual event detection information input from each visual event detection means, the visual event integration means 102 detects the time lag between the videos acquired by the video acquisition means and generates the information necessary to synchronize the times of the videos, that is, time synchronization information.
- First, the visual event integration means 102 obtains a likelihood function of visual event occurrence from each piece of visual event detection information.
- FIG. 2 is an explanatory diagram showing how the likelihood function f_i(t) of visual event occurrence is obtained.
- The likelihood function is a function representing the likelihood (probability) that a visual event occurred at a given time t.
- The likelihood function for event type j can be calculated as in equation (2).
- Here, t_i,j,k denotes the k-th detection time of event type j included in visual event detection information i.
- K_j denotes the number of times an event of event type j occurred.
- g_j(t) is a likelihood distribution function in the time direction for event type j.
- The time direction likelihood distribution function g_j(t) is stored in advance in a storage unit (not shown) included in the time synchronization information calculation device.
- g_j(t) is a function that models how far the detected time can deviate, forward or backward, from the true event occurrence time.
- g_j(t) may be set separately for each event type.
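- Equation (2) is likewise not reproduced in this text. Given the definitions of t_i,j,k, K_j, and g_j(t) above, a plausible reconstruction (an assumption, particularly regarding normalization) is:

$$f_{i,j}(t) = \sum_{k=1}^{K_j} g_j\left(t - t_{i,j,k}\right)$$

That is, the likelihood that an event of type j occurred at time t in video i is the sum of the time direction likelihood distributions g_j centered at the detection times recorded in visual event detection information i.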
- Next, the visual event integration means 102 calculates time synchronization information between the videos output by the video acquisition means, based on the event occurrence likelihood functions calculated from the visual event detection information.
- If time synchronization is performed within a time span over which expansion and contraction of the time axis between videos is negligible, only the time shift amount (offset) of each video needs to be corrected.
- In this case, offsets Δ1, ..., ΔN that maximize F(Δ1, ..., ΔN) are obtained, as shown in equation (3).
- When expansion and contraction of the time axis cannot be ignored, the expansion and contraction are modeled and the same processing as above is performed.
- For example, if the time axis changes linearly, the visual event integration means 102 performs the same processing incorporating a linear model of the time axis. That is, the visual event integration means 102 obtains Δ1, ..., ΔN and α1, ..., αN that maximize equation (5).
- The time synchronization information can be calculated in this way.
- Time synchronization information can be calculated similarly even when a more complicated model is used.
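- Equations (3) and (5) are also not reproduced in this text. From the surrounding description, plausible forms (assumptions, not the patent's verbatim formulas) are, for the offset-only case,

$$F(\Delta_1, \ldots, \Delta_N) = \int \prod_{i=1}^{N} f_i(t + \Delta_i)\, dt \qquad (3)$$

and, for the linear time-axis model with scale factors α1, ..., αN,

$$F(\Delta_1, \ldots, \Delta_N, \alpha_1, \ldots, \alpha_N) = \int \prod_{i=1}^{N} f_i(\alpha_i t + \Delta_i)\, dt \qquad (5)$$

so that the maximum is attained when the corrected likelihood functions of all videos overlap as much as possible.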
- When the visual event detection information includes reliability information, that information is reflected as a weight in the processing described above.
- Specifically, the event occurrence likelihood function may be obtained by multiplying g_j(t − t_i,j,k) by a weight corresponding to the reliability of the detection, at time t_i,j,k, of event type j included in visual event detection information i.
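- To make the offset-only case concrete, the following is a minimal sketch of searching for the relative offset between two cameras, using the reconstructed forms of equations (2) and (3) above with optional reliability weights. The Gaussian kernel width, grid resolution, and search range are illustrative assumptions, not values from the patent.

```python
# Sketch: estimate the relative time offset of camera 2 against camera 1 by
# maximizing the overlap of their event occurrence likelihood functions.
import numpy as np

def gaussian(t, sigma):
    # time direction likelihood distribution g_j, here assumed Gaussian
    return np.exp(-0.5 * (t / sigma) ** 2)

def likelihood(grid, det_times, weights=None, sigma=0.5):
    """Event occurrence likelihood on a time grid: a (reliability-weighted)
    sum of kernels centered at each detection time, cf. equation (2)."""
    w = np.ones(len(det_times)) if weights is None else np.asarray(weights)
    f = np.zeros_like(grid)
    for t_k, w_k in zip(det_times, w):
        f += w_k * gaussian(grid - t_k, sigma)
    return f

def estimate_offset(times1, times2, w1=None, w2=None, search=10.0, step=0.05):
    """Grid-search the offset (seconds) applied to camera 2's detections
    that maximizes the overlap integral with camera 1, cf. equation (3)."""
    lo = min(min(times1), min(times2)) - search
    hi = max(max(times1), max(times2)) + search
    grid = np.arange(lo, hi, step)
    f1 = likelihood(grid, times1, w1)
    best_delta, best_score = 0.0, -np.inf
    for delta in np.arange(-search, search, step):
        f2 = likelihood(grid, np.asarray(times2) + delta, w2)
        score = np.trapz(f1 * f2, grid)  # overlap integral
        if score > best_score:
            best_delta, best_score = delta, score
    return best_delta
```

Extending the same search over per-camera scale factors would correspond to the linear time-axis case of equation (5).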
- When obtaining the time synchronization information that maximizes F(Δ1, ..., ΔN) or F(Δ1, ..., ΔN, α1, ..., αN), the calculation need not be performed over all visual event detection information at once; it may be performed for several groups and then integrated as a whole.
- For example, an event such as a door opening can be captured only in videos that show the door. It is therefore possible to group together the visual event detection information generated from videos showing the door.
- In this way, visual event detection information is grouped according to the visual events that can be captured.
- Between the groups, the time synchronization information may then be adjusted using a visual event that the groups can capture in common, for example the detection result of a change in brightness.
- FIG. 3 is a flowchart showing the operation of the visual event integration means 102.
- The visual event integration means 102 calculates the event occurrence likelihood function for each event type of the visual events to be detected, one event type at a time (steps S1 to S3, S5).
- First, the visual event integration means 102 selects the first event type (step S1).
- Next, the visual event integration means 102 calculates the event occurrence likelihood function for the selected event type by the method described above (step S2).
- The visual event integration means 102 then determines whether the selected event type is the last event type (step S3). If it is not the last event type (N in step S3), the visual event integration means 102 selects the next event type (step S5) and returns to step S2. If it is the last event type (Y in step S3), the process proceeds to step S4.
- In step S4, the visual event integration means 102 uses the event occurrence likelihood functions calculated in step S2 to maximize F(Δ1, ..., ΔN) in equation (3) or F(Δ1, ..., ΔN, α1, ..., αN) in equation (5), and generates the time synchronization information.
- As described above, time synchronization can be achieved between the plurality of videos. Therefore, even if each camera's clock has drifted since the cameras were first adjusted to a common time, the time synchronization information generated by the time synchronization information calculation device can be used to re-align the times between the cameras.
- The times of the videos taken by the cameras can likewise be synchronized; for example, even if the clock times differ between videos taken by a plurality of surveillance cameras, the videos can be synchronized.
- Position information extracted from such synchronized videos can be used, for example, to detect a person's intrusion into a specific area. Further, by analyzing flow lines of people composed of the extracted time-series position information, it can be used in systems and services that acquire information for marketing or store layout. It can also be used in systems and services that extract worker flow lines and analyze work efficiency in factories and distribution warehouses.
- In the description above, time synchronization information is extracted in real time from video captured by the cameras.
- However, the time synchronization information may instead be extracted by processing videos taken by a plurality of cameras offline.
- The visual event may also be detected intermittently. That is, once synchronization is established, the visual event need not be detected for a while, and synchronization may be established again after a predetermined time has elapsed. Executing intermittently in this way suppresses the power required to extract the time synchronization information, compared with operating continuously.
- In this embodiment, time synchronization information that enables time synchronization between a plurality of cameras is generated based on the visual events detected by the visual event detection means 101-1 to 101-N. Therefore, it is not necessary to use a special device, such as the lighting device emitting modulated light used by the device described in Patent Document 1, and cost can be reduced.
- FIG. 4 is a block diagram showing an example of the configuration of the visual event detection means.
- The visual event detection means includes event detection means, namely brightness change detection means 201, person posture change/motion detection means 202, specific object state change detection means 203, and specific state change detection means 204. The visual event detection means further includes visual event detection information integration means 210.
- The brightness change detection means 201 receives video and generates brightness change detection information based on the input video.
- The brightness change detection means 201 outputs the generated brightness change detection information to the visual event detection information integration means 210.
- The person posture change/motion detection means 202 receives video and generates person posture change/motion detection information based on the input video.
- The person posture change/motion detection means 202 outputs the generated person posture change/motion detection information to the visual event detection information integration means 210.
- The specific object state change detection means 203 receives video and generates specific object state change detection information based on the input video.
- The specific object state change detection means 203 outputs the generated specific object state change detection information to the visual event detection information integration means 210.
- The specific state change detection means 204 receives video and generates specific state change detection information based on the input video.
- The specific state change detection means 204 outputs the generated specific state change detection information to the visual event detection information integration means 210.
- The visual event detection information integration means 210 integrates the input brightness change detection information, person posture change/motion detection information, specific object state change detection information, and specific state change detection information to generate visual event detection information.
- The visual event detection information integration means 210 outputs the generated visual event detection information.
- The brightness change detection means 201 detects a change in the brightness of the entire screen or of a partial area from the input video. Indoors, one factor causing a brightness change is lighting being switched ON or OFF; in this case the brightness of the entire screen changes, and the brightness change detection means 201 detects and outputs that change.
- When only a partial area is affected, the brightness change detection means 201 may detect the brightness change based only on the video area showing that part. The brightness change detection means 201 may also detect brightness changes caused by blinds opening and closing, or by the weather, such as a lightning flash or sunlight interrupted by clouds.
- The brightness change detection means 201 outputs the result of detecting the brightness change to the visual event detection information integration means 210 as brightness change detection information.
- The brightness change detection information is information including the time at which the brightness change was detected.
- If reliability information for the detection result can also be acquired, the brightness change detection means 201 may include it in the brightness change detection information.
- When several kinds of brightness change are detected, the brightness change detection means 201 may include event type information in the brightness change detection information to classify and distinguish them.
- The person posture change/motion detection means 202 extracts a person region from the input video and detects changes in the person's posture or state, and the person's motions.
- Posture and state changes to be detected include changes such as squatting down (sitting) from a standing position, standing up from a squatting (sitting) position, bending down to lift an object, and leaning on something.
- The person posture change/motion detection means 202 may also detect posture changes caused by bowing, raising a hand, stretching, or turning around.
- Motions to be detected include various actions such as passing a certain position, picking up an object, making a phone call, putting on a hat, and walking.
- The person posture change/motion detection means 202 outputs the result of detecting the person's posture or state changes and motions to the visual event detection information integration means 210 as person posture change/motion detection information.
- The person posture change/motion detection information is information including the time at which the person's posture or state change or motion was detected, and event type information distinguishing the detected posture or state change or motion. If reliability information for the detection result can be acquired together with it, the person posture change/motion detection means 202 may include that information in the person posture change/motion detection information.
- The specific object state change detection means 203 extracts a specific object region from the input video and detects a change in the state of the specific object.
- Specific objects to be detected and their state changes include the opening and closing of doors (including automatic doors), the opening and closing of refrigerated or freezer showcase doors, the switching of images shown on a display, and other state changes of objects whose state changes regularly.
- The specific object state change detection means 203 may also detect a change of a traffic light. Details of this detection are described later.
- The specific object state change detection means 203 outputs the result of detecting a change in the state of the specific object to the visual event detection information integration means 210 as specific object state change detection information.
- The specific object state change detection information is information including the time at which the change in the state of the specific object was detected and event type information distinguishing the detected state change of the specific object.
- If reliability information for the detection result can also be acquired, the specific object state change detection means 203 may include it in the specific object state change detection information.
- The specific state change detection means 204 detects the occurrence of a specific event from the input video. Specific events to be detected include an object falling, a collision, breakage, and the like. When the input video is road surveillance video, the specific state change detection means 204 may detect a change in the flow of vehicles caused by a traffic signal change.
- The specific state change detection means 204 outputs the result of detecting the occurrence of a specific state change or specific event to the visual event detection information integration means 210 as specific state change detection information.
- The specific state change detection information is information including the time at which the specific state change or the occurrence of the specific event was detected and event type information distinguishing the detected specific state change or specific event.
- If reliability information for the detection result can also be acquired, the specific state change detection means 204 may include it in the specific state change detection information.
- The visual event detection information integration means 210 integrates the brightness change detection information, person posture change/motion detection information, specific object state change detection information, and specific state change detection information, and outputs visual event detection information.
- The integration performed by the visual event detection information integration means 210 may be integration at the level of simply multiplexing the individual pieces of information.
- Alternatively, the visual event detection information integration means 210 may sort the pieces of information in time order and store them in the visual event detection information, or may store them grouped by event type at regular intervals.
- The visual event detection means 101-1 to 101-N need not include all of the brightness change detection means 201, the person posture change/motion detection means 202, the specific object state change detection means 203, and the specific state change detection means 204.
- For example, when the visual event detection means includes only the brightness change detection means 201 and the person posture change/motion detection means 202, the visual event detection information integration means 210 integrates and outputs only the results output by those two means.
- FIG. 5 is a block diagram showing an example of the configuration of the brightness change detection means 201.
- The brightness change detection means 201 includes inter-frame pixel value difference calculation means 300 and pixel value difference determination means 301.
- The inter-frame pixel value difference calculation means 300 generates inter-frame pixel value difference information based on the input video.
- The pixel value difference determination means 301 generates brightness change detection information from the generated inter-frame pixel value difference information.
- The inter-frame pixel value difference calculation means 300 receives video and calculates pixel value differences between frames of the input video.
- The inter-frame pixel value difference calculation means 300 may calculate the difference between consecutive frames, or between frames several frames apart.
- The inter-frame pixel value difference calculation means 300 may obtain the pixel value difference between frames for each pixel, or may divide each frame into a plurality of regions and obtain the difference between per-region statistics (average, total, median, mode, etc.) of the pixel values. The pixels or regions for which the difference is obtained may cover the entire screen or only part of it. The inter-frame pixel value difference calculation means 300 may also obtain the difference between statistics of pixel values computed over the entire frame.
- The inter-frame pixel value difference calculation means 300 outputs the calculation result to the pixel value difference determination means 301 as inter-frame pixel value difference information.
- The pixel value difference determination means 301 determines whether a brightness change occurred based on the input inter-frame pixel value difference information.
- Specifically, the pixel value difference determination means 301 determines whether a brightness change occurred according to whether a statistic of the difference values acquired from the inter-frame pixel value difference information, such as the sum of absolute values, average, median, or mode, exceeds a predetermined threshold.
- The difference values used to calculate the statistic may be taken from the entire image or only from a specific region of the image. For example, when there is a region whose pixel values change frequently due to specular reflection or the like even though the overall brightness does not change, the pixel value difference determination means 301 may calculate the statistic over the pixel values excluding that region. Likewise, when detecting brightness fluctuations caused by lightning, the opening and closing of blinds, or changes in sunlight, the pixel value difference determination means 301 may calculate the statistic, and determine the presence or absence of a brightness fluctuation, only over regions close to windows or blinds, where the pixel values are likely to change when such fluctuations occur.
- When the inter-frame pixel value difference information is difference information for each region, the pixel value difference determination means 301 may similarly calculate a statistic of the difference values for each region and determine whether a brightness change occurred according to whether the calculated statistic exceeds a predetermined threshold.
- When the inter-frame pixel value difference information is a statistic computed over the entire frame, too, the pixel value difference determination means 301 may determine whether a brightness change occurred according to whether the statistic exceeds a predetermined threshold.
- When a brightness change is detected, the pixel value difference determination means 301 outputs the detection time information to the visual event detection information integration means 210 as brightness change detection information.
- The pixel value difference determination means 301 may also include in the brightness change detection information, as reliability information, an index indicating by how much the statistic used for the determination exceeded the threshold.
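- As a concrete illustration of the whole-frame statistic variant described above, the following is a minimal sketch that flags a brightness change when the mean pixel value jumps between sampled frames. The threshold and frame step are illustrative assumptions; the patent does not prescribe specific values.

```python
# Sketch: brightness change detection via inter-frame difference of a
# whole-frame statistic (the mean gray level).
import cv2
import numpy as np

def detect_brightness_changes(video_path, threshold=15.0, frame_step=1):
    """Return timestamps (seconds) at which a global brightness change is
    detected, i.e., where the mean gray level jumps by more than threshold."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    events = []
    prev_mean = None
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            mean = float(np.mean(gray))           # whole-frame statistic
            if prev_mean is not None and abs(mean - prev_mean) > threshold:
                events.append(index / fps)        # event detection time info
            prev_mean = mean
        index += 1
    cap.release()
    return events
```

The returned times play the role of the event detection time information passed to the visual event detection information integration means 210; the margin abs(mean − prev_mean) − threshold could serve as the reliability index mentioned above.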
- FIG. 6 is a block diagram showing an example of the configuration of the person posture change/motion detection means 202.
- The person posture change/motion detection means 202 includes person region extraction means 320, person posture determination means 321, and person posture change/motion determination means 322.
- The person region extraction means 320 generates person region information based on the input video.
- The person posture determination means 321 generates person posture information using the generated person region information.
- The person posture change/motion determination means 322 generates person posture change/motion detection information using the generated person posture information.
- The person region extraction means 320 receives video as input.
- The person region extraction means 320 extracts person regions from the input video.
- The process of extracting person regions can be realized with various methods.
- For example, the person region extraction means 320 may construct a background image by extracting the still areas of the input video, compute the difference between each frame and the background image, and detect moving objects. If it can be assumed that the moving objects include nothing other than people, the person region extraction means 320 may regard a detected moving object region as a person region. If the moving objects can include things other than people, the person region extraction means 320 determines whether each obtained moving object region is a person and extracts the person regions. This determination can be made using a discriminator that has learned the features of person regions.
- Alternatively, the person region extraction means 320 may use a method that extracts person regions directly from the image, without taking the difference between the input image and a background image. For example, there is a method that detects part of a person region using a discriminator that has learned the features of parts of the human body, such as the head region, face, or upper body, and obtains the person region from the detection result. In that case, the person region extraction means 320 acquires, as the person region, a certain area extending downward from the area where the head or face was detected. The person region extraction means 320 then generates information representing the acquired person regions as person region information.
- The person region information is, for example, the coordinates of the upper-left and lower-right corners of a rectangle surrounding the person region.
- The person region information may also be information representing the silhouette of the region obtained from the background difference.
- The person region information may also be expressed using the region shape description method standardized in MPEG-4 video coding or the region shape description method standardized in MPEG-7.
- The person region extraction means 320 outputs the acquired person region information to the person posture determination means 321.
- The person posture determination means 321 determines specific postures of the people included in the person regions, based on the input person region information.
- The person posture determination means 321 determines a specific posture of a person using, for example, a discriminator that has learned that specific human posture. For example, when determining a sitting posture, the person posture determination means 321 uses a discriminator that has learned in advance the features of a sitting person to determine whether a person included in the person region is sitting. When there are multiple postures to discriminate, the person posture determination means 321 may determine them using discriminators that have learned the features of each posture.
- The person posture determination means 321 determines the specific postures for each person region included in the person region information, and generates the determination results as person posture information.
- The person posture determination means 321 outputs the generated person posture information to the person posture change/motion determination means 322.
- The person posture change/motion determination means 322 determines whether the posture of each person included in the person posture information shows a specific change. For example, when a posture changes from a standing state to a sitting state, the person posture change/motion determination means 322 outputs information indicating the time of the change to the visual event detection information integration means 210 as person posture change/motion detection information. If reliability information for the change detection can be obtained at the same time, the person posture change/motion determination means 322 may include it in the person posture change/motion detection information.
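- The change-determination step itself can be illustrated independently of the posture discriminator. The following minimal sketch assumes some per-frame classifier (not shown here) already labels each tracked person's posture, and emits a posture-change event only after the new label has been stable for a few frames; the stability window is an illustrative assumption.

```python
# Sketch: turn noisy per-frame posture labels into posture-change events.
from collections import defaultdict

def detect_posture_changes(posture_stream, min_stable_frames=5):
    """posture_stream: iterable of (time, person_id, label), one per frame,
    e.g. labels "standing" / "sitting". Returns a list of change events
    (time, person_id, old_label, new_label), debounced so a label must hold
    for min_stable_frames frames before a change is reported."""
    last_stable = {}                            # person_id -> confirmed label
    candidate = defaultdict(lambda: (None, 0))  # person_id -> (label, count)
    events = []
    for t, pid, label in posture_stream:
        cand_label, count = candidate[pid]
        count = count + 1 if label == cand_label else 1
        candidate[pid] = (label, count)
        if count >= min_stable_frames and last_stable.get(pid) != label:
            if pid in last_stable:              # genuine change, not first sighting
                events.append((t, pid, last_stable[pid], label))
            last_stable[pid] = label
    return events
```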
- FIG. 7 is a block diagram showing another configuration example of the person posture change/motion detection means.
- The person posture change/motion detection means 212 includes person region extraction means 320 and specific motion determination means 331.
- The specific motion determination means 331 generates person posture change/motion detection information using the person region information output by the person region extraction means 320.
- The person region extraction means 320 receives video as input.
- The person region extraction means 320 outputs the person region information acquired from the input video to the specific motion determination means 331.
- The specific motion determination means 331 determines specific motions of the people included in the person regions, based on the input person region information.
- The specific motion determination means 331 determines a specific motion of a person using, for example, a discriminator that has learned the features of that specific motion. For example, when determining the motion of raising a hand, the specific motion determination means 331 makes the determination using a discriminator that has learned the features of video segments in which a hand is raised.
- The discriminator may extract and discriminate the features of the hand-raising motion from the video itself, or it may fit a model representing the human body shape and discriminate the hand-raising motion from the temporal change in the relative positions of the parts of the fitted model.
- The specific motion determined by the specific motion determination means 331 is not limited to raising a hand; it may be another motion.
- For specific motions that can be determined from a change in a person's position alone, such as crossing a specific position (for example, a reference line drawn on the floor) or starting to walk, the specific motion determination means 331 may make the determination based on the change in the person's position. For example, when determining whether a person has passed through an automatic door, the specific motion determination means 331 may determine whether a specific body part, such as the position of the feet or the head, has passed through.
- When a specific motion is detected, the specific motion determination means 331 outputs information indicating the detection time to the visual event detection information integration means 210 as person posture change/motion detection information. If reliability information for the motion detection can be acquired at the same time, the specific motion determination means 331 may include it in the person posture change/motion detection information.
- FIG. 8 is a block diagram illustrating an example of the configuration of the specific object state change detection means 203.
- The specific object state change detection means 203 includes specific object region extraction means 341 and specific object state change determination means 342.
- The specific object region extraction means 341 generates specific object region information based on the input video.
- The specific object state change determination means 342 generates specific object state change detection information using the generated specific object region information.
- The specific object region extraction means 341 receives video as input.
- The specific object region extraction means 341 detects the region of a specific object (hereinafter, specific object region) from the input video. For example, when the specific object is a door or an automatic door, the specific object region extraction means 341 detects that region.
- The specific object region extraction means 341 detects the specific object region using, for example, a discriminator that has learned the features of the specific object.
- The specific object region extraction means 341 outputs information representing the detected region to the specific object state change determination means 342 as specific object region information.
- The specific object region information can be expressed with the same description methods as the person region information. If the specific object is always at a fixed position in the screen, information indicating that position may be stored by the user in the specific object region information in advance.
- The specific object state change determination means 342 determines a change in the state of the specific object within the specific object region indicated by the input specific object region information.
- The specific object state change determination means 342 determines the state change of the specific object using, for example, a discriminator that has learned the features of that state change. For example, when determining the opening and closing of a door or automatic door, the specific object state change determination means 342 determines the opening and closing using a discriminator that has learned the closed state and the open state of the door.
- The specific object state change determination means 342 may instead extract the edge region of the door without using a discriminator, analyze the motion of the extracted region, and determine whether the door opened or closed. Specifically, the specific object state change determination means 342 determines that the door opened when it detects that the edge region of the door starts to move from its closed position.
- When the specific object is a display, the specific object state change determination means 342 may extract the pixel value information of a specific part of the screen and detect a state change by determining whether it matches specific video content.
- For example, when black frames or frames of a specific color are displayed periodically, the specific object state change determination means 342 may detect them.
- Alternatively, the specific object state change determination means 342 may simply detect a shot change point as a state change. When the specific object is a traffic light, the specific object state change determination means 342 may detect a change in the pixel values of the specific portion (the light portion).
- When the specific object has a moving part, the specific object state change determination means 342 may detect the state change by detecting the motion of that part.
- Various existing methods can be used for the motion detection.
- When the specific object state change determination means 342 detects a specific state change of the specific object, it outputs information indicating the detection time to the visual event detection information integration means 210 as specific object state change detection information. If reliability information for the state change detection can be acquired at the same time, the specific object state change determination means 342 may include it in the specific object state change detection information.
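- As a concrete illustration of the discriminator-free variant described above, the following minimal sketch detects the onset of motion inside a manually specified door region by frame differencing. The region rectangle and threshold are illustrative assumptions, not values from the patent.

```python
# Sketch: flag the moment motion starts inside a fixed door region, as a
# stand-in for "the door edge region starts to move from its closed position".
import cv2
import numpy as np

def detect_door_motion(video_path, roi, threshold=8.0):
    """roi: (x, y, w, h) rectangle around the door. Returns timestamps
    (seconds) at which significant motion begins inside the door region."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    prev = None
    moving = False
    events = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        if prev is not None:
            score = float(np.mean(cv2.absdiff(patch, prev)))
            if score > threshold and not moving:   # motion onset = door event
                events.append(index / fps)
                moving = True
            elif score <= threshold:
                moving = False
        prev = patch
        index += 1
    cap.release()
    return events
```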
- FIG. 9 is a block diagram illustrating an example of the configuration of the specific state change detection means 204.
- The specific state change detection means 204 includes moving object region extraction means 360 and specific state change determination means 361.
- The moving object region extraction means 360 generates moving object region information based on the input video.
- The specific state change determination means 361 generates specific state change detection information using the generated moving object region information.
- The moving object region extraction means 360 receives video as input.
- The moving object region extraction means 360 extracts moving object regions from the input video.
- The moving object region extraction means 360 outputs moving object region information representing the extracted moving object regions to the specific state change determination means 361.
- For the extraction of moving object regions, the background subtraction method described above may be used, or any of various existing moving object region extraction methods may be used.
- The moving object region information can be expressed with the same description methods as the person region information described above.
- The specific state change determination means 361 detects a specific state change of the moving objects included in the moving object regions indicated by the input moving object region information.
- For example, the specific state change determination means 361 detects whether any of the moving objects is falling. Specifically, the specific state change determination means 361 analyzes the motion of each moving object and detects a fall by detecting a moving object that moves vertically downward.
- Various existing methods, such as methods based on optical flow, can be used to detect the motion of the moving objects.
- The specific state change determination means 361 may also detect a collision between a plurality of moving object regions.
- Specifically, the specific state change determination means 361 may detect a state in which a plurality of initially separate moving object regions approach each other and finally merge at the same position.
- To detect such merging of moving object regions, the specific state change determination means 361 tracks the moving object regions between frames and calculates their positions.
- Various existing tracking methods can be used to track the moving object regions.
- The specific state change determination means 361 may also detect that a vehicle has started moving from a stopped state. In this case, the specific state change determination means 361 can detect that the vehicle started to move by analyzing the motion of its moving object region.
- When the specific state change determination means 361 detects a specific state change, it outputs information indicating the detection time as specific state change detection information. If reliability information for the state change detection can be acquired at the same time, the specific state change determination means 361 may include it in the specific state change detection information.
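- The fall-detection idea described above can be illustrated given tracked object positions. The following minimal sketch assumes per-object centroid tracks are already available (the tracker itself is outside the sketch) and flags sustained vertically downward motion; the speed threshold and window length are illustrative assumptions.

```python
# Sketch: flag a fall when an object's centroid moves downward fast enough
# for several consecutive frame intervals.
def detect_falls(tracks, fps, min_speed_px=40.0, min_frames=3):
    """tracks: dict object_id -> list of (frame_index, cx, cy) centroids,
    in frame order. Returns (time_s, object_id) events where downward motion
    faster than min_speed_px pixels/second persists for min_frames steps."""
    events = []
    for oid, points in tracks.items():
        run = 0
        for (f0, _, y0), (f1, _, y1) in zip(points, points[1:]):
            dt = (f1 - f0) / fps
            # image y grows downward, so positive dy means downward motion
            speed_down = (y1 - y0) / dt if dt > 0 else 0.0
            run = run + 1 if speed_down > min_speed_px else 0
            if run == min_frames:          # sustained fall detected once
                events.append((f1 / fps, oid))
    return events
```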
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
- Studio Devices (AREA)
Abstract
Description
Hereinafter, a first embodiment of the present invention will be described with reference to the drawings.
2 photographing means
3 feature amount calculation means
4 time correlation calculation means
5 illumination means
8 subject
100-1 to 100-N video acquisition means
101-1 to 101-N visual event detection means
102 visual event integration means
201 brightness change detection means
202, 212 person posture change/motion detection means
203 specific object state change detection means
204 specific state change detection means
210 visual event detection information integration means
300 inter-frame pixel value difference calculation means
301 pixel value difference determination means
320 person region extraction means
321 person posture determination means
322 person posture change/motion determination means
331 specific motion determination means
341 specific object region extraction means
342 specific object state change determination means
360 moving object region extraction means
361 specific state change determination means
Claims (10)
- 1. A time synchronization information calculation device comprising: a plurality of video acquisition means for acquiring video; a plurality of visual event detection means, provided in correspondence with the plurality of video acquisition means, for analyzing the videos acquired by the plurality of video acquisition means to detect a visual event and generating visual event detection information including information indicating a detection time of the visual event; and visual event integration means for integrating the visual event detection information generated by the plurality of visual event detection means and generating time synchronization information for synchronizing the times of the videos acquired by the plurality of video acquisition means.
- 2. The time synchronization information calculation device according to claim 1, wherein the visual event integration means obtains, based on the visual event detection information input from each of the plurality of visual event detection means, an event occurrence likelihood function expressing, as a function of time, the likelihood that the detected visual event occurred; calculates a correction amount in the time axis direction that maximizes the value obtained by correcting, in the time axis direction, the event occurrence likelihood functions corresponding to the respective pieces of visual event detection information; and generates time synchronization information including the calculated correction amount.
- 3. The time synchronization information calculation device according to claim 2, wherein the visual event integration means stores, for each event type, a function reflecting the likelihood of a detection time, and generates the event occurrence likelihood function based on the detection time of each visual event and the stored function.
- 4. The time synchronization information calculation device according to claim 3, wherein the visual event integration means acquires a reliability of visual event detection from the visual event detection information, multiplies the function reflecting the likelihood of the detection time by the reliability of the visual event detection, and generates the event occurrence likelihood function based on the detection time of each visual event and the function multiplied by the reliability.
- 5. The time synchronization information calculation device according to any one of claims 1 to 4, wherein the visual event detection means includes: one or more event detection means for detecting a visual event; and visual event detection result integration means for integrating detection results of the event detection means and outputting the integrated detection results as visual event detection information.
- 6. The time synchronization information calculation device according to claim 5, wherein the event detection means is any of: brightness change detection means for detecting a change in brightness in input video as a visual event; person posture change/motion detection means for detecting a change in the posture of a person or a motion of a person in input video as a visual event; specific object state change detection means for detecting a state change of a specific object in input video as a visual event; and specific state change detection means for detecting the occurrence of a specific state or event in input video as a visual event.
- 7. A time synchronization information calculation method comprising: inputting a plurality of videos; analyzing the input videos to detect a visual event; generating visual event detection information including information indicating a detection time of the visual event; and integrating the generated visual event detection information to generate time synchronization information for synchronizing the times of the plurality of videos.
- 8. The time synchronization information calculation method according to claim 7, comprising: obtaining, based on the visual event detection information, an event occurrence likelihood function expressing, as a function of time, the likelihood that the detected visual event occurred; calculating a correction amount in the time axis direction that maximizes the value obtained by correcting, in the time axis direction, the event occurrence likelihood functions corresponding to the respective pieces of visual event detection information; and generating time synchronization information including the calculated correction amount.
- 9. A time synchronization information calculation program for causing a computer to execute: a process of inputting a plurality of videos; a visual event detection process of analyzing the input videos to detect a visual event and generating visual event detection information including information indicating a detection time of the visual event; and a visual event integration process of integrating the generated visual event detection information and generating time synchronization information for synchronizing the times of the plurality of videos.
- 10. The time synchronization information calculation program according to claim 9, causing the computer to execute, in the visual event integration process: a process of obtaining, based on the input visual event detection information, an event occurrence likelihood function expressing, as a function of time, the likelihood that the detected visual event occurred; and a process of calculating a correction amount in the time axis direction that maximizes the value obtained by correcting, in the time axis direction, the event occurrence likelihood functions corresponding to the respective pieces of visual event detection information, and generating time synchronization information including the calculated correction amount.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
US14/366,137 (US9210300B2) | 2011-12-19 | 2012-11-15 | Time synchronization information computation device for synchronizing a plurality of videos, time synchronization information computation method for synchronizing a plurality of videos and time synchronization information computation program for synchronizing a plurality of videos
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
JP2011-277155 | 2011-12-19 | |
JP2011277155 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013094115A1 (ja) | 2013-06-27 |
Family
ID=48668034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/007324 WO2013094115A1 (ja) | 2011-12-19 | 2012-11-15 | 時刻同期情報算出装置、時刻同期情報算出方法および時刻同期情報算出プログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US9210300B2 (ja) |
JP (1) | JPWO2013094115A1 (ja) |
WO (1) | WO2013094115A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10205867B2 (en) | 2014-06-30 | 2019-02-12 | Panasonic Intellectual Property Management Co., Ltd. | Image photographing method performed with terminal device having camera function |
US10356183B2 (en) | 2014-05-27 | 2019-07-16 | Panasonic Intellectual Property Management Co., Ltd. | Method for sharing photographed images between users |
US10602071B2 (en) | 2015-06-19 | 2020-03-24 | Sony Semicondutor Solutions Corporation | Imaging device and control method |
WO2020250520A1 (ja) * | 2019-06-14 | 2020-12-17 | Mazda Motor Corporation | External environment recognition device |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3043569A1 (en) | 2015-01-08 | 2016-07-13 | Koninklijke KPN N.V. | Temporal relationships of media streams |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010130086A (ja) * | 2008-11-25 | 2010-06-10 | Casio Computer Co Ltd | Image processing device and program |
JP2010135926A (ja) * | 2008-12-02 | 2010-06-17 | Tohoku Univ | Visual sensor synchronization device and visual sensor synchronization method |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6542183B1 (en) * | 1995-06-28 | 2003-04-01 | Lynx Systems Developers, Inc. | Event recording apparatus |
FI112549B (fi) * | 1999-03-01 | 2003-12-15 | Honeywell Oy | Method for synchronizing image information obtained from cameras monitoring a process |
US6690374B2 (en) * | 1999-05-12 | 2004-02-10 | Imove, Inc. | Security camera system for tracking moving objects in both forward and reverse directions |
AU2001240100A1 (en) * | 2000-03-10 | 2001-09-24 | Sensormatic Electronics Corporation | Method and apparatus for video surveillance with defined zones |
US6586592B2 (en) * | 2000-06-20 | 2003-07-01 | Pharmacia & Upjohn Company | Bis-arylsulfones |
US7796162B2 (en) * | 2000-10-26 | 2010-09-14 | Front Row Technologies, Llc | Providing multiple synchronized camera views for broadcast from a live venue activity to remote viewers |
JP3996428B2 (ja) * | 2001-12-25 | 2007-10-24 | Matsushita Electric Industrial Co., Ltd. | Abnormality detection device and abnormality detection system |
US7880766B2 (en) * | 2004-02-03 | 2011-02-01 | Panasonic Corporation | Detection area adjustment apparatus |
CA2569524A1 (en) * | 2004-06-01 | 2005-12-15 | Supun Samarasekera | Method and system for performing video flashlight |
WO2005125208A1 (ja) * | 2004-06-15 | 2005-12-29 | Matsushita Electric Industrial Co., Ltd. | Monitoring device and vehicle periphery monitoring device |
US7990422B2 (en) * | 2004-07-19 | 2011-08-02 | Grandeye, Ltd. | Automatically expanding the zoom capability of a wide-angle video camera |
US20060125920A1 (en) * | 2004-12-10 | 2006-06-15 | Microsoft Corporation | Matching un-synchronized image portions |
US8089563B2 (en) * | 2005-06-17 | 2012-01-03 | Fuji Xerox Co., Ltd. | Method and system for analyzing fixed-camera video via the selection, visualization, and interaction with storyboard keyframes |
US8022987B2 (en) * | 2005-06-30 | 2011-09-20 | Sandia Corporation | Information-based self-organization of sensor nodes of a sensor network |
US7576639B2 (en) * | 2006-03-14 | 2009-08-18 | Mobileye Technologies, Ltd. | Systems and methods for detecting pedestrians in the vicinity of a powered industrial vehicle |
US20070291118A1 (en) * | 2006-06-16 | 2007-12-20 | Shu Chiao-Fe | Intelligent surveillance system and method for integrated event based surveillance |
US8665333B1 (en) * | 2007-01-30 | 2014-03-04 | Videomining Corporation | Method and system for optimizing the observation and annotation of complex human behavior from video sources |
US20080219278A1 (en) * | 2007-03-06 | 2008-09-11 | International Business Machines Corporation | Method for finding shared sub-structures within multiple hierarchies |
US8339456B2 (en) * | 2008-05-15 | 2012-12-25 | Sri International | Apparatus for intelligent and autonomous video content generation and streaming |
WO2011072157A2 (en) * | 2009-12-09 | 2011-06-16 | Cale Fallgatter | Imaging of falling objects |
CN102223473A (zh) * | 2010-04-16 | 2011-10-19 | Hongfujin Precision Industry (Shenzhen) Co., Ltd. | Camera device and method for dynamically tracking a specific object using the camera device |
US8380039B2 (en) * | 2010-11-09 | 2013-02-19 | Eastman Kodak Company | Method for aligning different photo streams |
US8786680B2 (en) * | 2011-06-21 | 2014-07-22 | Disney Enterprises, Inc. | Motion capture from body mounted cameras |
- 2012-11-15 JP JP2013550090A patent/JPWO2013094115A1/ja active Pending
- 2012-11-15 WO PCT/JP2012/007324 patent/WO2013094115A1/ja active Application Filing
- 2012-11-15 US US14/366,137 patent/US9210300B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010130086A (ja) * | 2008-11-25 | 2010-06-10 | Casio Computer Co Ltd | Image processing device and program |
JP2010135926A (ja) * | 2008-12-02 | 2010-06-17 | Tohoku Univ | Visual sensor synchronization device and visual sensor synchronization method |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10356183B2 (en) | 2014-05-27 | 2019-07-16 | Panasonic Intellectual Property Management Co., Ltd. | Method for sharing photographed images between users |
US10862977B2 (en) | 2014-05-27 | 2020-12-08 | Panasonic Intellectual Property Management Co., Ltd. | Method for sharing photographed images between users |
US10205867B2 (en) | 2014-06-30 | 2019-02-12 | Panasonic Intellectual Property Management Co., Ltd. | Image photographing method performed with terminal device having camera function |
US10602047B2 (en) | 2014-06-30 | 2020-03-24 | Panasonic Intellectual Property Management Co., Ltd. | Image photographing method performed with terminal device having camera function |
US10602071B2 (en) | 2015-06-19 | 2020-03-24 | Sony Semicondutor Solutions Corporation | Imaging device and control method |
WO2020250520A1 (ja) * | 2019-06-14 | 2020-12-17 | Mazda Motor Corporation | External environment recognition device |
JP2020205498A (ja) * | 2019-06-14 | 2020-12-24 | Mazda Motor Corporation | External environment recognition device |
Also Published As
Publication number | Publication date |
---|---|
US9210300B2 (en) | 2015-12-08 |
US20140313413A1 (en) | 2014-10-23 |
JPWO2013094115A1 (ja) | 2015-04-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102087702B (zh) | Image processing device and image processing method | |
WO2013094115A1 (ja) | Time synchronization information calculation device, time synchronization information calculation method, and time synchronization information calculation program | |
CN102892007B (zh) | Method and system for facilitating color balance synchronization between multiple cameras and for obtaining tracking between cameras | |
WO2014171258A1 (ja) | Information processing system, information processing method, and program | |
US20120141000A1 (en) | Method and system for image analysis | |
CN110853295A (zh) | High-altitude falling object early-warning method and device | |
WO2014199786A1 (ja) | Imaging system | |
US10366482B2 (en) | Method and system for automated video image focus change detection and classification | |
JPWO2014010174A1 (ja) | Angle-of-view variation detection device, angle-of-view variation detection method, and angle-of-view variation detection program | |
JP5832910B2 (ja) | Image monitoring device | |
JP5762250B2 (ja) | Image signal processing device and image signal processing method | |
KR101625538B1 (ko) | Multi-lane vehicle license plate recognition system capable of urban crime prevention | |
CN101715070B (zh) | Automatic background updating method in a specific surveillance video | |
WO2014192441A1 (ja) | Image processing system, image processing method, and program | |
Kreković et al. | A method for real-time detection of human fall from video | |
US20220366570A1 (en) | Object tracking device and object tracking method | |
KR20080018642A (ко) | Remote emergency monitoring system and method | |
CN110633648A (zh) | Face recognition method and system in a natural walking state | |
US9361705B2 (en) | Methods and systems for measuring group behavior | |
KR102128319B1 (ко) | Video playback method and apparatus based on a pan-tilt-zoom camera | |
CN112232107A (zh) | Image-based smoke detection system and method | |
CN111627049A (zh) | Method, device, storage medium, and processor for determining high-altitude falling objects | |
KR101396838B1 (ко) | Video stabilization method and system selectively using multiple motion models | |
CN110910449A (zh) | Method and system for recognizing the three-dimensional position of an object | |
US20240111835A1 (en) | Object detection systems and methods including an object detection model using a tailored training dataset |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12858985; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2013550090; Country of ref document: JP; Kind code of ref document: A
| WWE | Wipo information: entry into national phase | Ref document number: 14366137; Country of ref document: US
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 12858985; Country of ref document: EP; Kind code of ref document: A1