WO2021068553A1 - Monitoring method, apparatus and device - Google Patents

Monitoring method, apparatus and device

Info

Publication number
WO2021068553A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
hop
target
next hop
information
Prior art date
Application number
PCT/CN2020/097694
Other languages
English (en)
Chinese (zh)
Inventor
陈庆
张增东
丁杉
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2021068553A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects

Definitions

  • The invention relates to the field of security and protection, and in particular to a monitoring method.
  • A basic function of suspicious-person monitoring and alerting is to use cameras to photograph pedestrians, identify suspicious persons, and track the movement trajectories of those persons.
  • An embodiment of a monitoring method is provided, including: acquiring a monitoring task, the monitoring task indicating a target object; determining a target camera based on characteristic information of the target object, the target camera being used to monitor a first area in which the target object is located at the current moment; predicting a next-hop camera, the area monitored by the next-hop camera being the predicted surveillance area that the target object will enter at the next moment; and sending the information of the target camera and the information of the next-hop camera.
  • This solution can predict the monitoring range that the target object will enter at the next moment, so that the information of the next-hop camera is obtained in advance.
  • In a possible implementation, predicting the next-hop camera includes: predicting the next-hop camera according to the information of the target camera and/or the information of the target object. This provides a specific solution for predicting the next-hop camera.
  • In a possible implementation, predicting the next-hop camera according to the information of the target camera and/or the information of the target object specifically includes: predicting the geographic location of the next-hop area according to the information of the target camera and/or the information of the target object; and outputting a list of next-hop cameras according to the geographic location of the next-hop area.
  • the prediction of the next hop camera based on the information of the target camera and/or the information of the target object specifically includes any one or a combination of the following :
  • the first type statistics the historical motion trajectory of the object shot by the target camera, and predict the next hop camera based on the statistical historical motion trajectory of the object;
  • the second type according to the target object leaving the first Predict the next-hop camera based on the position and direction of movement in a region;
  • the third type Count the historical motion trajectory of the target object leaving the first region, and predict based on the statistical historical motion trajectory of the target object
  • the fourth type predict the next-hop camera based on the geographic location information where the target camera is located. This program provides several specific solutions for predicting the next hop camera.
  • In a possible implementation, there are at least two next-hop cameras, and predicting the next-hop cameras includes: outputting each predicted next-hop camera and the confidence level of each next-hop camera, where the confidence level characterizes the likelihood that the corresponding next-hop camera will capture the target object. This solution additionally provides a confidence level for the prediction.
  • In a possible implementation, the method further includes: predicting the possible position of the target object after it enters the area captured by the next-hop camera; and sending a position adjustment signal to at least one next-hop camera, where the position adjustment signal instructs the next-hop camera that receives it to adjust its visible range so that the possible position becomes visible.
  • By further predicting the possible position of the target object after it enters the area photographed by the next-hop camera, this solution facilitates camera adjustment and allows the target object to be detected as soon as it enters the monitoring range.
  • In a possible implementation, the method further includes: according to the information of the target camera and the information of the next-hop camera, instructing the video of the target camera and the video of the next-hop camera to be played respectively.
  • A monitoring device is provided, including: an acquisition module for acquiring a monitoring task that indicates a target object; an analysis module for determining a target camera based on characteristic information of the target object, where the target camera is used to monitor a first area and the target object is located in the first area at the current moment; a prediction module for predicting a next-hop camera, where the area monitored by the next-hop camera is the predicted surveillance area that the target object will enter at the next moment; and a sending module for sending the information of the target camera and the information of the next-hop camera.
  • A computer-readable medium stores instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
  • A computer program product contains instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
  • A monitoring method is provided, including: triggering a monitoring task, the monitoring task indicating a target object; and receiving the information of the target camera and the information of at least one next-hop camera, where the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted surveillance area that the target object will enter at the next moment.
  • This solution introduces a monitoring method based on the predicted next-hop camera.
  • A first possible implementation of the fifth aspect further comprises: obtaining the confidence level of each next-hop camera, the confidence level characterizing the likelihood that each next-hop camera will capture the target object at the next moment.
  • This solution further introduces how the confidence level is obtained.
  • A second possible implementation of the fifth aspect further includes: respectively acquiring, according to the information of the target camera and the information of the at least one next-hop camera, the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera; and respectively playing the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera on the screen.
  • This solution introduces a playback scheme.
  • A third possible implementation of the fifth aspect further includes: selecting a display mode for the multimedia data captured by the at least one next-hop camera according to the confidence level of each next-hop camera.
  • This solution introduces a display mode.
  • In a possible implementation, selecting the display mode for the multimedia data captured by the at least one next-hop camera specifically includes: according to the confidence level of each next-hop camera, using a large screen to play the multimedia data captured by a next-hop camera with high confidence; or using a small screen to play the multimedia data captured by a next-hop camera with low confidence.
  • This solution introduces a confidence-based display mode.
  • A monitoring device is provided, including: a task module for triggering a monitoring task, the monitoring task indicating a target object; and a processing module for receiving the information of the target camera and the information of at least one next-hop camera, where the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted surveillance area that the target object will enter at the next moment.
  • A computer-readable medium stores instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
  • A computer program product contains instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
  • Figure 1 is a structural diagram of an embodiment of a monitoring system;
  • Figure 2 is a flowchart of an embodiment of a monitoring method;
  • Figure 3 is an example of the monitoring range;
  • Figure 4 is an example of alarm information;
  • Figure 5 is an example of the confidence levels of different cameras;
  • Figures 6(a)-6(b) are schematic diagrams of the target person moving between cameras;
  • Figure 7 is a schematic diagram showing the method;
  • Figure 8 is a flowchart of an embodiment of a monitoring method;
  • Figure 9 is a structural diagram of an embodiment of a monitoring device;
  • Figure 10 is a structural diagram of an embodiment of a monitoring device;
  • Figure 11 is a schematic diagram of an embodiment of a monitoring device.
  • Figure 1 is a structural diagram of an embodiment of a monitoring system.
  • The monitoring system includes: a data analysis platform 11, a multimedia data center 12 communicating with the data analysis platform 11, and a display platform 13.
  • The display platform 13 communicates with camera 141 (camera A), camera 142 (camera B), camera 143 (camera C), camera 144 (camera D), and camera 145 (camera E); alternatively, the display platform 13 communicates with the multimedia data center 12.
  • Figure 2 is a flowchart of an embodiment of a monitoring method.
  • Step 21: The data analysis platform 11 obtains a monitoring task.
  • The monitoring task includes a monitoring scope and a monitoring target list. This step can be executed by the task publishing module of the data analysis platform 11.
  • The monitoring task can be triggered on the data analysis platform 11 through user input.
  • For example, the data analysis platform 11 receives the monitoring range and the monitoring list entered by the user through a user interface (UI), and starts the monitoring task after the start-monitoring button in the UI is clicked.
  • The monitoring task can also come from another device, such as a mobile terminal, personal computer, or server communicating with the data analysis platform, or from the display platform 13.
  • The IDs of the cameras to be monitored this time can be determined from the monitoring range.
  • The monitoring range may be a monitoring camera list in which the camera IDs to be monitored are directly recorded; or the monitoring range may record a geographic coordinate range, from which the camera IDs to be monitored can be determined.
  • The monitoring range in this embodiment directly describes the IDs of the monitored cameras: camera A, camera B, camera C, camera D, and camera E are monitored, and camera F is not monitored.
  • The camera ID may be the serial number or code of the camera, or the address of the camera (for example, an IP address or MAC address). Any information that can directly or indirectly distinguish a certain camera from other cameras belongs to the camera ID described in the embodiments of the present invention.
  • The monitoring list carries: a monitoring target ID.
  • The surveillance target is a moving object that needs to be tracked by the cameras.
  • The monitoring target ID may be the target person's number, passport number, ID card number, or other information that can distinguish the target person from other persons.
  • The monitoring target ID may also be the license plate of a target vehicle.
  • The monitoring list may further carry monitoring target characteristics, which are used to subsequently match the target person. According to actual needs, the monitoring target characteristics may instead be stored in a local or non-local storage device, for the data analysis platform 11 to query using the monitoring target ID.
  • The data analysis platform 11 may also store one or more of the name, gender, and age of the target person. A sketch of one possible representation of such a task is shown below.
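  • As an illustration only (the patent specifies just a monitoring range and a monitoring target list; the class and field names below are hypothetical), a minimal sketch of such a task in Python:

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringTarget:
    target_id: str                        # e.g. ID card number, passport number, or license plate
    features: list[float] | None = None   # optional characteristic vector for matching
    name: str | None = None               # optional descriptive attributes
    gender: str | None = None
    age: int | None = None

@dataclass
class MonitoringTask:
    camera_ids: list[str]                 # monitoring range, given directly as camera IDs
    targets: list[MonitoringTarget] = field(default_factory=list)

# A task monitoring cameras A-E for one target person
task = MonitoringTask(
    camera_ids=["A", "B", "C", "D", "E"],
    targets=[MonitoringTarget(target_id="E12345678", name="Jason")],
)
```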
  • Step 22: The data analysis platform 11 uses the target person characteristics to match against the person characteristics of persons in the videos shot by the cameras (the video person characteristics).
  • The multimedia data center 12 communicates with multiple cameras and can obtain the IDs of these cameras and the images of persons captured by them.
  • For example, the data analysis platform 11 acquires the video person characteristics as follows: the data analysis platform 11 sends the monitoring range to the multimedia data center 12; the multimedia data center 12, according to the camera IDs listed in the monitoring range, sends the videos captured by these cameras to the data analysis platform 11; and the data analysis platform 11 extracts the video person characteristics from the received videos. This transmission can occur in real time.
  • Alternatively, a camera may itself extract the video person characteristics and then send them to the data analysis platform 11 for matching.
  • The multimedia data center 12 may also be integrated into the data analysis platform 11.
  • In that case, the data analysis platform 11 obtains the videos (or the characteristics of the persons in the videos) taken by these cameras by itself.
  • A specific method for matching the target person characteristics against the video person characteristics is to compare the similarity between the two.
  • When the similarity between the video person characteristics and the target person characteristics reaches a preset threshold, the matched video person and the target person are likely to be the same person, and an alarm signal indicating a successful match can be issued.
  • The alarm signal may further include information such as the name, gender, and age shown in Figure 4.
  • Figure 4 is a schematic diagram of an alarm signal: the target person's name is Jason, he appears in the video of camera A, his gender is male, and his age is 25.
  • The data analysis platform 11 may further send the ID of camera 141 (camera A) to the display platform 13. After the display platform 13 receives this camera ID, it can obtain the video captured by that camera in real time through camera 141 (or through the multimedia data center 12) for playback.
  • One method of comparing similarity is as follows: the target person characteristics that the data analysis platform 11 acquires from the target list (or from a local or remote storage device) form a 256-dimensional floating-point (float) array;
  • the data analysis platform 11 obtains a video frame from the multimedia data center 12, parses the person image in the frame into another 256-dimensional floating-point array (the video person characteristics), and compares the two floating-point arrays;
  • the similarity of the two arrays is used as the similarity between the target person and the video person. A sketch of such a comparison is shown below.
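  • As an illustration only (the patent does not name a similarity metric; cosine similarity is one common choice, and the threshold value and function names here are hypothetical), a minimal sketch of matching two 256-dimensional feature arrays:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # hypothetical preset threshold

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two 256-dimensional feature arrays."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(target_feature: np.ndarray, video_feature: np.ndarray) -> bool:
    """True when the video person is likely the target person."""
    return cosine_similarity(target_feature, video_feature) >= SIMILARITY_THRESHOLD

# Example: compare a stored target feature against a freshly extracted video feature
rng = np.random.default_rng(0)
target = rng.standard_normal(256).astype(np.float32)
video = target + 0.1 * rng.standard_normal(256).astype(np.float32)  # near-duplicate
print(is_match(target, video))  # True: the two arrays are highly similar
```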
  • Step 23: When the matching succeeds, the data analysis platform 11 predicts the camera corresponding to the monitoring range (the second monitoring range) that the target person will enter at the next moment after leaving the monitoring range of camera A (the first monitoring range) at the current moment.
  • The data analysis platform then instructs the display platform 13 to obtain the video of the predicted camera. When the data analysis platform issues this instruction, the target person has not yet entered the second monitoring range.
  • The prediction may start, for example, after the matching succeeds while the target person is still in the first monitoring range; or after the matching succeeds, once the target person has left the first monitoring range but has not yet entered the monitoring range of the next camera.
  • The current moment refers to the point in time when the matching is performed; the next moment refers to the point in time when the target person, having left the current monitoring range, next enters the monitoring range of another camera.
  • In between, the target person is not in the monitoring range of any other camera.
  • For example, the target person is currently in the monitoring range of camera A and is expected to enter the monitoring range of camera B and then the monitoring range of camera F, in sequence.
  • The monitoring area that the target person enters at the next moment then refers to the monitoring range of camera B (not the monitoring range of camera F).
  • The data analysis platform 11 has multiple options for choosing which cameras' videos the display platform should obtain. For example: select the cameras with the top-k confidence levels and instruct the display platform 13 to obtain the videos of these cameras; or select the cameras whose confidence levels exceed a confidence threshold. A sketch of both screening strategies is shown below.
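  • A minimal sketch of the two screening strategies (the function names and the example confidence values are hypothetical):

```python
def select_top_k(confidences: dict[str, float], k: int) -> list[str]:
    """IDs of the k cameras with the highest confidence levels."""
    return sorted(confidences, key=confidences.get, reverse=True)[:k]

def select_above_threshold(confidences: dict[str, float], threshold: float) -> list[str]:
    """IDs of the cameras whose confidence exceeds the threshold."""
    return [cam for cam, conf in confidences.items() if conf > threshold]

# Confidence levels predicted for the candidate next-hop cameras
confidences = {"B": 0.5, "C": 0.3, "D": 0.15, "E": 0.05}
print(select_top_k(confidences, 3))              # ['B', 'C', 'D']
print(select_above_threshold(confidences, 0.1))  # ['B', 'C', 'D']
```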
  • Optionally, the data analysis platform 11 may further predict the position at which the target person is likely to enter the next camera's monitoring range.
  • The monitoring range of camera C is monitoring range 31; the monitoring range of camera A is monitoring range 32.
  • For example, the data analysis platform 11 can obtain the current geographic location of the target person Jensen (longitude 75° E, latitude 31.5° N), which is within monitoring range 32 of camera A, and camera A observes that Jensen's direction of movement is due north. Extrapolating his trajectory, he will reach a new coordinate (75° E, 32.5° N). This new coordinate belongs to monitoring range 31 of camera C, specifically at the boundary of monitoring range 31. A sketch of this extrapolation is shown below.
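  • A minimal sketch of this kind of position extrapolation (dead reckoning along the last observed heading; the step size, unit, and function name are hypothetical):

```python
import math

def extrapolate(lon: float, lat: float, heading_deg: float,
                step_deg: float = 1.0) -> tuple[float, float]:
    """Advance a (longitude, latitude) coordinate along a compass heading.

    heading_deg: 0 = north, 90 = east (compass convention).
    step_deg: how far to advance, in degrees of arc (hypothetical unit).
    """
    rad = math.radians(heading_deg)
    return lon + step_deg * math.sin(rad), lat + step_deg * math.cos(rad)

# Jensen at (75 E, 31.5 N) moving due north is predicted to reach (75 E, 32.5 N)
print(extrapolate(75.0, 31.5, heading_deg=0.0))  # (75.0, 32.5)
```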
  • The data analysis platform 11 therefore sends adjustment information to camera C.
  • The adjustment information may include the position of the target person, that is, (75° E, 32.5° N); or the adjustment information may include an adjustment direction (adjust the visible range of camera C toward the south).
  • After camera C adjusts its visible range according to this adjustment information, (75° E, 32.5° N) is included in its monitoring range, which means that Jensen will be monitored immediately after entering monitoring range 31. This embodiment therefore offers better monitoring timeliness.
  • Similarly, the data analysis platform 11 can generate adjustment information corresponding to camera B and camera D and send it to each of them, so that Jensen can be monitored as soon as possible when he enters the monitoring range of camera B or camera D.
  • Method 1: When the target person leaves the first area this time, record the position and the direction of movement at the moment of leaving. Because a walking person in most cases continues moving in the original direction, the subsequent movement direction can be estimated, and the position reached by continuing in the original direction (a position belonging to the second area) is taken as the possible entry position. That is, this method predicts both the monitoring range that the target person is about to enter and the likely position of entry into that monitoring range.
  • Method 2: Obtain the historical movement trajectories of the target person in the first area from a database, and collect statistics on the positions and directions of movement with which the target person has left the first area.
  • The statistically most frequent direction of movement is taken as the most likely direction this time, so as to predict the monitoring range the target person is about to enter and the likely entry position. Alternatively, the statistically most frequent next monitoring range is taken as the monitoring range likely to be entered this time, again predicting both the monitoring range the target person is about to enter and the likely entry position.
  • Optionally, the statistics can be computed per time period.
  • For example, if between 8:00 and 10:00 in the morning the target person entered the monitoring range of camera D from the monitoring range of camera A in 55% of cases, the following prediction can be made for this monitoring session: the target person will enter the monitoring range of camera D from the monitoring range of camera A with 55% probability. A sketch of such time-bucketed statistics is shown below.
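  • A minimal sketch of time-bucketed transition statistics (the record layout, example data, and function name are hypothetical):

```python
from collections import Counter

# Historical transitions: (hour of departure, from-camera, to-camera)
history = [
    (8, "A", "D"), (9, "A", "D"), (8, "A", "B"), (9, "A", "D"),
    (8, "A", "D"), (9, "A", "B"), (8, "A", "D"), (9, "A", "D"),
]

def next_camera_probabilities(history, from_camera, hour_range):
    """Relative frequency of each next camera within a time period."""
    lo, hi = hour_range
    counts = Counter(to_cam for hour, from_cam, to_cam in history
                     if from_cam == from_camera and lo <= hour < hi)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {cam: n / total for cam, n in counts.items()}

# Between 8:00 and 10:00, camera D followed camera A in 6 of 8 recorded cases
print(next_camera_probabilities(history, "A", (8, 10)))  # {'D': 0.75, 'B': 0.25}
```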
  • For example, the visible range of camera B is adjusted in advance so that, after the adjustment, camera B treats the due-south part of its monitoring range as its visible range.
  • This scheme can likewise compute its statistics and forecasts per time period.
  • Method 4: Predict at least one next-hop camera based on the geographic location of the target camera. For example, if geographically there are three cameras around camera A, all three may be next-hop cameras. Or suppose camera A is in the middle of a road, with camera B and camera C at the two ends of the road. If the target person is currently in the monitoring range of camera A, the target object may next enter the monitoring range of camera B or that of camera C. It can be considered that at the next moment the target person has a 50% chance of entering the monitoring range of camera B and a 50% chance of entering the monitoring range of camera C.
  • The prediction results of several methods can also be combined; for example, the weights of method 1, method 2, and method 3 are 40%, 40%, and 20% respectively. A sketch of such weighted fusion is shown below.
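  • A minimal sketch of fusing per-method predictions with those weights (the per-method probability tables are hypothetical):

```python
def fuse(predictions: dict[str, dict[str, float]],
         weights: dict[str, float]) -> dict[str, float]:
    """Weighted sum of per-method next-camera probability tables."""
    fused: dict[str, float] = {}
    for method, table in predictions.items():
        for cam, p in table.items():
            fused[cam] = fused.get(cam, 0.0) + weights[method] * p
    return fused

predictions = {
    "method1": {"B": 0.7, "C": 0.3},
    "method2": {"B": 0.6, "D": 0.4},
    "method3": {"C": 0.5, "D": 0.5},
}
weights = {"method1": 0.4, "method2": 0.4, "method3": 0.2}
print(fuse(predictions, weights))
# {'B': 0.52, 'C': 0.22, 'D': 0.26} -- fused confidence per next-hop camera
```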
  • Step 24: The display platform 13 receives an instruction from the data analysis platform; the instruction carries the IDs of the k cameras, and the display platform caches the real-time video information of these k cameras. After a certain amount of video (or image) data has been cached, the display platform 13 can play the videos of the k cameras. This works because the display platform 13 obtains the video information of the k cameras as a result of the prediction, not as a result of successfully detecting the target person in the second monitoring range. Therefore, while the target person has not yet entered the second monitoring range, the data analysis platform can already instruct the k cameras to send videos for the display platform 13 to buffer. This means that before the target person has entered (or just as he enters) the second area, the display platform 13 can already display his actions.
  • By contrast, another display method is: after the target person enters the second monitoring range, camera C sends the video to the data analysis platform 11, the data analysis platform 11 performs the matching described in step 22, and the video is not sent to the display platform 13 for caching until the matching succeeds.
  • Compared with that method, this embodiment displays faster and, under ideal conditions, can achieve seamless switching of surveillance videos.
  • In other words: (1) after the data analysis platform 11 detects the target person (see the matching scheme in step 22), it instructs the display platform 13 to obtain the real-time video of camera A from camera A or from the multimedia data center 12; (2) when the data analysis platform 11 predicts the monitoring range of the next-hop camera that the target person will enter, it instructs the display platform 13 to obtain the real-time video of at least one next-hop camera.
  • In time order, the two instructions in (1) and (2) can be sent simultaneously, or either one can be sent first.
  • Alternatively, the camera may be instructed to send the video to the display platform 13; or the multimedia data center 12 may be instructed to send the camera's video to the display platform 13; or the display platform 13 may be instructed to fetch the camera's video itself.
  • The camera video displayed by the display platform may be the camera's real-time video or, in some cases, a non-real-time video with a certain lag.
  • The display platform 13 has four screens, namely a large screen 41, a small screen 42, a small screen 43, and a small screen 44.
  • The content of camera A is played on the large screen, and the display platform 13 caches the content of the top-3 cameras (that is, camera B, camera C, and camera D) in advance.
  • The display platform can also play the multimedia data of the current camera and the multimedia data of the predicted next-hop camera in a differentiated way. For example, the multimedia data of the current camera is played on the large screen, and the multimedia data of the next-hop camera is played on a small screen. A sketch of such confidence-based screen assignment is shown below.
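  • A minimal sketch of assigning camera streams to screens by confidence (the screen labels follow the large screen 41 and small screens 42-44 above; the function name is hypothetical):

```python
def assign_screens(current_camera: str,
                   next_hop_confidences: dict[str, float],
                   small_screens: tuple[str, ...] = ("42", "43", "44")) -> dict[str, str]:
    """Map the current camera to the large screen, and the highest-confidence
    next-hop cameras to the small screens in descending confidence order."""
    layout = {"41 (large)": current_camera}
    ranked = sorted(next_hop_confidences, key=next_hop_confidences.get, reverse=True)
    for screen, cam in zip(small_screens, ranked):
        layout[screen] = cam
    return layout

print(assign_screens("A", {"B": 0.5, "C": 0.3, "D": 0.15, "E": 0.05}))
# {'41 (large)': 'A', '42': 'B', '43': 'C', '44': 'D'}
```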
  • Step 25: A camera that has received adjustment information adjusts its visible range.
  • The adjusted visible range includes the position at which the target person will enter the camera's surveillance range. Note that this step is optional and is executed only when the data analysis platform 11 sends adjustment information.
  • The above process is an example of video matching and caching.
  • The method is not limited to video; in other embodiments, media data such as pictures and sounds can likewise be matched and cached in advance.
  • Step 41: The display platform 13 sends a monitoring task to the data analysis platform 11.
  • The monitoring task includes a monitoring scope and a monitoring target list.
  • Step 42: The data analysis platform 11 uses the target person characteristics to match against the person characteristics of persons in the videos shot by the cameras (the video person characteristics), and instructs the display platform to obtain the video of the camera for which the match succeeds.
  • The multimedia data center 12 communicates with multiple cameras and can obtain the IDs of these cameras and the images of persons captured by them.
  • The data analysis platform 11 is, for example, a cloud computing platform. This step is similar to step 22; refer to step 22 for details, which are not repeated here.
  • Step 43: When the matching succeeds, the data analysis platform 11 predicts the camera corresponding to the monitoring range (the second monitoring range) that the target person will enter at the next moment after leaving the monitoring range of camera A (the first monitoring range) at the current moment.
  • This step is similar to step 23, except that in step 23 k cameras are screened out of the predicted cameras according to their confidence levels and only the IDs of those k cameras are carried in the instruction sent to the display platform 13, whereas in this step the IDs of all predicted cameras are sent to the display platform 13 in the instruction.
  • The instruction further carries the confidence level corresponding to each camera.
  • Step 44: The display platform 13 receives an instruction from the data analysis platform; the instruction carries the camera IDs and the confidence level corresponding to each camera ID.
  • The display platform screens out k cameras according to the confidence levels corresponding to the camera IDs, and obtains the videos of these k cameras for caching and display.
  • The screening method is, for example: selecting the cameras with the top-k confidence levels, or selecting the cameras whose confidence levels exceed the confidence threshold (as in the screening sketch after step 23).
  • In other words, this embodiment moves the operation of screening out k cameras from the data analysis platform 11 to the display platform 13.
  • For the remainder of this step, refer to step 24; it is not repeated here.
  • Step 45: Refer to step 25; it is not repeated here.
  • Figure 9 provides an embodiment of a monitoring device.
  • The monitoring device 9 may be the data analysis platform 11 or a program running in the data analysis platform 11.
  • The monitoring device 9 includes: an acquisition module 51 for acquiring a monitoring task that indicates a target object; an analysis module 52, communicating with the acquisition module 51, for determining a target camera based on the characteristic information of the target object, where the target camera is used to monitor the first area and the target object is located in the first area at the current moment; a prediction module 53, communicating with the analysis module 52, for predicting the next-hop camera, where the area monitored by the next-hop camera is the predicted surveillance area that the target object will enter at the next moment; and a sending module 54, communicating with the analysis module 52 and the prediction module 53, for sending the information of the target camera and the information of the next-hop camera.
  • The monitoring device 9 can execute the method in Figure 2. Specifically, step 21 can be performed by the acquisition module 51; step 22 by the analysis module 52; step 23 by the prediction module 53; and the sending of the instruction in step 24 and of the adjustment information in step 23 can be performed by the sending module 54. Since the specific functions of each module have been described in detail in the preceding method embodiments, they are not repeated here.
  • Figure 10 provides another embodiment of a monitoring device.
  • The monitoring device 6 may be the display platform 13 or a program running on the display platform 13.
  • The monitoring device 6 includes: a task module 61 for triggering a monitoring task, the monitoring task indicating a target object; and a processing module 62 for receiving the information of the target camera and the information of at least one next-hop camera, where the target camera is used to monitor the first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted surveillance area that the target object will enter at the next moment.
  • Optionally, the monitoring device 6 further includes a confidence level acquisition module 62, configured to acquire the confidence level of each next-hop camera, the confidence level characterizing the likelihood that the corresponding next-hop camera will capture the target object at the next moment.
  • Optionally, the monitoring device 6 further includes: a multimedia acquisition module 63, configured to obtain, according to the information of the target camera and the information of the at least one next-hop camera, the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera; and a playback module 64, configured to respectively play on the screen the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera.
  • The monitoring device 6 may execute the method performed by the display platform 13 as described in Figure 8. Specifically, step 41 can be performed by the task module 61; in step 44, the confidence level can be obtained by the confidence level acquisition module 62, the multimedia data can be obtained by the multimedia acquisition module 63, and the playback operation can be performed by the playback module 64.
  • The present invention also provides an embodiment of a monitoring device.
  • The monitoring device 7 includes a processor 71 and an interface 72, for executing the method of steps 21-25.
  • The processor 71 is configured to: obtain a monitoring task, the monitoring task indicating a target object; determine a target camera according to the characteristic information of the target object, where the target camera is used to monitor the first area and the target object is located in the first area at the current moment; and predict the next-hop camera, where the area monitored by the next-hop camera is the predicted surveillance area that the target object will enter at the next moment.
  • The interface is used to execute: sending the information of the target camera and the information of the next-hop camera.
  • The monitoring device may be the data analysis platform 11.
  • Optionally, the monitoring device 7 further includes a memory for storing a computer program, and the processor 71 executes the method of steps 21-25 by running the computer program in the memory.
  • a memory for example, a memory
  • the processor 71 is used to execute steps 21-25 by running the computer program in the memory. method.
  • The present invention also provides an embodiment of a computer-readable medium: the computer-readable storage medium stores instructions which, when executed by a processor, perform the method of steps 21-25.
  • The present invention also provides a computer program product that contains instructions which, when executed by a processor, perform the method of steps 21-25.
  • The invention also provides an embodiment of a display device.
  • The display device (for example, the display platform 13) includes a processor and an interface, and the processor is used to execute the methods of steps 41-45.
  • Optionally, the display device further includes a memory for storing a computer program, and the processor executes the method of steps 41-45 by running the computer program in the memory.
  • The present invention also provides an embodiment of a computer-readable medium: the computer-readable storage medium stores instructions which, when executed by a processor, perform the method of steps 41-45.
  • The present invention also provides a computer program product that contains instructions which, when executed by a processor, perform the method of steps 41-45.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A monitoring technique, comprising the following steps: acquiring, by means of image matching, the current camera that is monitoring a target object at the current moment; then predicting a next-hop camera corresponding to the next surveillance area that the target object is about to enter; and performing operations such as acquisition, caching, and playback on the multimedia content of the next-hop camera in advance, according to the prediction result.
PCT/CN2020/097694 2019-10-10 2020-06-23 Monitoring method, apparatus and device WO2021068553A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910959793.2 2019-10-10
CN201910959793 2019-10-10
CN201911026899.3A CN112653832A (zh) 2019-10-26 Monitoring method, apparatus and device
CN201911026899.3 2019-10-26

Publications (1)

Publication Number Publication Date
WO2021068553A1 (fr)

Family

ID=75343372

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097694 WO2021068553A1 (fr) Monitoring method, apparatus and device

Country Status (2)

Country Link
CN (1) CN112653832A (fr)
WO (1) WO2021068553A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112305534A (zh) * 2019-07-26 2021-02-02 杭州海康威视数字技术股份有限公司 Target detection method, apparatus, device and storage medium
CN115426450A (zh) * 2022-08-09 2022-12-02 浙江大华技术股份有限公司 Camera parameter adjustment method and device, computer equipment and readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992860B (zh) * 2021-12-28 2022-04-19 北京国电通网络技术有限公司 Behavior recognition method and device based on cloud-edge collaboration, electronic device and medium
CN114500952A (zh) * 2022-02-14 2022-05-13 深圳市中壬速客信息技术有限公司 Control method, apparatus, device and computer storage medium for dynamic park monitoring

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010268186A (ja) * 2009-05-14 2010-11-25 Panasonic Corp Monitoring image display device
CN104284146A (zh) * 2013-07-11 2015-01-14 松下电器产业株式会社 Tracking assistance device, tracking assistance system and tracking assistance method
CN105245850A (zh) * 2015-10-27 2016-01-13 太原市公安局 Method, device and system for target tracking across surveillance cameras
CN109522814A (zh) * 2018-10-25 2019-03-26 清华大学 Target tracking method and device based on video data
CN109905679A (zh) * 2019-04-09 2019-06-18 梅州讯联科技发展有限公司 Monitoring method, device and system

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4709101B2 (ja) * 2006-09-01 2011-06-22 キヤノン株式会社 Automatic tracking camera device
CN201248107Y (zh) * 2008-04-30 2009-05-27 深圳市飞瑞斯科技有限公司 Master-slave camera intelligent video surveillance system
CN101458434B (zh) * 2009-01-08 2010-09-08 浙江大学 System for accurately measuring and predicting table tennis ball trajectories
CN101572804B (zh) * 2009-03-30 2012-03-21 浙江大学 Multi-camera intelligent control method and device
CN102176246A (zh) * 2011-01-30 2011-09-07 西安理工大学 Method for determining camera relay relationships in a multi-camera target relay tracking system
CN103152554B (zh) * 2013-03-08 2017-02-08 浙江宇视科技有限公司 Intelligent moving-target tracking device
CN103558856A (zh) * 2013-11-21 2014-02-05 东南大学 Navigation method for a mobile service robot in dynamic environments
CN103763513A (zh) * 2013-12-09 2014-04-30 北京计算机技术及应用研究所 Distributed tracking and monitoring method and system
CN104660998B (zh) * 2015-02-16 2018-08-07 阔地教育科技有限公司 Relay tracking method and system
CN104965964B (zh) * 2015-08-06 2018-01-30 山东建筑大学 Method for building an occupant distribution model of a building based on surveillance video analysis
CN105718750B (zh) * 2016-01-29 2018-08-17 长沙理工大学 Method and system for predicting vehicle driving trajectories
CN106709436B (zh) * 2016-12-08 2020-04-24 华中师范大学 Cross-camera suspicious-pedestrian tracking system for panoramic rail transit surveillance
CN108961756A (zh) * 2018-07-26 2018-12-07 深圳市赛亿科技开发有限公司 Automatic real-time vehicle and pedestrian flow statistics method and system
CN108965826B (zh) * 2018-08-21 2021-01-12 北京旷视科技有限公司 Monitoring method and device, processing device and storage medium
CN114282732A (zh) * 2021-12-29 2022-04-05 重庆紫光华山智安科技有限公司 Method and device for predicting pedestrian flow in an area

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010268186A (ja) * 2009-05-14 2010-11-25 Panasonic Corp Monitoring image display device
CN104284146A (zh) * 2013-07-11 2015-01-14 松下电器产业株式会社 Tracking assistance device, tracking assistance system and tracking assistance method
CN105245850A (zh) * 2015-10-27 2016-01-13 太原市公安局 Method, device and system for target tracking across surveillance cameras
CN109522814A (zh) * 2018-10-25 2019-03-26 清华大学 Target tracking method and device based on video data
CN109905679A (zh) * 2019-04-09 2019-06-18 梅州讯联科技发展有限公司 Monitoring method, device and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112305534A (zh) * 2019-07-26 2021-02-02 杭州海康威视数字技术股份有限公司 Target detection method, apparatus, device and storage medium
CN112305534B (zh) * 2019-07-26 2024-03-19 杭州海康威视数字技术股份有限公司 Target detection method, apparatus, device and storage medium
CN115426450A (zh) * 2022-08-09 2022-12-02 浙江大华技术股份有限公司 Camera parameter adjustment method and device, computer equipment and readable storage medium
CN115426450B (zh) * 2022-08-09 2024-04-16 浙江大华技术股份有限公司 Camera parameter adjustment method and device, computer equipment and readable storage medium

Also Published As

Publication number Publication date
CN112653832A (zh) 2021-04-13

Similar Documents

Publication Publication Date Title
WO2021068553A1 (fr) Monitoring method, apparatus and device
JP6696615B2 (ja) Monitoring system, monitoring method, and recording medium storing a monitoring program
CN108073577A (zh) Alarm method and system based on face recognition
US7542588B2 (en) System and method for assuring high resolution imaging of distinctive characteristics of a moving object
WO2020057346A1 (fr) Video surveillance method and apparatus, surveillance server, and video surveillance system
US11640726B2 (en) Person monitoring system and person monitoring method
CN110853295A (zh) High-altitude object-throwing early-warning method and device
CN111222373B (zh) Personnel behavior analysis method and device, and electronic device
US20150338497A1 (en) Target tracking device using handover between cameras and method thereof
JPWO2018198373A1 (ja) Video surveillance system
JP2007219948A (ja) User abnormality detection device and user abnormality detection method
WO2021095351A1 (fr) Monitoring device, monitoring method, and program
JP2021114771A (ja) Information processing device, control method, and program
KR101840300B1 (ko) Apparatus and method for searching CCTV images
CN110633648A (zh) Face recognition method and system in a natural walking state
JP2019009529A (ja) Face authentication device, person tracking system, person tracking method, and person tracking program
CN103187083B (zh) Storage method and system based on time-domain video fusion
US20190027004A1 (en) Method for performing multi-camera automatic patrol control with aid of statistics data in a surveillance system, and associated apparatus
JP6941457B2 (ja) Monitoring system
CN112365520A (zh) Real-time pedestrian target tracking system and method based on video big-data resource effectiveness evaluation
CN114639172B (zh) High-altitude object-throwing early-warning method and system, electronic device and storage medium
JP6435640B2 (ja) Congestion degree estimation system
JP7392738B2 (ja) Display system, display processing device, display processing method, and program
JP7235612B2 (ja) Person search system and person search method
CN112437233A (zh) Video generation method, video processing method, device, and camera equipment

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20875631; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 20875631; Country of ref document: EP; Kind code of ref document: A1)