WO2021068553A1 - Monitoring method, apparatus and device - Google Patents

Monitoring method, apparatus and device

Info

Publication number
WO2021068553A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
hop
target
next hop
information
Prior art date
Application number
PCT/CN2020/097694
Other languages
French (fr)
Chinese (zh)
Inventor
陈庆
张增东
丁杉
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2021068553A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects

Definitions

  • the invention relates to the field of security and protection, in particular to a monitoring method.
  • A basic function of suspicious-person monitoring and alerting is to use cameras to photograph pedestrians, identify suspicious persons, and track their movement trajectories.
  • The invention provides an embodiment of a monitoring method, including: acquiring a monitoring task, the monitoring task indicating a target object; determining a target camera based on characteristic information of the target object, the target camera being used to monitor a first area in which the target object is located at the current moment; predicting a next-hop camera, the area monitored by the next-hop camera being the predicted surveillance area that the target object will enter at the next moment; and sending information of the target camera and information of the next-hop camera.
  • This solution can predict the monitoring range that the target object will enter at the next moment, so as to obtain the information of the next hop camera in advance.
  • the predicting the next hop camera includes: predicting the next hop camera according to the information of the target camera and/or the information of the target object. This solution provides a specific solution for predicting the next hop camera.
  • Predicting the next-hop camera according to the information of the target camera and/or the information of the target object specifically includes: predicting, according to the information of the target camera and/or the information of the target object, the geographic location of the next-hop area; and outputting a list of next-hop cameras according to the geographic location of the next-hop area. This provides a specific way to predict the next-hop camera.
  • Predicting the next-hop camera based on the information of the target camera and/or the information of the target object specifically includes any one, or a combination, of the following:
  • First: collect statistics on the historical motion trajectories of the objects photographed by the target camera, and predict the next-hop camera from those statistics;
  • Second: predict the next-hop camera from the position and direction of movement of the target object when it leaves the first area this time;
  • Third: collect statistics on the historical motion trajectories of the target object leaving the first area, and predict the next-hop camera from the target object's statistical historical trajectories;
  • Fourth: predict the next-hop camera based on the geographic location information of the target camera. This provides several specific ways to predict the next-hop camera.
  • The number of next-hop cameras is at least two; predicting the next-hop cameras includes outputting each predicted next-hop camera together with a confidence level for each next-hop camera, the confidence level characterizing the possibility that that next-hop camera will capture the target object. This solution additionally provides the confidence of the prediction.
  • The method further includes: predicting the possible position of the target object after it enters the area captured by the next-hop camera; and sending a position adjustment signal to at least one of the next-hop cameras, the position adjustment signal instructing the next-hop camera that receives it to adjust so that the possible position becomes a visible position.
  • This solution further predicts the possible position of the target object after entering the area photographed by the next-hop camera, which can facilitate camera adjustment and detect the target object entering the monitoring range in advance.
  • The method further includes: according to the information of the target camera and the information of the next-hop camera, instructing that the video of the target camera and the video of the next-hop camera be played respectively.
  • A monitoring device is provided, including: an acquisition module for acquiring a monitoring task that indicates a target object; an analysis module for determining a target camera based on characteristic information of the target object, the target camera being used to monitor a first area in which the target object is located at the current moment; a prediction module for predicting a next-hop camera, the area monitored by the next-hop camera being the predicted surveillance area that the target object will enter at the next moment; and a sending module for sending the information of the target camera and the information of the next-hop camera.
  • A computer-readable medium stores instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
  • A computer program product contains instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
  • A monitoring method is provided, including: triggering a monitoring task, the monitoring task indicating a target object; and receiving information of a target camera and information of a next-hop camera, where the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted surveillance area that the target object will enter at the next moment.
  • This solution introduces a monitoring method that uses the predicted next-hop camera.
  • In a first possible implementation of the fifth aspect, the method further comprises: obtaining the confidence level of each next-hop camera, the confidence level characterizing the probability that that next-hop camera will capture the target object at the next moment.
  • This further introduces how the confidence is obtained.
  • In a second possible implementation of the fifth aspect, the method further includes: acquiring, according to the information of the target camera and the information of the at least one next-hop camera, the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera; and playing, on the screen, the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera respectively.
  • This introduces a playback scheme.
  • In a third possible implementation of the fifth aspect, the method further includes: selecting a display mode for the multimedia data captured by the at least one next-hop camera according to the confidence of each next-hop camera.
  • This introduces a display mode.
  • Selecting the display mode for the multimedia data captured by the at least one next-hop camera according to the confidence of each next-hop camera specifically includes: playing the multimedia data captured by a high-confidence next-hop camera on a large screen; or playing the multimedia data captured by a low-confidence next-hop camera on a small screen.
  • This introduces a display mode driven by the confidence level.
  • A monitoring device is provided, including: a task module for triggering a monitoring task, the monitoring task indicating a target object; and a processing module for receiving information of the target camera and information of the next-hop camera, where the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted surveillance area that the target object will enter at the next moment.
  • A computer-readable medium stores instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
  • A computer program product contains instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
  • Fig. 1 is a structural diagram of an embodiment of a monitoring system
  • FIG. 2 is a flowchart of an embodiment of a monitoring method
  • Figure 3 is an example of the monitoring range
  • Figure 4 is an example of alarm information
  • Figure 5 is an example of the confidence of different cameras
  • Figure 6(a)-6(b) is a schematic diagram of the target person moving between cameras
  • Figure 7 is a schematic diagram showing the method
  • Figure 8 is a flowchart of an embodiment of a monitoring method
  • Figure 9 is a structural diagram of an embodiment of a monitoring device
  • Figure 10 is a structural diagram of an embodiment of a monitoring device
  • Figure 11 is a schematic diagram of an embodiment of a monitoring device.
  • Fig. 1 is a structural diagram of an embodiment of a monitoring system.
  • the monitoring system includes: a data analysis platform 11, a multimedia data center 12 communicating with the data analysis platform 11, and a display platform 13.
  • The display platform 13 communicates with camera 141 (camera A), camera 142 (camera B), camera 143 (camera C), camera 144 (camera D) and camera 145 (camera E); or, the display platform 13 communicates with the multimedia data center.
  • Fig. 2 is a flowchart of an embodiment of a monitoring method.
  • Step 21 The data analysis platform 11 obtains a monitoring task.
  • the monitoring task includes a monitoring scope and a monitoring target list. This step can be executed by the task release module of the data analysis platform 11.
  • the triggering of the monitoring task can be generated on the data analysis platform 11 through user input.
  • For example, the data analysis platform 11 receives the monitoring range and the monitoring list entered by the user through a user interface (UI), and starts the monitoring task after the button that starts monitoring is clicked in the UI.
  • the monitoring task can also come from other devices, such as a mobile terminal/personal computer/server communicating with the data analysis platform; or, from the display platform 13.
  • the camera ID that needs to be monitored this time can be determined from the monitoring range.
  • the monitoring range may be a monitoring camera list in which the camera IDs to be monitored are directly recorded; or the monitoring range records the geographic coordinate range, and the camera IDs to be monitored can be determined from the geographic coordinate range.
  • the monitoring range in this embodiment directly describes the ID of the monitoring camera. Camera A, camera B, camera C, camera D, and camera E are monitored, and camera F is not monitored.
  • the camera ID may be the serial number or code of the camera, and may also be the address of the camera (for example, IP address, MAC address). All information that can directly or indirectly distinguish a certain camera from other cameras belongs to the camera ID described in the embodiment of the present invention.
  • the monitoring list carries: a monitoring target ID.
  • the surveillance target is a moving object that needs to be tracked by the camera.
  • the monitoring target ID may be the target person's number, the target person's passport number, the target person's ID card number, and other information that can distinguish the target person from other persons.
  • the monitoring target ID may be the license plate of the target vehicle.
  • the monitoring list may further carry monitoring target characteristics. The target feature is used to subsequently match the target person. According to actual needs, the monitoring target characteristics may also be stored in a local or non-local storage device for the data analysis platform 11 to use the monitoring target ID for query.
  • the server 11 may also store one or more of the name, gender, and age of the target person.
  • Step 22 The data analysis platform 11 uses the target person characteristics to match the person characteristics of the persons in the video shot by the camera (video person characteristics).
  • the multimedia data center 12 communicates with multiple cameras, and can obtain the IDs of the multiple cameras and the images of persons captured by the multiple cameras.
  • The data analysis platform 11 obtains the characteristics of the persons in the videos as follows: the data analysis platform 11 sends the monitoring range to the multimedia data center 12; the multimedia data center 12 sends the videos taken by the cameras whose IDs are listed in the monitoring range to the data analysis platform 11; and the data analysis platform 11 extracts the video person characteristics from the received videos. This transmission can be in real time.
  • a camera may also extract the video person characteristics, and then send the extracted person characteristics to the data analysis platform 11 for matching.
  • the multimedia data center 12 may be integrated into the data analysis platform 11.
  • the data analysis platform 11 obtains the videos (or the characteristics of the people in the videos) taken by these cameras by itself.
  • the specific method for matching the characteristics of the target person and the characteristics of the video person may be to compare the similarity between the two.
  • When the similarity between the video person characteristics and the target person characteristics reaches a preset threshold, the matched video person and the target person are likely to be the same person, and an alarm signal indicating a successful match can be issued.
  • the alarm signal may further include information such as the name, gender, and age described in FIG. 4.
  • Figure 4 is a schematic diagram of an alarm signal, in which the name of the target person is Jason, he appears in the video of camera A, his gender is male and his age is 25 years old.
  • the data analysis platform 11 may further send the ID of the camera 141 (camera A) to the display platform 13. After the display platform 13 receives the ID of the first camera, it can obtain the video captured by the first camera in real time through the camera 141 (or the multimedia data center 12) for playback.
  • One way to compare similarity is as follows: the target person characteristic that the data analysis platform 11 obtains from the target list (or from a local or remote storage device) is a 256-dimensional floating-point (float) array; the data analysis platform 11 obtains video frames from the multimedia data center 12, parses the person image in a frame into another 256-dimensional floating-point array (the video person characteristics), compares the two arrays, and uses the similarity of the arrays as the similarity between the target person and the video person.
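  • As an illustration of this comparison step, a minimal sketch follows; it assumes cosine similarity over the 256-dimensional arrays and a hypothetical threshold value, neither of which is fixed by the text.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # hypothetical preset threshold; the text does not fix a value


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two 256-dimensional float feature arrays."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_person(target_feature: np.ndarray, video_feature: np.ndarray) -> bool:
    """True means the video person is likely the target person and an alarm can be issued."""
    return cosine_similarity(target_feature, video_feature) >= SIMILARITY_THRESHOLD


# usage sketch with random stand-in features
target = np.random.rand(256).astype(np.float32)  # from the monitoring target list
video = np.random.rand(256).astype(np.float32)   # parsed from a video frame
print(match_person(target, video))
```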
  • Step 23: When the matching succeeds, the data analysis platform 11 predicts the camera corresponding to the monitoring range (the second monitoring range) that the target person will enter at the next moment after leaving the monitoring range of camera A (the first monitoring range) at the current moment.
  • The data analysis platform then instructs the display platform 13 to retrieve the video of the predicted camera. When the data analysis platform issues this instruction, the target person has not yet entered the second monitoring range.
  • The point at which to start the prediction may be after the matching succeeds, for example: after the matching succeeds and while the target person is still in the first monitoring range; or after the matching succeeds, when the target person has left the first monitoring range but has not yet entered the monitoring range of the next camera.
  • the current moment refers to the point in time when the matching is performed; the next moment refers to the point in time when the target person leaves the current monitoring range and immediately enters the monitoring range of the next camera.
  • During this interval, the target person does not enter the monitoring range of any other camera.
  • the target person is in the monitoring range of camera A, and it is expected that the target person will enter the monitoring range of camera B and the monitoring range of camera F in sequence.
  • the monitoring area that the target person enters at the next moment refers to the monitoring range of camera B (not the monitoring range of camera F).
  • The data analysis platform 11 has several options for choosing which cameras' videos are provided to the display platform. For example: select the k cameras with the highest confidence levels and instruct the display platform 13 to obtain their videos; or select the cameras whose confidence levels are greater than a confidence threshold and have their videos sent to the display platform.
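  • A minimal sketch of this screening step follows; the confidence values are invented, and both strategies mentioned above (top-k and threshold) are shown.

```python
from typing import Optional


def select_cameras(confidences: dict[str, float], k: int = 3,
                   threshold: Optional[float] = None) -> list[str]:
    """Pick the next-hop cameras whose videos the display platform should fetch:
    either the k cameras with the highest confidence, or every camera whose
    confidence exceeds the threshold."""
    if threshold is not None:
        return [cam for cam, conf in confidences.items() if conf > threshold]
    return sorted(confidences, key=confidences.get, reverse=True)[:k]


# usage sketch: hypothetical confidences for the next-hop candidates of camera A
candidates = {"B": 0.55, "C": 0.25, "D": 0.15, "E": 0.05}
print(select_cameras(candidates, k=3))            # ['B', 'C', 'D']
print(select_cameras(candidates, threshold=0.2))  # ['B', 'C']
```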
  • the data analysis platform 11 may further predict the possible position of the target person entering the next camera monitoring range.
  • the monitoring range of the camera C is the monitoring range 31; the monitoring range of the camera A is the monitoring range 32.
  • For example, the data analysis platform 11 can obtain the current geographic location of the target person Jason (longitude 75° east, latitude 31.5° north), which is within monitoring range 32 of camera A, and camera A observes that Jason's direction of movement is north. According to his movement trajectory, he will reach a new coordinate (longitude 75° east, latitude 32.5° north). This new coordinate belongs to monitoring range 31 of camera C, specifically at the boundary of monitoring range 31.
  • the data analysis platform 11 sends adjustment information to the camera C.
  • the adjustment information may include the position of the target person, that is, (75° east longitude, 32.5° north latitude); or, the adjustment information may include an adjustment direction (adjust the visible range of the camera C to the south).
  • Camera C adjusts its visible range according to this adjustment information so that (75° E, 32.5° N) is included in its monitoring range, which means that Jason will be monitored immediately after he enters monitoring range 31. Therefore, this embodiment offers better monitoring timeliness.
  • Similarly, the data analysis platform 11 can generate adjustment information corresponding to camera B and camera D and send it to camera B and camera D respectively, so that when Jason enters the monitoring range of camera B or camera D, he can be monitored as soon as possible.
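  • A minimal sketch of this position prediction and of the adjustment information follows, using the coordinates from the example; the one-degree extrapolation step and the field names are assumptions, not values given by the text.

```python
from dataclasses import dataclass

HEADING_STEP = {"north": (0.0, 1.0), "south": (0.0, -1.0),
                "east": (1.0, 0.0), "west": (-1.0, 0.0)}


@dataclass
class AdjustmentInfo:
    camera_id: str
    predicted_position: tuple[float, float]  # (longitude, latitude)
    adjust_direction: str                    # direction toward which to point the visible range


def predict_entry_position(lon: float, lat: float, heading: str,
                           step_deg: float = 1.0) -> tuple[float, float]:
    """Extrapolate the target's position along its current direction of movement."""
    dlon, dlat = HEADING_STEP[heading]
    return lon + dlon * step_deg, lat + dlat * step_deg


# usage sketch: Jason is at (75 E, 31.5 N) inside camera A's range, moving north
pos = predict_entry_position(75.0, 31.5, "north")          # -> (75.0, 32.5)
info = AdjustmentInfo("C", pos, adjust_direction="south")  # camera C points its visible range south
print(info)
```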
  • Method 1: When the target person leaves the first area this time, record the position and the direction of movement at the moment of leaving. Because a person in most cases keeps moving in the original direction, the person's subsequent direction of movement can be estimated, and the position reached by continuing in the original direction (a position that belongs to the second area) is taken as the possible position at which the person enters the next monitoring range. That is, this method predicts both the monitoring range the target person is about to enter and the possible entry position within that range.
  • Method 2: Obtain the historical movement tracks of the target person in the first area from a database, and compute statistics on the positions and directions of movement with which the target person has left the first area.
  • The direction of movement obtained from the statistics is taken as the most likely direction of movement of the target person this time, so as to predict the monitoring range the target person is about to enter and the possible entry position within that range. Alternatively, the next monitoring range obtained from the statistics is taken as the monitoring range that may be entered this time, again predicting both the monitoring range the target person is about to enter and the possible entry position.
  • Optionally, the statistics can be computed per time period.
  • For example, if statistics show that from 8:00 to 10:00 in the morning the target person enters the monitoring range of camera D from the monitoring range of camera A in 55% of cases, then it can be predicted that, in this monitoring session, the target person will enter the monitoring range of camera D from the monitoring range of camera A with 55% likelihood.
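  • A minimal sketch of such time-period statistics follows; only the 55% figure comes from the example, the other counts and the bucketing scheme are invented for illustration.

```python
from datetime import time

# (from_camera, time period) -> {next camera: number of historical transitions}
transition_counts = {
    ("A", "08:00-10:00"): {"D": 55, "B": 30, "C": 15},  # counts other than D are made up
}


def period_of(t: time) -> str:
    """Map a timestamp to a statistics period (illustrative two-bucket scheme)."""
    return "08:00-10:00" if time(8) <= t < time(10) else "other"


def next_hop_probabilities(current_camera: str, now: time) -> dict[str, float]:
    counts = transition_counts.get((current_camera, period_of(now)), {})
    total = sum(counts.values()) or 1
    return {cam: n / total for cam, n in counts.items()}


print(next_hop_probabilities("A", time(9, 30)))
# {'D': 0.55, 'B': 0.3, 'C': 0.15}
```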
  • the visual range of the camera B is adjusted in advance, and after the adjustment, the camera B regards the true south of its monitoring range as the visual range.
  • This scheme can also compute its statistics and forecasts per time period.
  • Method 4: Predict at least one next-hop camera based on the geographic location information of the target camera. For example, if geographically there are three cameras around camera A, then all three may be next-hop cameras. Or, camera A is in the middle of a road and camera B and camera C are at the two ends of the road; if the target person is currently within the monitoring range of camera A, the target object may next enter the monitoring range of camera B or the monitoring range of camera C. It can be considered that at the next moment the target person has a 50% chance of entering the monitoring range of camera B and a 50% chance of entering the monitoring range of camera C.
  • For example, when their predictions are combined, the weights of method 1, method 2 and method 3 may be 40%, 40% and 20%, respectively.
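  • A minimal sketch of how such a weighted combination could produce a per-camera confidence follows; the per-method outputs are invented, and only the 40/40/20 weights come from the text.

```python
def fuse_predictions(per_method: list[dict[str, float]],
                     weights: list[float]) -> dict[str, float]:
    """Combine per-method next-hop probabilities into one confidence per camera."""
    fused: dict[str, float] = {}
    for method_probs, weight in zip(per_method, weights):
        for cam, p in method_probs.items():
            fused[cam] = fused.get(cam, 0.0) + weight * p
    return fused


# usage sketch with made-up per-method outputs for the next hop of camera A
method1 = {"B": 0.6, "C": 0.3, "D": 0.1}  # historical trajectories of all objects
method2 = {"B": 0.5, "C": 0.5}            # exit position and direction this time
method3 = {"D": 1.0}                      # the target person's own history
fused = fuse_predictions([method1, method2, method3], [0.4, 0.4, 0.2])
print({cam: round(conf, 2) for cam, conf in fused.items()})
# {'B': 0.44, 'C': 0.32, 'D': 0.24}
```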
  • Step 24: The display platform 13 receives an instruction from the data analysis platform; the instruction carries the IDs of k cameras, and the display platform caches the real-time video information of these k cameras. After a certain amount of video (or image) data has been cached, the display platform 13 can play the video of the k cameras. Whether the display platform 13 obtains the video information of the k cameras is triggered by the prediction result, not by successful detection of the target person in the second monitoring range. Therefore, before the target person has entered the second monitoring range, the data analysis platform can already instruct the k cameras to send video for the display platform 13 to buffer. This means that when the target person has not yet entered (or has only just entered) the second area, the display platform 13 can already display the target person's actions.
  • Another display method is: after the target person enters the second monitoring range, camera C sends its video to the data analysis platform 11, the data analysis platform 11 performs the matching described in step 22, and the video is not sent to the display platform 13 for caching until the matching succeeds.
  • Compared with that method, the display in this embodiment is faster, and seamless switching of surveillance videos can be achieved under ideal conditions.
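  • A minimal sketch of the prefetch behaviour described in step 24 follows, where buffering is triggered by the prediction instruction rather than by re-detecting the target; the buffer size and frame handling are assumptions.

```python
import collections


class DisplayPlatform:
    """Buffers streams of the predicted next-hop cameras before the target arrives."""

    def __init__(self, buffer_frames: int = 300):  # roughly 10 s at 30 fps
        self.buffers: dict[str, collections.deque] = {}
        self.buffer_frames = buffer_frames

    def on_prediction_instruction(self, camera_ids: list[str]) -> None:
        """Called when the data analysis platform sends the k predicted camera IDs."""
        for cam in camera_ids:
            self.buffers.setdefault(cam, collections.deque(maxlen=self.buffer_frames))

    def on_frame(self, camera_id: str, frame: bytes) -> None:
        """Cache frames only for cameras the platform was instructed to prefetch."""
        if camera_id in self.buffers:
            self.buffers[camera_id].append(frame)

    def cached_frames(self, camera_id: str) -> list:
        return list(self.buffers.get(camera_id, []))


# usage sketch
dp = DisplayPlatform()
dp.on_prediction_instruction(["B", "C", "D"])  # sent before the target enters the second range
dp.on_frame("B", b"frame-0")
print(len(dp.cached_frames("B")))  # 1
```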
  • (1) After the data analysis platform 11 detects the target person (see the matching scheme in step 22), it instructs the display platform 13 to obtain the real-time video of camera A, either from camera A or from the multimedia data center 12; (2) when the data analysis platform 11 predicts the monitoring range of the next-hop camera that the target person is about to enter, it instructs the display platform 13 to obtain the real-time video of at least one next-hop camera.
  • In terms of timing, the two instructions in (1) and (2) can be sent at the same time, or either one can be sent first.
  • Alternatively, the camera may be instructed to send its video to the display platform 13, or the multimedia data center 12 may be instructed to send the camera's video to the display platform 13, or the display platform 13 may be instructed to obtain the camera's video itself.
  • the camera video displayed by the display platform may be a real-time video of the camera, or in some cases, it may also be a non-real-time video with a certain lag.
  • the display platform 13 has four screens, namely a large screen 41, a small screen 42, a small screen 43, and a small screen 44.
  • For example, the content of camera A is played on the big screen, and the display platform 13 caches the content of the top-3 cameras (that is, camera B, camera C and camera D) in advance.
  • The display platform can also play the multimedia data of the current camera and the multimedia data of the predicted next-hop camera in a differentiated way. For example: the multimedia data of the current camera is played on the big screen, and the multimedia data of a next-hop camera is played on a small screen.
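  • A minimal sketch of this confidence-driven layout follows, using the screen numbers from the four-screen example; the mapping logic itself is an assumption.

```python
def assign_screens(current_camera: str, next_hop_confidence: dict[str, float],
                   small_screens: list[str]) -> dict[str, str]:
    """Current camera on the big screen; predicted next-hop cameras on small screens,
    highest confidence first."""
    layout = {"big screen 41": current_camera}
    ranked = sorted(next_hop_confidence, key=next_hop_confidence.get, reverse=True)
    for screen, cam in zip(small_screens, ranked):
        layout[screen] = cam
    return layout


print(assign_screens("A", {"B": 0.55, "C": 0.25, "D": 0.15, "E": 0.05},
                     ["small screen 42", "small screen 43", "small screen 44"]))
# {'big screen 41': 'A', 'small screen 42': 'B', 'small screen 43': 'C', 'small screen 44': 'D'}
```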
  • Step 25: The camera that has received the adjustment information adjusts its visible range.
  • the adjusted visual range includes the position where the target person enters the surveillance range of the camera. It should be noted that this step is an optional step and is only executed when the data analysis platform 11 sends adjustment information.
  • the above process is an example of video matching and caching.
  • this method is not only suitable for video, in other embodiments, media data such as pictures and sounds can also be matched and cached in advance.
  • Step 41 The display platform 13 sends a monitoring task to the data analysis platform 11.
  • the monitoring task includes a monitoring scope and a monitoring target list.
  • Step 42 The data analysis platform 11 uses the target person characteristics to match the person characteristics of the person in the video shot by the camera (video person characteristics), and instructs the display platform to obtain the video of the camera that is successfully matched.
  • the multimedia data center 12 communicates with multiple cameras, and can obtain the IDs of the multiple cameras and the images of persons captured by the multiple cameras.
  • The data analysis platform 11 is, for example, a cloud computing platform. This step is similar to step 22; refer to step 22 for details, which are not repeated here.
  • Step 43: When the matching succeeds, the data analysis platform 11 predicts the camera corresponding to the monitoring range (the second monitoring range) that the target person will enter at the next moment after leaving the monitoring range of camera A (the first monitoring range) at the current moment.
  • This step is similar to step 23, except that in step 23, k cameras are screened out of the predicted cameras according to their confidence and the IDs of these k cameras are carried in the instruction sent to the display platform 13, whereas in this step the IDs of all predicted cameras are sent to the display platform 13 in the instruction.
  • the instruction further carries the confidence level corresponding to each camera.
  • Step 44 The display platform 13 receives an instruction from the data analysis platform, and the instruction carries the camera ID and the confidence level corresponding to each camera ID.
  • the display platform screens out k cameras according to the confidence levels corresponding to each camera ID, and obtains the videos of these k cameras for caching and display.
  • the screening method is for example: selecting the camera with the top k confidence level; selecting the camera with the confidence level greater than the confidence threshold.
  • this embodiment moves the operation of screening out k cameras from the data analysis platform 11 to the display platform 13.
  • For the remaining details of this step, refer to step 24, which is not repeated here.
  • Step 45: Refer to step 25, which is not repeated here.
  • FIG. 9 provides an embodiment of a monitoring device.
  • the monitoring device 9 may be a data analysis platform 11 or a program running in the data analysis platform 11.
  • The monitoring device 9 includes: an acquisition module 51, configured to acquire a monitoring task that indicates a target object; an analysis module 52, which communicates with the acquisition module 51 and is configured to determine a target camera based on the characteristic information of the target object, the target camera being used to monitor a first area in which the target object is located at the current moment; a prediction module 53, which communicates with the analysis module 52 and is configured to predict a next-hop camera, the area monitored by the next-hop camera being the predicted surveillance area that the target object will enter at the next moment; and a sending module 54, which communicates with the analysis module 52 and the prediction module 53 and is configured to send the information of the target camera and the information of the next-hop camera.
  • The monitoring device 9 can execute the method in FIG. 2. Specifically, step 21 can be performed by the acquisition module 51; step 22 can be performed by the analysis module 52; step 23 can be performed by the prediction module 53; and the instruction in step 24 and the adjustment information in step 23 can be sent by the sending module 54. Since the specific functions of each module have been described in detail in the preceding method embodiments, they are not repeated here.
  • FIG 10 provides another embodiment of a monitoring device.
  • the monitoring device 6 may be a display platform 13 or a program running on the display platform 13.
  • The monitoring device 6 includes: a task module 61, configured to trigger a monitoring task, the monitoring task indicating a target object; and a processing module 62, configured to receive information of the target camera and information of the next-hop camera, where the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted surveillance area that the target object will enter at the next moment.
  • The monitoring device 6 further includes a confidence level acquiring module 62, configured to acquire the confidence level of each next-hop camera, the confidence level characterizing the likelihood that that next-hop camera will capture the target object at the next moment.
  • The monitoring device 6 further includes a multimedia acquisition module 63, configured to obtain, according to the information of the target camera and the information of the at least one next-hop camera, the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera; and a playback module 64, configured to play on the screen the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera respectively.
  • the monitoring device 6 may execute the method executed by the display platform 13 described in FIG. 8. Specifically, step 41 can be performed by the task module 61; in step 44, the confidence level can be obtained by the confidence level obtaining module 62, the multimedia data can be obtained by the multimedia obtaining module 63, and the playback operation can be performed by the playing module 64.
  • the present invention also provides an embodiment of a monitoring device.
  • the monitoring device 7 includes a processor 71 and an interface 72 for executing the method in steps 21-25.
  • the processor 71 is configured to execute: obtain a monitoring task, the monitoring task indicates a target object; according to the characteristic information of the target object, determine a target camera, the target camera is used to monitor the first area, the target object is The current moment is located in the first area; the next-hop camera is predicted, and the area monitored by the next-hop camera is the predicted surveillance area that the target object will enter at the next moment.
  • the interface is used for executing: sending the information of the target camera and the information of the next hop camera.
  • the monitoring device may be a data analysis platform 11.
  • The monitoring device 7 further includes a memory (for example, an internal memory); the memory is used to store a computer program, and the processor 71 executes the method of steps 21-25 by running the computer program in the memory.
  • The present invention also provides an embodiment of a computer-readable medium; the computer-readable storage medium stores instructions that, when executed by a processor, perform the method of steps 21-25.
  • The present invention also provides a computer program product; the computer program product contains instructions that, when executed by a processor, perform the method of steps 21-25.
  • the invention also provides an embodiment of the display device.
  • the display device (for example, the display platform 13) includes a processor and an interface, and the processor is used to execute the methods in steps 41-45.
  • The display device further includes a memory (for example, an internal memory); the memory is used to store a computer program, and the processor executes the method of steps 41-45 by running the computer program in the memory.
  • The present invention also provides an embodiment of a computer-readable medium; the computer-readable storage medium stores instructions that, when executed by a processor, perform the method of steps 41-45.
  • The present invention also provides a computer program product; the computer program product contains instructions that, when executed by a processor, perform the method of steps 41-45.

Abstract

A monitoring technique, comprising: acquiring, by means of image matching, the current camera that is monitoring a target object at the current moment; predicting the next-hop camera corresponding to the next monitoring area that the target object is about to enter; and, according to the prediction result, performing operations such as acquisition, caching and playback on the multimedia content of the next-hop camera in advance.

Description

Monitoring method, apparatus and device

Technical field

The present invention relates to the field of security and protection, and in particular to a monitoring method.

Background
In a modern society where information technology is increasingly developed, intelligent security systems have penetrated every aspect of life and play an irreplaceable role in home security, smart-city construction, safe-city projects and other fields. Banks, railways, stadiums, office buildings, markets and similar places all have an urgent need for camera-based monitoring.

A basic function of suspicious-person monitoring and alerting is to use cameras to photograph pedestrians, identify suspicious persons, and track their movement trajectories.

However, people are usually on the move. Under normal circumstances, a suspicious person therefore does not stay within the shooting range of a single camera; his or her trajectory switches between the monitoring ranges of multiple cameras. In this scenario, when the suspicious person moves from the monitoring range of the current camera into the monitoring range of the next camera, the prior art needs to re-identify the suspicious person and can restart tracking only after the recognition succeeds. This approach is slow to respond and makes it difficult to monitor the person continuously.
Summary of the invention
In a first aspect, the invention provides an embodiment of a monitoring method, including: acquiring a monitoring task, the monitoring task indicating a target object; determining a target camera based on characteristic information of the target object, the target camera being used to monitor a first area in which the target object is located at the current moment; predicting a next-hop camera, the area monitored by the next-hop camera being the predicted surveillance area that the target object will enter at the next moment; and sending information of the target camera and information of the next-hop camera.

This solution can predict the monitoring range that the target object will enter at the next moment, so that the information of the next-hop camera is obtained in advance.

In a first possible implementation of the first aspect, predicting the next-hop camera includes: predicting the next-hop camera according to the information of the target camera and/or the information of the target object. This provides a specific way to predict the next-hop camera.

In a second possible implementation of the first aspect, predicting the next-hop camera according to the information of the target camera and/or the information of the target object specifically includes: predicting, according to the information of the target camera and/or the information of the target object, the geographic location of the next-hop area; and outputting a list of next-hop cameras according to the geographic location of the next-hop area. This provides a specific way to predict the next-hop camera.

In a third possible implementation of the first aspect, predicting the next-hop camera according to the information of the target camera and/or the information of the target object specifically includes any one, or a combination, of the following: first, collecting statistics on the historical motion trajectories of the objects photographed by the target camera and predicting the next-hop camera from those statistics; second, predicting the next-hop camera from the position and direction of movement of the target object when it leaves the first area this time; third, collecting statistics on the historical motion trajectories of the target object leaving the first area and predicting the next-hop camera from the target object's statistical historical trajectories; fourth, predicting the next-hop camera based on the geographic location information of the target camera. This provides several specific ways to predict the next-hop camera.

In a fourth possible implementation of the first aspect, the number of next-hop cameras is at least two; predicting the next-hop cameras includes outputting each predicted next-hop camera together with a confidence level for each next-hop camera, the confidence level characterizing the possibility that that next-hop camera will capture the target object. This solution additionally provides the confidence of the prediction.

In a fifth possible implementation of the first aspect, the method further includes: predicting the possible position of the target object after it enters the area captured by the next-hop camera; and sending a position adjustment signal to at least one next-hop camera, the position adjustment signal instructing the receiving next-hop camera to adjust so that the possible position becomes a visible position. Predicting the possible position of the target object after it enters the area captured by the next-hop camera facilitates camera adjustment, so that the target object is detected as soon as it enters the monitoring range.

In a sixth possible implementation of the first aspect, the method further includes: according to the information of the target camera and the information of the next-hop camera, instructing that the video of the target camera and the video of the next-hop camera be played respectively.
In a second aspect, a monitoring device is provided, including: an acquisition module, configured to acquire a monitoring task that indicates a target object; an analysis module, configured to determine a target camera based on characteristic information of the target object, the target camera being used to monitor a first area in which the target object is located at the current moment; a prediction module, configured to predict a next-hop camera, the area monitored by the next-hop camera being the predicted surveillance area that the target object will enter at the next moment; and a sending module, configured to send the information of the target camera and the information of the next-hop camera.

This solution and its various aspects correspond to the possible implementations of the first aspect and have the corresponding beneficial effects.

In a third aspect, a computer-readable medium is provided. The computer-readable storage medium stores instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.

In a fourth aspect, a computer program product is provided. The computer program product contains instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
In a fifth aspect, a monitoring method is provided, including: triggering a monitoring task, the monitoring task indicating a target object; and receiving information of a target camera and information of a next-hop camera, where the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted surveillance area that the target object will enter at the next moment. This solution introduces a monitoring method that uses the predicted next-hop camera.

In a first possible implementation of the fifth aspect, the method further comprises: obtaining the confidence level of each next-hop camera, the confidence level characterizing the probability that that next-hop camera will capture the target object at the next moment. This further introduces how the confidence is obtained.

In a second possible implementation of the fifth aspect, the method further includes: acquiring, according to the information of the target camera and the information of the at least one next-hop camera, the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera; and playing, on the screen, the multimedia data captured by the target camera and the multimedia data captured by the at least one next-hop camera respectively. This introduces a playback scheme.

In a third possible implementation of the fifth aspect, the method further includes: selecting a display mode for the multimedia data captured by the at least one next-hop camera according to the confidence of each next-hop camera. This introduces a display mode.

In a fourth possible implementation of the fifth aspect, selecting the display mode for the multimedia data captured by the at least one next-hop camera according to the confidence of each next-hop camera specifically includes: playing the multimedia data captured by a high-confidence next-hop camera on a large screen; or playing the multimedia data captured by a low-confidence next-hop camera on a small screen. This introduces a display mode driven by the confidence level.
In a sixth aspect, a monitoring device is provided, including: a task module, configured to trigger a monitoring task, the monitoring task indicating a target object; and a processing module, configured to receive information of the target camera and information of the next-hop camera, where the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted surveillance area that the target object will enter at the next moment. This solution and its various aspects correspond to the possible implementations of the first aspect and have the corresponding beneficial effects.

In a seventh aspect, a computer-readable medium is provided. The computer-readable storage medium stores instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.

In an eighth aspect, a computer program product is provided. The computer program product contains instructions; when a processor executes the instructions, the method of the first aspect and its various possible implementations can be performed, with the corresponding technical effects.
Description of the drawings

In order to explain the technical solutions of the embodiments of the present invention more clearly, the figures used in the description of the embodiments and of the prior art are briefly introduced below. Obviously, the figures described below show only some embodiments of the present invention; a person of ordinary skill in the art can derive other figures from them.

Figure 1 is an architecture diagram of an embodiment of a monitoring system;
Figure 2 is a flowchart of an embodiment of a monitoring method;
Figure 3 is an example of a monitoring range;
Figure 4 is an example of alarm information;
Figure 5 is an example of the confidence levels of different cameras;
Figures 6(a)-6(b) are schematic diagrams of a target person moving between cameras;
Figure 7 is a schematic diagram of a display method;
Figure 8 is a flowchart of an embodiment of a monitoring method;
Figure 9 is an architecture diagram of an embodiment of a monitoring device;
Figure 10 is an architecture diagram of an embodiment of a monitoring device;
Figure 11 is a schematic diagram of an embodiment of a monitoring device.
Detailed description

Figure 1 is an architecture diagram of an embodiment of a monitoring system. The monitoring system includes a data analysis platform 11, a multimedia data center 12 communicating with the data analysis platform 11, and a display platform 13. The display platform 13 communicates with camera 141 (camera A), camera 142 (camera B), camera 143 (camera C), camera 144 (camera D) and camera 145 (camera E); or, the display platform 13 communicates with the multimedia data center.

Figure 2 is a flowchart of an embodiment of a monitoring method.

Step 21: The data analysis platform 11 acquires a monitoring task. The monitoring task includes a monitoring range and a monitoring target list. This step can be executed by the task release module of the data analysis platform 11.

The monitoring task can be triggered on the data analysis platform 11 through user input. For example, the data analysis platform 11 receives the monitoring range and the monitoring list entered by the user through a user interface (UI), and starts the monitoring task after the button that starts monitoring is clicked in the UI. In addition, the monitoring task may also come from another device, such as a mobile terminal, personal computer or server communicating with the data analysis platform, or from the display platform 13.

The camera IDs to be monitored this time can be determined from the monitoring range. For example, the monitoring range may be a monitoring camera list in which the camera IDs to be monitored are directly recorded; or the monitoring range may record a geographic coordinate range from which the camera IDs to be monitored can be determined. Referring to Figure 3, the monitoring range in this embodiment directly describes the IDs of the monitored cameras: camera A, camera B, camera C, camera D and camera E are monitored, and camera F is not monitored.

A camera ID may be the serial number or code of the camera, or the address of the camera (for example, an IP address or MAC address). Any information that can directly or indirectly distinguish a certain camera from other cameras belongs to the camera ID described in the embodiments of the present invention.

The monitoring list carries a monitoring target ID. The monitoring target is a moving object that needs to be tracked by the cameras. When the monitoring target is a person, the monitoring target ID may be the target person's number, passport number, ID card number, or other information that distinguishes the target person from other persons. When the monitoring target is a vehicle, the monitoring target ID may be the license plate of the target vehicle. Optionally, the monitoring list may further carry monitoring target characteristics; the target characteristics are used subsequently to match the target person. According to actual needs, the monitoring target characteristics may also be stored in a local or remote storage device for the data analysis platform 11 to query using the monitoring target ID. Optionally, the server 11 may also store one or more of the name, gender and age of the target person.
Step 22: The data analysis platform 11 matches the target person features against the person features of the persons appearing in the videos shot by the cameras (the video person features). The multimedia data center 12 communicates with multiple cameras and can obtain the IDs of these cameras and the person images they capture.
The data analysis platform 11 obtains the features of the persons in the videos as follows: the data analysis platform 11 sends the monitoring scope to the multimedia data center 12; the multimedia data center 12, according to the camera IDs listed in the monitoring scope, sends the videos shot by these cameras to the data analysis platform 11; and the data analysis platform 11 extracts the video person features from the received videos. This transmission may be real-time. Alternatively, a camera may itself extract the video person features and then send the extracted features to the data analysis platform 11 for matching.
The multimedia data center 12 may be integrated into the data analysis platform 11. In this case, the data analysis platform 11 obtains the videos shot by these cameras (or the features of the persons in the videos) by itself.
A specific way to match the target person features against the video person features is to compare their similarity. When the similarity between the video person features and the target person features reaches a preset threshold, the matched video person is very likely the same person as the target person, and an alarm signal indicating a successful match can be issued. Optionally, the alarm signal may further include information such as the name, gender, and age shown in Fig. 4. Fig. 4 is a schematic diagram of an alarm signal in which the target person is named Jason, he appears in the video of camera A, his gender is male, and his age is 25.
Assuming that the successfully matched video comes from camera 141, the data analysis platform 11 may further send the ID of camera 141 (camera A) to the display platform 13. After receiving the ID of this first camera, the display platform 13 can obtain the video shot by the first camera in real time through camera 141 (or through the multimedia data center 12) and play it.
One way to compare similarity is as follows: the target person features obtained by the data analysis platform 11 from the target list (or from a local or remote storage device) are a 256-dimensional floating-point (float) array; the data analysis platform 11 obtains video frames from the multimedia data center 12, parses the person image in a frame into a 256-dimensional floating-point array (the video person features), compares the two floating-point arrays, and uses the similarity between the two arrays as the similarity between the target person and the video person.
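A minimal sketch of this comparison, assuming cosine similarity over the two 256-dimensional arrays and an illustrative threshold of 0.8 (the embodiment does not fix a particular similarity measure or threshold value):
    import math

    def cosine_similarity(a, b):
        """Cosine similarity between two equal-length float arrays."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    SIMILARITY_THRESHOLD = 0.8  # assumed preset threshold, for illustration only

    def match(target_features, video_features):
        """Return True (raise a match alarm) when the video person matches the target."""
        return cosine_similarity(target_features, video_features) >= SIMILARITY_THRESHOLD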
Step 23: When the match succeeds, the data analysis platform 11 predicts the camera corresponding to the monitoring range (the second monitoring range) that the target person will enter at the next moment after leaving the monitoring range of camera A, where he is located at the current moment (the first monitoring range), and instructs the display platform 13 to fetch video from the predicted camera. When the data analysis platform issues this instruction, the target person has not yet entered the second monitoring range. The prediction may start after the match succeeds, for example: after the match succeeds, while the target person is still in the first monitoring range; or after the match succeeds, once the target person has left the first monitoring range but has not yet entered the monitoring range of the next camera.
The relationship between the current moment and the next moment is as follows. The current moment is the point in time at which the matching is performed. The next moment is the point in time at which the target person, after leaving the current monitoring range, next enters the monitoring range of another camera. Between the current moment and the next moment, the target person does not enter the monitoring range of any other camera. For example, at the time of matching the target person is in the monitoring range of camera A, and he is expected to enter the monitoring range of camera B and then the monitoring range of camera F in sequence. The monitoring area that the target person enters at the next moment then refers to the monitoring range of camera B (not the monitoring range of camera F).
Because of uncertainty about the future, more than one camera may be predicted, and the confidence that the target person enters each camera's monitoring range differs. Referring to Fig. 5, after the target person Jason leaves the monitoring range of camera A, the cameras corresponding to the monitoring ranges he may enter include camera B, camera C, camera D, and camera E, but the probabilities of Jason entering these ranges differ: he is most likely to enter the range of camera C, at 60%, while the probability of entering the range of camera E is only 3%. The data analysis platform 11 can choose in several ways which cameras' videos are sent to the display platform, for example: selecting the cameras whose confidence ranks in the top k and instructing the display platform 13 to obtain the videos of these cameras; or selecting the cameras whose confidence exceeds a confidence threshold and having their videos sent to the display platform.
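Both selection strategies can be sketched as follows; the confidence values for camera B and camera D are illustrative placeholders, since only camera C at 60% and camera E at 3% are given in the Fig. 5 example:
    predicted = {"camera-B": 0.25, "camera-C": 0.60, "camera-D": 0.12, "camera-E": 0.03}

    def top_k(confidences, k):
        """Cameras with the k highest confidences, best first."""
        return sorted(confidences, key=confidences.get, reverse=True)[:k]

    def above_threshold(confidences, threshold):
        """Cameras whose confidence exceeds the threshold."""
        return [cam for cam, c in confidences.items() if c > threshold]

    print(top_k(predicted, 3))              # ['camera-C', 'camera-B', 'camera-D']
    print(above_threshold(predicted, 0.1))  # ['camera-B', 'camera-C', 'camera-D']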
Optionally, the data analysis platform 11 may further predict the likely position at which the target person will enter the monitoring range of the next camera. Referring to Fig. 6(a), the monitoring range of camera C is monitoring range 31, and the monitoring range of camera A is monitoring range 32. From the information obtained from camera A, the data analysis platform 11 learns that the current geographic position of the target person Jason is (75°E, 31.5°N), which lies within monitoring range 32 of camera A, and that camera A observes Jason moving north. Following his trajectory, he will reach a new coordinate (75°E, 32.5°N), which belongs to monitoring range 31 of camera C, specifically on the boundary of monitoring range 31. In other words, Jason is expected to enter the monitoring range of camera C from the south. However, the visible range of the camera at the current moment is range 311, which means that even if Jason enters monitoring range 31, camera C cannot observe him immediately.
To this end, in this embodiment the data analysis platform 11 sends adjustment information to camera C. The adjustment information may include the position of the target person, that is, (75°E, 32.5°N); or the adjustment information may include an adjustment direction (adjust the visible range of camera C toward the south). Referring to Fig. 6(b), after camera C adjusts its visible range according to this adjustment information, (75°E, 32.5°N) is included in its visible range, which means that Jason will be observed immediately after entering monitoring range 31. This embodiment therefore offers better monitoring timeliness. In a similar way, the data analysis platform 11 can generate adjustment information for camera B and camera D and send it to camera B and camera D respectively, so that when Jason enters the monitoring range of camera B or camera D he can be observed as soon as possible.
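The position prediction and adjustment information of Fig. 6 can be sketched roughly as follows, using a simplified flat-coordinate motion model with a hypothetical one-degree step and an assumed camera centre; the embodiment does not prescribe a particular motion model or message format:
    HEADINGS = {"north": (0.0, 1.0), "south": (0.0, -1.0), "east": (1.0, 0.0), "west": (-1.0, 0.0)}

    def predict_entry_point(lon, lat, heading, step_deg=1.0):
        """Extrapolate the target's next position along its current heading."""
        dx, dy = HEADINGS[heading]
        return lon + dx * step_deg, lat + dy * step_deg

    def adjustment_direction(entry_lon, entry_lat, cam_lon, cam_lat):
        """Which side of the camera's range the target is expected to enter from."""
        if abs(entry_lat - cam_lat) >= abs(entry_lon - cam_lon):
            return "south" if entry_lat < cam_lat else "north"
        return "west" if entry_lon < cam_lon else "east"

    # Jason at (75°E, 31.5°N) moving north reaches (75°E, 32.5°N), i.e. the
    # southern boundary of camera C's range (camera centre assumed at 33°N).
    entry = predict_entry_point(75.0, 31.5, "north")
    print(entry)                                                  # (75.0, 32.5)
    print(adjustment_direction(entry[0], entry[1], 75.0, 33.0))   # 'south'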
Several methods for calculating the confidence are provided below.
Method 1: When the target person leaves the first area this time, record the position and the direction of movement at the moment of leaving. Since in most cases a person keeps moving in the original direction, the person's subsequent direction of movement can be estimated, and the position reached by continuing in the original direction (a position within the second area) is taken as the likely position of entry into the next monitoring range. In other words, both the monitoring range that the target person is about to enter and the likely entry position are predicted.
Method 2: Obtain from a database the historical movement trajectories of the target person within the first area, and compute statistics on the positions and directions of movement with which the target person has left the first area. When the target person leaves the first area from the same position this time, the statistically most frequent direction of movement is taken as the target person's most likely direction this time, so that the monitoring range the target person is about to enter and the likely entry position can be predicted. Alternatively, the statistically most frequent next monitoring range is taken as the monitoring range likely to be entered this time, again predicting the monitoring range the target person is about to enter and the likely entry position. Optionally, the statistics are computed per time period. For example, if between 8:00 and 10:00 in the morning the target person moves from the monitoring range of camera A into the monitoring range of camera D in 55% of cases, the following prediction can be made: in the current monitoring session, there is a 55% probability that the target person moves from the monitoring range of camera A into the monitoring range of camera D.
Method 3: Compute statistics on the movement patterns of all persons recorded in the database within the first monitoring range, and obtain the proportions of these persons that entered the monitoring ranges of other cameras, thereby predicting the monitoring range that the target person is about to enter. For example, if 1,000 people left the first monitoring range in the past and 400 of them next entered the monitoring range of camera B, the confidence that the target person enters the monitoring range of camera B this time can be taken as 400/1000 = 40%. This method can also predict the likely position at which the target person enters the next monitoring range. For example, if 300 of those 400 people entered the monitoring range of camera B from due south, then when the target person enters the monitoring range of camera B this time there is a 300/400 = 75% probability that he enters from due south, so the visible range of camera B can be adjusted in advance; after the adjustment, camera B uses the due-south part of its monitoring range as its visible range. This scheme can also compute statistics and make predictions per time period.
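A sketch of the statistics behind method 3, reproducing the 400/1000 and 300/400 figures above; the record format (next camera, entry side) is an assumption made for illustration:
    from collections import Counter

    # Hypothetical historical records: (next_camera, entry_side) for every person
    # previously observed leaving the first monitoring range.
    records = [("camera-B", "south")] * 300 + [("camera-B", "east")] * 100 + [("other", "n/a")] * 600

    def transition_confidence(records, camera):
        """Fraction of departures that next entered the given camera's range,
        plus a count of the sides from which they entered."""
        hits = [r for r in records if r[0] == camera]
        return len(hits) / len(records), Counter(side for _, side in hits)

    confidence, sides = transition_confidence(records, "camera-B")
    print(confidence)                             # 0.4  -> 400/1000
    print(sides["south"] / sum(sides.values()))   # 0.75 -> 300/400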
Method 4: Predict at least one next-hop camera based on the geographic location of the target camera. For example, if three cameras geographically surround camera A, all three may be next-hop cameras. Or, camera A is in the middle of a road, with camera B and camera C at the two ends of the road. If the target person is currently in the monitoring range of camera A, the target object may next enter the monitoring range of camera B or the monitoring range of camera C, and it can be assumed that at the next moment the target person has a 50% probability of entering the monitoring range of camera B and a 50% probability of entering the monitoring range of camera C.
Two, three, or all four of the above methods may also be combined, with different weights assigned to each. For example, if the weights of method 1, method 2, and method 3 are 40%, 40%, and 20% respectively, and the confidences that the target person enters the monitoring range of camera B obtained by method 1, method 2, and method 3 are 30%, 50%, and 40% respectively, then the weighted confidence that the target person enters the monitoring range of camera B is 30% × 40% + 50% × 40% + 40% × 20% = 40%.
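The weighted combination can be sketched as follows, reproducing the numerical example above:
    def combined_confidence(confidences, weights):
        """Weighted sum of per-method confidences for one candidate camera."""
        assert len(confidences) == len(weights)
        return sum(c * w for c, w in zip(confidences, weights))

    # Methods 1-3 give 30%, 50% and 40% for camera B, weighted 40%/40%/20%.
    print(combined_confidence([0.30, 0.50, 0.40], [0.40, 0.40, 0.20]))  # 0.4 (i.e. 40%), up to rounding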
Step 24: The display platform 13 receives an instruction from the data analysis platform. The instruction carries the IDs of the k cameras, and the display platform buffers the real-time video information of these k cameras. After a certain amount of video (or image) data has been buffered, the display platform 13 can play the videos of the k cameras. Whether the display platform 13 obtains the video information of the k cameras is triggered by the prediction result, not by a successful detection of the target person in the second monitoring range. Therefore, before the target person has entered the second monitoring range, the data analysis platform can already instruct the k cameras to send video for the display platform 13 to buffer. This means that when the target person has not yet entered (or has only just entered) the second area, the display platform 13 can already display the target person's movements.
Another display method is as follows: after the target person enters the second monitoring range, camera C sends its video to the data analysis platform 11, and the data analysis platform 11 performs matching using the method described in step 22; only after the match succeeds is the video sent to the display platform 13 for buffering. Compared with this alternative, the display in this embodiment is faster and, under ideal conditions, can achieve seamless switching of the surveillance video.
It should be noted that in this embodiment: (1) after the data analysis platform 11 detects the target person (see the matching scheme of step 22), the data analysis platform 11 instructs the display platform 13 to obtain the real-time video of camera A from camera A or from the multimedia data center 12; (2) when the data analysis platform 11 predicts the monitoring range of the next-hop camera that the target person will enter, the data analysis platform 11 instructs the display platform 13 to obtain the real-time video of at least one next-hop camera. The two instructions in (1) and (2) may be sent at the same time, or either one may be sent first. In addition, instead of instructing the display platform 13 to obtain video from the cameras, the cameras may be instructed to send their video to the display platform 13, or the multimedia data center 12 may be instructed to send the cameras' video to the display platform 13. The camera video displayed by the display platform may be the camera's real-time video or, in some cases, non-real-time video with a certain lag.
Fig. 7 is a schematic diagram of one display method. As shown in the figure, the display platform 13 has four screens: large screen 41, small screen 42, small screen 43, and small screen 44. Before Jason leaves monitoring range 32, the large screen plays the content of camera A, and the display platform 13 buffers in advance the content of the three cameras with the highest confidence (that is, camera B, camera C, and camera D). After Jason leaves monitoring range 32, since camera C has the highest confidence, the large screen switches to playing the video of camera C, while small screen 42 and small screen 43 play the videos of camera B and camera D respectively (consistent with the confidence ranking). Alternatively, screens are chosen in order of confidence: for example, camera C, with the highest confidence, is played on the leftmost screen, and camera E, with the lowest confidence, is played on the rightmost screen.
In addition, the display platform can play the multimedia data of the current camera and the multimedia data of the predicted next-hop camera in different ways. For example, the multimedia data of the current camera is played on a large screen, and the multimedia data of the next-hop camera is played on a small screen.
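One possible expression of this screen assignment, with the current camera on the large screen and the predicted next-hop cameras on the small screens in descending order of confidence; the screen names and the confidence values for cameras B and D are illustrative assumptions:
    def assign_screens(current_camera, next_hop_confidences, screens):
        """Current camera on the first (large) screen, next-hop cameras on the
        remaining screens in descending order of confidence."""
        ranked = sorted(next_hop_confidences, key=next_hop_confidences.get, reverse=True)
        assignment = {screens[0]: current_camera}
        for screen, cam in zip(screens[1:], ranked):
            assignment[screen] = cam
        return assignment

    layout = assign_screens(
        "camera-A",
        {"camera-B": 0.25, "camera-C": 0.60, "camera-D": 0.12},
        ["large-41", "small-42", "small-43", "small-44"],
    )
    print(layout)
    # {'large-41': 'camera-A', 'small-42': 'camera-C', 'small-43': 'camera-B', 'small-44': 'camera-D'}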
Step 25: A camera that receives adjustment information adjusts its own visible range. The adjusted visible range includes the position at which the target person enters the camera's monitoring range. It should be noted that this step is optional and is executed only when the data analysis platform 11 sends adjustment information.
Referring to Fig. 6(b), after camera C adjusts its visible range according to this adjustment information, (75°E, 32.5°N) is included in its visible range, which means that Jason is observed immediately after entering monitoring range 31, giving better monitoring timeliness.
The above process uses video as the example for matching and buffering. The method is not limited to video, however; in other embodiments, media data such as pictures and sound can also be matched and buffered in advance.
A flowchart of another embodiment of the monitoring method is described below with reference to Fig. 8. The principle of this embodiment is the same as that of the embodiment described in Fig. 2; only the differences are described below.
Step 41: The display platform 13 sends a monitoring task to the data analysis platform 11. The monitoring task includes a monitoring scope and a monitoring target list. For the content and function of the monitoring scope and the monitoring target list, refer to step 21. Compared with step 21 of the previous embodiment, the difference in this step is that the monitoring task is triggered by a different party.
Step 42: The data analysis platform 11 matches the target person features against the person features of the persons in the videos shot by the cameras (the video person features), and instructs the display platform to obtain the video of the camera for which the match succeeded. The multimedia data center 12 communicates with multiple cameras and can obtain the IDs of these cameras and the person images they capture. The data analysis platform 11 is, for example, a cloud computing platform. This step is similar to step 22; refer to step 22 for details, which are not repeated here.
Step 43: When the match succeeds, the data analysis platform 11 predicts the camera corresponding to the monitoring range (the second monitoring range) that the target person will enter at the next moment after leaving the monitoring range of camera A, where he is located at the current moment (the first monitoring range), and sends an instruction to instruct the display platform to obtain video from at least one of the predicted cameras. This step is similar to step 23. The difference is that in step 23, k cameras are selected from the predicted cameras according to their confidence, and the IDs of these k cameras are carried in the instruction sent to the display platform 13; in this step, the IDs of all predicted cameras are sent to the display platform 13 through the instruction, and the instruction further carries the confidence corresponding to each camera.
Step 44: The display platform 13 receives the instruction from the data analysis platform. The instruction carries camera IDs and the confidence corresponding to each camera ID. According to the confidence corresponding to each camera ID, the display platform selects k cameras and obtains the videos of these k cameras for buffering and display. Selection methods include, for example, selecting the cameras whose confidence ranks in the top k, or selecting the cameras whose confidence exceeds a confidence threshold.
It can be seen that, compared with the embodiment described in Fig. 2, this embodiment moves the operation of selecting the k cameras from the data analysis platform 11 to the display platform 13. For the rest, refer to step 24, which is not repeated here.
Step 45: Refer to step 25, which is not repeated here.
Fig. 9 provides an embodiment of a monitoring apparatus. The monitoring apparatus 9 may be the data analysis platform 11 or a program running on the data analysis platform 11. The monitoring apparatus 9 includes: an acquisition module 51, configured to acquire a monitoring task, the monitoring task indicating a target object; an analysis module 52, communicating with the acquisition module 51 and configured to determine a target camera according to feature information of the target object, the target camera being used to monitor a first area and the target object being located in the first area at the current moment; a prediction module 53, communicating with the analysis module 52 and configured to predict a next-hop camera, the area monitored by the next-hop camera being the predicted monitoring area that the target object will enter at the next moment; and a sending module 54, communicating with the analysis module 52 and the prediction module 53 and configured to send the information of the target camera and the information of the next-hop camera.
Unless otherwise specified, the monitoring apparatus 9 can execute the method in Fig. 2. Specifically, step 21 can be executed by the acquisition module 51; step 22 can be executed by the analysis module 52; step 23 can be executed by the prediction module 53; and the instruction in step 24 and the adjustment information in step 23 can be sent by the sending module 54. Since the specific functions of the modules have been described in detail in the foregoing method embodiments, they are not repeated here.
Fig. 10 provides another embodiment of a monitoring apparatus. The monitoring apparatus 6 may be the display platform 13 or a program running on the display platform 13. The monitoring apparatus 6 includes: a task module 61, configured to trigger a monitoring task, the monitoring task indicating a target object; and a processing module 62, configured to receive information of a target camera and information of a next-hop camera, where the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted monitoring area that the target object will enter at the next moment.
Optionally, the monitoring apparatus 6 further includes a confidence acquisition module 62, configured to acquire the confidence of each next-hop camera, the confidence being used to characterize the probability that each next-hop camera captures the target object at the next moment.
Optionally, the monitoring apparatus 6 further includes a multimedia acquisition module 63, configured to acquire, according to the information of the target camera and the information of the at least one next-hop camera, the multimedia data shot by the target camera and the multimedia data shot by the at least one next-hop camera respectively; and a playing module 64, configured to play on screens the multimedia data shot by the target camera and the multimedia data shot by the at least one next-hop camera respectively.
Unless otherwise specified, the monitoring apparatus 6 can execute the method performed by the display platform 13 as described in Fig. 8. Specifically: step 41 can be executed by the task module 61; in step 44, the acquisition of the confidence can be executed by the confidence acquisition module 62, the acquisition of the multimedia data can be executed by the multimedia acquisition module 63, and the playing operation can be executed by the playing module 64.
Referring to Fig. 11, the present invention further provides an embodiment of a monitoring device. The monitoring device 7 includes a processor 71 and an interface 72 for executing the method of steps 21-25. The processor 71 is configured to: acquire a monitoring task, the monitoring task indicating a target object; determine a target camera according to feature information of the target object, the target camera being used to monitor a first area and the target object being located in the first area at the current moment; and predict a next-hop camera, the area monitored by the next-hop camera being the predicted monitoring area that the target object will enter at the next moment. The interface is configured to send the information of the target camera and the information of the next-hop camera. The monitoring device may be the data analysis platform 11. For the specific execution process, refer to steps 21-25 and Fig. 2. Optionally, the monitoring device 7 further includes a memory for storing a computer program, and the processor 71 is configured to execute the method of steps 21-25 by running the computer program in the memory.
The present invention further provides an embodiment of a computer-readable medium. The computer-readable storage medium stores instructions that, when executed by a processor of a computing device, are used to perform the method of steps 21-25.
The present invention further provides a computer program product. The computer program product contains instructions that, when executed by a processor of a computing device, are used to perform the method of steps 21-25.
The present invention further provides an embodiment of a display device. The display device (for example, the display platform 13) includes a processor and an interface, and the processor is configured to execute the method of steps 41-45. Optionally, the display device further includes a memory for storing a computer program, and the processor of the display device is configured to execute the method of steps 41-45 by running the computer program in the memory.
The present invention further provides an embodiment of a computer-readable medium. The computer-readable storage medium stores instructions that, when executed by a processor of a computing device, are used to perform the method of steps 41-45.
The present invention further provides a computer program product. The computer program product contains instructions that, when executed by a processor of a computing device, are used to perform the method of steps 41-45.
The above embodiments are intended only to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (33)

1. A monitoring method, characterized in that it comprises:
    acquiring a monitoring task, the monitoring task indicating a target object;
    determining a target camera according to feature information of the target object, the target camera being used to monitor a first area and the target object being located in the first area at the current moment;
    predicting a next-hop camera, the area monitored by the next-hop camera being the predicted monitoring area that the target object will enter at the next moment; and
    sending information of the target camera and information of the next-hop camera.
2. The monitoring method according to claim 1, characterized in that the predicting a next-hop camera comprises:
    predicting the next-hop camera according to the information of the target camera and/or information of the target object.
3. The monitoring method according to claim 2, characterized in that the predicting the next-hop camera according to the information of the target camera and/or the information of the target object specifically comprises:
    predicting a geographic location of a next-hop area according to the information of the target camera and/or the information of the target object; and
    outputting a list of next-hop cameras according to the geographic location of the next-hop area.
4. The monitoring method according to claim 2 or 3, characterized in that the predicting the next-hop camera according to the information of the target camera and/or the information of the target object specifically comprises any one or a combination of the following:
    first: computing statistics on historical movement trajectories of objects shot by the target camera, and predicting the next-hop camera according to the statistical historical movement trajectories of the objects;
    second: predicting the next-hop camera according to the position and direction of movement of the target object when it leaves the first area this time;
    third: computing statistics on historical movement trajectories of the target object leaving the first area, and predicting the next-hop camera according to the statistical historical movement trajectories of the target object;
    fourth: predicting the next-hop camera according to geographic location information of the target camera.
5. The monitoring method according to any one of claims 1-4, characterized in that the next-hop cameras include at least two cameras, and the predicting a next-hop camera comprises:
    outputting each predicted next-hop camera and a confidence of each next-hop camera, the confidence being used to characterize the probability that each next-hop camera captures the target object.
6. The monitoring method according to any one of claims 1-5, characterized in that the method further comprises:
    predicting a possible position of the target object after it enters the area shot by the next-hop camera; and
    sending a position adjustment signal to at least one of the next-hop cameras, the position adjustment signal being used to instruct the next-hop camera that receives the adjustment signal to adjust the possible position to a visible position.
7. The monitoring method according to any one of claims 1-5, characterized in that the method further comprises:
    instructing, according to the information of the target camera and the information of the next-hop camera, that the video of the target camera and the video of the next-hop camera be played respectively.
8. A monitoring method, characterized in that it comprises:
    triggering a monitoring task, the monitoring task indicating a target object; and
    receiving information of a target camera and information of a next-hop camera, wherein the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted monitoring area that the target object will enter at the next moment.
9. The monitoring method according to claim 8, characterized in that it further comprises:
    acquiring a confidence of each next-hop camera, the confidence being used to characterize the probability that each next-hop camera captures the target object at the next moment.
10. The monitoring method according to claim 8 or 9, characterized in that it further comprises:
    acquiring, according to the information of the target camera and the information of the at least one next-hop camera, multimedia data shot by the target camera and multimedia data shot by the at least one next-hop camera respectively; and
    playing on a screen the multimedia data shot by the target camera and the multimedia data shot by the at least one next-hop camera respectively.
11. The monitoring method according to claim 10, characterized in that it further comprises:
    selecting, according to the confidence of each next-hop camera, a display mode for the multimedia data shot by the at least one next-hop camera.
12. The monitoring method according to claim 11, characterized in that the selecting, according to the confidence of each next-hop camera, a display mode for the multimedia data shot by the at least one next-hop camera specifically comprises:
    according to the confidence of each next-hop camera, playing the multimedia data shot by a next-hop camera with high confidence on a large screen; or
    according to the confidence of each next-hop camera, playing the multimedia data shot by a next-hop camera with low confidence on a small screen.
13. A monitoring apparatus, characterized in that it comprises:
    an acquisition module, configured to acquire a monitoring task, the monitoring task indicating a target object;
    an analysis module, configured to determine a target camera according to feature information of the target object, the target camera being used to monitor a first area and the target object being located in the first area at the current moment;
    a prediction module, configured to predict a next-hop camera, the area monitored by the next-hop camera being the predicted monitoring area that the target object will enter at the next moment; and
    a sending module, configured to send information of the target camera and information of the next-hop camera.
14. The monitoring apparatus according to claim 13, characterized in that the prediction module is specifically configured to:
    predict the next-hop camera according to the information of the target camera and/or information of the target object.
15. The monitoring apparatus according to claim 14, characterized in that the prediction module is specifically configured to:
    predict a geographic location of a next-hop area according to the information of the target camera and/or the information of the target object; and
    output a list of next-hop cameras according to the geographic location of the next-hop area.
16. The monitoring apparatus according to claim 14 or 15, characterized in that the prediction module is specifically configured to perform any one or a combination of the following:
    first: computing statistics on historical movement trajectories of objects shot by the target camera, and predicting the next-hop camera according to the statistical historical movement trajectories of the objects;
    second: predicting the next-hop camera according to the position and direction of movement of the target object when it leaves the first area this time;
    third: computing statistics on historical movement trajectories of the target object leaving the first area, and predicting the next-hop camera according to the statistical historical movement trajectories of the target object;
    fourth: predicting the next-hop camera according to geographic location information of the target camera.
17. The monitoring apparatus according to any one of claims 13-16, characterized in that the number of next-hop cameras is at least two, and the prediction module is further configured to:
    output each predicted next-hop camera and a confidence of each next-hop camera, the confidence being used to characterize the probability that each next-hop camera captures the target object.
18. The monitoring apparatus according to any one of claims 13-17, characterized in that the prediction module is further configured to:
    predict a possible position of the target object after it enters the area shot by the next-hop camera; and
    send a position adjustment signal to at least one of the next-hop cameras, the position adjustment signal being used to instruct the next-hop camera that receives the adjustment signal to adjust the possible position to a visible position.
19. The monitoring apparatus according to any one of claims 13-17, characterized in that the apparatus is further configured to:
    instruct, according to the information of the target camera and the information of the next-hop camera, that the video of the target camera and the video of the next-hop camera be played respectively.
20. A monitoring apparatus, characterized in that it comprises:
    a task module, configured to trigger a monitoring task, the monitoring task indicating a target object; and
    a processing module, configured to receive information of a target camera and information of a next-hop camera, wherein the target camera is used to monitor a first area, the target object is located in the first area at the current moment, and the area monitored by the at least one next-hop camera is the predicted monitoring area that the target object will enter at the next moment.
21. The monitoring apparatus according to claim 20, characterized in that the monitoring apparatus further comprises a confidence acquisition module:
    configured to acquire a confidence of each next-hop camera, the confidence being used to characterize the probability that each next-hop camera captures the target object at the next moment.
22. The monitoring apparatus according to claim 20 or 21, characterized in that it further comprises:
    a multimedia acquisition module, configured to acquire, according to the information of the target camera and the information of the at least one next-hop camera, multimedia data shot by the target camera and multimedia data shot by the at least one next-hop camera respectively; and
    a playing module, configured to play on a screen the multimedia data shot by the target camera and the multimedia data shot by the at least one next-hop camera respectively.
23. The monitoring apparatus according to claim 22, characterized in that the playing module is specifically configured to:
    select, according to the confidence of each next-hop camera, a display mode for the multimedia data shot by the at least one next-hop camera.
24. The monitoring apparatus according to claim 23, characterized in that the playing module is specifically configured to:
    according to the confidence of each next-hop camera, play the multimedia data shot by a next-hop camera with high confidence on a large screen; or
    according to the confidence of each next-hop camera, play the multimedia data shot by a next-hop camera with low confidence on a small screen.
25. A monitoring device, comprising a processor and an interface, wherein the processor is configured to:
    acquire a monitoring task, the monitoring task indicating a target object;
    determine a target camera according to feature information of the target object, the target camera being used to monitor a first area and the target object being located in the first area at the current moment; and
    predict a next-hop camera, the area monitored by the next-hop camera being the predicted monitoring area that the target object will enter at the next moment;
    and the interface is configured to:
    send information of the target camera and information of the next-hop camera.
26. The monitoring device according to claim 25, characterized in that the processor is configured to:
    predict the next-hop camera according to the information of the target camera and/or information of the target object.
27. The monitoring device according to claim 26, characterized in that the processor is configured to:
    predict a geographic location of a next-hop area according to the information of the target camera and/or the information of the target object; and
    output a list of next-hop cameras according to the geographic location of the next-hop area.
28. The monitoring device according to claim 26 or 27, characterized in that the processor predicting the next-hop camera according to the information of the target camera and/or the information of the target object specifically comprises any one or a combination of the following:
    first: computing statistics on historical movement trajectories of objects shot by the target camera, and predicting the next-hop camera according to the statistical historical movement trajectories of the objects;
    second: predicting the next-hop camera according to the position and direction of movement of the target object when it leaves the first area this time;
    third: computing statistics on historical movement trajectories of the target object leaving the first area, and predicting the next-hop camera according to the statistical historical movement trajectories of the target object;
    fourth: predicting the next-hop camera according to geographic location information of the target camera.
29. The monitoring device according to any one of claims 20-28, characterized in that the next-hop cameras include at least two cameras, and the processor predicting a next-hop camera comprises:
    outputting each predicted next-hop camera and a confidence of each next-hop camera, the confidence being used to characterize the probability that each next-hop camera captures the target object.
30. The monitoring device according to any one of claims 20-29, characterized in that the processor is further configured to:
    predict a possible position of the target object after it enters the area shot by the next-hop camera; and
    send a position adjustment signal to at least one of the next-hop cameras, the position adjustment signal being used to instruct the next-hop camera that receives the adjustment signal to adjust the possible position to a visible position.
31. The monitoring device according to any one of claims 20-29, characterized in that the processor is further configured to:
    instruct, according to the information of the target camera and the information of the next-hop camera, that the video of the target camera and the video of the next-hop camera be played respectively.
32. A computer-readable medium, characterized in that the computer-readable storage medium stores instructions that, when executed by a processor of a computing device, are used to perform the method of claims 1-7.
33. A computer program product, characterized in that the computer program product contains instructions that, when executed by a processor of a computing device, are used to perform the method of claims 1-7.
PCT/CN2020/097694 2019-10-10 2020-06-23 Monitoring method, apparatus and device WO2021068553A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910959793 2019-10-10
CN201910959793.2 2019-10-10
CN201911026899.3A CN112653832A (en) 2019-10-10 2019-10-26 Monitoring method, device and equipment
CN201911026899.3 2019-10-26

Publications (1)

Publication Number Publication Date
WO2021068553A1 (en)

Family

ID=75343372

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097694 WO2021068553A1 (en) 2019-10-10 2020-06-23 Monitoring method, apparatus and device

Country Status (2)

Country Link
CN (1) CN112653832A (en)
WO (1) WO2021068553A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992860B (en) * 2021-12-28 2022-04-19 北京国电通网络技术有限公司 Behavior recognition method and device based on cloud edge cooperation, electronic equipment and medium
CN114500952A (en) * 2022-02-14 2022-05-13 深圳市中壬速客信息技术有限公司 Control method, device and equipment for dynamic monitoring of park and computer storage medium

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4709101B2 (en) * 2006-09-01 2011-06-22 キヤノン株式会社 Automatic tracking camera device
CN201248107Y (en) * 2008-04-30 2009-05-27 深圳市飞瑞斯科技有限公司 Master-slave camera intelligent video monitoring system
CN101458434B (en) * 2009-01-08 2010-09-08 浙江大学 System for precision measuring and predicting table tennis track and system operation method
CN101572804B (en) * 2009-03-30 2012-03-21 浙江大学 Multi-camera intelligent control method and device
CN102176246A (en) * 2011-01-30 2011-09-07 西安理工大学 Camera relay relationship determining method of multi-camera target relay tracking system
CN103152554B (en) * 2013-03-08 2017-02-08 浙江宇视科技有限公司 Intelligent moving target tracking device
CN103558856A (en) * 2013-11-21 2014-02-05 东南大学 Service mobile robot navigation method in dynamic environment
CN103763513A (en) * 2013-12-09 2014-04-30 北京计算机技术及应用研究所 Distributed tracking and monitoring method and system
CN104660998B (en) * 2015-02-16 2018-08-07 阔地教育科技有限公司 A kind of relay tracking method and system
CN104965964B (en) * 2015-08-06 2018-01-30 山东建筑大学 A kind of building personnel's distributed model method for building up based on monitor video analysis
CN105718750B (en) * 2016-01-29 2018-08-17 长沙理工大学 A kind of prediction technique and system of vehicle driving trace
CN106709436B (en) * 2016-12-08 2020-04-24 华中师范大学 Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system
CN108961756A (en) * 2018-07-26 2018-12-07 深圳市赛亿科技开发有限公司 A kind of automatic real-time traffic vehicle flowrate, people flow rate statistical method and system
CN108965826B (en) * 2018-08-21 2021-01-12 北京旷视科技有限公司 Monitoring method, monitoring device, processing equipment and storage medium
CN114282732A (en) * 2021-12-29 2022-04-05 重庆紫光华山智安科技有限公司 Regional pedestrian flow prediction method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010268186A (en) * 2009-05-14 2010-11-25 Panasonic Corp Monitoring image display device
CN104284146A (en) * 2013-07-11 2015-01-14 松下电器产业株式会社 Tracking assistance device, tracking assistance system and tracking assistance method
CN105245850A (en) * 2015-10-27 2016-01-13 太原市公安局 Method, device and system for tracking target across surveillance cameras
CN109522814A (en) * 2018-10-25 2019-03-26 清华大学 A kind of target tracking method and device based on video data
CN109905679A (en) * 2019-04-09 2019-06-18 梅州讯联科技发展有限公司 Monitoring method, device and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112305534A (en) * 2019-07-26 2021-02-02 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and storage medium
CN112305534B (en) * 2019-07-26 2024-03-19 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and storage medium
CN115426450A (en) * 2022-08-09 2022-12-02 浙江大华技术股份有限公司 Camera parameter adjusting method and device, computer equipment and readable storage medium
CN115426450B (en) * 2022-08-09 2024-04-16 浙江大华技术股份有限公司 Camera parameter adjustment method, device, computer equipment and readable storage medium

Also Published As

Publication number Publication date
CN112653832A (en) 2021-04-13

Similar Documents

Publication Publication Date Title
WO2021068553A1 (en) Monitoring method, apparatus and device
JP6696615B2 (en) Monitoring system, monitoring method, and recording medium storing monitoring program
JP6573346B1 (en) Person search system and person search method
CN108073577A (en) A kind of alarm method and system based on recognition of face
WO2020057346A1 (en) Video monitoring method and apparatus, monitoring server and video monitoring system
Wheeler et al. Face recognition at a distance system for surveillance applications
WO2020057355A1 (en) Three-dimensional modeling method and device
US11640726B2 (en) Person monitoring system and person monitoring method
CN110853295A (en) High-altitude parabolic early warning method and device
CN111222373B (en) Personnel behavior analysis method and device and electronic equipment
US20150338497A1 (en) Target tracking device using handover between cameras and method thereof
JPWO2018198373A1 (en) Video surveillance system
JP2007219948A (en) User abnormality detection equipment and user abnormality detection method
JP6867056B2 (en) Information processing equipment, control methods, and programs
CN109410278B (en) Target positioning method, device and system
WO2021095351A1 (en) Monitoring device, monitoring method, and program
CN110633648A (en) Face recognition method and system in natural walking state
CN103187083B (en) A kind of storage means based on time domain video fusion and system thereof
US20190027004A1 (en) Method for performing multi-camera automatic patrol control with aid of statistics data in a surveillance system, and associated apparatus
KR20170096838A (en) Apparatus and method for searching cctv image
JP2006093955A (en) Video processing apparatus
JP2019009529A (en) Face authentication apparatus, person tracking system, person tracking method, and person tracking program
JP6941457B2 (en) Monitoring system
JP6435640B2 (en) Congestion degree estimation system
JP7392738B2 (en) Display system, display processing device, display processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20875631

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20875631

Country of ref document: EP

Kind code of ref document: A1