CN114549580A - Method and device for realizing indoor monitoring, server and client - Google Patents


Info

Publication number
CN114549580A
CN114549580A (application CN202011352293.1A)
Authority
CN
China
Prior art keywords
monitoring
moving target
camera device
camera
monitoring moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011352293.1A
Other languages
Chinese (zh)
Inventor
郑卫东
黎书胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202011352293.1A
Publication of CN114549580A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)

Abstract

The application discloses a method and an apparatus for implementing indoor monitoring, together with a server and a client. The embodiments enable real-time monitoring of moving objects in an offline indoor retail store, including real-time monitoring of a moving object's goods-taking behavior and real-time display of the resulting order information, payment information, and the like.

Description

Method and device for realizing indoor monitoring, server and client
Technical Field
The present application relates to, but is not limited to, positioning technologies, and in particular to a method and an apparatus for implementing indoor monitoring, a server, and a client.
Background
To support the normal operation of offline indoor retail venues such as supermarkets and convenience stores, multiple cameras are usually installed indoors to monitor various kinds of information, and cross-camera tracking is desired, that is, finding the same person under different cameras. Although the related art can track the same person across multiple cameras, it cannot do so in real time, cannot reconstruct the full path of a moving object, and therefore cannot achieve real-time monitoring.
That is, the related art provides no way to track moving objects such as people and goods in real time in an offline retail scene.
Disclosure of Invention
The application provides a method and a device for realizing indoor monitoring, a server side and a client side, which can realize real-time monitoring of an offline indoor retail shop.
An embodiment of the present application provides a method for implementing indoor monitoring, which includes the following steps:
the server determines, according to moving-object information captured by a plurality of camera devices, a first camera device that captures the monitored moving target;
sends the code stream of the first camera device to a client;
and calculates, according to the calibration information of the camera devices and the moving direction of the monitored moving target, a second camera device that may capture the monitored moving target, and sends information of the second camera device to the client.
In one illustrative example, determining the first camera device that captures the monitored moving target includes:
the server performs target detection on each frame of each camera device and extracts a first feature vector for each detected moving object;
detects the moving target in the obtained monitored-moving-target information and extracts a second feature vector;
compares the second feature vector against the feature vectors detected by all camera devices, and finds a preset number of camera devices whose first feature vectors have a similarity above a preset threshold;
and takes the camera device in which the monitored moving target has the highest similarity as the first camera device.
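The matching procedure above can be sketched as follows. This is a minimal illustration under assumptions the patent does not fix: cosine similarity is used as the vector-comparison metric, and the threshold and candidate count are placeholders.

```python
import numpy as np

def cosine_similarity(a, b):
    # standard cosine similarity between two feature vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_first_camera(query_vec, camera_features, threshold=0.8, top_n=3):
    """camera_features: {camera_id: [first_feature_vec, ...]} from per-frame detection.
    Returns (camera with the highest-similarity match, candidates above threshold)."""
    scored = []
    for cam_id, vecs in camera_features.items():
        if not vecs:
            continue
        # best similarity between the query (second feature vector) and this camera's detections
        sim = max(cosine_similarity(query_vec, v) for v in vecs)
        if sim > threshold:
            scored.append((cam_id, sim))
    scored.sort(key=lambda t: t[1], reverse=True)
    candidates = scored[:top_n]          # preset number of candidate cameras
    first_camera = candidates[0][0] if candidates else None
    return first_camera, candidates
```

The same comparison can be reused later when the target leaves the first camera and must be re-acquired.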
In an exemplary embodiment, when the monitored moving target moves out of the view of the first camera device, the method further includes:
calculating, from the foot-point motion trajectory of the monitored moving target's image, the camera device most likely to capture the target, and re-determining the camera device that captures the monitored moving target as the first camera device.
In one illustrative example, calculating the second camera device that may capture the monitored moving target includes:
after the first camera device is determined, calculating, from the foot-point motion trajectory of the monitored moving target's image, the second camera devices most likely to capture the target;
and sending information of all candidate second camera devices to the client.
In an exemplary embodiment, calculating the second camera device that may capture the monitored moving target further includes:
calculating, through field-of-view calibration of the camera devices, the shooting range of each camera device on the actual site, and converting that range into a map-visible range;
and calculating the second camera devices most likely to capture the monitored moving target from the foot-point motion trajectory of its image includes:
when the monitored moving target appears in a certain camera device, calculating in real time the map coordinates of the foot point of the target's image, and determining from those coordinates whether the target is within the map-visible range of other cameras;
if the monitored moving target is within the map-visible range of another camera device, calculating the shortest distance between the target and the boundary of that visible range (the larger this distance, the nearer the target is to the field-of-view centre); if the target is outside the map-visible range of another camera device, calculating the shortest distance between the target and the boundary of that visible range (the smaller this distance, the sooner the target can enter the field of view);
and selecting, as the second camera devices, a preset number of camera devices whose field-of-view centres the target is closest to, or whose fields of view the target is most likely to enter.
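The selection logic above can be sketched with axis-aligned rectangles standing in for the map-visible ranges (an assumption for illustration; the patent does not specify the range geometry):

```python
import math

def dist_to_rect_boundary(pt, rect):
    """Shortest distance from pt (x, y) to the boundary of an axis-aligned
    rectangle (xmin, ymin, xmax, ymax); valid for points inside or outside."""
    x, y = pt
    xmin, ymin, xmax, ymax = rect
    if xmin <= x <= xmax and ymin <= y <= ymax:
        # inside: distance to the nearest edge
        return min(x - xmin, xmax - x, y - ymin, ymax - y)
    dx = max(xmin - x, 0.0, x - xmax)
    dy = max(ymin - y, 0.0, y - ymax)
    return math.hypot(dx, dy)

def pick_second_cameras(foot_pt, ranges, current_cam, top_n=2):
    """ranges: {camera_id: rect}. Rank the other cameras: inside a range, a larger
    boundary distance means nearer the field-of-view centre; outside, a smaller
    distance means the target can enter that field of view sooner."""
    scored = []
    for cam_id, rect in ranges.items():
        if cam_id == current_cam:
            continue
        x, y = foot_pt
        inside = rect[0] <= x <= rect[2] and rect[1] <= y <= rect[3]
        d = dist_to_rect_boundary(foot_pt, rect)
        scored.append((cam_id, d if inside else -d))
    scored.sort(key=lambda t: t[1], reverse=True)
    return [cam_id for cam_id, _ in scored[:top_n]]
```

In practice the visible ranges would be arbitrary polygons derived from calibration, and a geometry library would replace the rectangle helpers.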
In an illustrative example, when the monitored moving target performs self-service checkout, the method further includes:
the server calculating, from the calibration information and the position of the monitored moving target, the cash register at which the target is located, and sending information about that cash register to the client.
The embodiment of the present application further provides a method for implementing indoor monitoring, including:
initiating a tracking request to a server, wherein the tracking request contains monitored-moving-target information;
receiving the code stream of a first camera device and the information of a second camera device from the server;
and monitoring the monitored moving target according to the code stream of the first camera device and the real-time code stream of the second camera device.
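The client-side steps above can be sketched as a small state holder (names are illustrative, not from the patent): the client keeps the primary (first) camera whose code stream is displayed and the auxiliary (second) cameras whose streams are pre-opened for hand-over.

```python
class MonitorClient:
    """Minimal sketch of client-side tracking state."""

    def __init__(self):
        self.first_camera = None      # camera whose code stream is displayed
        self.second_cameras = []      # auxiliary cameras received from the server

    def on_first_camera_stream(self, camera_id):
        # server pushed the first camera's code stream
        self.first_camera = camera_id

    def on_second_camera_info(self, camera_ids):
        # server pushed the candidate second cameras
        self.second_cameras = list(camera_ids)

    def handover(self, camera_id):
        # the target left the first camera and reappeared in an auxiliary one
        if camera_id in self.second_cameras:
            self.second_cameras.remove(camera_id)
            self.first_camera = camera_id
        return self.first_camera
```

A real client would additionally open the NVR streams and render the target's map position; this sketch only captures the camera bookkeeping.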
In one illustrative example, further comprising:
searching for the monitored moving target in the second camera device and, if it cannot be found there, searching for it in the other camera devices across the whole site.
In one illustrative example, further comprising:
receiving cash-register information from the server, connecting directly to the cash register's local area network, and obtaining and displaying the monitored moving target's operation data in real time.
In an illustrative example, further comprising:
pulling, through the server, the data of the camera device monitoring the cash register, for display in a small window.
An embodiment of the present application further provides a server, including a first processing module and a transceiver module, wherein:
the first processing module is configured to determine, according to moving-object information captured by a plurality of camera devices, a first camera device that captures the monitored moving target, and to calculate, according to the calibration information of the camera devices and the moving direction of the monitored moving target, the second camera devices that may capture the target, where the second camera devices include one or more camera devices;
the transceiver module is configured to send the code stream of the first camera device to the client, and to send information of the second camera devices to the client.
In an exemplary embodiment, determining the first camera device from the moving-object information captured by the plurality of camera devices in the first processing module includes:
performing target detection on each frame of each camera device and extracting a first feature vector for each detected moving object; detecting the moving target in the obtained monitored-moving-target information and extracting a second feature vector; comparing the second feature vector against the feature vectors detected by all camera devices, and finding a preset number of camera devices whose first feature vectors have a similarity above a preset threshold; and taking the camera device in which the monitored moving target has the highest similarity as the first camera device.
In an exemplary embodiment, when the monitored moving target moves out of the view of the first camera device, the first processing module is further configured to:
calculate, from the foot-point motion trajectory of the target's image, the camera device most likely to capture the monitored moving target, and re-determine the camera device that captures the target as the first camera device.
In one illustrative example, calculating the second camera devices that may capture the monitored moving target in the first processing module includes:
calculating, from the foot-point motion trajectory of the target's image, the second camera devices most likely to capture the monitored moving target, and sending information of all calculated candidate second camera devices to the client.
In one illustrative example, the first processing module is further configured to: calculate, through field-of-view calibration of the camera devices, the shooting range of each camera device on the actual site, and convert that range into a map-visible range;
calculating the second camera devices most likely to capture the monitored moving target from the foot-point motion trajectory of its image in the first processing module includes:
when the monitored moving target appears in a certain camera device, calculating in real time the map coordinates of the foot point of the target's image, and determining from those coordinates whether the target is within the map-visible range of other cameras; if the target is within the map-visible range of another camera device, calculating the shortest distance between the target and the boundary of that visible range; if the target is outside the map-visible range of another camera device, calculating the shortest distance between the target and the boundary of that visible range; and selecting, as the second camera devices, a preset number of camera devices whose field-of-view centres the target is closest to, or whose fields of view the target is most likely to enter.
In an illustrative example, when the monitored moving target performs self-service checkout, the first processing module is further configured to:
calculate, from the calibration information and the position of the monitored moving target, the cash register at which the target is currently located, and send information about that cash register to the client.
An embodiment of the present application further provides a client, including an interface module and a second processing module, wherein:
the interface module is configured to initiate a tracking request to the server, wherein the tracking request contains monitored-moving-target information, and to receive the code stream of the first camera device and the information of the second camera devices from the server, where the second camera devices include one or more camera devices;
the second processing module is configured to monitor the monitored moving target according to the code stream of the first camera device and the real-time code streams of the second camera devices.
In one illustrative example, the second processing module is further configured to:
if the monitored moving target cannot be found in the second camera devices, searching for it in the other camera devices across the whole site.
In one illustrative example, the second processing module is further configured to:
receiving cash-register information from the server, connecting directly to the cash register's local area network, and obtaining and displaying the monitored moving target's operation data in real time.
In one illustrative example, the second processing module is further configured to: pull, through the server, the data of the camera device monitoring the cash register, for display in a small window.
The embodiments of the present application achieve real-time monitoring of moving objects in an offline indoor retail store; further, real-time monitoring of a moving object's goods-taking behavior; and further, real-time display of the generated order information, payment information, and the like.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the claimed subject matter and are incorporated in and constitute a part of this specification, illustrate embodiments of the subject matter and together with the description serve to explain the principles of the subject matter and not to limit the subject matter.
Fig. 1 is a schematic flow chart of a method for implementing indoor monitoring in an embodiment of the present application;
FIG. 2(a) is an exemplary diagram of a lost person tracking application scenario in an embodiment of the present application;
fig. 2(b) is an exemplary diagram of an application scenario for tracking early warning personnel in the embodiment of the present application;
fig. 3 is a schematic structural diagram of a server in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a client in the embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
In one exemplary configuration of the present application, a computing device includes one or more processors (CPUs), input/output interfaces, a network interface, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
To maintain the normal operation of offline indoor retail stores such as supermarkets and convenience stores (for example, to prevent theft), tracking is today mainly achieved by having store staff follow suspects one-to-one on site. This not only consumes a great deal of labor cost, it also risks admitting more dangerous persons into the venue; moreover, too many staff trailing people on site degrades the shopping experience of other users and hinders improvement of the retail experience.
Fig. 1 is a schematic flowchart of a method for implementing indoor monitoring in an embodiment of the present application, as shown in fig. 1, including:
step 100: the server determines a first camera device for capturing and monitoring a moving target according to the moving object information captured by the plurality of camera devices.
In an exemplary embodiment, the server may be disposed indoors, such as an offline indoor retail store, and can timely obtain the activity information of the moving object in the offline indoor retail store.
In an exemplary embodiment, before the first camera device that captures the monitored moving target is determined, this step may further include:
the server receiving a tracking request from the client, wherein the tracking request contains monitored-moving-target information, such as face information (e.g., face pictures) and/or human-body information (e.g., human-body pictures). Here, the client is a dedicated monitoring terminal or mobile terminal used by professionals such as security staff or police to monitor indoor activity.
In one illustrative example, step 100 may comprise:
the server performs target detection on each frame of each camera device and extracts a first feature vector for each detected moving object, such as a human body;
detects the moving target in the obtained monitored-moving-target information, such as a human-body image, and extracts a second feature vector;
compares the second feature vector against the feature vectors detected by all camera devices, and finds a preset number of camera devices whose first feature vectors have a similarity above a preset threshold;
and takes the camera device in which the monitored moving target has the highest similarity as the first camera device.
It should be noted that multiple moving objects may be captured by the on-site camera devices, and after feature comparison the features of several moving objects may exceed the preset similarity threshold, so camera devices corresponding to multiple first feature vectors are likely to be found.
The target detection may include, for example, a face detection algorithm and a human-body detection algorithm, and is mainly used to extract face features and face attribute features (such as whether glasses are worn, hair length, hair color, and skin color) as well as human-body features and body attribute features (such as whether a skirt is worn, clothes color, trousers color, shoe color, and whether a bag is carried).
In an exemplary example, the method for implementing indoor monitoring may further include:
the server pulls real-time code streams from the plurality of camera devices installed indoors;
identifies moving objects from the obtained code streams, for example by face recognition, human-body recognition, ReID, and so on;
and uploads the identified moving-object information, such as face information and human-body information, to a cloud server.
In this way, the cloud server can determine whether the obtained moving-object information exists in a preset blacklist; if a moving object corresponding to that information can be matched in the blacklist, it is determined that a risk moving object has entered the monitored room. The cloud server can then send an alert to the client in real time, for example through a pop-up window, so that the client indicates that a risk moving object is in the monitored room. In one embodiment, the client can send a tracking request to the server that takes the risk moving object as the monitored moving target, so as to monitor the map position of the risk moving object in real time.
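A minimal sketch of the blacklist check the cloud server performs (all names and data shapes are illustrative; the patent does not specify formats):

```python
def check_blacklist(detected_id, blacklist, member_orders):
    """If the detected moving-object identifier is in the preset blacklist,
    build a real-time risk alert for the client, attaching any historical
    risk orders associated with that identifier (member information)."""
    if detected_id not in blacklist:
        return None  # not a risk moving object; no alert
    return {
        "type": "risk_alert",
        "object_id": detected_id,
        "risk_orders": member_orders.get(detected_id, []),
    }
```

In a deployment the identifier would come from face/body recognition rather than being passed in directly, and the alert would be pushed to the client as a pop-up.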
In an exemplary embodiment, when the cloud server finds that the risk moving object has historical risk orders, the client can obtain those risk orders through the cloud server and display them. Here, the risk moving object is associated on the cloud server with a moving-object identifier (also called member information), and the server can return the member information to the client so that the client obtains that member's past order information.
In the embodiment of the present application, the camera device may be a conventional camera. To track people, goods, and behaviors in the offline indoor retail store, the installation pose of each camera can be adjusted appropriately, along with the relation information between the goods or shelves and the camera. How the adjustment is performed does not limit the scope of this application; it may be carried out with a calibration tool. In one embodiment, to ensure field-of-view coverage of the whole site and avoid excessive blind spots and loss of the moving object, the adjustment may include, but is not limited to, the camera's installation pose, its monitored field-of-view range, and the like. In addition, because excessive color differences between images yield differing feature values (the same color can differ greatly under different configurations), the adjustment may also include the camera's image color, such as saturation and brightness, to ensure the success rate of feature matching. In one embodiment, the calibration tool calibrates the camera's field-of-view range; its main function is to determine the mapping between the image coordinate system of the pictures the camera takes and the map coordinate system, that is, the on-site map coordinate system drawn with a business drawing tool such as CAD.
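The image-to-map mapping that this calibration establishes can be sketched as a planar homography applied to the target's foot point. Treating the mapping as a 3x3 homography is an assumption for illustration: the passage only says the calibration tool determines the correspondence between the two coordinate systems.

```python
import numpy as np

def image_to_map(h_matrix, foot_px):
    """Map an image-plane foot point (u, v) to site-map coordinates using a
    3x3 homography obtained from calibration (points on the floor plane map
    between the two coordinate systems up to a projective transform)."""
    u, v = foot_px
    p = h_matrix @ np.array([u, v, 1.0])  # homogeneous coordinates
    return (p[0] / p[2], p[1] / p[2])     # perspective divide
```

In practice the homography would be estimated from point correspondences between camera pictures and the CAD site map, e.g. with a least-squares fit.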
In an exemplary embodiment, when the monitored moving target moves out of the view of the first camera device, step 100 further includes:
calculating, from the foot-point motion trajectory of the target's image, the camera device most likely to capture the monitored moving target, and re-determining the camera device that captures the target as the first camera device.
In an exemplary example, if the monitored moving target is not found, it is searched for among the camera devices across the whole site.
The search can be implemented by feature-vector comparison: similarities are calculated, the moving-object image with the highest similarity is determined to be the tracked target, that is, the monitored moving target, and the camera device capturing it is taken as the first camera device.
In an exemplary embodiment, if it is determined that the moving object with the highest similarity captured by the first camera device is not in fact the monitored moving target, the method further includes: manually switching to the camera device currently capturing the monitored moving target and taking it as the new first camera device.
Step 101: sending the code stream of the first camera device to the client.
In an exemplary example, the server also sends the monitored moving target's foot-point information (used to confirm the target's position on the map) and monitored-moving-target information such as human-body information (used to confirm the target's position in the captured picture) to the client.
Step 102: calculating, according to the calibration information and the moving direction of the monitored moving target, the second camera devices that may capture the target, and sending information of the second camera devices to the client. Here, the second camera devices may include one or more camera devices.
It should be noted that there is no strict execution order between step 101 and step 102; the two steps may also be executed simultaneously.
In this way, the client can open the real-time code streams of the auxiliary camera devices through a Network Video Recorder (NVR) for monitoring, for example to watch the monitored moving target and its position in the on-site map coordinate system. The NVR is the storage-and-forwarding part of a network video monitoring system; working with video encoders or network cameras, it performs the recording, storage, and forwarding of video.
In an exemplary embodiment, tracking is enabled when the server receives the tracking request from the client and determines the first camera device. Once tracking is enabled, every frame captured by the camera devices needs to be matched, so matching is first performed quickly within the current first camera device in order to find the tracked object, that is, the monitored moving target, rapidly.
In an exemplary embodiment, step 102 may be preceded by:
calibrating the field of view of each camera device to calculate its shooting range in the actual site, and converting the shooting range in the actual site into a map visible range.
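The conversion from an image-space shooting range to a map visible range is typically done with a planar homography obtained from the field-of-view calibration. A minimal sketch follows; the 3×3 homography and image corners are toy values, not taken from the patent:

```python
# Hypothetical sketch: project a camera's image-plane shooting range into the
# site map via a 3x3 image->map homography H obtained from calibration.
# H and the image corners below are illustrative values only.

def apply_homography(H, point):
    """Map an (x, y) image point into map coordinates via homography H."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def map_visible_range(H, image_corners):
    """The map visible range is the shooting range mapped corner by corner."""
    return [apply_homography(H, c) for c in image_corners]

# Toy homography: identity plus a translation of (5, 2) map units.
H = [[1.0, 0.0, 5.0],
     [0.0, 1.0, 2.0],
     [0.0, 0.0, 1.0]]
corners = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
polygon = map_visible_range(H, corners)
print(polygon)
```

In practice H would be estimated from calibrated point correspondences; the resulting polygon is what the later steps treat as the camera's map visible range.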
In an exemplary embodiment, the calculating in step 102 of a second camera device that may capture the monitoring moving target may include:
after the first camera device is determined, calculating the second camera device most likely to capture the monitoring moving target according to the foot point motion trajectory of the image of the monitoring moving target, and taking the calculated second camera device as an auxiliary camera device;
sending information (e.g., identification information) of all possible second camera devices to the client.
In an exemplary embodiment, calculating the second camera device most likely to capture the monitoring moving target according to the foot point motion trajectory of the image of the monitoring moving target may include:
when the monitoring moving target appears in a certain camera device (such as the first camera device determined in step 100), calculating the coordinates of the foot point of the image of the monitoring moving target on the map in real time, and determining from the coordinates whether the monitoring moving target is within the map visible range of other camera devices;
if the monitoring moving target is within the map visible range of other camera devices (it may be within the visible ranges of several camera devices at the same time), calculating the shortest distance between the monitoring moving target and the boundary of each such visible range: the longer the shortest distance, the farther the target is from the boundary and the closer it is to the center of that camera's field of view; if the monitoring moving target is outside the map visible range of other camera devices, calculating the shortest distance between the monitoring moving target and the boundary of each such visible range: the shorter the shortest distance, the closer the target is to that camera's visible range and the more easily it enters the camera's field of view;
selecting a predetermined number of camera devices, for example the first three (top 3) whose field-of-view centers the target is closest to, or the camera devices whose fields of view the target is most likely to enter, as the auxiliary camera devices.
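The selection steps above can be sketched as follows, simplifying each camera's map visible range to an axis-aligned rectangle; the camera names, ranges, and foot point are made-up values:

```python
# Hypothetical sketch of auxiliary-camera selection: rank cameras whose map
# visible range contains the target by distance from the range boundary
# (deeper = closer to the field-of-view center), then cameras the target is
# outside of by distance to their boundary (nearer = more likely to enter).

def boundary_distance(rect, pt):
    """Shortest distance from pt to the rectangle boundary, plus containment.
    rect is (xmin, ymin, xmax, ymax) in map coordinates."""
    xmin, ymin, xmax, ymax = rect
    x, y = pt
    inside = xmin <= x <= xmax and ymin <= y <= ymax
    if inside:
        d = min(x - xmin, xmax - x, y - ymin, ymax - y)  # nearest edge
    else:
        dx = max(xmin - x, 0, x - xmax)
        dy = max(ymin - y, 0, y - ymax)
        d = (dx * dx + dy * dy) ** 0.5
    return inside, d

def pick_auxiliary(ranges, foot_point, top_n=3):
    """ranges: {camera_id: rect}. Returns up to top_n auxiliary camera ids."""
    inside, outside = [], []
    for cam, rect in ranges.items():
        is_in, d = boundary_distance(rect, foot_point)
        (inside if is_in else outside).append((cam, d))
    inside.sort(key=lambda c: -c[1])   # deepest inside first
    outside.sort(key=lambda c: c[1])   # nearest outside first
    return [cam for cam, _ in inside + outside][:top_n]

ranges = {"cam_a": (0, 0, 10, 10), "cam_b": (8, 0, 20, 10), "cam_c": (30, 30, 40, 40)}
print(pick_auxiliary(ranges, (9, 5)))
```

Real deployments would use the calibrated map polygons rather than rectangles, but the ranking criterion is the same.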
In an exemplary embodiment, when detecting targets, a moving object frame may be added to each detected moving object, and the orientation of the moving object in the image (up, down, left, or right) and the position of its foot point may be calculated. In this way, once a moving object frame is detected, the image foot point coordinates of the moving object are converted into the map coordinate system according to the established mapping relation between the image coordinate system and the map coordinate system. Thus, as the tracked object is continuously tracked, information such as its traveling speed and traveling direction can be calculated.
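As a sketch of the speed and direction computation described above (the timestamps and map coordinates below are invented values):

```python
# Hypothetical sketch: derive travel speed and heading from consecutive
# foot-point samples already converted into the map coordinate system.
import math

def speed_and_heading(track):
    """track: list of (t_seconds, x, y) map-coordinate samples.
    Returns (speed, heading_degrees) from the last two samples;
    heading is measured counter-clockwise from the map's +x axis."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    speed = math.hypot(x1 - x0, y1 - y0) / dt
    heading = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360
    return speed, heading

# 3-4-5 triangle over one second: speed 5.0 map units per second.
track = [(0.0, 0.0, 0.0), (1.0, 3.0, 4.0)]
speed, heading = speed_and_heading(track)
print(speed, heading)
```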
For the client, after receiving the second camera device information from the server, the client can search for the tracked object, i.e., the monitoring moving target, in the auxiliary camera devices; if it cannot be found there, the other camera devices across the whole site can be searched. Here, the tracked object is found by comparing feature vectors: a similarity is calculated, and the moving object with the highest similarity is taken as the tracked object.
In an exemplary example, if the moving object with the highest similarity obtained is not the tracked object, the client may continue to track it by manually switching to another second camera device.
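The feature-vector comparison can be sketched as a cosine-similarity nearest match. The vectors here are toy values; in practice they would come from a person re-identification model, which the patent does not specify:

```python
# Hypothetical sketch: pick the tracked object as the candidate whose feature
# vector is most similar (by cosine similarity) to the query vector.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query, candidates):
    """candidates: {object_id: feature_vector}. Returns (object_id, similarity)."""
    return max(((oid, cosine(query, vec)) for oid, vec in candidates.items()),
               key=lambda pair: pair[1])

query = [0.9, 0.1, 0.4]
candidates = {"obj_1": [0.1, 0.9, 0.2], "obj_2": [0.8, 0.2, 0.5]}
match_id, score = best_match(query, candidates)
print(match_id, score)
```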
In one illustrative example, when the monitoring moving target performs self-service checkout, the method further comprises:
the server calculating, from the calibration information and the position of the monitoring moving target, the cash register, such as a POS machine, where the monitoring moving target is currently located, and sending the information of the POS machine to the client;
correspondingly, the client further comprises:
directly connecting to the local area network of the POS machine, and acquiring and displaying the operation data of the monitoring moving target in real time; the client may further include: pulling, through the server, the stream of the camera device that monitors the POS machine for display in a small window.
In one illustrative example, calculating the cash register, such as a POS machine, where the monitoring moving target is currently located may include: determining the POS machine closest to the monitoring moving target according to the foot point information of the monitoring moving target; if the distance between the monitoring moving target and the POS machine is less than a preset threshold, the monitoring moving target is considered to have started interacting with the POS machine, i.e., a payment behavior has occurred. In one embodiment, a deep learning algorithm may be used to determine whether the target is in contact with the cash register, providing a more accurate judgment.
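A minimal sketch of this proximity rule follows; the POS positions, foot point, and the 1.0-unit threshold are assumed values:

```python
# Hypothetical sketch: nearest-POS lookup with a distance threshold that
# flags the likely start of a payment interaction.
import math

def nearest_pos(foot_point, pos_machines, threshold=1.0):
    """pos_machines: {pos_id: (x, y)} in map coordinates.
    Returns (pos_id, interacting) for the machine closest to the foot point."""
    fx, fy = foot_point
    pos_id, dist = min(((pid, math.hypot(px - fx, py - fy))
                        for pid, (px, py) in pos_machines.items()),
                       key=lambda pair: pair[1])
    return pos_id, dist < threshold

machines = {"pos_1": (2.4, 3.3), "pos_2": (9.0, 9.0)}
print(nearest_pos((2.0, 3.0), machines))
```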
The embodiments of the application realize real-time monitoring of moving objects in offline indoor retail stores. Further, real-time monitoring of a moving object's goods-taking behavior is realized, and the generated order information, payment information, and the like are displayed in real time.
The present application further provides a computer-readable storage medium storing computer-executable instructions for performing any one of the methods for implementing indoor monitoring shown in fig. 1.
The present application further provides an apparatus for implementing indoor monitoring, including a memory and a processor, wherein the memory stores instructions executable by the processor for performing the steps of any one of the methods for implementing indoor monitoring shown in fig. 1.
The present application further provides a method for implementing indoor monitoring, which may include:
the client performs tracking and querying of the monitoring moving target according to the received code stream of the first camera device and the information of the second camera device.
In an exemplary embodiment, the method may further include:
the client retrieves, within the monitoring site, images of the monitoring moving target captured from different angles according to the received code stream of the first camera device and the information of the second camera device; for example, pictures of the front, side, back, local details, and other angles of the monitoring moving target can be obtained;
the client stores the multi-angle image information of the monitoring moving target in the server's target information base, so as to improve the accuracy with which the server tracks the monitoring moving target.
In one illustrative example, the client retrieval control logic may comprise: when the client initiates a real-time tracking request to the server, it passes pictures of the monitoring moving target and a retrieval threshold to the server; the server retrieves according to the pictures of the monitoring moving target, and when there are moving objects in the site reaching the retrieval threshold, it returns the list of cameras where those moving objects are located to the client; the camera with the highest score, i.e., the highest similarity, serves as the main camera and can, for example, be selected by a worker on the client interface. If the server retrieves no moving object meeting the required retrieval score threshold, the client may appropriately lower the threshold and retrieve again; if no moving object can be found even at the lowest retrieval score threshold, the search is considered to have failed.
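The retry logic above can be sketched as follows; `search` stands in for the server-side retrieval call, and the thresholds, step size, and scores are invented:

```python
# Hypothetical sketch of the client retrieval control logic: retry with a
# progressively lowered score threshold until a camera list comes back or
# the lowest allowed threshold (floor) is reached.

def retrieve_with_backoff(search, picture, threshold, floor=0.5, step=0.1):
    """search(picture, threshold) -> list of (camera_id, score), possibly [].
    Returns the hit list sorted best-first, or [] if even the floor fails."""
    while threshold >= floor:
        hits = search(picture, threshold)
        if hits:
            # Highest score first; the top camera becomes the main camera.
            return sorted(hits, key=lambda h: -h[1])
        threshold -= step
    return []

def fake_search(picture, threshold):
    """Stand-in for the server call, with fixed per-camera scores."""
    scores = {"cam_1": 0.72, "cam_2": 0.65}
    return [(c, s) for c, s in scores.items() if s >= threshold]

hits = retrieve_with_backoff(fake_search, "target.jpg", 0.9)
print(hits)
```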
According to the method for realizing indoor monitoring provided by the embodiments of the application, through multi-angle combined tracking by the main camera device and the auxiliary camera devices, the video tracking windows can be displayed at the client in a form such as a dialog box: the camera device with the best snapshot angle serves as the main tracking window, and the surrounding camera devices with slightly worse snapshot angles serve as auxiliary tracking windows, realizing multi-angle, all-around tracking.
The method for realizing indoor monitoring provided by the embodiments of the application not only realizes real-time tracking of a moving object, but also realizes tracking of the moving object's goods taking, payment, and the like; for example, the system can display a user's goods-taking information and payment information in real time against the video, making it easier to judge accurately whether theft has occurred.
The application also provides a method for realizing indoor monitoring, which comprises the following steps:
initiating a tracking request to a server, wherein the tracking request comprises monitoring moving target information;
receiving the code stream of the first camera device from the server, and receiving the information of the second camera device from the server, wherein the second camera device may include one or more camera devices;
monitoring the monitoring moving target according to the code stream of the first camera device and the real-time code stream of the second camera device.
In an exemplary instance, the client further receives, from the server, the foot point information of the monitoring moving target (used to confirm its position in the map) and monitoring moving target information such as human body information (used to confirm its position in the image).
In an illustrative example, further comprising:
searching for the monitoring moving target in the second camera device and, if it is not found, searching for the monitoring moving target in the other camera devices across the whole site.
In one illustrative example, further comprising:
if the moving object with the highest similarity obtained is not the current monitoring moving target, tracking of the monitoring moving target can be continued by manually switching to another second camera device.
In one illustrative example, further comprising:
the client receives the POS machine information from the server, connects directly to the local area network of the POS machine, and acquires and displays the operation data of the monitoring moving target in real time. Further, the method may also comprise: pulling, through the server, the stream of the camera device that monitors the POS machine for display in a small window.
According to the method for realizing indoor monitoring provided by the embodiments of the application, through real-time positioning and tracking on the in-site map, the user can more conveniently and accurately judge, in combination with the video, the position of the monitored target in the site.
The method for realizing indoor monitoring provided by the embodiments of the application can be applied to finding missing persons in a site, tracking suspicious persons, tracking early-warning persons, and the like.
Taking the application scenario of finding a missing person in a site as an example, fig. 2(a) is an exemplary diagram of a missing-person tracking application scenario in an embodiment of the present application, which roughly includes: the client uploads a photo of the missing person to the server as the monitoring moving target, so as to initiate a tracking request. The server receives the tracking request from the client, matches, through feature vector comparison, monitoring moving target information whose similarity to the missing person meets a preset threshold from the moving object information captured by the plurality of camera devices, and determines the camera device capturing the monitoring moving target as the first camera device; the server sends the code stream of the first camera device to the client; meanwhile, the server can calculate the second camera device most likely to capture the monitoring moving target according to the foot point motion trajectory of the image of the monitoring moving target, and sends the information of the second camera device to the client; the client can open, through the NVR, the real-time code stream of the second camera device serving as an auxiliary camera device for monitoring, such as watching the monitoring moving target and monitoring its position in the in-site map coordinate system, so that the missing person can be found, identified, and tracked in real time.
Taking the suspicious-person tracking application scenario as an example, the method roughly comprises: the client uploads a picture of the suspicious person to the server as the monitoring moving target, so as to initiate a tracking request. The server receives the tracking request from the client, matches, through feature vector comparison, monitoring moving target information whose similarity to the suspicious person meets a preset threshold from the moving object information captured by the plurality of camera devices, and determines the camera device capturing the monitoring moving target as the first camera device; the server sends the code stream of the first camera device to the client; meanwhile, the server can calculate the second camera device most likely to capture the monitoring moving target according to the foot point motion trajectory of the image of the monitoring moving target, and sends the information of the second camera device to the client; the client can open, through the NVR, the real-time code stream of the second camera device serving as an auxiliary camera device for monitoring, such as watching the monitoring moving target and monitoring its position in the in-site map coordinate system, thereby realizing real-time tracking of the suspicious person.
Taking the early-warning-person tracking application scenario as an example, fig. 2(b) is an exemplary diagram of an early-warning-person tracking application scenario in an embodiment of the present application, which roughly includes: the server pulls real-time code streams from a plurality of camera devices arranged indoors, and identifies moving objects from the obtained code streams, such as: face recognition, human body recognition, ReID, etc.; the identified moving object information, such as face information and human body information, is uploaded to a cloud server. The cloud server checks, against a preset blacklist, whether the obtained moving object information exists in the blacklist; if a moving object corresponding to the obtained moving object information can be matched from the blacklist, a risky moving object is considered to have entered the monitored indoor area. At this moment, the cloud server can push an alarm to the client in real time, so that the client prompts that a risky moving object is in the monitored indoor area, for example through a pop-up window. While alarming, the client can treat the risky moving object as an early-warning person, for example by directly clicking the photo of the early-warning person to initiate a tracking request to the server.
The server receives the tracking request from the client, matches, through feature vector comparison, monitoring moving target information whose similarity to the early-warning person meets a preset threshold from the moving object information captured by the plurality of camera devices, and determines the camera device capturing the monitoring moving target as the first camera device; the server sends the code stream of the first camera device to the client; meanwhile, the server can calculate the second camera device most likely to capture the monitoring moving target according to the foot point motion trajectory of the image of the monitoring moving target, and sends the information of the second camera device to the client; the client can open, through the NVR, the real-time code streams of the second camera devices serving as auxiliary camera devices for monitoring, such as watching the monitoring moving target and monitoring its position in the in-site map coordinate system, thereby realizing real-time tracking of the early-warning person and tracking of the early-warning person's goods-taking/order-payment information. In this way, all video recordings of the early-warning person's behavior in the site can be extracted as evidence of risky behavior.
Fig. 3 is a schematic structural diagram of a server in an embodiment of the present application. As shown in fig. 3, the server at least includes: a first processing module and a transceiver module; wherein:
the first processing module is configured to determine, according to the moving object information captured by a plurality of camera devices, a first camera device capturing the monitoring moving target, and to calculate, according to the calibration information and the moving direction of the monitoring moving target, a second camera device that may capture the monitoring moving target, wherein the second camera device comprises one or more camera devices;
the transceiver module is configured to send the code stream of the first camera device to the client, and to send the information of the second camera device to the client.
In one illustrative example, the transceiver module is further configured to:
sending the foot point information of the monitoring moving target (used to confirm its position in the map) and monitoring moving target information such as human body information (used to confirm its position in the captured image) to the client.
In one illustrative example, the first processing module is further configured to:
receiving a tracking request from a client, where the tracking request includes monitoring moving target information, such as face information (e.g., a face picture, etc.), and/or body information (e.g., a body picture, etc.).
In an exemplary embodiment, matching the monitoring moving target from the moving object information captured by the plurality of camera devices in the first processing module may include:
performing target detection on each frame of picture of each camera device, and extracting a first feature vector from each detected moving object, such as a human body; detecting the moving target in the obtained monitoring moving target information, such as a moving target in a human body image, and extracting a second feature vector; performing vector comparison between the second feature vector and the feature vectors detected by all camera devices, and finding a preset number of camera devices corresponding to first feature vectors whose similarity is above a preset threshold; and taking the camera device with the highest similarity, where the monitoring moving target is located, as the first camera device.
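As a sketch of that matching flow (toy vectors; cosine similarity stands in for the unspecified vector comparison, and the 0.8 threshold is an assumption):

```python
# Hypothetical sketch: compare the query (second) feature vector against each
# camera's detections, keep cameras whose best similarity clears a preset
# threshold, and take the top-scoring one as the first camera device.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def first_camera(query_vec, detections, threshold=0.8):
    """detections: {camera_id: [feature_vector, ...]}.
    Returns (camera_id, best_similarity), or None if nothing clears threshold."""
    scored = []
    for cam, vecs in detections.items():
        best = max(cosine(query_vec, v) for v in vecs)
        if best >= threshold:
            scored.append((cam, best))
    return max(scored, key=lambda pair: pair[1]) if scored else None

detections = {
    "cam_1": [[0.2, 0.9, 0.1]],
    "cam_2": [[0.9, 0.1, 0.1], [0.85, 0.2, 0.1]],
}
result = first_camera([1.0, 0.0, 0.0], detections)
print(result)
```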
In one illustrative example, the first processing module is further configured to: pulling a real-time code stream through a plurality of camera devices arranged indoors; and identifying the moving object according to the obtained code stream, such as: face recognition, human body recognition, ReID, etc.; the transceiver module is further configured to: the identified moving object information, such as: face information, human body information and the like are uploaded to a cloud server.
In an exemplary embodiment, when the monitoring moving target moves out of the field of view of the first camera device, the first processing module is further configured to:
calculate the camera device most likely to capture the monitoring moving target according to the foot point motion trajectory of the image of the monitoring moving target, and again determine the camera device capturing the monitoring moving target as the first camera device.
In an exemplary embodiment, the calculating in the first processing module of a second camera device that may capture the monitoring moving target may include: after the first camera device is determined, calculating the second camera device most likely to capture the monitoring moving target according to the foot point motion trajectory of the image of the monitoring moving target, and taking the calculated second camera device as an auxiliary camera device; and sending information (e.g., identification information) of all possible second camera devices to the client.
In one illustrative example, the first processing module is further configured to: calculating the shooting range of each camera device in the actual site through the visual field calibration of the camera devices, and converting the shooting range of the actual site into a map visual range; the calculating, by the first processing module, a second camera device most likely to shoot the monitored moving target according to the foot point motion trajectory of the image of the monitored moving target may include:
when the monitoring moving target appears in a certain camera device (such as the first camera device determined in step 100), calculating the coordinates of the foot point of the image of the monitoring moving target on the map in real time, and determining from the coordinates whether the monitoring moving target is within the map visible range of other camera devices; if the monitoring moving target is within the map visible range of other camera devices (it may be within the visible ranges of several camera devices at the same time), calculating the shortest distance between the monitoring moving target and the boundary of each such visible range: the longer the shortest distance, the farther the target is from the boundary and the closer it is to the center of that camera's field of view; if the monitoring moving target is outside the map visible range of other camera devices, calculating the shortest distance between the monitoring moving target and the boundary of each such visible range: the shorter the shortest distance, the closer the target is to that camera's visible range and the more easily it enters the camera's field of view; and selecting a predetermined number of camera devices, for example the first three (top 3) whose field-of-view centers the target is closest to, or the camera devices whose fields of view the target is most likely to enter, as the auxiliary camera devices.
In an exemplary instance, when the monitoring moving target performs self-service checkout, the first processing module is further configured to:
calculate, from the calibration information and the position of the monitoring moving target, the cash register, such as a POS machine, where the monitoring moving target is currently located, and send the information of the POS machine to the client.
Fig. 4 is a schematic structural diagram of a client in an embodiment of the present application. As shown in fig. 4, the client at least includes: an interface module and a second processing module; wherein:
the interface module is configured to initiate a tracking request to the server, where the tracking request includes monitoring moving target information, such as face information (e.g., face pictures), and/or human body information (e.g., human body pictures); and receiving a code stream from a first camera device of the server, and receiving information from a second camera device of the server, wherein the second camera device comprises more than one camera device.
The second processing module is configured to monitor the monitoring moving target, such as watching it and monitoring its position in the in-site map coordinate system, according to the code stream of the first camera device and the real-time code stream of the second camera device opened through the NVR.
In one illustrative example, the second processing module is further configured to:
if the monitoring moving target cannot be found in the second camera device, searching for the monitoring moving target in the other camera devices across the whole site.
In one illustrative example, the second processing module is further configured to:
if the moving object with the highest similarity obtained is not the monitoring moving target, continuing to track the monitoring moving target by manually switching to another second camera device.
In one illustrative example, the second processing module is further configured to:
receiving the POS machine information from the server, connecting directly to the local area network of the POS machine, and acquiring and displaying the operation data of the monitoring moving target in real time; and further, optionally: pulling, through the server, the stream of the camera device that monitors the POS machine for display in a small window.
The embodiments of the application realize real-time monitoring of moving objects in offline indoor retail stores. Further, real-time monitoring of a moving object's goods-taking behavior is realized, and the generated order information, payment information, and the like are displayed in real time.
Although the embodiments disclosed in the present application are described above, the descriptions are only for the convenience of understanding the present application, and are not intended to limit the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.

Claims (20)

1. A method of implementing indoor monitoring, comprising:
the server side determines a first camera device for capturing and monitoring a moving target according to the moving object information captured by the plurality of camera devices;
sending the code stream of the first camera device to a client;
and calculating a second camera device which can possibly capture the monitoring moving target according to the calibration information calibrated to the camera device and the moving direction of the monitoring moving target, and sending the information of the second camera device to the client.
2. The method of claim 1, wherein the determining of the first camera device capturing the monitoring moving target comprises:
the server detects a target of each frame of image of each camera device and extracts a first feature vector according to a detected moving object;
detecting a moving target in the obtained monitoring moving target information, and extracting a second feature vector;
utilizing the second feature vectors to perform vector comparison calculation on the feature vectors detected by all the camera devices, and finding out the camera devices corresponding to the first feature vectors with the similarity higher than a preset threshold value in a preset number;
and taking the camera device with the highest similarity where the monitored moving target is located as the first camera device.
3. The method of claim 1, wherein when the monitoring moving target moves out of the field of view of the first camera device, the method further comprises:
and calculating the camera device which is most likely to shoot the monitoring moving target according to the foot point motion trail of the image of the monitoring moving target, and determining the camera device which shoots the monitoring moving target as the first camera device again.
4. The method of claim 1, wherein the calculating of a second camera device that may capture the monitoring moving target comprises:
after the first camera device is determined, calculating a second camera device which is most likely to shoot the monitoring moving target according to the foot point motion track of the image of the monitoring moving target;
and sending information of all possible second camera devices to the client.
5. The method of claim 4, wherein before the calculating of a second camera device that may capture the monitoring moving target, the method further comprises:
calculating the shooting range of each camera device in the actual site through the visual field calibration of the camera devices, and converting the shooting range of the actual site into a map visual range;
wherein the calculating of the second camera device most likely to capture the monitoring moving target according to the foot point motion trajectory of the image of the monitoring moving target comprises:
when the monitoring moving target appears in a certain camera device, calculating the coordinates of the foot points of the image of the monitoring moving target on the map in real time, and calculating whether the monitoring moving target is in the map visible range of other cameras according to the coordinates;
if the monitoring moving target is in the map visible range of other camera devices, calculating the shortest distance between the monitoring moving target and the visible range boundary of the camera devices; if the monitoring moving target is out of the map visible range of other camera devices, calculating the shortest distance between the monitoring moving target and the boundary of the visible range of the camera devices;
and selecting, as the second camera device, a predetermined number of camera devices whose field-of-view centers the monitoring moving target is closest to, or the camera devices whose fields of view the monitoring moving target is most likely to enter.
6. The method of claim 1, wherein when the monitoring moving target performs self-service checkout, the method further comprises:
and the server calculates and obtains the cash register where the monitoring moving target is located according to the calibration information and the position of the monitoring moving target, and sends the information of the cash register to the client.
7. A method of implementing indoor monitoring, comprising:
initiating a tracking request to a server, wherein the tracking request comprises monitoring moving target information;
receiving a code stream of a first camera device from a server side, and receiving information of a second camera device from the server side;
and monitoring the monitoring moving target according to the code stream of the first camera device and the real-time code stream of the second camera device.
8. The method of claim 7, further comprising:
and searching the monitoring moving target in the second camera device, and searching the monitoring moving target in other camera devices in the whole field.
9. The method of claim 7, further comprising:
and receiving information of the cash register machine from the server side, directly connecting with a cash register machine local area network, and acquiring and displaying operation data of the monitoring moving target in real time.
10. The method of claim 9, further comprising:
and pulling data for monitoring a camera device at the cash register machine through the server side to perform small window display.
11. A server, comprising: a first processing module and a transceiver module; wherein:
the first processing module is configured to determine, from the moving object information captured by the plurality of camera devices, a first camera device that captures the monitoring moving target; and to calculate, from the calibration information of the camera devices and the moving direction of the monitoring moving target, a second camera device capable of capturing the monitoring moving target, wherein the second camera device comprises more than one camera device;
the transceiver module is configured to send the code stream of the first camera device to the client, and to send the information of the second camera device to the client.
12. The server according to claim 11, wherein the matching, in the first processing module, of the monitoring moving target information against the moving object information captured by the plurality of camera devices comprises:
performing target detection on each frame of each camera device and extracting a first feature vector for each detected moving object; detecting the moving target in the obtained monitoring moving target information and extracting a second feature vector; performing vector comparison between the second feature vector and the feature vectors detected by all the camera devices, and finding a preset number of camera devices whose first feature vectors have a similarity higher than a preset threshold; and taking, as the first camera device, the camera device with the highest similarity in which the monitoring moving target is located.
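The feature-vector comparison in claim 12 is commonly implemented as a cosine-similarity search. The sketch below is illustrative only — the patent does not specify the similarity metric, the threshold value, or any of these names:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two appearance feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_first_camera(second_vector, detections, threshold=0.8, preset_number=3):
    """detections maps a camera id to the list of first feature vectors
    extracted from that camera's frames.  Compare the target's second feature
    vector against every detection, keep up to a preset number of cameras
    whose best similarity exceeds the threshold, and return the
    highest-scoring camera as the first camera device (or None)."""
    best = {}
    for cam, vectors in detections.items():
        if vectors:
            best[cam] = max(cosine_similarity(second_vector, v) for v in vectors)
    candidates = sorted((cam for cam in best if best[cam] > threshold),
                        key=lambda cam: best[cam], reverse=True)[:preset_number]
    return candidates[0] if candidates else None
```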
13. The server according to claim 11, wherein when the monitoring moving target moves out of the field of view of the first camera device, the first processing module is further configured to:
calculate, from the foot-point motion trail of the image of the monitoring moving target, the camera device most likely to capture the monitoring moving target, and determine again, as the first camera device, the camera device that captures the monitoring moving target.
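One simple reading of the foot-point-trail prediction in claim 13 is linear extrapolation of the target's map position followed by a nearest-camera lookup. A hedged sketch — the patent does not state the prediction model, and every name below is hypothetical:

```python
def predict_next_foot_point(track, horizon=1.0):
    """Linearly extrapolate the target's map foot point from its last two
    timestamped observations; track is a list of (t, x, y) tuples."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * horizon, y1 + vy * horizon)

def most_likely_camera(predicted_point, camera_centers):
    """Pick the camera whose field-of-view centre lies nearest the predicted
    foot point; that camera is re-designated as the first camera device."""
    px, py = predicted_point
    return min(camera_centers,
               key=lambda cam: (camera_centers[cam][0] - px) ** 2
                             + (camera_centers[cam][1] - py) ** 2)
```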
14. The server according to claim 11, wherein the calculating, in the first processing module, of the second camera device capable of capturing the monitoring moving target comprises:
calculating, from the foot-point motion trail of the image of the monitoring moving target, the second camera device most likely to capture the monitoring moving target, and sending the information of all the calculated candidate second camera devices to the client.
15. The server of claim 11, wherein the first processing module is further configured to: calculate, through field-of-view calibration of the camera devices, the shooting range of each camera device at the actual site, and convert the shooting range at the actual site into a map visible range;
and wherein the calculating, in the first processing module, of the second camera device most likely to capture the monitoring moving target from the foot-point motion trail of the image of the monitoring moving target comprises:
when the monitoring moving target appears in a certain camera device, calculating in real time the map coordinates of the foot point of the image of the monitoring moving target, and determining from these coordinates whether the monitoring moving target is within the map visible range of any other camera device; if the monitoring moving target is within the map visible range of another camera device, calculating the shortest distance between the monitoring moving target and the boundary of that camera device's visible range; if the monitoring moving target is outside the map visible range of another camera device, likewise calculating the shortest distance between the monitoring moving target and the boundary of that visible range; and selecting, as the second camera device, a preset number of camera devices whose field-of-view center is closest to the monitoring moving target, or the camera device whose field of view the monitoring moving target can most easily enter.
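The inside/outside decision in claim 15 is a point-in-polygon test on the map visible range. A standard ray-casting sketch (illustrative only; the patent does not prescribe this algorithm, and the names are hypothetical):

```python
def in_map_visible_range(foot_point, polygon):
    """Ray-casting test: is the target's map foot point inside a camera's
    map visible range, given as a list of (x, y) polygon vertices?"""
    x, y = foot_point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of the point;
        # an odd number of crossings means the point is inside.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```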
16. The server according to claim 15, wherein when the monitoring moving target performs self-service checkout, the first processing module is further configured to:
calculate, from the calibration information and the position of the monitoring moving target, the cash register at which the monitoring moving target is currently located, and send the information of the cash register to the client.
17. A client, comprising: an interface module and a second processing module; wherein:
the interface module is configured to initiate a tracking request to the server, wherein the tracking request comprises monitoring moving target information; and to receive, from the server side, a code stream of a first camera device and information of a second camera device, wherein the second camera device comprises more than one camera device;
the second processing module is configured to monitor the monitoring moving target according to the code stream of the first camera device and the real-time code stream of the second camera device.
18. The client of claim 17, wherein the second processing module is further configured to:
search for the monitoring moving target among the other camera devices in the whole site if the monitoring moving target cannot be found in the second camera device.
19. The client of claim 17, wherein the second processing module is further configured to:
receive the information of the cash register from the server side, connect directly to the cash register's local area network, and acquire and display the operation data of the monitoring moving target in real time.
20. The client of claim 19, wherein the second processing module is further configured to: pull, through the server side, the data of a camera device that monitors the cash register, and display it in a small window.
CN202011352293.1A 2020-11-26 2020-11-26 Method and device for realizing indoor monitoring, server and client Pending CN114549580A (en)


Publications (1)

Publication Number Publication Date
CN114549580A (en) 2022-05-27

Family

ID=81668393

Similar Documents

Publication Publication Date Title
US11228715B2 (en) Video surveillance system and video surveillance method
US20230386214A1 (en) Information processing apparatus, information processing method, and information processing program
CN112041848B (en) System and method for counting and tracking number of people
JP6649306B2 (en) Information processing apparatus, information processing method and program
US7646887B2 (en) Optical flow for object recognition
US7929017B2 (en) Method and apparatus for stereo, multi-camera tracking and RF and video track fusion
WO2009148702A1 (en) Detecting and tracking targets in images based on estimated target geometry
KR20160014413A (en) The Apparatus and Method for Tracking Objects Based on Multiple Overhead Cameras and a Site Map
CN113468914B (en) Method, device and equipment for determining purity of commodity
JP3489491B2 (en) PERSONAL ANALYSIS DEVICE AND RECORDING MEDIUM RECORDING PERSONALITY ANALYSIS PROGRAM
KR20170006356A (en) Method for customer analysis based on two-dimension video and apparatus for the same
US10319204B1 (en) Systems and methods for retracing shrink events
KR102046591B1 (en) Image Monitoring System and Method for Monitoring Image
CN111918023B (en) Monitoring target tracking method and device
CN109583296A (en) One kind preventing error detection method, apparatus, system and computer storage medium
WO2021015672A1 (en) Surveillance system, object tracking system and method of operating the same
KR102226372B1 (en) System and method for object tracking through fusion of multiple cameras and lidar sensor
KR102584708B1 (en) System and Method for Crowd Risk Management by Supporting Under and Over Crowded Environments
CN114549580A (en) Method and device for realizing indoor monitoring, server and client
US20030004913A1 (en) Vision-based method and apparatus for detecting an event requiring assistance or documentation
KR101899318B1 (en) Hierarchical face object search method and face recognition method using the same, hierarchical face object search system and face recognition system using the same
Kim et al. Multi-object detection and behavior recognition from motion 3D data
CN110956644A (en) Motion trail determination method and system
US20230419674A1 (en) Automatically generating best digital images of a person in a physical environment
Pieri et al. Active video surveillance based on stereo and infrared imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination