WO2020057355A1 - Three-dimensional modeling method and device - Google Patents

Three-dimensional modeling method and device

Info

Publication number
WO2020057355A1
WO2020057355A1 (PCT/CN2019/103833)
Authority
WO
WIPO (PCT)
Prior art keywords
target
video data
camera
video
person
Prior art date
Application number
PCT/CN2019/103833
Other languages
English (en)
Chinese (zh)
Inventor
张恩勇
Original Assignee
深圳市九洲电器有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市九洲电器有限公司
Publication of WO2020057355A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources

Definitions

  • the present invention relates to the technical field of video surveillance, and particularly to a video linkage monitoring method, a monitoring server, and a video linkage monitoring system.
  • Video surveillance technology is playing an increasingly important role in the field of security. Because of its intuitiveness, convenience, and rich information content, it is widely used in city Skynet systems, transportation, civil security, and other fields.
  • The inventors found that the traditional technology has at least the following problem: during video monitoring, the camera often captures only the back of the target person in some important video pictures, which causes unnecessary trouble when analyzing those important video pictures.
  • An object of the embodiments of the present invention is to provide a video linkage monitoring method, a monitoring server, and a video linkage monitoring system, which can comprehensively photograph the front and back of a target person.
  • the embodiments of the present invention provide the following technical solutions:
  • An embodiment of the present invention provides a video linkage monitoring method applied to a monitoring server. The monitoring server communicates with multiple cameras; each camera is installed at a different position in a preset area, and each camera is used to capture area images at different angles within the preset area. The method includes:
  • if the target video data matches a preset video detection abnormal model, detecting a target person from the target video data, and determining whether the target video data includes a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in the preset area;
  • if the target video data does not include a frontal image of the target person, detecting an additional camera set opposite to the target camera, and controlling the additional camera to track the person and capture a frontal image of the person.
  • the method further includes:
  • if the target video data does not match the preset video detection abnormal model, discarding the target video data and continuing to detect whether the next target video data collected by the target camera matches the preset video detection abnormal model.
  • the method further includes:
  • if the target video data includes a frontal image of the target person, controlling the target camera to track the target person.
  • the detecting an additional camera disposed opposite to the target camera includes:
  • the method further includes:
  • the training video data set includes video data of multiple abnormal scenes
  • the preprocessed video data is processed by a convolution algorithm to establish the video detection abnormal model.
  • An embodiment of the present invention provides a video linkage monitoring device applied to a monitoring server. The monitoring server communicates with multiple cameras; each camera is installed at a different position in a preset area, and each camera is used to capture area images at different angles within the preset area. The device includes:
  • a first detection module configured to detect whether the target video data collected by the target camera matches a preset video detection abnormal model
  • a second detection module configured to detect a target person from the target video data if the target video data matches a preset video detection abnormal model, and determine whether the target video data includes a frontal image of the target person, The front image includes a face image of the target person, and the target person is located in the preset area;
  • a third detection module configured to, if the target video data does not include a frontal image of the target person, detect an additional camera set opposite to the target camera, and control the additional camera to track the person and capture a frontal image of the person.
  • the apparatus further includes:
  • the discarding module is configured to discard the target video data if the target video data does not match the preset video detection abnormal model, and continue to detect whether the next target video data collected by the target camera matches the preset video detection abnormal model.
  • an embodiment of the present invention provides a monitoring server, including:
  • at least one processor; and
  • a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the video linkage monitoring method according to any one of the above.
  • an embodiment of the present invention provides a video linkage monitoring system, including:
  • the monitoring server communicates with each of the cameras.
  • An embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to cause a monitoring server to execute the video linkage monitoring method according to any one of the above.
  • an embodiment of the present invention provides a computer program product.
  • the computer program product includes a computer program stored on a non-volatile computer-readable storage medium.
  • The computer program includes program instructions; when the instructions are executed by the monitoring server, the monitoring server is caused to execute any one of the video linkage monitoring methods.
  • In the video linkage monitoring method, monitoring server, and video linkage monitoring system provided by various embodiments of the present invention, first, it is detected whether the target video data collected by the target camera matches a preset video detection abnormal model; second, if the target video data matches the preset video detection abnormal model, a target person is detected from the target video data and it is determined whether the target video data includes a frontal image of the target person, the frontal image including the face image of the target person, and the target person being located in a preset area; finally, if the target video data does not include a frontal image of the target person, an additional camera set opposite to the target camera is detected, and the additional camera is controlled to track the person and capture the person's frontal image. Therefore, the front and back of the target person can be photographed in all directions, which brings convenience to, and reduces unnecessary trouble in, subsequent analysis of the target person.
  • FIG. 1 is a schematic structural diagram of a video linkage monitoring system according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a video linkage monitoring method according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a video linkage monitoring device according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a video linkage monitoring device according to another embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a monitoring server according to an embodiment of the present invention.
  • the video linkage monitoring method of the embodiment of the present invention can be executed in any suitable type of electronic device with computing capability, such as a monitoring server, a desktop computer, a smart phone, a tablet computer, and other electronic products.
  • the monitoring server here may be a physical server or a logical server virtualized by multiple physical servers.
  • the server may also be a server group composed of multiple servers that can communicate with each other, and each functional module may be separately distributed on each server in the server group.
  • the video linkage monitoring device may be used as a software system, independently provided in the above-mentioned client, or as one of the functional modules integrated in the processor, to execute the video linkage monitoring method according to the embodiment of the present invention.
  • FIG. 1 is a schematic structural diagram of a video linkage monitoring system according to an embodiment of the present invention.
  • the video surveillance system 100 includes a plurality of cameras 11, a surveillance server 12, and a mobile terminal 13.
  • the camera 11 is installed in a preset area for collecting video data. It can be understood that the camera 11 is fixedly installed in a preset area according to a preset rule, so as to cover the preset area as much as possible.
  • the camera is arranged on a wall surface, a ground, a roof, or an object surface of the preset area in combination with the specific structure and occlusion of the preset area.
  • The cameras form a camera group, which is used to monitor a specific surveillance area.
  • Each camera is installed at a different position in a preset area.
  • Each camera is used to capture images of areas at different angles within a preset area.
  • the camera group can capture 360-degree objects in the preset area.
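One way to picture the 360-degree coverage described above is to space the cameras' headings evenly around the preset area. The sketch below is only an illustration of that idea; even angular spacing and the helper names are assumptions, not details from the patent:

```python
def camera_headings(num_cameras):
    """Evenly spaced headings (in degrees) so that a camera group of
    num_cameras covers the full 360 degrees of the preset area."""
    if num_cameras <= 0:
        raise ValueError("need at least one camera")
    step = 360.0 / num_cameras
    return [round(i * step, 2) for i in range(num_cameras)]

def opposite_heading(heading):
    """Heading of a camera mounted directly opposite this one, i.e. the
    camera that would see the front when this one sees the back."""
    return (heading + 180.0) % 360.0
```

With four cameras, for example, the headings come out to 0, 90, 180, and 270 degrees, and the camera opposite the 90-degree one faces 270 degrees.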
  • each camera in the camera group uploads the collected video data to the same monitoring server.
  • Different monitoring areas correspond to different monitoring servers, and the monitoring servers of different areas do not share surveillance video with each other.
  • A combination of the camera 11 and a multi-dimensional rotating motor can be used to capture high-definition video frame images in the preset area in real time.
  • a high-definition camera with waterproof function, small size, high resolution, long life, and universal communication interface is selected.
  • the camera 11 includes a network camera, an infrared high-definition camera, a high-speed dome camera, a low-light camera, and the like.
  • the camera 11 has a built-in network coding module.
  • The camera includes a lens, an image sensor, a sound sensor, an A/D converter, a controller, a control interface, a network interface, and so on.
  • the camera may be used to collect video data signals, and the video data signals are analog video signals.
  • the camera is mainly composed of a CMOS light-sensitive component and a peripheral circuit, and is used for converting an optical signal input from the lens into an electrical signal.
  • The network coding module has a built-in embedded chip, which is used to convert the analog video signals collected by the camera into digital signals; the embedded chip may also compress the digital signals.
  • the embedded chip may be a Hi3516 high-efficiency compression chip.
  • the camera 11 sends the compressed digital signal to the monitoring server 12 through the WIFI network.
  • the monitoring server 12 may send the compressed digital signal to the mobile terminal 13.
  • The camera 11 further includes an infrared sensor, so that the camera 11 has a night vision function. Network users can directly view the camera image on the web server with a browser, or access it directly through the mobile terminal APP.
  • The camera 11 can more easily implement monitoring, especially remote monitoring, with simple construction and maintenance, better support for audio and alarm linkage, more flexible recording storage, richer product selection, higher-definition video effects, and more complete monitoring and management functions. The camera can be connected directly to the local area network; as the data collection and photoelectric signal conversion end, it is the data supply end of the entire network.
  • the monitoring server 12 is a device that provides computing services.
  • The composition of the monitoring server includes a processor, a hard disk, a memory, a system bus, and the like, similar to a general computer architecture. The monitoring server provides functions such as mobile terminal APP registration, user management, and device management, and is responsible for storing the cameras' video data. It records the IP addresses and ports of the mobile terminals and cameras and transmits the corresponding IP address and port of each to the other, so that a camera and a mobile terminal can learn each other's IP address and port and establish a connection and communicate through them.
  • the monitoring server obtains the video data of the camera and then analyzes the video data according to the artificial intelligence module. When abnormal video data is detected, it sends an alarm message to notify the mobile terminal.
  • the monitoring server 12 includes a processor, and the processor includes an artificial intelligence module.
  • The artificial intelligence module is responsible for real-time analysis of video data; it detects abnormal events and notifies the mobile terminal.
  • The specific implementation of the artificial intelligence module is divided into two parts: the establishment of the video anomaly detection model and the application of the video anomaly detection model.
  • The first is the establishment of the video anomaly detection model, which has three parts.
  • The first part is the training video data set of the video anomaly detection model, used for subsequent machine training and learning. It includes video data of various abnormal scenes, such as vehicles frequently cutting in, robbery, trailing theft, fights, group fights, screams, crying, smoke, noisy scenes, and other abnormal scenes that need to be detected.
  • the training video dataset covers most application scenarios.
  • The second part is the preprocessing of the video data set.
  • Ten frames per second are extracted from the video data, and each frame is converted into a picture 255 pixels long and 255 pixels wide.
  • The third part is the establishment of the training model, using an artificial intelligence convolution algorithm and Python code to build the training model.
  • the model includes an input layer, a hidden layer, and an output layer.
  • The input layer receives the preprocessed pictures.
  • The hidden layer is used to calculate the features of the input picture.
  • The output layer outputs, based on the features calculated by the hidden layer, whether the video contains abnormal scenes.
  • the training process is: normal video is marked as 0, abnormal video is marked as 1, and then the abnormal video and normal video are input into the training system at the same time.
  • the model is transferred to the server, the data set is replaced with the video of the camera, and the model is run to detect whether the video of the camera is abnormal.
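The frame-sampling and labeling conventions described above (10 frames per second, 255×255 pictures, normal marked 0 and abnormal marked 1) can be sketched as a small preprocessing step. This is only a sketch of the data-preparation schedule under stated assumptions; the actual convolutional model and helper names are not from the patent:

```python
TARGET_SIZE = (255, 255)  # each extracted frame is resized to 255x255 pixels

def sample_frame_indices(total_frames, source_fps, samples_per_second=10):
    """Indices of the frames to extract when sampling 10 frames per second
    from a video recorded at source_fps frames per second."""
    duration = total_frames / source_fps
    n = int(duration * samples_per_second)
    step = source_fps / samples_per_second
    return [min(int(round(i * step)), total_frames - 1) for i in range(n)]

def label_clip(is_abnormal):
    """Normal video is marked 0, abnormal video is marked 1."""
    return 1 if is_abnormal else 0
```

For a one-second clip at 30 fps, every third frame is selected, yielding ten samples, which are then resized and fed into the training system together with their 0/1 labels.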
  • FIG. 2 is a schematic flowchart of a video linkage monitoring method according to an embodiment of the present invention.
  • the video linkage monitoring method S200 includes:
  • The target camera is any camera in the camera group. It can be understood that the word "target" in "target camera" is used only to distinguish it from the other cameras.
  • The monitoring server selects the video data of a specific camera in the camera group for detection and analysis; that specific camera is then the target camera, and the video data collected by the target camera is the target video data.
  • The "target" in "target camera" is not used to limit the protection scope of the present invention, but only for differentiation.
  • the video detection abnormal model is pre-built by the administrator and stored in the monitoring server.
  • the video detection abnormal model is used to evaluate whether the target video data needs to be processed in a targeted manner.
  • When constructing the video detection anomaly model, the monitoring server first obtains a training video data set.
  • the training video data set includes video data of multiple abnormal scenes.
  • The video data set for training the video anomaly detection model is used for subsequent machine training and learning. It includes video data of various abnormal scenes, such as vehicles frequently cutting in, robbery, trailing theft, fights, group fights, screams, crying, smoke, noisy scenes, and other abnormal scenes that need to be detected.
  • the training video dataset covers most application scenarios.
  • the monitoring server preprocesses the video data of various abnormal scenes.
  • Ten frames per second are extracted from the video data, and each frame is converted into a picture with a length of 255 pixels and a width of 255 pixels.
  • The monitoring server processes the preprocessed video data through a convolution algorithm to establish the video detection abnormal model, for example, by building the training model in Python using an artificial intelligence convolution algorithm.
  • the model includes an input layer, a hidden layer, and an output layer.
  • The input layer receives the preprocessed pictures.
  • The hidden layer is used to calculate the features of the input picture.
  • The output layer outputs, based on the features calculated by the hidden layer, whether the video contains abnormal scenes.
  • the training process is: normal video is marked as 0, abnormal video is marked as 1, and then the abnormal video and normal video are input into the training system at the same time. Through the data set preprocessing and the calculation of the training model, it is distinguished whether the video is abnormal or not.
  • If the target video data matches the preset video detection abnormal model, the target person is detected from the target video data, and it is determined whether the target video data includes a frontal image of the target person.
  • The frontal image includes the face image of the target person.
  • The target person is located in the preset area.
  • The monitoring server detects the target person from the target video data according to an image analysis algorithm. For example, A follows B and pickpockets B; the camera captures A's trailing behavior and sends video data containing A's trailing behavior to the monitoring server.
  • The monitoring server detects A's trailing behavior, takes that video data as the target video data, and identifies A as the target person from the data using the image analysis algorithm.
  • The monitoring server determines whether the target video data contains facial feature points associated with the target person. If they exist, the target video data is considered to include a frontal image of the target person; if they do not exist, the target video data is considered not to include a frontal image of the target person, but only a back image of the target person. For example, following the above example, if the monitoring server detects A's face image in the target video data, the target camera is considered to have captured A's frontal image; if the monitoring server does not detect A's face image in the target video data, the target camera is considered to have captured A's back image.
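The frontal-image decision above reduces to a simple predicate over the detected facial feature points. This is a minimal sketch under the assumption that some upstream face detector has already produced per-frame feature-point lists; the function name and the `min_points` threshold are illustrative assumptions, not details from the patent:

```python
def includes_frontal_image(feature_points_per_frame, min_points=5):
    """Treat the target video data as including a frontal image of the
    target person if any frame contains enough facial feature points
    associated with that person; otherwise only a back image is assumed."""
    return any(len(points) >= min_points for points in feature_points_per_frame)
```

If no frame reaches the threshold, the server falls through to the opposite-camera branch described next.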
  • If the target video data does not include a frontal image of the target person, an additional camera set opposite to the target camera is detected, and the additional camera is controlled to track the person and capture a frontal image of the person.
  • When the monitoring server detects that the target video data does not include a frontal image of the target person, it determines the current geographic location of the target person.
  • The monitoring server detects, according to the current geographic location of the target person, all additional cameras covering that location, determines the installation locations of all the additional cameras, and from these installation locations determines the additional camera installed opposite the target camera.
  • The surveillance server controls the additional camera installed opposite the target camera to track the person and capture a frontal image of the person.
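The geographic selection of the opposite camera can be sketched as a small geometric check: among the candidate cameras covering the person's location, pick the one whose direction from the person points most nearly away from the target camera. The 2D coordinates, the dot-product criterion, and the names below are assumptions for illustration, not details from the patent:

```python
import math

def pick_opposite_camera(person_pos, target_cam_pos, candidates):
    """Choose the candidate camera whose direction from the person is most
    nearly opposite the target camera's direction, i.e. the camera best
    placed to see the person's front when the target camera sees the back.
    candidates: dict mapping camera id -> (x, y) installation position."""
    def unit(vec):
        norm = math.hypot(vec[0], vec[1]) or 1.0
        return (vec[0] / norm, vec[1] / norm)

    to_target = unit((target_cam_pos[0] - person_pos[0],
                      target_cam_pos[1] - person_pos[1]))
    best_id, best_dot = None, 2.0  # dot products lie in [-1, 1]
    for cam_id, pos in candidates.items():
        to_cam = unit((pos[0] - person_pos[0], pos[1] - person_pos[1]))
        dot = to_target[0] * to_cam[0] + to_target[1] * to_cam[1]
        if dot < best_dot:  # most negative dot product = most opposite
            best_id, best_dot = cam_id, dot
    return best_id
```

For a person at the origin with the target camera due north, a candidate due south is selected over one due east, since its direction vector is exactly opposite.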
  • If the target video data does not match the preset video detection abnormal model, the target video data is discarded, and detection continues as to whether the next target video data collected by the target camera matches the preset video detection abnormal model.
  • If the target video data includes a frontal image of the target person, the target camera is controlled to track the target person.
  • the method provided in the embodiment of the present invention can photograph the front and back of the target person in all directions, thereby bringing convenience and reducing unnecessary troubles for subsequent analysis of the target person.
  • When the monitoring server detects additional cameras set opposite the target camera, first, the monitoring server obtains the light intensity in the preset area; for example, a light sensor set in the preset area collects the light intensity and transmits it to the monitoring server.
  • The monitoring server then judges whether the light intensity is greater than a preset intensity threshold. If it is greater, the server obtains the minimum illumination values of all the additional cameras set opposite the target camera and selects, from among them, the additional camera with the lowest minimum illumination value as the camera that tracks the person and captures the person's frontal image, so that the monitoring server obtains the person's frontal image in as high definition as possible. If it is not greater, an additional camera set opposite the target camera is simply detected.
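The light-intensity branch just described can be sketched as a small selection function. This mirrors the flow as stated (above the threshold, prefer the camera with the lowest minimum illumination value; otherwise use any opposite camera); the function name and the dict representation are assumptions for illustration:

```python
def choose_tracking_camera(light_intensity, intensity_threshold, opposite_cameras):
    """opposite_cameras: dict mapping camera id -> minimum illumination value.
    Per the described flow: when the measured light intensity exceeds the
    preset threshold, pick the opposite camera with the lowest minimum
    illumination value to capture the frontal image in high definition;
    otherwise, fall back to any detected opposite camera."""
    if not opposite_cameras:
        return None
    if light_intensity > intensity_threshold:
        return min(opposite_cameras, key=opposite_cameras.get)
    return next(iter(opposite_cameras))
```

In practice a camera's minimum illumination (in lux) comes from its datasheet; the lower the value, the better the camera images in dim light.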
  • an embodiment of the present invention provides a video linkage monitoring device applied to a monitoring server.
  • The monitoring server communicates with multiple cameras; each camera is installed at a different position in a preset area and is used to capture images of areas at different angles within the preset area.
  • the video linkage monitoring device according to the embodiment of the present invention can be used as one of the software functional units.
  • The video linkage monitoring device includes several instructions stored in a memory, and the processor can access the memory and call the instructions for execution to complete the video linkage monitoring method.
  • the video linkage monitoring device 300 includes a first detection module 31, a second detection module 32, and a third detection module 33.
  • the first detection module 31 is configured to detect whether the target video data collected by the target camera matches a preset video detection abnormal model
  • the second detection module 32 is configured to detect a target person from the target video data if the target video data matches a preset video detection abnormal model, and determine whether the target video data includes a front image of the target person, The front image includes a face image of the target person, and the target person is located in the preset area;
  • The third detection module 33 is configured to, if the target video data does not include a frontal image of the target person, detect an additional camera set opposite to the target camera, and control the additional camera to track the person and capture a frontal image of the person.
  • the method provided in the embodiment of the present invention can photograph the front and back of the target person in all directions, thereby bringing convenience and reducing unnecessary troubles for subsequent analysis of the target person.
  • the video linkage monitoring device 300 further includes a discarding module 34.
  • the discarding module 34 is configured to discard the target video data if the target video data does not match the preset video detection abnormal model, and continue to detect whether the next target video data collected by the target camera matches the preset video detection abnormal model.
  • the above-mentioned video linkage monitoring device can execute the video linkage monitoring method provided by the embodiment of the present invention, and has corresponding function modules and beneficial effects of the execution method.
  • the video linkage monitoring method provided in the embodiment of the present invention.
  • an embodiment of the present invention provides a monitoring server.
  • the monitoring server 500 includes: one or more processors 51 and a memory 52. Among them, one processor 51 is taken as an example in FIG. 5.
  • the processor 51 and the memory 52 may be connected through a bus or in other manners.
  • the connection through the bus is taken as an example.
  • The memory 52, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the video linkage monitoring method in the embodiment of the present invention.
  • The processor 51 executes the various functional applications and data processing of the video linkage monitoring device by running the non-volatile software programs, instructions, and modules stored in the memory 52, that is, implements the video linkage monitoring method of the foregoing method embodiment and the functions of each module of the foregoing device embodiment.
  • the memory 52 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 52 may optionally include memory remotely set with respect to the processor 51, and these remote memories may be connected to the processor 51 through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • The program instructions/modules are stored in the memory 52 and, when executed by the one or more processors 51, perform the video linkage monitoring method in any of the foregoing method embodiments, for example, performing each step of FIG. 2 described above; the functions of each module described in FIG. 3 and FIG. 4 can also be realized.
  • An embodiment of the present invention also provides a non-volatile computer storage medium.
  • The computer storage medium stores computer-executable instructions, which are executed by one or more processors, such as the processor 51 in FIG. 5, to enable the one or more processors to perform the video linkage monitoring method in any of the foregoing method embodiments, for example, performing each step shown in FIG. 2 described above; the functions of each module described in FIG. 3 and FIG. 4 may also be implemented.
  • The embodiments of the device described above are only schematic. The units described as separate components may or may not be physically separated, and the components displayed as module units may or may not be physical units; they can be located in one place or distributed across multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)

Abstract

The present invention relates to the technical field of video surveillance, and in particular to a video linkage monitoring method, a monitoring server, and a video linkage monitoring system. The method comprises: detecting whether target video data captured by a target camera matches a preset video-detection anomaly model; if the target video data matches the preset video-detection anomaly model, detecting a target person in the target video data and determining whether the target video data includes a frontal image of the target person, the frontal image comprising a face image of the target person, the target person being located in a preset area; and, if the target video data does not include the frontal image of the target person, identifying an additional camera facing opposite the target camera and controlling the additional camera to track the person and capture the frontal image of the person. The front and back of the target person can thus be photographed in all directions, facilitating subsequent analysis of the target person and reducing unnecessary trouble.
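The linkage flow summarized in the abstract can be sketched in Python as follows. This is a minimal illustration, not the patented implementation: the helper callables (`matches_anomaly_model`, `find_person`, `has_frontal_face`, `track_and_capture`) and the heading-based test for an "opposite" camera are assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    camera_id: str
    facing_deg: float  # compass heading of the lens, in degrees

def _angle_between(a_deg: float, b_deg: float) -> float:
    """Smallest absolute angle between two headings, in [0, 180]."""
    return abs((a_deg - b_deg + 180.0) % 360.0 - 180.0)

def is_opposite(a: Camera, b: Camera, tolerance_deg: float = 30.0) -> bool:
    """True if camera b faces roughly opposite (about 180 degrees) to camera a."""
    return _angle_between(a.facing_deg, b.facing_deg) >= 180.0 - tolerance_deg

def linkage_monitor(target_cam, video, cameras,
                    matches_anomaly_model, find_person,
                    has_frontal_face, track_and_capture):
    """One pass of the flow from the abstract: anomaly check -> person
    detection -> frontal-image check -> hand-off to an opposite-facing
    auxiliary camera when no frontal image is available."""
    if not matches_anomaly_model(video):
        return None                      # no anomaly: no linkage needed
    person = find_person(video)
    if person is None or has_frontal_face(video, person):
        return None                      # frontal image already captured
    for cam in cameras:
        if cam.camera_id != target_cam.camera_id and is_opposite(target_cam, cam):
            return track_and_capture(cam, person)  # auxiliary camera takes over
    return None                          # no opposite-facing camera found
```

In use, the four callables would wrap the actual detection models and PTZ control of the monitoring server; here they are stand-ins so the control flow itself can be exercised.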
PCT/CN2019/103833 2018-09-21 2019-08-30 Three-dimensional modeling method and device WO2020057355A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811109513.0 2018-09-21
CN201811109513.0A CN109241933A (zh) 2018-09-21 2018-09-21 Video linkage monitoring method, monitoring server, and video linkage monitoring system

Publications (1)

Publication Number Publication Date
WO2020057355A1 (fr) 2020-03-26

Family

ID=65056678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103833 WO2020057355A1 (fr) 2018-09-21 2019-08-30 Three-dimensional modeling method and device

Country Status (2)

Country Link
CN (1) CN109241933A (fr)
WO (1) WO2020057355A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553264A (zh) * 2020-04-27 2020-08-18 中国科学技术大学先进技术研究院 Campus unsafe-behavior detection and early-warning method for primary and secondary school students
CN112329849A (zh) * 2020-11-04 2021-02-05 中冶赛迪重庆信息技术有限公司 Machine-vision-based method, medium, and terminal for identifying the unloading state of a scrap steel yard
CN114640764A (zh) * 2022-03-01 2022-06-17 深圳市安软慧视科技有限公司 Target detection method and system based on a surveillance-deployment platform, and related device

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241933A (zh) 2018-09-21 2019-01-18 深圳市九洲电器有限公司 Video linkage monitoring method, monitoring server, and video linkage monitoring system
CN109922311B (zh) * 2019-02-12 2022-01-28 平安科技(深圳)有限公司 Monitoring method, apparatus, terminal, and storage medium based on audio-video linkage
CN111914588A (zh) * 2019-05-07 2020-11-10 杭州海康威视数字技术股份有限公司 Monitoring method and system
CN110210461B (zh) * 2019-06-27 2021-03-05 北京澎思科技有限公司 Multi-view collaborative abnormal-behavior detection method based on a camera grid
CN112492261A (zh) * 2019-09-12 2021-03-12 华为技术有限公司 Tracking shooting method and apparatus, and monitoring system
CN110866692A (zh) * 2019-11-14 2020-03-06 北京明略软件系统有限公司 Early-warning information generation method, generation apparatus, and readable storage medium
CN110929633A (zh) * 2019-11-19 2020-03-27 公安部第三研究所 Method for detecting anomalies of tobacco-related vehicles based on a small data set
CN111242008B (zh) * 2020-01-10 2024-04-12 河南讯飞智元信息科技有限公司 Fight event detection method, related device, and readable storage medium
CN113552123A (zh) * 2020-04-17 2021-10-26 华为技术有限公司 Visual detection method and visual detection apparatus
CN111931564A (zh) * 2020-06-29 2020-11-13 北京大学 Target tracking method and apparatus based on face recognition
CN112034758B (zh) * 2020-08-31 2021-11-30 成都市达岸信息技术有限公司 Low-power-consumption multifunctional Internet-of-Things security monitoring device and system
CN112437255A (zh) * 2020-11-04 2021-03-02 中广核工程有限公司 Intelligent video surveillance system and method for a nuclear power plant
CN116996665B (zh) * 2023-09-28 2024-01-26 深圳天健电子科技有限公司 Intelligent monitoring method, apparatus, device, and storage medium based on the Internet of Things

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000092368A (ja) * 1998-09-09 2000-03-31 Canon Inc Camera control device and computer-readable storage medium
CN102254169A (zh) * 2011-08-23 2011-11-23 东北大学秦皇岛分校 Multi-camera-based face recognition method and system
CN103237192A (zh) * 2012-08-20 2013-08-07 苏州大学 Intelligent video surveillance system based on multi-camera data fusion
CN103268680A (zh) * 2013-05-29 2013-08-28 北京航空航天大学 Intelligent home monitoring and anti-theft system
CN109241933A (zh) * 2018-09-21 2019-01-18 深圳市九洲电器有限公司 Video linkage monitoring method, monitoring server, and video linkage monitoring system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101572804B (zh) * 2009-03-30 2012-03-21 浙江大学 Multi-camera intelligent control method and device
CN102480593B (zh) * 2010-11-25 2014-04-16 杭州华三通信技术有限公司 Dual-lens camera switching method and device
CN101999888B (zh) * 2010-12-01 2012-07-25 北京航空航天大学 Epidemic prevention and control system for detecting and searching for persons with abnormal body temperature
CN104079885A (zh) * 2014-07-07 2014-10-01 广州美电贝尔电业科技有限公司 Unattended linkage-tracking network camera method and device
WO2016153479A1 (fr) * 2015-03-23 2016-09-29 Longsand Limited Facial scanning of video streams
CN105913037A (zh) * 2016-04-26 2016-08-31 广东技术师范学院 Monitoring and tracking system based on face recognition and radio-frequency identification
CN107592507A (zh) * 2017-09-29 2018-01-16 深圳市置辰海信科技有限公司 Method for automatically tracking and capturing high-definition frontal face photographs
CN108419014B (zh) * 2018-03-20 2020-02-21 北京天睿空间科技股份有限公司 Method for capturing faces through linkage of a panoramic camera and multiple capture cameras
CN108446630B (зh) * 2018-03-20 2019-12-31 平安科技(深圳)有限公司 Intelligent airport-runway monitoring method, application server, and computer storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553264A (zh) * 2020-04-27 2020-08-18 中国科学技术大学先进技术研究院 Campus unsafe-behavior detection and early-warning method for primary and secondary school students
CN111553264B (zh) * 2020-04-27 2023-04-18 中科永安(安徽)科技有限公司 Campus unsafe-behavior detection and early-warning method for primary and secondary school students
CN112329849A (zh) * 2020-11-04 2021-02-05 中冶赛迪重庆信息技术有限公司 Machine-vision-based method, medium, and terminal for identifying the unloading state of a scrap steel yard
CN114640764A (zh) * 2022-03-01 2022-06-17 深圳市安软慧视科技有限公司 Target detection method and system based on a surveillance-deployment platform, and related device

Also Published As

Publication number Publication date
CN109241933A (zh) 2019-01-18

Similar Documents

Publication Publication Date Title
WO2020057355A1 (fr) Three-dimensional modeling method and device
WO2020057346A1 (fr) Video surveillance method and apparatus, monitoring server, and video surveillance system
WO2020057353A1 (fr) Object tracking method based on a high-speed dome camera, monitoring server, and video surveillance system
WO2020094091A1 (fr) Image capture method, surveillance camera, and surveillance system
JP6127152B2 (ja) Security monitoring system and corresponding alarm triggering method
JP5213105B2 (ja) Video network system and video data management method
WO2020029921A1 (fr) Monitoring method and device
KR101425505B1 (ко) Surveillance method for an intelligent perimeter system using object recognition technology
US20160142680A1 (en) Image processing apparatus, image processing method, and storage medium
CN101860679A (zh) Image capture method and digital camera
US9521377B2 (en) Motion detection method and device using the same
CN103929592A (zh) Omnidirectional intelligent monitoring device and method
US10657783B2 (en) Video surveillance method based on object detection and system thereof
KR101442669B1 (ko) Method and apparatus for identifying criminal behavior through intelligent object detection
CN110191324B (zh) Image processing method, apparatus, server, and storage medium
CN110267011B (zh) Image processing method, apparatus, server, and storage medium
CN108197614A (zh) Examination-room surveillance camera and system based on face recognition technology
KR102077632B1 (ko) Hybrid intelligent intrusion surveillance system using local video analysis and cloud services
CN111800604A (zh) Method and device for detecting human-figure and face data based on bullet-dome camera linkage
CN114491466B (zh) Intelligent training system based on private cloud technology
CN113660455B (zh) Fall detection method, system, and terminal based on DVS data
JP2022189835A (ja) Imaging device
KR20230173667A (ko) Shutter value adjustment of a surveillance camera through AI-based object recognition
US11509818B2 (en) Intelligent photography with machine learning
TWI448976B (zh) Ultra-wide-angle image processing method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19863520

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19863520

Country of ref document: EP

Kind code of ref document: A1