CN111541872A - Network camera decoding method based on OpenCV - Google Patents

Network camera decoding method based on OpenCV Download PDF

Info

Publication number
CN111541872A
CN111541872A CN202010309248.1A CN202010309248A CN111541872A CN 111541872 A CN111541872 A CN 111541872A CN 202010309248 A CN202010309248 A CN 202010309248A CN 111541872 A CN111541872 A CN 111541872A
Authority
CN
China
Prior art keywords
pedestrian
height
network camera
real
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010309248.1A
Other languages
Chinese (zh)
Other versions
CN111541872B (en
Inventor
廖兴旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Ruis Technology Co ltd
Original Assignee
Fujian Ruis Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Ruis Technology Co ltd filed Critical Fujian Ruis Technology Co ltd
Priority to CN202010309248.1A priority Critical patent/CN111541872B/en
Publication of CN111541872A publication Critical patent/CN111541872A/en
Application granted granted Critical
Publication of CN111541872B publication Critical patent/CN111541872B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1072Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1079Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/65Control of camera operation in relation to power supply
    • H04N23/651Control of camera operation in relation to power supply for reducing power consumption by affecting camera operations, e.g. sleep mode, hibernation mode or power off of selective parts of the camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Dentistry (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a network camera decoding method based on OpenCV, which relates to the field of security and comprises the following steps: firstly, responding to a first barrier detected by a light curtain module, sending a starting instruction to a network camera corresponding to the light curtain module, and sequentially acquiring each frame of first image of the network camera in real time by a cloud platform based on OpenCV; then, identifying a first fixed point on the top of the head of the pedestrian in the first image, controlling the shooting direction of the network camera to point to the first fixed point, and acquiring the shooting angle of the network camera in real time; the shooting angle is an included angle between the network camera and the vertical direction; then, acquiring a first real-time shooting angle and a second real-time shooting angle to obtain the height of the pedestrian, and solving the abdominal thickness of the pedestrian; and finally, storing a first video formed by the first image of each frame, and marking the height, the abdominal thickness and the passing time stamp of the pedestrian. The method and the device effectively analyze the shooting video of the network camera through the OpenCV so as to obtain the required data through image processing.

Description

Network camera decoding method based on OpenCV
Technical Field
The invention relates to the field of intelligent security, in particular to a network camera decoding method based on OpenCV.
Background
OpenCV is a BSD license (open source) based distributed cross-platform computer vision library that can run on Linux, Windows, Android, and Mac OS operating systems. The method is light and efficient, is composed of a series of C functions and a small number of C + + classes, provides interfaces of languages such as Python, Ruby, MATLAB and the like, and realizes a plurality of general algorithms in the aspects of image processing and computer vision.
OpenCV is written in C + + language, and its main interface is also C + + language, but a large amount of C language interfaces are still reserved. The library also has a number of Python, Java and MATLAB/OCTAVE (version 2.5) interfaces. The API interface functions for these languages are available through online documentation. Support for C #, Ch, Ruby, GO is also provided today.
All new developments and algorithms are using the C + + interface. A GPU interface using CUDA was also implemented at 9 months 2010.
With the development of society, the requirements on video monitoring of city safety and road traffic safety are higher and higher, and the number of video monitoring points is exponentially increased according to the practical application requirements.
In the prior art, the camera is normally opened, and actually, no person passes through the common area in a typical office for a long time for camera monitoring, and unnecessary power consumption is caused when the camera is normally opened.
In addition, in the prior art, the pedestrian is generally subjected to portrait recognition or image processing to solve the height of the pedestrian, and the research on the aspect of the posture of the pedestrian is less.
Disclosure of Invention
In view of a part of defects in the prior art, the technical problem to be solved by the present invention is to provide a network camera decoding method based on OpenCV, which solves the problem in the prior art that no person passes through and the camera is normally open, and aims to detect a pedestrian through a light curtain and perform shooting when a pedestrian is present, so as to avoid long-time shooting of the camera, waste of the service life of the camera and power consumption of the camera.
In order to achieve the above object, the present invention provides a network camera decoding method based on OpenCV, where the method includes:
step S1, responding to the first barrier detected by the light curtain module, and sending a starting instruction to the network camera corresponding to the light curtain module; the light curtain module comprises a plurality of pairs of infrared geminate transistors which are arranged in parallel, and a light curtain surface formed by each pair of infrared geminate transistors is vertical to the ground; the light curtain module is used for detecting the passing of the pedestrian and the height of the pedestrian; the infrared pair transistors are divided into lower-section pair transistors, middle-section pair transistors and upper-section pair transistors according to the height; the lower section geminate transistors are mainly used for detecting leg and foot parts of the pedestrian, the middle section geminate transistors are mainly used for detecting abdomen and hip parts of the pedestrian, and the upper section geminate transistors are mainly used for detecting shoulder and head parts of the pedestrian;
step S2, in response to the network camera being started, the cloud platform sequentially collects each frame of first image of the network camera in real time based on the OpenCV;
step S3, the cloud platform analyzes the first image and carries out portrait recognition on the first image;
step S4, the cloud platform responds to the fact that a portrait is recognized in the first image, recognizes a first fixed point on the top of the head of the pedestrian in the first image, controls the shooting direction of the network camera to be directed at the first fixed point, and collects the shooting angle of the network camera in real time; the shooting angle is an included angle between the network camera and the vertical direction;
step S5, responding to the middle section geminate transistor detecting a middle section barrier with at least 100mm of height, collecting a first real-time shooting angle of the network camera
Figure BDA0002457039920000031
Step S6, responding to the exit of the middle section barrier detected by the middle section light, and collecting a second real-time shooting angle of the network camera
Figure BDA0002457039920000032
Step S7, responding to the first barrier detected by the light curtain module retreating, and sending a closing instruction to the network camera corresponding to the light curtain module;
step S8, receiving detection height data sent by the upper section geminate transistor of the light curtain module in the startup period of the network camera, and obtaining the height h of the pedestrian;
step S9, according to the first shooting angle
Figure BDA0002457039920000033
The second shooting angle
Figure BDA0002457039920000034
The height h is obtained by solving the abdomen thickness D of the pedestrian; the thickness of abdomen
Figure BDA0002457039920000035
The H is the height of the shooting assembly;
and step S10, storing a first video formed by the first image of each frame, and marking the height h, the abdominal thickness D and the passing time stamp of the pedestrian.
In this technical scheme, the light curtain detects the pedestrian, then opens the camera, and the light curtain does not detect the pedestrian then does not open the camera, based on this, can effectively improve camera life and using electricity wisely. According to the technical scheme, the shooting parameters are obtained through the camera according to the head of the user, the posture of the pedestrian is solved according to the height information of the user, and accurate posture data of the user are effectively obtained, so that the pedestrian data can be tracked and monitored later.
In a specific embodiment, the step S8 further includes:
and acquiring the real-time height of the barrier acquired by the light curtain module in real time, acquiring the maximum value in the real-time heights, and taking the maximum value as the height h of the pedestrian.
In a specific embodiment, the step S8 further includes:
and according to whether the hand-lifting action of the pedestrian is detected in the first real-time video or not, responding to the detection that the pedestrian has the hand-lifting action, and removing the real-time height of the time node.
In a specific embodiment, the step S8 further includes:
according to the step identification of the pedestrian in the first real-time video, according to the distance between the front foot and the rear foot of the pedestrianDistance S, correcting the real-time height of the pedestrian
Figure BDA0002457039920000041
α is the ratio of leg length to height, α is a preset value, 0.45- α -618, and the real-time height
Figure BDA0002457039920000042
And substituting the pedestrian posture solving module into the solution of the abdomen thickness D of the pedestrian.
In this technical scheme, in order to avoid the influence of pedestrian's stride to pedestrian's actual height, so, still convert so that obtain the pedestrian's top of the head height when the belly gets into the light curtain, based on this, can improve the precision of follow-up abdomen thickness solution.
In a specific embodiment, in a test stage, according to a corresponding relation between a distance S between front and rear feet of a pedestrian in a shot video image and a distance of a pixel point of the shot image; and in the application stage, reversely solving the distance S between the front foot and the rear foot of the pedestrian according to the pixel point distance of the shot image.
In a specific embodiment, the mid-section tube detects an obstruction greater than 100mm in height.
In a specific embodiment, the height of the lower pair of tubes ranges from 0 to 0.6m, the height of the middle pair of tubes ranges from 0.4m to 1.1m, and the height of the upper pair of tubes ranges from 1.0m to 2 m.
The invention has the beneficial effects that: 1) according to the invention, the camera is started when the light curtain detects the pedestrian, and the camera is not started when the light curtain does not detect the pedestrian. 2) According to the invention, the shooting parameters are obtained through the camera according to the head of the user, the pedestrian posture is solved according to the height information of the user, and more accurate user posture data are effectively obtained, so that the pedestrian data can be tracked and monitored later. 3) In the invention, in order to avoid the influence of the pedestrian striding on the actual height of the pedestrian, the height is converted so as to obtain the head height of the pedestrian when the belly enters the light curtain, and on the basis, the precision of the follow-up belly thickness solving can be improved.
Drawings
Fig. 1 is a schematic flowchart of a network camera decoding method based on OpenCV in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a solution for abdominal thickness in an embodiment of the present invention;
FIG. 3 is a schematic view of a light curtain according to an embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
as shown in fig. 1 to 3, in a first embodiment of the present invention, there is provided an OpenCV-based web camera 200 decoding method, including:
step S1, in response to the light curtain module 100 detecting the first obstacle, sending an opening instruction to the network camera 200 corresponding to the light curtain module 100; the light curtain module 100 includes a plurality of pairs of infrared pair tubes arranged in parallel, and a light curtain surface formed by each pair of infrared pair tubes is perpendicular to the ground; the light curtain module 100 is used for detecting the passing of a pedestrian and the height of the pedestrian; the infrared pair transistors are divided into a lower section pair transistor 130, a middle section pair transistor 120 and an upper section pair transistor 110 according to the height; the lower pair of tubes 130 are mainly used for detecting the leg and foot parts of the pedestrian, the middle pair of tubes 120 are mainly used for detecting the abdomen and hip parts of the pedestrian, and the upper pair of tubes 110 are mainly used for detecting the shoulder and head parts of the pedestrian;
step S2, in response to the network camera 200 being turned on, the cloud platform sequentially acquires, in real time, each frame of first image of the network camera 200 based on the OpenCV;
step S3, the cloud platform analyzes the first image and carries out portrait recognition on the first image;
step S4, in response to the identification of the portrait in the first image, the cloud platform identifies a first fixed point on the top of the head of the pedestrian in the first image, controls the shooting direction of the network camera 200 to point to the first fixed point, and collects the shooting angle of the network camera 200 in real time; the shooting angle is an included angle between the network camera 200 and the vertical direction;
step S5, in response to the detection of the middle section obstacle with at least 100mm height by the middle section geminate transistor 120, acquiring a first real-time shooting angle of the network camera 200
Figure BDA0002457039920000061
Step S6, collecting a second real-time shooting angle of the network camera 200 in response to the exit of the middle-section obstacle detected by the middle-section light
Figure BDA0002457039920000062
Step S7, sending a closing instruction to the webcam 200 corresponding to the light curtain module 100 in response to the light curtain module 100 detecting that the first obstacle has retreated;
step S8, receiving the detected height data of the webcam 200 sent by the upper tube 110 of the light curtain module 100 during the startup period, and obtaining the height h of the pedestrian;
step S9, according to the first shooting angle
Figure BDA0002457039920000063
The second shooting angle
Figure BDA0002457039920000064
The height h is obtained by solving the abdomen thickness D of the pedestrian; the thickness of abdomen
Figure BDA0002457039920000065
The H is the height of the shooting assembly;
and step S10, storing a first video formed by the first image of each frame, and marking the height h, the abdominal thickness D and the passing time stamp of the pedestrian.
In the walking process of the pedestrian, the front feet can reach the light curtain before the abdomen and the head, so that the front feet contact the light curtain when the lower segment detects an obstacle to light; meanwhile, the pedestrian passes through the light curtain, and the rear foot finally detects that the pedestrian is separated from the light curtain; therefore, the camera is opened to acquire the whole process that the user enters the light curtain.
As shown in connection with fig. 2, the solution for the abdominal thickness D is derived as follows:
D=D1+D2
Figure BDA0002457039920000067
Figure BDA0002457039920000066
the L is a distance between the light curtain module 100100 and a plumb line where the shooting component 210 is located;
thus, it is possible to obtain:
Figure BDA0002457039920000071
in fact, in the process of solving the abdominal thickness, the height used is the height of the pedestrian in the light curtain at the moment, and the abdominal thickness can be effectively solved based on the height.
Because the person has a striding behavior in the walking process, when both feet land and stride, the height of the top of the head is lower than the height of the person; and when the user lifts the foot and takes a step, the height of the head with the maximum height is the height. Therefore, in this embodiment, the step S8 further includes:
and acquiring the real-time height of the barrier acquired by the light curtain module 100 in real time, acquiring the maximum value in the real-time heights, and taking the maximum value as the height h of the pedestrian.
It is worth mentioning that when the pedestrian walks, there may be a hand-lifting behavior, and at this time, in order to avoid the influence on the height detection, the height detection data of the corresponding time node should be removed. In this embodiment, the step S8 further includes:
and according to whether the hand-lifting action of the pedestrian is detected in the first real-time video or not, responding to the detection that the pedestrian has the hand-lifting action, and removing the real-time height of the time node.
In the embodiment, the entrance and exit of the abdomen are detected through the middle-section infrared pair tubes, and when the human body of the user is straight, the height of the top of the head of the user is closer to the actual height.
In a special scene, a pedestrian is in a striding state when entering or exiting the light curtain, the height of the top of the head is lower than the height of the pedestrian, and the actual height of the top of the head of the pedestrian is corrected through the length of striding; therefore, in this embodiment, the step S8 further includes:
correcting the real-time height of the pedestrian according to the step identification of the pedestrian in the first real-time video and the distance S between the front foot and the rear foot of the pedestrian
Figure BDA0002457039920000072
α is the ratio of leg length to height, α is a preset value, 0.45- α -618, and the real-time height
Figure BDA0002457039920000073
And substituting the pedestrian posture solving module into the solution of the abdomen thickness D of the pedestrian.
In the embodiment, in the test stage, according to the corresponding relationship between the distance S between the front foot and the rear foot of the pedestrian in the shot video image and the pixel point distance of the shot image; and in the application stage, reversely solving the distance S between the front foot and the rear foot of the pedestrian according to the pixel point distance of the shot image.
In this embodiment, the middle section of the tube 120 detects an obstacle greater than 100mm in height.
In this embodiment, the height of the lower pair of tubes 130 ranges from 0 to 0.6m, the height of the middle pair of tubes 120 ranges from 0.4m to 1.1m, and the height of the upper pair of tubes 110 ranges from 1.0m to 2 m.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (7)

1. An OpenCV-based web camera decoding method, the method comprising:
step S1, responding to the first barrier detected by the light curtain module, and sending a starting instruction to the network camera corresponding to the light curtain module; the light curtain module comprises a plurality of pairs of infrared geminate transistors which are arranged in parallel, and a light curtain surface formed by each pair of infrared geminate transistors is vertical to the ground; the light curtain module is used for detecting the passing of the pedestrian and the height of the pedestrian; the infrared pair transistors are divided into lower-section pair transistors, middle-section pair transistors and upper-section pair transistors according to the height; the lower section geminate transistors are mainly used for detecting leg and foot parts of the pedestrian, the middle section geminate transistors are mainly used for detecting abdomen and hip parts of the pedestrian, and the upper section geminate transistors are mainly used for detecting shoulder and head parts of the pedestrian;
step S2, in response to the network camera being started, the cloud platform sequentially collects each frame of first image of the network camera in real time based on the OpenCV;
step S3, the cloud platform analyzes the first image and carries out portrait recognition on the first image;
step S4, the cloud platform responds to the fact that a portrait is recognized in the first image, recognizes a first fixed point on the top of the head of the pedestrian in the first image, controls the shooting direction of the network camera to be directed at the first fixed point, and collects the shooting angle of the network camera in real time; the shooting angle is an included angle between the network camera and the vertical direction;
step S5, responding to the middle section geminate transistor detecting a middle section barrier with at least 100mm of height, collecting a first real-time shooting angle of the network camera
Figure FDA0002457039910000011
Step S6, responding to the exit of the middle section barrier detected by the middle section light, and collecting a second real-time shooting angle of the network camera
Figure FDA0002457039910000012
Step S7, responding to the first barrier detected by the light curtain module retreating, and sending a closing instruction to the network camera corresponding to the light curtain module;
step S8, receiving detection height data sent by the upper section geminate transistor of the light curtain module in the startup period of the network camera, and obtaining the height h of the pedestrian;
step S9, according to the first shooting angle
Figure FDA0002457039910000021
The second shooting angle
Figure FDA0002457039910000022
The height h is obtained by solving the abdomen thickness D of the pedestrian; the thickness of abdomen
Figure FDA0002457039910000023
The H is the height of the shooting assembly;
and step S10, storing a first video formed by the first image of each frame, and marking the height h, the abdominal thickness D and the passing time stamp of the pedestrian.
2. The OpenCV-based web camera decoding method according to claim 1, wherein the step S8 further includes:
and acquiring the real-time height of the barrier acquired by the light curtain module in real time, acquiring the maximum value in the real-time heights, and taking the maximum value as the height h of the pedestrian.
3. The OpenCV-based web camera decoding method according to claim 1, wherein the step S8 further includes:
and according to whether the hand-lifting action of the pedestrian is detected in the first real-time video or not, responding to the detection that the pedestrian has the hand-lifting action, and removing the real-time height of the time node.
4. The OpenCV-based web camera decoding method according to claim 1, wherein the step S8 further includes:
correcting the real-time height of the pedestrian according to the step identification of the pedestrian in the first real-time video and the distance S between the front foot and the rear foot of the pedestrian
Figure FDA0002457039910000024
α is the ratio of leg length to height, α is a preset value, 0.45- α -618, and the real-time height
Figure FDA0002457039910000025
And substituting the pedestrian posture solving module into the solution of the abdomen thickness D of the pedestrian.
5. The OpenCV-based network camera decoding method according to claim 4, wherein in a test phase, according to a correspondence between a distance S between front and rear feet of a pedestrian in a shot video image and a distance between pixels of the shot image; and in the application stage, reversely solving the distance S between the front foot and the rear foot of the pedestrian according to the pixel point distance of the shot image.
6. The OpenCV-based webcam decoding method of claim 1, wherein the mid-section pair tube detects an obstacle with a height greater than 100 mm.
7. The OpenCV-based webcam decoding method of claim 1, wherein the height of the lower pair of tubes ranges from 0 to 0.6m, the height of the middle pair of tubes ranges from 0.4m to 1.1m, and the height of the upper pair of tubes ranges from 1.0m to 2 m.
CN202010309248.1A 2020-04-20 2020-04-20 Network camera decoding method based on OpenCV Active CN111541872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010309248.1A CN111541872B (en) 2020-04-20 2020-04-20 Network camera decoding method based on OpenCV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010309248.1A CN111541872B (en) 2020-04-20 2020-04-20 Network camera decoding method based on OpenCV

Publications (2)

Publication Number Publication Date
CN111541872A true CN111541872A (en) 2020-08-14
CN111541872B CN111541872B (en) 2021-04-20

Family

ID=71977026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010309248.1A Active CN111541872B (en) 2020-04-20 2020-04-20 Network camera decoding method based on OpenCV

Country Status (1)

Country Link
CN (1) CN111541872B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012123585A (en) * 2010-12-08 2012-06-28 Omron Corp Display control unit and vending machine
CN204481960U (en) * 2015-02-09 2015-07-15 桂林电子科技大学 A kind of visual analysis people flow rate statistical equipment based on laser detection line
WO2018106608A1 (en) * 2016-12-05 2018-06-14 Ring Inc. Passing vehicle filters for audio/video recording and communication devices
CN108319204A (en) * 2018-03-22 2018-07-24 京东方科技集团股份有限公司 Intelligent control method and system
CN110327604A (en) * 2019-06-26 2019-10-15 唐山师范学院 Callisthenics intelligent training device
CN110992545A (en) * 2019-12-12 2020-04-10 广州新科佳都科技有限公司 Pat formula fan door floodgate machine access system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012123585A (en) * 2010-12-08 2012-06-28 Omron Corp Display control unit and vending machine
CN204481960U (en) * 2015-02-09 2015-07-15 桂林电子科技大学 A kind of visual analysis people flow rate statistical equipment based on laser detection line
WO2018106608A1 (en) * 2016-12-05 2018-06-14 Ring Inc. Passing vehicle filters for audio/video recording and communication devices
CN108319204A (en) * 2018-03-22 2018-07-24 京东方科技集团股份有限公司 Intelligent control method and system
CN110327604A (en) * 2019-06-26 2019-10-15 唐山师范学院 Callisthenics intelligent training device
CN110992545A (en) * 2019-12-12 2020-04-10 广州新科佳都科技有限公司 Pat formula fan door floodgate machine access system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
沈庆虎: "远程视频监控与报警系统的设计和实现", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Also Published As

Publication number Publication date
CN111541872B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
WO2021043073A1 (en) Urban pet movement trajectory monitoring method based on image recognition and related devices
EP2050042B1 (en) Pedestrian detection device and pedestrian detection method
CN110490902B (en) Target tracking method and device applied to smart city and computer equipment
CN112085010A (en) Mask detection and deployment system and method based on image recognition
CN106355242A (en) Interactive robot on basis of human face detection
CN106909879A (en) A kind of method for detecting fatigue driving and system
CN112911156B (en) Patrol robot and security system based on computer vision
CN106503632A (en) A kind of escalator intelligent and safe monitoring method based on video analysis
US20200020114A1 (en) Image-processing method for removing light zones
CN108737785B (en) Indoor automatic detection system that tumbles based on TOF 3D camera
CN111541872B (en) Network camera decoding method based on OpenCV
CN109583339A (en) A kind of ATM video brainpower watch and control method based on image procossing
CN111385542B (en) Light curtain control camera based on Internet of things
CN206331472U (en) A kind of interactive robot based on Face datection
CN107862298A (en) It is a kind of based on the biopsy method blinked under infrared eye
CN109784215A (en) A kind of in-vivo detection method and system based on improved optical flow method
CN116188748B (en) Image recognition system based on intelligent throat swab sampling equipment
WO2023231479A1 (en) Pupil detection method and apparatus, and storage medium and electronic device
CN111144260A (en) Detection method, device and system of crossing gate
CN111860100B (en) Pedestrian number determining method and device, electronic equipment and readable storage medium
CN113627255B (en) Method, device and equipment for quantitatively analyzing mouse behaviors and readable storage medium
CN111368726B (en) Construction site operation face personnel number statistics method, system, storage medium and device
CN109000634A (en) A kind of based reminding method and system of navigation object travelling route
KR101547239B1 (en) System and method for adjusting camera brightness based extraction of background image
CN113299091A (en) Intelligent traffic road surface illumination warning method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant