CN111970490B - User traffic monitoring method and related equipment - Google Patents

User traffic monitoring method and related equipment

Info

Publication number
CN111970490B
Authority
CN
China
Prior art keywords
intelligent
building
camera
images
thermodynamic
Prior art date
Legal status
Active
Application number
CN202010786062.5A
Other languages
Chinese (zh)
Other versions
CN111970490A (en)
Inventor
蒋薇
Current Assignee
Wanyi Technology Co Ltd
Original Assignee
Wanyi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wanyi Technology Co Ltd filed Critical Wanyi Technology Co Ltd
Priority to CN202010786062.5A priority Critical patent/CN111970490B/en
Publication of CN111970490A publication Critical patent/CN111970490A/en
Application granted granted Critical
Publication of CN111970490B publication Critical patent/CN111970490B/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the application discloses a user flow monitoring method and related equipment, which are applied to intelligent monitoring equipment, wherein the intelligent monitoring equipment is in communication connection with an intelligent camera, and the method comprises the following steps: collecting user flow data in a building through the intelligent camera; generating a thermodynamic diagram from the user traffic data, the thermodynamic diagram including thermodynamic information of at least one person; and fusing the thermodynamic diagram into a three-dimensional visualization model of the building for displaying, so as to monitor the user flow in the building. By adopting the embodiment of the application, the user flow of the building is displayed in the three-dimensional visualization model, so that the entry and exit of people and the gathering of people in the building can be intuitively known.

Description

User traffic monitoring method and related equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a user traffic monitoring method and related devices.
Background
At present, a large shopping mall is a place that people visit frequently, and excessive people flow often occurs in the mall. Because the management of people flow and passenger flow in a mall in the prior art is mostly in a purely numerical form, the degree of informatization is not high, the people at each position or at a specific position of a large mall cannot be monitored, and the entry and exit of people, the gathering of people, and the like cannot be intuitively known. Therefore, once an accident occurs, the people in the mall cannot be effectively evacuated.
Disclosure of Invention
The embodiment of the application discloses a user flow monitoring method and related equipment. By displaying the user flow of a building in a three-dimensional visualization model, the method is beneficial to intuitively perceiving the entry and exit of people and the gathering of people in the building.
The first aspect of the embodiment of the application discloses a user flow monitoring method, which is applied to intelligent monitoring equipment, wherein the intelligent monitoring equipment is in communication connection with an intelligent camera, and the method comprises the following steps: collecting user flow data in a building through the intelligent camera; generating a thermodynamic diagram from the user traffic data, the thermodynamic diagram comprising thermodynamic information of at least one person; and fusing the thermodynamic diagrams in a three-dimensional visualization model of the building for displaying so as to monitor the user flow in the building.
It can be seen that, in this embodiment, the intelligent camera is used to collect user flow data in a certain building; generating a thermodynamic diagram according to the user flow data, wherein the thermodynamic diagram comprises thermodynamic information of at least one person; and then, the thermodynamic diagram is fused in the three-dimensional visualization model of the building for displaying, so that the flow of the user in the building can be monitored in the three-dimensional visualization model, and the conditions of personnel entering and exiting and personnel gathering in the building can be intuitively known.
In some exemplary embodiments, the generating a thermodynamic diagram according to the user traffic data includes: generating N first thermodynamic diagrams according to N first user flow data, wherein the N first user flow data are collected by the N first intelligent cameras, the N first user flow data correspond to the N first intelligent cameras one by one, and the N first user flow data correspond to the N first thermodynamic diagrams one by one.
It can be seen that, in this embodiment, N intelligent cameras are arranged in a building and used for acquiring user flow data in the building, each intelligent camera corresponds to one shooting area, the user flow data of each area in the whole building can be acquired through the N intelligent cameras, and then the user flow data of each area is converted into a thermodynamic diagram.
In some exemplary embodiments, said fusing said thermodynamic diagram for presentation in a three-dimensional visualization model of said building comprises: establishing N first camera models in the three-dimensional visualization model according to the installation positions of the N first intelligent cameras in the building, wherein the N first camera models correspond to the N first intelligent cameras one by one; determining N first areas in the three-dimensional visualization model according to the N first camera models and the shooting range of each first intelligent camera in the N first intelligent cameras, wherein the N first areas correspond to the N first camera models one to one; displaying the N first thermodynamic diagrams in the N first areas, wherein the N first thermodynamic diagrams correspond to the N first areas one by one.
It can be seen that, in this embodiment, N first camera models are established in the three-dimensional visualization model of the building according to the installation positions of the N first intelligent cameras in the building, and the N first camera models correspond to the N first intelligent cameras one to one; therefore, the three-dimensional visualization model can be divided into N first areas according to the N first camera models and the shooting range of each first intelligent camera in the N first intelligent cameras, and the N first areas correspond to the N first camera models one to one. In other words, the division of the building into areas by the N intelligent cameras according to their shooting ranges is mapped, through the N first camera models, onto the division of the three-dimensional visualization model into N first areas; that is, the N first areas in the three-dimensional visualization model correspond to the shooting ranges of the first intelligent cameras in the building. The thermodynamic diagram corresponding to each shooting range is displayed in the corresponding first area, so that the personnel gathering condition in the whole building is visually displayed on the three-dimensional visualization model.
In some exemplary embodiments, the generating a thermodynamic diagram according to the user traffic data includes: generating M second thermodynamic diagrams according to M second user flow data, wherein the M second user flow data are acquired by the M second intelligent cameras, the M second user flow data correspond to the M second intelligent cameras one by one, and the M second user flow data correspond to the M second thermodynamic diagrams one by one; and synthesizing a target thermodynamic diagram according to the M second thermodynamic diagrams.
It can be seen that, in this embodiment, for a certain building area in a building, M intelligent cameras are arranged in the building area for acquiring user traffic data in the building area, the M intelligent cameras can acquire the user traffic data in the area through multiple angles, then the user traffic data acquired through multiple angles are converted into multiple thermodynamic diagrams, and then the multiple thermodynamic diagrams are combined into one target thermodynamic diagram.
In some exemplary embodiments, said fusing said thermodynamic diagram for presentation in a three-dimensional visualization model of said building comprises: according to the installation positions of the M second intelligent cameras in the building, M second camera models are established in the three-dimensional visualization model, and the M second camera models correspond to the M second intelligent cameras one by one; determining M second areas in the three-dimensional visualization model according to the M second camera models and the shooting range of each second intelligent camera in the M second intelligent cameras, wherein the M second areas correspond to the M second camera models one to one; determining the overlapping areas of the M second areas to obtain a target area; and displaying the target thermodynamic diagram in the target area.
It can be seen that, in this embodiment, according to the installation positions of M second intelligent cameras in a building, M second camera models are established in a three-dimensional visualization model of the building, and the M second camera models correspond to the M second intelligent cameras one to one, so according to the M second camera models and the shooting range of each second intelligent camera in the M second intelligent cameras, M second regions can be determined in the three-dimensional visualization model, and the M second regions correspond to the M second camera models one to one; the M second intelligent cameras shoot the same building area in the building from different angles, so that the building area is reflected in the three-dimensional visualization model, namely the superposition area of the M second areas; and then, a target thermodynamic diagram synthesized by user flow data acquired by the M second intelligent cameras is displayed in the overlapping area, so that the condition of personnel entering and exiting or the condition of personnel flowing in the building area can be visually displayed on the three-dimensional visual model.
The second aspect of the embodiment of the application discloses a user flow monitoring device, is applied to intelligent monitoring equipment, intelligent monitoring equipment and intelligent camera communication connection, the device includes:
the acquisition unit is used for acquiring user flow data in a building through the intelligent camera;
a generating unit, configured to generate a thermodynamic diagram according to the user traffic data, where the thermodynamic diagram includes thermodynamic information of at least one person;
and the display unit is used for fusing the thermodynamic diagrams into a three-dimensional visualization model of the building for displaying so as to monitor the user flow in the building.
In some exemplary embodiments, the smart camera includes N first smart cameras, each of the N first smart cameras corresponds to one first shooting area, N is a positive integer, and the generating unit is configured to: generating N first thermodynamic diagrams according to N first user flow data, wherein the N first user flow data are collected by the N first intelligent cameras, the N first user flow data correspond to the N first intelligent cameras one by one, and the N first user flow data correspond to the N first thermodynamic diagrams one by one.
In some exemplary embodiments, the display unit is configured to: according to the installation positions of the N first intelligent cameras in the building, N first camera models are established in the three-dimensional visual model, and the N first camera models correspond to the N first intelligent cameras one by one; determining N first areas in the three-dimensional visualization model according to the N first camera models and the shooting range of each first intelligent camera in the N first intelligent cameras, wherein the N first areas correspond to the N first camera models one to one; displaying the N first thermodynamic diagrams in the N first areas, wherein the N first thermodynamic diagrams correspond to the N first areas one by one.
In some exemplary embodiments, the smart cameras include M second smart cameras, where the M second smart cameras correspond to the same second shooting area, and M is a positive integer, and the generating unit is configured to: generating M second thermodynamic diagrams according to M second user flow data, wherein the M second user flow data are collected by the M second intelligent cameras, the M second user flow data correspond to the M second intelligent cameras one by one, and the M second user flow data correspond to the M second thermodynamic diagrams one by one; and synthesizing a target thermodynamic diagram according to the M second thermodynamic diagrams.
In some exemplary embodiments, the display unit is configured to: according to the installation positions of the M second intelligent cameras in the building, M second camera models are established in the three-dimensional visualization model, and the M second camera models correspond to the M second intelligent cameras one by one; determining M second areas in the three-dimensional visualization model according to the M second camera models and the shooting range of each second intelligent camera in the M second intelligent cameras, wherein the M second areas correspond to the M second camera models one to one; determining the overlapping areas of the M second areas to obtain a target area; and displaying the target thermodynamic diagram in the target area.
A third aspect of embodiments of the present application discloses an intelligent monitoring device, comprising a processor, a memory, a communication interface, and one or more programs, stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method according to any of the first aspects.
A fourth aspect of the embodiments of the present application discloses a chip, which includes: a processor, configured to call and run a computer program from a memory, so that a device on which the chip is installed performs the method according to any one of the first aspect described above.
A fifth aspect of embodiments of the present application discloses a computer-readable storage medium, which is characterized by storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method according to any one of the first aspect.
A sixth aspect of embodiments of the present application discloses a computer program product, which causes a computer to execute the method according to any one of the first aspect.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of hardware of an intelligent monitoring device provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a user traffic monitoring method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another user traffic monitoring method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a user traffic monitoring apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an intelligent monitoring device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The intelligent monitoring device related to the embodiment of the present application may be an electronic device with a communication capability, and the intelligent monitoring device may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem with a wireless communication function, and various forms of User Equipment (UE), a Mobile Station (MS), a terminal device (terminal device), and the like.
Referring to fig. 1, fig. 1 is a schematic structural diagram of hardware of an intelligent monitoring device 100 according to an exemplary embodiment of the present application. The smart monitoring device 100 may be a smart monitoring device capable of running an application, such as a smart phone, a tablet computer, an electronic book, and the like. The intelligent monitoring device 100 in the present application may include one or more of the following components: a processor, memory, transceiver, etc.
A processor may include one or more processing cores. The processor connects various components throughout the intelligent monitoring device 100 using various interfaces and lines, and performs the various functions of the intelligent monitoring device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory and by calling data stored in the memory. Alternatively, the processor may be implemented in hardware using at least one of Digital Signal Processing (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing display content; and the modem is used to handle wireless communications. It is understood that the modem may instead be implemented by a separate communication chip without being integrated into the processor.
The memory may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory includes a non-transitory computer-readable medium. The memory may be used to store instructions, programs, code, code sets, or instruction sets. The memory may include a program storage area and a data storage area. The program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as user traffic data collection, thermodynamic diagram generation, and thermodynamic diagram presentation), instructions for implementing the various method embodiments described below, and the like; the operating system may be an Android system (including systems developed in depth on the basis of the Android system), an iOS system developed by Apple Inc. (including systems developed in depth on the basis of the iOS system), or another system. The data storage area may also store data created by the intelligent monitoring device 100 in use (such as a three-dimensional visualization model of a building, user traffic data, and thermodynamic diagrams).
Referring to fig. 2, fig. 2 is a flowchart of a user traffic monitoring method applied to an intelligent monitoring device, where the intelligent monitoring device is in communication connection with an intelligent camera, and the method includes, but is not limited to, the following steps:
step 201, collecting user flow data in a building through the intelligent camera.
The intelligent camera can acquire user flow data of the building by shooting images or videos inside the building, and the building can be a shopping mall, an office building, or the like.
In this embodiment of the application, passenger flow data of a shopping mall can be collected based on an IoT video monitoring system. The intelligent camera can automatically count, every minute, the number of people within its shooting range as user flow data, and collect and update the data at a one-minute granularity, so that changes in the people flow within the shooting range over a certain time span can be learned.
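For illustration only, the following Python sketch shows how per-minute people counts within one camera's shooting range might be accumulated as user flow data; the frame source and the person-counting routine (`frame_source`, `detect_person_count`) are assumed interfaces and are not specified by the embodiment.

```python
import time
from collections import deque

def collect_minute_counts(frame_source, detect_person_count, duration_minutes=60):
    """Accumulate one people-count sample per minute for a single camera.

    frame_source: callable returning the latest frame from the camera (assumed).
    detect_person_count: callable returning the number of people in a frame (assumed).
    """
    samples = deque(maxlen=duration_minutes)  # rolling window of per-minute counts
    for _ in range(duration_minutes):
        frame = frame_source()
        samples.append({"timestamp": time.time(),
                        "count": detect_person_count(frame)})
        time.sleep(60)  # update the statistic at a one-minute granularity
    return list(samples)
```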
Step 202, generating a thermodynamic diagram according to the user flow data, wherein the thermodynamic diagram comprises thermodynamic information of at least one person.
It can be understood that when the intelligent camera acquires user flow, that is, when people exist in the shooting range of the intelligent camera, the current user flow data can be converted into a thermodynamic diagram; when the intelligent camera does not collect user flow, that is, when no people exist in the shooting range of the intelligent camera, no processing is performed.
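As one possible rendering (the embodiment does not prescribe how the thermodynamic diagram is computed), the sketch below splats detected person positions onto a grid with a Gaussian kernel; the positions themselves are assumed to come from whatever detector runs on the camera frames.

```python
import numpy as np

def generate_heatmap(person_positions, shape=(120, 160), sigma=4.0):
    """Render a heat map from detected person positions (an illustrative choice;
    the embodiment does not define the rendering method).

    person_positions: iterable of (row, col) pixel coordinates, one per person.
    Returns an array in [0, 1]; empty input yields an all-zero map, matching the
    case where no people are in the shooting range and nothing is processed.
    """
    heat = np.zeros(shape, dtype=np.float32)
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    for r, c in person_positions:
        heat += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()  # normalize so colour mapping is comparable across frames
    return heat
```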
Step 203, fusing the thermodynamic diagram into a three-dimensional visualization model of the building for displaying, so as to monitor the user flow in the building.
Specifically, a three-dimensional visualization model (a Building Information Model, BIM) of the building can be uploaded to the intelligent monitoring device, and a camera model consistent with the installation position of each intelligent camera in the building is established in the three-dimensional visualization model, so that the actual personnel information collected by the intelligent cameras can be reflected on the three-dimensional visualization model in the form of thermodynamic diagrams, and the user traffic in the building can be intuitively monitored on the three-dimensional visualization model.
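A minimal sketch of this fusion step follows, assuming a hypothetical visualization API `scene.set_region_overlay(region_id, image)`; the actual BIM or rendering interface is not specified by the embodiment.

```python
from dataclasses import dataclass

@dataclass
class CameraModel:
    camera_id: str   # unique code shared with the physical intelligent camera
    position: tuple  # installation position mapped into model coordinates
    region_id: str   # model region covered by this camera's shooting range

def fuse_heatmaps(scene, camera_models, heatmaps):
    """Overlay per-camera heat maps onto their regions of the 3D model.

    scene: assumed to expose set_region_overlay(region_id, image); the real
    BIM/visualization API is not defined by the patent.
    heatmaps: dict mapping camera_id -> heat map array.
    """
    for cam in camera_models:
        heat = heatmaps.get(cam.camera_id)
        if heat is not None:  # skip cameras that currently see no people
            scene.set_region_overlay(cam.region_id, heat)
```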
It can be seen that, in the embodiment, the intelligent camera is used for collecting user flow data in a certain building; generating a thermodynamic diagram according to the user flow data, wherein the thermodynamic diagram comprises thermodynamic information of at least one person; and then the thermodynamic diagram is fused in the three-dimensional visualization model of the building for displaying, so that the flow of the user in the building can be monitored in the three-dimensional visualization model, and the visual cognition of the personnel in and out condition and the personnel gathering condition in the building is facilitated.
In some exemplary embodiments, the generating a thermodynamic diagram according to the user traffic data includes: generating N first thermodynamic diagrams according to N first user flow data, wherein the N first user flow data are collected by the N first intelligent cameras, the N first user flow data correspond to the N first intelligent cameras one by one, and the N first user flow data correspond to the N first thermodynamic diagrams one by one.
The intelligent cameras can be used as the basis for dividing the model: a correspondence of unique codes is established between each intelligent camera and a region of the three-dimensional model, so that the people flow data and the shooting range corresponding to each intelligent camera can be obtained and reflected, as a thermodynamic diagram, in the corresponding region of the three-dimensional visualization model, and the people gathering condition can be seen intuitively.
For example, a certain number of intelligent cameras can be installed in a building so that the union of their shooting ranges covers every corner of the whole building. Each intelligent camera collects user flow data within its shooting range, and the data are then converted into thermodynamic diagrams. All the intelligent cameras collect user flow data synchronously, so the real-time user flow data of the whole building, and thus the thermodynamic diagram of the whole building, can be acquired online, as in the sketch below.
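The per-camera collection just described could be organized as follows (a sketch; `cameras` and `detect_positions` are assumed interfaces, and `generate_heatmap` is any thermodynamic-diagram renderer such as the one sketched earlier).

```python
def collect_building_heatmaps(cameras, detect_positions, generate_heatmap):
    """Build one heat map per intelligent camera (one-to-one), covering the building.

    cameras: dict mapping camera_id -> callable returning the latest frame (assumed).
    detect_positions: callable returning person coordinates in a frame (assumed).
    """
    heatmaps = {}
    for camera_id, grab_frame in cameras.items():
        positions = detect_positions(grab_frame())
        if positions:  # only cameras that currently see people produce a diagram
            heatmaps[camera_id] = generate_heatmap(positions)
    return heatmaps
```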
It can be seen that, in this embodiment, N intelligent cameras are arranged in a building and used for acquiring user traffic data in the building, each intelligent camera corresponds to one shooting area, the user traffic data of each area in the whole building can be acquired through the N intelligent cameras, then the user traffic data of each area is converted into a thermodynamic diagram, and since the thermodynamic diagram can intuitively reflect the people gathering condition, the real-time people gathering condition of each area in the whole building can be intuitively displayed.
In some exemplary embodiments, said fusing said thermodynamic diagram for presentation in a three-dimensional visualization model of said building comprises: according to the installation positions of the N first intelligent cameras in the building, N first camera models are established in the three-dimensional visual model, and the N first camera models correspond to the N first intelligent cameras one by one; determining N first areas in the three-dimensional visualization model according to the N first camera models and the shooting range of each first intelligent camera in the N first intelligent cameras, wherein the N first areas correspond to the N first camera models one to one; displaying the N first thermodynamic diagrams in the N first areas, wherein the N first thermodynamic diagrams correspond to the N first areas one by one.
For example, according to the installation position of an intelligent camera in a building, a camera model is established in a model position corresponding to the installation position in a three-dimensional visualization model of the building, then the whole three-dimensional visualization model is divided into a plurality of areas according to the camera model, each area corresponds to the shooting range of each intelligent camera, and a thermodynamic diagram corresponding to each camera is displayed in the area corresponding to the shooting range of each intelligent camera, so that the thermodynamic diagram of the whole building can be displayed in the three-dimensional visualization model in real time on line, and further the user flow in the whole building can be monitored visually on the three-dimensional visualization model.
It can be seen that, in this embodiment, N first camera models are established in the three-dimensional visualization model of the building according to the installation positions of the N first intelligent cameras in the building, and the N first camera models correspond to the N first intelligent cameras one to one; therefore, the three-dimensional visualization model can be divided into N first areas according to the N first camera models and the shooting range of each first intelligent camera in the N first intelligent cameras, and the N first areas correspond to the N first camera models one to one. In other words, the division of the building into areas by the N intelligent cameras according to their shooting ranges is mapped, through the N first camera models, onto the division of the three-dimensional visualization model into N first areas; that is, the N first areas in the three-dimensional visualization model correspond to the shooting ranges of the first intelligent cameras in the building. The thermodynamic diagram corresponding to each shooting range is displayed in the corresponding first area, so that the personnel gathering condition in the whole building is intuitively displayed on the three-dimensional visualization model.
In some exemplary embodiments, the generating a thermodynamic diagram according to the user traffic data includes: generating M second thermodynamic diagrams according to M second user flow data, wherein the M second user flow data are acquired by the M second intelligent cameras, the M second user flow data correspond to the M second intelligent cameras one by one, and the M second user flow data correspond to the M second thermodynamic diagrams one by one; and synthesizing a target thermodynamic diagram according to the M second thermodynamic diagrams.
For example, for a key position or a specific position of a building, such as a key entrance and exit of a shopping mall, three intelligent cameras are provided to monitor the entrance and exit. The people flow data (i.e., user flow data) collected by the three intelligent cameras can be respectively converted into thermodynamic diagrams to obtain three thermodynamic diagrams, and the three thermodynamic diagrams are then combined into one thermodynamic diagram, where the combined thermodynamic diagram can be used to reflect the flow of people at the entrance and exit. Alternatively, the people flow data collected by the three intelligent cameras can be aggregated into total people flow data of the entrance and exit, and the total people flow data is then converted into a thermodynamic diagram, which can likewise be used to reflect the flow of people at the entrance and exit.
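A sketch of one way to synthesize the target thermodynamic diagram from the M second thermodynamic diagrams follows; the embodiment only requires that they be combined into one, and the element-wise maximum used here is an assumption made for illustration.

```python
import numpy as np

def synthesize_target_heatmap(second_heatmaps):
    """Combine the M second heat maps of one shooting area into a single target heat map.

    second_heatmaps: list of M arrays, all assumed to share the same resolution.
    """
    stacked = np.stack(second_heatmaps, axis=0)  # shape (M, H, W)
    target = stacked.max(axis=0)                 # keep the strongest response per cell
    return target / target.max() if target.max() > 0 else target
```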
It can be seen that, in this embodiment, for a certain building area in a building, M intelligent cameras are arranged in the building area for acquiring user traffic data in the building area, the M intelligent cameras can acquire the user traffic data in the area through multiple angles, then the user traffic data acquired through multiple angles are converted into multiple thermodynamic diagrams, and then the multiple thermodynamic diagrams are combined into one target thermodynamic diagram.
In some exemplary embodiments, said fusing said thermodynamic diagram for presentation in a three-dimensional visualization model of said building comprises: according to the installation positions of the M second intelligent cameras in the building, M second camera models are established in the three-dimensional visualization model, and the M second camera models correspond to the M second intelligent cameras one by one; determining M second areas in the three-dimensional visualization model according to the M second camera models and the shooting range of each second intelligent camera in the M second intelligent cameras, wherein the M second areas correspond to the M second camera models one to one; determining the overlapping areas of the M second areas to obtain a target area; and displaying the target thermodynamic diagram in the target area.
For example, for a key position or a specific position of a building, such as a key entrance and exit of a shopping mall, three intelligent cameras are provided to monitor the entrance and exit. The people flow data (namely, user flow data) collected by the three intelligent cameras can be respectively converted into thermodynamic diagrams to obtain three thermodynamic diagrams, and the three thermodynamic diagrams are then combined into one thermodynamic diagram. A camera model is then established at the position, in the entrance-and-exit part of the three-dimensional visualization model of the building, that corresponds to each of the three intelligent cameras, thereby obtaining three camera models. Next, according to the shooting ranges of the three intelligent cameras, three areas are determined in the entrance-and-exit part of the model; the overlapping part of the three areas is the reflection, in the three-dimensional visualization model, of the overlapping part of the shooting ranges of the three intelligent cameras. The synthesized thermodynamic diagram is displayed in this overlapping part, so that the personnel flow condition at the entrance and exit is visually displayed in the three-dimensional visualization model, and the personnel flow condition at the key entrance and exit of the shopping mall can be intuitively monitored.
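A sketch of determining the target (overlapping) area of the M second areas, under the simplifying assumption that each area is an axis-aligned rectangle in model coordinates; real shooting footprints projected into the model may be arbitrary polygons.

```python
def overlap_region(regions):
    """Intersect M axis-aligned regions (x_min, y_min, x_max, y_max) in model coordinates.

    Returns the target area as a rectangle, or None if the shooting ranges
    do not actually overlap.
    """
    x_min = max(r[0] for r in regions)
    y_min = max(r[1] for r in regions)
    x_max = min(r[2] for r in regions)
    y_max = min(r[3] for r in regions)
    if x_min >= x_max or y_min >= y_max:
        return None
    return (x_min, y_min, x_max, y_max)
```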
It can be seen that, in this embodiment, according to the installation positions of M second intelligent cameras in a building, M second camera models are established in a three-dimensional visualization model of the building, and the M second camera models correspond to the M second intelligent cameras one to one, so according to the M second camera models and the shooting range of each second intelligent camera in the M second intelligent cameras, M second regions can be determined in the three-dimensional visualization model, and the M second regions correspond to the M second camera models one to one; the M second intelligent cameras shoot the same building area in the building from different angles, so that the building area is reflected in the three-dimensional visualization model, namely the superposition area of the M second areas; and then, a target thermodynamic diagram synthesized by user flow data acquired by the M second intelligent cameras is displayed in the overlapping area, so that the condition of personnel entering and exiting or the condition of personnel flowing in the building area can be visually displayed on the three-dimensional visual model.
In some exemplary embodiments, the collecting, by the smart camera, user traffic data in a building includes: acquiring videos in a preset time period in a building through the intelligent camera; dividing the video in the preset time period into a plurality of video segments; analyzing each target video segment to obtain a plurality of video images, wherein the target video segment is any one of the plurality of video segments; performing face segmentation on each video image in the plurality of video images to obtain a plurality of faces; classifying the faces to obtain multiple types of faces, and taking the number of the types of the multiple types of faces as user flow data in the preset time period.
Specifically, a clustering algorithm may be adopted to classify a plurality of faces, so as to obtain a plurality of types of faces, and the number of the types of the faces is used as user flow data in the preset time period, that is, each person corresponds to one type.
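A sketch of this counting step using DBSCAN as the clustering algorithm; the embodiment does not name a specific algorithm, and the face feature vectors are assumed to come from any face encoder.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_users_from_faces(face_embeddings, eps=0.5):
    """Cluster face feature vectors and use the number of clusters as the user
    flow count for the preset time period (one cluster per distinct person).

    face_embeddings: array of shape (num_faces, dim); the encoder is assumed.
    """
    if len(face_embeddings) == 0:
        return 0
    labels = DBSCAN(eps=eps, min_samples=1, metric="euclidean").fit_predict(
        np.asarray(face_embeddings))
    return len(set(labels))
```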
It can be seen that, in the embodiment, the video in the building is collected, the video is segmented to obtain the video segment, the video segment is analyzed to obtain a plurality of video images, each video image is subjected to face segmentation to obtain a plurality of faces, the faces are classified, and the number of classes of face classification is used as the currently collected personnel data volume, so that the number of personnel in the building can be effectively collected.
In some exemplary embodiments, performing face segmentation on each of the plurality of video images to obtain a plurality of faces includes: carrying out grid division on the plurality of video images to obtain a plurality of grid images; and executing the following steps aiming at each grid image to obtain a plurality of faces: selecting A different points from the grid image, and performing circular image interception on the grid image by taking the A different points as circle centers to obtain A circular area images, wherein A is an integer larger than 3; selecting a target circular region image from the A circular region images, wherein the target circular region image is the circular region image which contains the largest number of characteristic points in the A circular region images; dividing the target circular area image to obtain B circular ring images, wherein the ring widths of the B circular ring images are the same; sequentially matching the B circular ring images with pre-stored template face images by using the circular ring image with the smallest radius in the B circular ring images as a characteristic point, and accumulating the matching values of the matched circular ring images; and when the accumulated matching value is larger than a preset matching threshold value, stopping the characteristic point matching, determining that the grid image has a human face, and extracting the human face in the grid image.
Specifically, a preset template face is pre-stored in the intelligent monitoring device. The intelligent monitoring device can perform grid division on the video image to obtain a plurality of grid images, then identify whether a face exists in each grid image, and, when a face exists, extract the face in the grid image, so as to obtain the faces in the whole video image.
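An illustrative sketch of the ring-based matching described above follows; the per-ring matching score (histogram correlation against the template) is an assumption, since the embodiment does not define the matching metric.

```python
import numpy as np

def ring_masks(shape, center, radius, num_rings):
    """Split a circular region into num_rings concentric rings of equal ring width."""
    rows, cols = np.ogrid[0:shape[0], 0:shape[1]]
    dist = np.sqrt((rows - center[0]) ** 2 + (cols - center[1]) ** 2)
    width = radius / num_rings
    return [(dist >= i * width) & (dist < (i + 1) * width) for i in range(num_rings)]

def grid_has_face(grid_image, template, center, radius, num_rings=8, threshold=3.0):
    """Match rings, smallest radius first, against the template face image and
    accumulate their scores; stop as soon as the accumulated score exceeds the
    preset matching threshold.

    grid_image and template are assumed to be grayscale arrays of the same shape.
    """
    accumulated = 0.0
    for mask in ring_masks(grid_image.shape, center, radius, num_rings):
        if not mask.any():
            continue  # ring falls outside the image; skip it
        h1, _ = np.histogram(grid_image[mask], bins=16, range=(0, 255), density=True)
        h2, _ = np.histogram(template[mask], bins=16, range=(0, 255), density=True)
        accumulated += float(np.nan_to_num(np.corrcoef(h1, h2)[0, 1]))
        if accumulated > threshold:
            return True  # enough rings matched: a face is present in this grid image
    return False
```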
As can be seen, in the present embodiment, a plurality of mesh images are obtained by performing mesh division on a video image; and carrying out face recognition on each grid image by using a specific face recognition algorithm, further extracting the face in each grid image, and counting the user flow from the video collected from the intelligent camera within a preset time period.
Referring to fig. 3, fig. 3 is another user traffic monitoring method provided in this embodiment of the present application, which is applied to an intelligent monitoring device, where the intelligent monitoring device is in communication connection with an intelligent camera, and the method includes, but is not limited to, the following steps:
Step 301, acquiring N first user flow data in a building through N first intelligent cameras, wherein the N first user flow data correspond to the N first intelligent cameras one to one.
Step 302, generating N first thermodynamic diagrams according to the N first user flow data, wherein the N first user flow data are collected by the N first intelligent cameras, and the N first user flow data are in one-to-one correspondence with the N first thermodynamic diagrams.
Step 303, establishing N first camera models in the three-dimensional visualization model according to the installation positions of the N first intelligent cameras in the building, wherein the N first camera models correspond to the N first intelligent cameras one to one.
Step 304, determining N first areas in the three-dimensional visualization model according to the N first camera models and the shooting range of each first intelligent camera in the N first intelligent cameras, wherein the N first areas correspond to the N first camera models one to one.
Step 305, displaying the N first thermodynamic diagrams in the N first areas, wherein the N first thermodynamic diagrams correspond to the N first areas one to one.
It can be seen that, in this embodiment, according to the installation positions of the N first intelligent cameras in the building, N first camera models are established in the three-dimensional visualization model of the building, and the N first camera models correspond to the N first intelligent cameras one to one; therefore, the three-dimensional visualization model can be divided into N first areas according to the N first camera models and the shooting range of each first intelligent camera in the N first intelligent cameras, and the N first areas correspond to the N first camera models one to one. In other words, the division of the building into areas by the N intelligent cameras according to their shooting ranges is intuitively mapped, through the N first camera models, onto the division of the three-dimensional visualization model into N first areas; that is, the N first areas in the three-dimensional visualization model correspond to the shooting ranges of the first intelligent cameras in the building. The thermodynamic diagram corresponding to each shooting range is displayed in the corresponding first area, so that the personnel gathering condition in the whole building is intuitively displayed on the three-dimensional visualization model.
The above-mentioned scheme of the embodiment of the present application is introduced mainly from the perspective of interaction between network elements on the method side. It is understood that, in order to implement the above functions, the intelligent monitoring device includes a hardware structure and/or a software module for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a user traffic monitoring apparatus provided in an embodiment of the present application, where the user traffic monitoring apparatus 400 may include an acquisition unit 401, a generation unit 402, and a display unit 403, and the user traffic monitoring apparatus 400 is applied to an intelligent monitoring device, and the intelligent monitoring device is in communication connection with an intelligent camera, where details of each unit are as follows:
the acquisition unit 401 is used for acquiring user flow data in a building through the intelligent camera;
a generating unit 402, configured to generate a thermodynamic diagram according to the user traffic data, where the thermodynamic diagram includes thermodynamic information of at least one person;
and a presentation unit 403 for fusing the thermodynamic diagrams into a three-dimensional visualization model of the building for presentation so as to monitor the user traffic in the building.
In some exemplary embodiments, the smart cameras include N first smart cameras, each of the N first smart cameras corresponds to one first shooting area, N is a positive integer, and the generating unit 402 is configured to: generating N first thermodynamic diagrams according to N first user flow data, wherein the N first user flow data are collected by the N first intelligent cameras, the N first user flow data correspond to the N first intelligent cameras one by one, and the N first user flow data correspond to the N first thermodynamic diagrams one by one.
In some exemplary embodiments, the display unit 403 is configured to: establishing N first camera models in the three-dimensional visualization model according to the installation positions of the N first intelligent cameras in the building, wherein the N first camera models correspond to the N first intelligent cameras one by one; determining N first areas in the three-dimensional visualization model according to the N first camera models and the shooting range of each first intelligent camera in the N first intelligent cameras, wherein the N first areas correspond to the N first camera models one to one; displaying the N first thermodynamic diagrams in the N first areas, wherein the N first thermodynamic diagrams correspond to the N first areas one by one.
In some exemplary embodiments, the smart cameras include M second smart cameras, where the M second smart cameras correspond to the same second shooting area, and M is a positive integer, and the generating unit 402 is configured to: generating M second thermodynamic diagrams according to M second user flow data, wherein the M second user flow data are acquired by the M second intelligent cameras, the M second user flow data correspond to the M second intelligent cameras one by one, and the M second user flow data correspond to the M second thermodynamic diagrams one by one; and synthesizing a target thermodynamic diagram according to the M second thermodynamic diagrams.
In some exemplary embodiments, the display unit 403 is configured to: according to the installation positions of the M second intelligent cameras in the building, M second camera models are established in the three-dimensional visualization model, and the M second camera models correspond to the M second intelligent cameras one by one; determining M second areas in the three-dimensional visualization model according to the M second camera models and the shooting range of each second intelligent camera in the M second intelligent cameras, wherein the M second areas correspond to the M second camera models one to one; determining the overlapping areas of the M second areas to obtain a target area; and displaying the target thermodynamic diagram in the target area.
It should be noted that the implementation of each unit may also correspond to the corresponding description in the above method embodiments. Of course, the user traffic monitoring apparatus 400 provided in the embodiment of the present application includes, but is not limited to, the above unit modules; for example, the user traffic monitoring apparatus 400 may further include a storage unit 404. The storage unit 404 may be used to store program codes and data of the user traffic monitoring apparatus 400.
It can be seen that in the user traffic monitoring apparatus 400 depicted in fig. 4, the user traffic data in a certain building is collected by the intelligent camera; generating a thermodynamic diagram according to the user flow data, wherein the thermodynamic diagram comprises thermodynamic information of at least one person; and then, the thermodynamic diagram is fused in the three-dimensional visualization model of the building for displaying, so that the flow of the user in the building can be monitored in the three-dimensional visualization model, and the conditions of personnel entering and exiting and personnel gathering in the building can be intuitively known.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an intelligent monitoring device 510 according to an embodiment of the present application, as shown in fig. 5, the intelligent monitoring device 510 includes a communication interface 511, a processor 512, a memory 513, and at least one communication bus 514 for connecting the communication interface 511, the processor 512, and the memory 513, and the intelligent monitoring device 510 is in communication connection with an intelligent camera.
The memory 513 includes, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), or a portable Read-Only Memory (CD-ROM), and the memory 513 is used for storing related instructions and data.
Communication interface 511 is used for receiving and transmitting data.
The processor 512 may be one or more Central Processing Units (CPUs), and in the case that the processor 512 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 512 in the intelligent monitoring device 510 is configured to read one or more program codes stored in the memory 513, and perform the following operations: collecting user flow data in a building through the intelligent camera; generating a thermodynamic diagram from the user traffic data, the thermodynamic diagram including thermodynamic information of at least one person; and fusing the thermodynamic diagrams in a three-dimensional visualization model of the building for displaying so as to monitor the user flow in the building.
It should be noted that, implementation of each operation may also correspond to the corresponding description in the foregoing method embodiments.
It can be seen that in the intelligent monitoring device 510 depicted in fig. 5, the user traffic data in a certain building is collected by the intelligent camera; generating a thermodynamic diagram according to the user flow data, wherein the thermodynamic diagram comprises thermodynamic information of at least one person; and then the thermodynamic diagram is fused in the three-dimensional visualization model of the building for displaying, so that the flow of the user in the building can be monitored in the three-dimensional visualization model, and the visual cognition of the personnel in and out condition and the personnel gathering condition in the building is facilitated.
The embodiment of the present application further provides a chip. The chip includes at least one processor, a memory, and an interface circuit, where the memory, the interface circuit, and the at least one processor are interconnected by lines, and the memory stores a computer program; the method flows shown in the above method embodiments are implemented when the computer program is executed by the processor.
Embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the method flows shown in the above method embodiments are implemented.
The embodiments of the present application further provide a computer program product, and when the computer program product runs on a computer, the method flows shown in the foregoing method embodiments are implemented.
It should be understood that the processor mentioned in the embodiments of the present application may be a Central Processing Unit (CPU), and may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It will also be appreciated that the memory referred to in the embodiments herein may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
It should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (memory module) is integrated in the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The steps of the method in the embodiments of the present application may be reordered, combined, or deleted according to actual needs.
The modules of the apparatus in the embodiments of the present application may be combined, divided, or deleted according to actual needs.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (8)

1. A user traffic monitoring method, applied to an intelligent monitoring device, wherein the intelligent monitoring device is in communication connection with an intelligent camera, the method comprising:
collecting user traffic data in a building through the intelligent camera;
wherein the collecting user traffic data in the building through the intelligent camera comprises: acquiring video within a preset time period in the building through the intelligent camera; dividing the video within the preset time period into a plurality of video segments; parsing each target video segment to obtain a plurality of video images, the target video segment being any one of the plurality of video segments; performing face segmentation on each video image of the plurality of video images to obtain a plurality of faces; and classifying the plurality of faces to obtain a plurality of classes of faces, and taking the number of classes of the plurality of classes of faces as the user traffic data within the preset time period;
wherein the performing face segmentation on each video image of the plurality of video images to obtain the plurality of faces comprises: performing grid division on the plurality of video images to obtain a plurality of grid images; and performing the following steps for each grid image to obtain the plurality of faces: selecting A different points from the grid image, and performing circular image cropping on the grid image with each of the A different points as a circle center to obtain A circular region images, A being an integer greater than 3; selecting a target circular region image from the A circular region images, the target circular region image being the circular region image containing the largest number of feature points among the A circular region images; dividing the target circular region image to obtain B ring images, the B ring images having the same ring width; performing feature point matching between the B ring images and a pre-stored template face image in sequence, starting from the ring image with the smallest radius among the B ring images, and accumulating the matching values of the matched ring images; and when the accumulated matching value is greater than a preset matching threshold, stopping the feature point matching, determining that a face exists in the grid image, and extracting the face from the grid image;
generating a thermodynamic diagram from the user traffic data, the thermodynamic diagram including thermodynamic information of at least one person;
fusing the thermodynamic diagram for presentation in a three-dimensional visualization model of the building to monitor user traffic within the building, comprising: establishing N first camera models in the three-dimensional visualization model according to installation positions of N first intelligent cameras in the building, the N first camera models corresponding to the N first intelligent cameras one to one; determining N first areas in the three-dimensional visualization model according to the N first camera models and a shooting range of each of the N first intelligent cameras, the N first areas corresponding to the N first camera models one to one; and displaying N first thermodynamic diagrams in the N first areas according to a uniquely coded correspondence established between the N first intelligent cameras and areas of the three-dimensional visualization model, the N first thermodynamic diagrams corresponding to the N first areas one to one; wherein each of the N first intelligent cameras corresponds to one first shooting area, and N is a positive integer.
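Claim 1 describes the grid-and-ring face extraction in procedural terms: crop A circular regions around different centres, keep the one richest in feature points, slice it into B equal-width rings, and match the rings against a template face from the innermost ring outwards until an accumulated matching value passes a threshold. The sketch below illustrates one way such a pass could be coded; the use of OpenCV ORB features, the brute-force Hamming matcher, and the default values for A, B, and the threshold are assumptions made for illustration, not the patented implementation.

```python
# Illustrative sketch of the grid / circular-region / ring-matching step of claim 1.
# Library choices (OpenCV ORB + BFMatcher) and all thresholds are assumptions.
import cv2
import numpy as np

def ring_mask(shape, center, r_in, r_out):
    """8-bit mask selecting the ring r_in <= distance < r_out around `center` (x, y)."""
    h, w = shape[:2]
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((xx - center[0]) ** 2 + (yy - center[1]) ** 2)
    return ((dist >= r_in) & (dist < r_out)).astype(np.uint8) * 255

def face_in_grid_image(grid_img, template, A=5, B=4, match_threshold=40, seed=0):
    """Ring-based matching of one grid image against a pre-stored template face image.

    Returns True when the accumulated match value of the rings exceeds
    `match_threshold`, i.e. a face is considered present in the grid image.
    """
    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, tmpl_des = orb.detectAndCompute(template, None)
    if tmpl_des is None:
        return False

    h, w = grid_img.shape[:2]
    radius = min(h, w) // 3
    rng = np.random.default_rng(seed)

    # 1) Sample A different circle centres; keep the circular region that
    #    contains the most feature points as the target circular region.
    best_center, best_count = (w // 2, h // 2), -1
    for _ in range(A):
        cx = int(rng.integers(radius, max(radius + 1, w - radius)))
        cy = int(rng.integers(radius, max(radius + 1, h - radius)))
        kp = orb.detect(grid_img, ring_mask(grid_img.shape, (cx, cy), 0, radius))
        if len(kp) > best_count:
            best_center, best_count = (cx, cy), len(kp)

    # 2) Split the target circular region into B rings of equal width and, starting
    #    from the ring with the smallest radius, match each ring's feature points
    #    against the template, accumulating the matching values.
    step = radius / B
    accumulated = 0
    for i in range(B):
        mask = ring_mask(grid_img.shape, best_center, i * step, (i + 1) * step)
        _, des = orb.detectAndCompute(grid_img, mask)
        if des is None:
            continue
        accumulated += len(bf.match(des, tmpl_des))
        if accumulated > match_threshold:
            return True   # stop matching early: a face is considered present
    return accumulated > match_threshold
```

A caller would run `face_in_grid_image` over every grid image produced from a video frame and collect the cells reported as containing a face before the classification step.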
2. The method of claim 1, wherein generating a thermodynamic diagram from the user traffic data comprises:
generating N first thermodynamic diagrams according to N first user traffic data, wherein the N first user traffic data are collected by the N first intelligent cameras, the N first user traffic data correspond to the N first intelligent cameras one to one, and the N first user traffic data correspond to the N first thermodynamic diagrams one to one.
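Claim 2 assigns one first thermodynamic diagram to the traffic data of each first intelligent camera. As a hedged illustration of what per-camera "thermodynamic information" might look like, the sketch below accumulates a Gaussian kernel at each detected face position; the kernel form, the map resolution, and the dictionary keyed by camera id are assumptions, not anything specified by the claim.

```python
import numpy as np

def thermodynamic_diagram(face_centres, shape=(120, 160), sigma=6.0):
    """Build one heat map from the face positions detected by a single camera.

    `face_centres` is a list of (x, y) pixel positions; the result is a 2D
    array whose values indicate how densely people appear at each location.
    """
    heat = np.zeros(shape, dtype=np.float32)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    for (x, y) in face_centres:
        heat += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()          # normalise to [0, 1] for display
    return heat

def first_thermodynamic_diagrams(flow_data_per_camera):
    """One heat map per first intelligent camera, keyed by camera id."""
    return {cam_id: thermodynamic_diagram(centres)
            for cam_id, centres in flow_data_per_camera.items()}
```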
3. The method of claim 1, wherein the intelligent camera comprises M second intelligent cameras corresponding to a same second shooting area, M being a positive integer, and wherein generating the thermodynamic diagram according to the user traffic data comprises:
generating M second thermodynamic diagrams according to M second user traffic data, wherein the M second user traffic data are collected by the M second intelligent cameras, the M second user traffic data correspond to the M second intelligent cameras one to one, and the M second user traffic data correspond to the M second thermodynamic diagrams one to one;
and synthesizing a target thermodynamic diagram according to the M second thermodynamic diagrams.
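Claim 3 leaves the synthesis rule for the target thermodynamic diagram unspecified. The following is a minimal sketch, assuming the M second thermodynamic diagrams share one resolution and that an element-wise mean (or max) is an acceptable fusion rule; both assumptions are illustrative only.

```python
import numpy as np

def synthesize_target_diagram(second_diagrams, mode="mean"):
    """Fuse M heat maps covering the same shooting area into one target map.

    Element-wise mean (or max) over maps of identical resolution is used
    here purely as an illustration of one possible synthesis rule.
    """
    stack = np.stack([np.asarray(d, dtype=np.float32) for d in second_diagrams])
    return stack.max(axis=0) if mode == "max" else stack.mean(axis=0)
```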
4. The method of claim 3, wherein said fusing the thermodynamic diagram for presentation in a three-dimensional visualization model of the building comprises:
establishing M second camera models in the three-dimensional visualization model according to installation positions of the M second intelligent cameras in the building, the M second camera models corresponding to the M second intelligent cameras one to one;
determining M second areas in the three-dimensional visualization model according to the M second camera models and a shooting range of each of the M second intelligent cameras, the M second areas corresponding to the M second camera models one to one;
determining an overlapping area of the M second areas to obtain a target area;
and displaying the target thermodynamic diagram in the target area.
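Claim 4 reduces the M second areas to a single target area by taking their overlap. Under the simplifying assumption that each second area can be represented as an axis-aligned rectangle in the floor-plan coordinates of the three-dimensional model, the overlap is a coordinate-wise min/max intersection, as sketched below; the `Region` type and its fields are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned area in the floor-plan coordinates of the 3D model."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

def overlap_region(regions):
    """Intersect the M second areas; returns None if they do not all overlap."""
    x_min = max(r.x_min for r in regions)
    y_min = max(r.y_min for r in regions)
    x_max = min(r.x_max for r in regions)
    y_max = min(r.y_max for r in regions)
    if x_min >= x_max or y_min >= y_max:
        return None
    return Region(x_min, y_min, x_max, y_max)
```

The target thermodynamic diagram synthesized per claim 3 would then be rendered over the region returned by `overlap_region`.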
5. A user traffic monitoring apparatus, applied to an intelligent monitoring device, wherein the intelligent monitoring device is in communication connection with an intelligent camera, the apparatus comprising:
an acquisition unit, configured to collect user traffic data in a building through the intelligent camera;
wherein the acquisition unit is specifically configured to: acquire video within a preset time period in the building through the intelligent camera; divide the video within the preset time period into a plurality of video segments; parse each target video segment to obtain a plurality of video images, the target video segment being any one of the plurality of video segments; perform face segmentation on each video image of the plurality of video images to obtain a plurality of faces; and classify the plurality of faces to obtain a plurality of classes of faces, and take the number of classes of the plurality of classes of faces as the user traffic data within the preset time period;
wherein the acquisition unit is further specifically configured to: perform grid division on the plurality of video images to obtain a plurality of grid images; and perform the following steps for each grid image to obtain the plurality of faces: select A different points from the grid image, and perform circular image cropping on the grid image with each of the A different points as a circle center to obtain A circular region images, A being an integer greater than 3; select a target circular region image from the A circular region images, the target circular region image being the circular region image containing the largest number of feature points among the A circular region images; divide the target circular region image to obtain B ring images, the B ring images having the same ring width; perform feature point matching between the B ring images and a pre-stored template face image in sequence, starting from the ring image with the smallest radius among the B ring images, and accumulate the matching values of the matched ring images; and when the accumulated matching value is greater than a preset matching threshold, stop the feature point matching, determine that a face exists in the grid image, and extract the face from the grid image;
a generating unit, configured to generate a thermodynamic diagram according to the user traffic data, where the thermodynamic diagram includes thermodynamic information of at least one person;
and a display unit, configured to fuse the thermodynamic diagram into a three-dimensional visualization model of the building for display so as to monitor the user traffic in the building, wherein the display unit is specifically configured to: establish N first camera models in the three-dimensional visualization model according to installation positions of N first intelligent cameras in the building, the N first camera models corresponding to the N first intelligent cameras one to one; determine N first areas in the three-dimensional visualization model according to the N first camera models and a shooting range of each of the N first intelligent cameras, the N first areas corresponding to the N first camera models one to one; and display N first thermodynamic diagrams in the N first areas according to a uniquely coded correspondence established between the N first intelligent cameras and areas of the three-dimensional visualization model, the N first thermodynamic diagrams corresponding to the N first areas one to one; wherein each of the N first intelligent cameras corresponds to one first shooting area, and N is a positive integer.
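The display step in claims 1 and 5 ties each first intelligent camera to a camera model, an area of the three-dimensional visualization model, and a heat map through a unique code. The sketch below shows one possible shape for that bookkeeping; the string codes, the tuple footprint standing in for a projected shooting range, and the function names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class CameraModel:
    code: str            # unique code tying the camera to an area of the model
    position: tuple      # (x, y, z) installation position in the 3D model
    footprint: tuple     # (x_min, y_min, x_max, y_max) floor area covered by the shooting range

def build_camera_models(cameras):
    """Create one camera model per first intelligent camera.

    `cameras` maps a camera id to (position, footprint); the rectangular
    footprint is a simplified stand-in for projecting the real shooting
    range of the camera onto the floor plan of the model.
    """
    return [CameraModel(code=f"CAM-{cam_id}", position=pos, footprint=fp)
            for cam_id, (pos, fp) in cameras.items()]

def bind_heatmaps_to_areas(camera_models, diagrams_by_code):
    """Pair each model area with the heat map of the camera sharing its code."""
    return [(m.footprint, diagrams_by_code[m.code])
            for m in camera_models if m.code in diagrams_by_code]
```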
6. The apparatus of claim 5, wherein the generating unit is configured to:
generating N first thermodynamic diagrams according to N first user traffic data, wherein the N first user traffic data are collected by the N first intelligent cameras, the N first user traffic data correspond to the N first intelligent cameras one to one, and the N first user traffic data correspond to the N first thermodynamic diagrams one to one.
7. An intelligent monitoring device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of any of claims 1-4.
8. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-4.
CN202010786062.5A 2020-08-06 2020-08-06 User traffic monitoring method and related equipment Active CN111970490B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010786062.5A CN111970490B (en) 2020-08-06 2020-08-06 User traffic monitoring method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010786062.5A CN111970490B (en) 2020-08-06 2020-08-06 User traffic monitoring method and related equipment

Publications (2)

Publication Number Publication Date
CN111970490A (en) 2020-11-20
CN111970490B (en) 2023-04-18

Family

ID=73365202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010786062.5A Active CN111970490B (en) 2020-08-06 2020-08-06 User traffic monitoring method and related equipment

Country Status (1)

Country Link
CN (1) CN111970490B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598384B (en) * 2020-12-24 2022-07-26 卡斯柯信号有限公司 Station passenger flow monitoring method and system based on building information model
CN112906552A (en) * 2021-02-07 2021-06-04 上海卓繁信息技术股份有限公司 Inspection method and device based on computer vision and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018084512A (en) * 2016-11-24 2018-05-31 凸版印刷株式会社 Three-dimensional shape heat measurement device, method for measuring three-dimensional shape heat, and program
WO2018233398A1 (en) * 2017-06-23 2018-12-27 北京易真学思教育科技有限公司 Method, device, and electronic apparatus for monitoring learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102453858B1 (en) * 2015-12-23 2022-10-14 한화테크윈 주식회사 Apparatus and method for image processing
CN110619807B (en) * 2018-06-20 2022-12-02 北京京东尚科信息技术有限公司 Method and device for generating global thermodynamic diagram
CN108985218A (en) * 2018-07-10 2018-12-11 上海小蚁科技有限公司 People flow rate statistical method and device, calculates equipment at storage medium
CN109146746A (en) * 2018-07-31 2019-01-04 湖北智旅云科技有限公司 A kind of scenic spot big data operation management platform based on GIS technology
CN109803230B (en) * 2019-01-24 2021-03-12 北京万相融通科技股份有限公司 Method for drawing personnel distribution thermodynamic diagram of bus taking area of station
CN110209245B (en) * 2019-06-17 2021-01-08 Oppo广东移动通信有限公司 Face recognition method and related product

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018084512A (en) * 2016-11-24 2018-05-31 凸版印刷株式会社 Three-dimensional shape heat measurement device, method for measuring three-dimensional shape heat, and program
WO2018233398A1 (en) * 2017-06-23 2018-12-27 北京易真学思教育科技有限公司 Method, device, and electronic apparatus for monitoring learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shen Yinghui; Pan Zhihong. Research on a personnel emergency evacuation system based on multi-technology fusion. Jiangsu Construction, 2018, No. 02, 114-117. *

Also Published As

Publication number Publication date
CN111970490A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN109255352B (en) Target detection method, device and system
CN109697416B (en) Video data processing method and related device
CN110414313B (en) Abnormal behavior alarming method, device, server and storage medium
CN108256404B (en) Pedestrian detection method and device
CN108875540B (en) Image processing method, device and system and storage medium
CN109815818B (en) Target person tracking method, system and related device
CN111970490B (en) User traffic monitoring method and related equipment
WO2021135879A1 (en) Vehicle data monitoring method and apparatus, computer device, and storage medium
CN109657564A (en) A kind of personnel detection method, device, storage medium and terminal device on duty
CN109766779B (en) Loitering person identification method and related product
CN106878670B (en) A kind of method for processing video frequency and device
CN106372606A (en) Target object information generation method and unit identification method and unit and system
CN109815813B (en) Image processing method and related product
CN109740573B (en) Video analysis method, device, equipment and server
CN113255606A (en) Behavior recognition method and device, computer equipment and storage medium
CN110210457A (en) Method for detecting human face, device, equipment and computer readable storage medium
CN108881813A (en) A kind of video data handling procedure and device, monitoring system
CN109426785A (en) A kind of human body target personal identification method and device
CN109815839B (en) Loitering person identification method under micro-service architecture and related product
CN110348343A (en) A kind of act of violence monitoring method, device, storage medium and terminal device
CN108197585A (en) Recognition algorithms and device
Jin et al. DWCA-YOLOv5: An improve single shot detector for safety helmet detection
CN107316011B (en) Data processing method, device and storage medium
CN112860821A (en) Human-vehicle trajectory analysis method and related product
CN115953815A (en) Monitoring method and device for infrastructure site

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant