CN116704448B - Pedestrian recognition method and recognition system with multiple cameras - Google Patents
Pedestrian recognition method and recognition system with multiple cameras Download PDFInfo
- Publication number
- CN116704448B CN116704448B CN202310993574.2A CN202310993574A CN116704448B CN 116704448 B CN116704448 B CN 116704448B CN 202310993574 A CN202310993574 A CN 202310993574A CN 116704448 B CN116704448 B CN 116704448B
- Authority
- CN
- China
- Prior art keywords
- camera
- recognition
- identification
- index
- pressure
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 45
- 230000019771 cognition Effects 0.000 claims abstract description 65
- 230000000007 visual effect Effects 0.000 claims abstract description 61
- 230000006870 function Effects 0.000 claims abstract description 29
- 238000004458 analytical method Methods 0.000 claims abstract description 14
- 230000002159 abnormal effect Effects 0.000 claims description 54
- 238000004364 calculation method Methods 0.000 claims description 29
- 230000001960 triggered effect Effects 0.000 claims description 20
- 238000012544 monitoring process Methods 0.000 claims description 19
- 230000008569 process Effects 0.000 claims description 16
- 230000035772 mutation Effects 0.000 claims description 11
- 238000009825 accumulation Methods 0.000 claims description 9
- 230000005856 abnormality Effects 0.000 claims description 5
- 238000012423 maintenance Methods 0.000 claims description 4
- 230000001149 cognitive effect Effects 0.000 claims description 3
- 230000000694 effects Effects 0.000 abstract description 16
- 238000012545 processing Methods 0.000 abstract description 3
- 238000004590 computer program Methods 0.000 description 4
- 239000002699 waste material Substances 0.000 description 4
- 230000008878 coupling Effects 0.000 description 3
- 238000010168 coupling process Methods 0.000 description 3
- 238000005859 coupling reaction Methods 0.000 description 3
- 238000013461 design Methods 0.000 description 3
- 230000007246 mechanism Effects 0.000 description 3
- 238000003908 quality control method Methods 0.000 description 3
- 230000009467 reduction Effects 0.000 description 3
- 230000003044 adaptive effect Effects 0.000 description 2
- 238000004891 communication Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000005457 optimization Methods 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 241000533950 Leucojum Species 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000009191 jumping Effects 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000013439 planning Methods 0.000 description 1
- 238000013468 resource allocation Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses a pedestrian recognition method and a recognition system with multiple cameras, which particularly relate to the field of data processing, and are characterized in that visual cognition coefficients are generated by collecting various information of the cameras, and comprehensive analysis is performed to generate signals with different participation degrees, so that the hidden problem of the cameras can be found conveniently and maintained; generating a middle participation degree signal for a camera at an edge position, calculating a comprehensive recognition index by combining a visual cognition coefficient, and dynamically adjusting recognition performance according to the state; collecting the number of pedestrians in unit time in a shooting picture of each key camera, and combining the comprehensive recognition indexes to obtain a recognition pressure index so as to generate a trigger signal; when a trigger signal is generated, triggering adjacent cameras to start the recognition function in a picture shrinking mode until the cameras with recognition pressures within a threshold range are found, and automatically closing pedestrian recognition functions of surrounding cameras; and further, the dynamic adjustment of the recognition pressure can be realized, the burden of a system is lightened, and the pedestrian recognition efficiency and effect are improved.
Description
Technical Field
The invention relates to the field of data processing, in particular to a pedestrian recognition method and a pedestrian recognition system with multiple cameras.
Background
A large number of cameras are arranged on the existing transportation hub for pedestrian monitoring and recognition. The system adopts the peripheral cameras to firstly identify pedestrians entering and exiting, and generate the identification numbers, and other cameras continuously track and monitor the corresponding pedestrians according to the identification numbers so as to reduce the repeated identification rate and the resource consumption. However, the conventional pedestrian recognition method does not monitor the recognition state of the peripheral camera in real time, resulting in a significant reduction in recognition efficiency and effect. The subsequent cameras perform monitoring and tracking according to the wrong identification numbers, and confusion and misjudgment may be generated. In addition, the camera in the lowered state is identified using the identification capability of the design standard, but its actual identification capability is far lower than the original standard, resulting in easy identification failure.
The traditional recognition mode lacks a mechanism for automatically reducing the recognition pressure and transferring the recognition pressure to the adjacent cameras, lacks dynamic adjustment of the recognition pressure and flexibility, and is difficult to cope with uncertain pedestrian flow and recognition tasks.
In order to solve the above problems, a technical solution is now provided.
Disclosure of Invention
In order to overcome the defects in the prior art, the embodiment of the invention provides the steps of collecting the original information and the abnormal information of the cameras, generating the visual cognition coefficient, comprehensively analyzing all the cameras, generating signals with different participation degrees, and further conveniently finding the hidden problem of the cameras and carrying out targeted maintenance; generating a middle participation degree signal aiming at a camera at an edge position, calculating a comprehensive recognition index by combining a visual cognition coefficient, and dynamically adjusting recognition performance according to the state; collecting the number of pedestrians in unit time in a shooting picture of each key camera, combining the comprehensive recognition indexes to obtain a recognition pressure index, and generating a trigger signal and a non-generated signal according to the pressure index; when a trigger signal is generated, triggering adjacent cameras to start the recognition function in a picture shrinking mode until the cameras with recognition pressures within a threshold range are found, and automatically closing pedestrian recognition functions of surrounding cameras; and further, the dynamic adjustment of the recognition pressure can be realized, the burden of the system is lightened, and the pedestrian recognition efficiency and effect are improved, so that the problems in the background technology are solved.
In order to achieve the above purpose, the present invention provides the following technical solutions:
step S100, counting all available cameras, collecting original information and abnormal information of a camera application process, and generating a visual cognition coefficient;
step S200, comprehensively analyzing all cameras according to the visual cognition coefficient, and generating a re-participation signal, a medium-participation signal and a low-participation signal according to analysis results;
step S300, acquiring an identification capacity index of a camera which is positioned at a monitoring edge position and generates a mid-participation degree signal, and combining the identification capacity index and a visual cognition coefficient to obtain a comprehensive identification index;
step S400, collecting the number of pedestrians in unit time in a shooting picture of each key camera, calculating the number of pedestrians and the comprehensive recognition index to obtain a recognition pressure index, comprehensively analyzing the recognition pressure index, and generating a trigger signal and a non-generated signal;
step S500, aiming at the condition of generating a trigger signal, reducing a shooting picture and triggering an adjacent camera to start a recognition function, if the recognition pressure of the adjacent camera still does not belong to the recognition pressure threshold range, continuing reducing the shooting picture and triggering cameras adjacent to the periphery of the adjacent camera triggered after the trigger signal is generated until the recognition pressure of the camera belongs to the recognition pressure threshold range and the surrounding camera triggered and started by the camera is automatically closed to the pedestrian recognition function when the recognition pressure of the camera does not belong to the recognition pressure threshold range and the shooting picture is not required to be reduced.
In a preferred embodiment, step S100 specifically includes the following:
the method comprises the steps of collecting original information and abnormal information of a camera, wherein the original information comprises a picture balance index and the abnormal information comprises an abnormal variable index.
In a preferred embodiment, the picture balance index acquisition logic is:
step S101: dividing a picture shot by a camera into n areas, and collecting the image contrast, the image brightness, the image definition and the noise number of each area;
step S102: calculating the deviation value of each area, wherein the calculation formula is as follows:wherein pl is a deviation value, A1, A2, A3, A4 are respectively the image contrast, image brightness, image definition, and the number of noise points, B1, B2, B3, B4 are respectively the standard values of the image contrast, the image brightness, the image definition, and the number of noise points, and w1, w2, w3, and w4 are respectivelyW1, w2, w3, w4 are all greater than 0, and the offset value is used for evaluating the degree of the offset standard of each shot picture;
step S103: calculating a deviation value of each area, calculating a deviation average value and a picture balance index, wherein the calculation formula of the deviation average value is as follows:the calculation formula of the deviation discrete value is: / >Wherein ppj is a deviation average value, pps is a deviation discrete value, i=1, 2, 3, … …, n;
step S104: calculating a picture deviation index, wherein the calculation formula is as follows:where CV is the picture balance index.
In a preferred embodiment, the logic for obtaining the abnormality variability index is:
step S111: collecting the times and duration of influence identification in the running process of the camera, counting the abnormal accumulated time in each unit time when the camera runs, and generating a first ranking table according to the time sequence of collecting the abnormal accumulated time in the historical running time; generating a second sorting table according to the magnitude of the abnormal accumulated time, and sorting the second sorting table according to the acquisition time sequence if the difference value of two abnormal times of adjacent acquisition times is smaller than a comparison threshold value;
step S112: the number of the reverse pairs between the first sorting table and the second sorting table is calculated, and the calculation method is as follows: 1. traversing each sample in the second sorting table; 2. for the abnormal accumulation time of the current sample, searching the sample corresponding to the abnormal accumulation time in the second sorting table, and calculating the difference value of the abnormal accumulation time of the samples; 3. if the difference is less than the comparison threshold and the index of the corresponding sample in the first sorted list is less than the index of the current sample, then an inverted pair is considered to be present;
Step S113: and calculating the fault variability between the first sorting table and the second sorting table, wherein the calculation formula is as follows:where KT is the failure variability, m is the number of samples in the sorted list, and sion is the number of pairs in reverse order.
In a preferred embodiment, the visual recognition coefficient is obtained by comprehensively calculating the picture balance index and the abnormal mutation index, and the calculation formula is as follows:wherein VCI is a visual cognitive coefficient, CV and KT are respectively a picture balance index and an abnormal mutation index, and +.>Preset proportional coefficients of the picture balance index and the abnormal mutation index respectively, and +.>Are all greater than 0.
In a preferred embodiment, step S200 specifically includes the following:
comparing the visual cognition coefficient with a first judgment threshold value and a second judgment threshold value;
if the visual cognition coefficient is larger than or equal to a judgment threshold value II, the participation degree of the camera in the pedestrian recognition work is high, and a heavy-duty signal is generated;
if the visual cognition coefficient is larger than or equal to the first judgment threshold value and smaller than the second judgment threshold value, the participation degree of the camera in the pedestrian recognition work is moderate, and a middle participation degree signal is generated;
if the visual cognition coefficient is smaller than the first judgment threshold value, the maintenance is needed, a low-participation-degree signal is generated, and an early warning signal is sent out.
In a preferred embodiment, step S300 specifically includes the following:
counting all cameras, and counting the camera marks which are positioned at the monitoring edge positions and generate the mid-participation signals according to the mounting positions of the camerasThe method is characterized by comprising the steps of recording a key camera, collecting the identification capability index of the key camera, namely stably identifying the maximum number of people in unit time, combining the identification capability index with a visual cognition coefficient, a first judgment threshold value and a second judgment threshold value to obtain a comprehensive identification index, wherein the calculation formula is as follows:in the formula, sr and rci are respectively an identification capacity index and a comprehensive identification index, the comprehensive identification index is used for replacing the identification capacity index, and VCI, AT1 and AT2 are respectively a visual cognition coefficient, a judgment threshold value I and a judgment threshold value II.
In a preferred embodiment, step S400 specifically includes the following:
collecting the number of pedestrians in unit time in each key camera shooting picture, marking the number as the actual number, calculating the actual number and the comprehensive identification index to obtain the identification pressure index, wherein the calculation formula is as follows:wherein pr is the identification pressure, sr and an are the comprehensive identification index and the actual number respectively;
comparing the identification pressure with an identification pressure threshold range;
If the identification pressure belongs to the identification pressure threshold range, the actual number in the shooting picture of the camera is smaller than the comprehensive identification capacity, and no signal is generated;
if the identification pressure does not belong to the identification pressure threshold range, the actual number in the shooting picture of the camera is larger than the comprehensive identification capacity, and a trigger signal is generated.
In a preferred embodiment, step S500 specifically includes the following:
when the identification pressure does not belong to the identification pressure threshold range, the camera reduces the shooting picture in an equal proportion by taking the center of the shooting picture as a base point to serve as a new identification picture, then the camera triggers the adjacent camera to start the pedestrian identification function, if the identification pressure of the adjacent camera still does not belong to the identification pressure threshold range, similarly, the camera reduces the shooting picture in an equal proportion by taking the center of the shooting picture as the base point, triggers the adjacent camera to start the pedestrian identification function again until the identification pressure of the camera after starting belongs to the identification pressure threshold range, and when the identification pressure of the camera belongs to the identification pressure threshold range, the identification picture and the shooting picture are equal, the surrounding cameras triggered and started by the camera automatically close the pedestrian identification function.
A pedestrian recognition system with multiple cameras comprises an initial acquisition unit, a preliminary judgment unit, a comprehensive analysis unit, a pressure cognition unit and a pressure distribution unit;
the initial acquisition unit is used for counting all available cameras, acquiring original information and abnormal information of the camera in the application process, generating a visual cognition coefficient signal and sending the visual cognition coefficient signal to the initial judgment unit;
the primary judging unit is used for comprehensively analyzing all cameras according to the visual cognition coefficient, generating a re-participation degree signal, a middle-participation degree signal and a low-participation degree signal according to the analysis result, generating a re-participation degree signal, a middle-participation degree signal and a low-participation degree signal, and transmitting the re-participation degree signal, the middle-participation degree signal and the low-participation degree signal to the comprehensive analyzing unit;
the comprehensive analysis unit collects the identification capacity index of the camera which is positioned at the monitoring edge position and generates the mid-participation degree signal, combines the identification capacity index and the visual cognition coefficient to obtain a comprehensive identification index, generates sum and identification index signals and sends the sum and identification index signals to the pressure cognition unit;
the pressure cognition unit is used for collecting the number of pedestrians in unit time in each key camera shooting picture, calculating the number of pedestrians and the comprehensive recognition index to obtain a recognition pressure index, comprehensively analyzing the recognition pressure index, generating a trigger signal and a non-generated signal, and generating the trigger signal and sending the trigger signal to the pressure distribution unit;
The pressure distribution unit reduces the shooting picture and triggers the adjacent cameras to start the recognition function under the condition of generating the trigger signal, if the recognition pressure of the adjacent cameras still does not belong to the recognition pressure threshold range, the shooting picture is continuously reduced and the adjacent cameras around the adjacent cameras triggered after the trigger signal is generated are triggered until the recognition pressure of the cameras belongs to the recognition pressure threshold range and the recognition function of pedestrians is automatically closed by the cameras around the cameras triggered and started by the cameras when the recognition pressure of the cameras does not need to be reduced.
The pedestrian recognition method and the recognition system with multiple cameras have the technical effects and advantages that:
1. and comprehensively calculating to obtain a visual cognition coefficient by collecting a picture balance index and an abnormal mutation index in the running process of the camera, and evaluating the pedestrian recognition effect of the camera. The visual cognition coefficient reflects the picture quality and stability of the camera, and the contribution degree of the camera to the pedestrian recognition task. Further, the visual cognition coefficient is compared with a set visual cognition threshold, a heavy participation degree signal and a low participation degree signal are generated through a comparison result, whether the recognition capability of the camera reaches an expected level or not is facilitated to be judged automatically, if the recognition effect is excellent, the heavy participation degree signal is generated, the camera is suitable for playing an important role in a high-quality pedestrian recognition task, and if the recognition effect is poor, the low participation degree signal is generated, the camera is suitable for being used in a low-quality or non-critical pedestrian recognition task, and adjustment and optimization are needed. By the mode, camera resources can be effectively utilized, and the overall performance and reliability of the pedestrian recognition system are improved;
2. Aiming at the camera which is positioned at the monitoring edge position and generates the mid-participation degree signal, the identification capability index is acquired, and the comprehensive identification index is obtained by combining the identification capability index with the visual cognition coefficient, the first judgment threshold value and the second judgment threshold value, so that the identification capability of the camera can be actively reduced under the condition that the identification capability of the camera is poor, and the identification capability of the camera can be dynamically adjusted according to the comprehensive identification index. When the camera is positioned at the monitoring edge position and the identification capacity is poor, the waste of resources in the low-efficiency identification process can be avoided by reducing the identification capacity, so that the overall performance and efficiency of the system are improved;
3. the actual recognition capacity and the actual recognition pressure of the cameras can be judged by comparing the actual number with the comprehensive recognition indexes to obtain the recognition pressure indexes through calculation, if the recognition pressure is in the threshold range, the cameras can normally recognize pedestrians and the pressure is smaller, signals are not required to be generated, the original recognition picture size is maintained, if the recognition pressure is not in the threshold range, the limitation of the recognition capacity of the cameras and the larger pressure are indicated, an adaptive recognition picture reduction strategy is triggered at the moment, and the pedestrians which are not recognized are transmitted to the adjacent cameras to be recognized until the recognition pressure is restored in the threshold range, so that the efficient utilization of camera resources is facilitated. When the camera can normally identify pedestrians and the pressure is small, the original picture range is maintained, and the resource waste is reduced. When the recognition capability of the camera is limited and the pressure is high, the system load can be effectively reduced, the overall recognition efficiency is improved, and the accuracy and the continuity of pedestrian recognition are ensured by automatically narrowing the picture range and utilizing the adjacent camera to assist in recognition. The dynamic adjustment mechanism can enable the camera to keep good performance under different scenes and loads, so that the utilization of system resources and the pedestrian recognition effect are optimized.
Drawings
FIG. 1 is a schematic flow chart of a pedestrian recognition method and a recognition system with multiple cameras according to the invention;
fig. 2 is a schematic structural diagram of a pedestrian recognition method and a recognition system with multiple cameras according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The significance of deploying a large number of cameras to identify and track the past passengers in large transportation hub places such as stations and the like is various. First, the deployment of cameras helps to improve the level of security. Through comprehensive monitoring and recognition functions, potential safety problems such as criminals, improper behaviors and the like can be timely found and dealt with, and safety and stability inside and outside the transportation junction are ensured. And secondly, the camera can monitor traffic flow and congestion in real time, provide data support for traffic management departments, optimize traffic signal control, improve traffic running efficiency and reduce traffic congestion. In addition, the identification and tracking of passengers helps to provide personalized services. For example, automatic navigation, real-time travel information, etc., provides a more convenient travel experience for the traveler. Finally, by analyzing the data collected by the cameras, important information can be provided for decision makers, the transportation hub planning and resource allocation are optimized, the transportation efficiency is improved, and sudden events and disasters are prevented and dealt with. In conclusion, the deployment of the cameras in large transportation junction places has important significance, and provides omnibearing support for safety, service and decision.
The camera therefore plays a decisive role in this context, on the basis of which an analysis of the operational recognition state of the camera is necessary.
Example 1
Fig. 1 shows a pedestrian recognition method with multiple cameras, which comprises the following steps:
step S100, counting all available cameras, collecting original information and abnormal information of a camera application process, and generating a visual cognition coefficient;
step S200, comprehensively analyzing all cameras according to the visual cognition coefficient, and generating a re-participation signal, a medium-participation signal and a low-participation signal according to analysis results;
step S300, acquiring an identification capacity index of a camera which is positioned at a monitoring edge position and generates a mid-participation degree signal, and combining the identification capacity index and a visual cognition coefficient to obtain a comprehensive identification index;
step S400, collecting the number of pedestrians in unit time in a shooting picture of each key camera, calculating the number of pedestrians and the comprehensive recognition index to obtain a recognition pressure index, comprehensively analyzing the recognition pressure index, and generating a trigger signal and a non-generated signal;
step S500, aiming at the condition of generating a trigger signal, reducing a shooting picture and triggering an adjacent camera to start a recognition function, if the recognition pressure of the adjacent camera still does not belong to the recognition pressure threshold range, continuing reducing the shooting picture and triggering cameras adjacent to the periphery of the adjacent camera triggered after the trigger signal is generated until the recognition pressure of the camera belongs to the recognition pressure threshold range and the surrounding camera triggered and started by the camera is automatically closed to the pedestrian recognition function when the recognition pressure of the camera does not belong to the recognition pressure threshold range and the shooting picture is not required to be reduced.
The step S100 specifically includes the following:
the method comprises the steps of collecting original information and abnormal information of a camera, wherein the original information comprises a picture balance index and the abnormal information comprises an abnormal variable index.
The acquisition logic of the picture balance index is as follows:
step S101: dividing a picture shot by a camera into n areas, and collecting the image contrast, the image brightness, the image definition and the noise number of each area;
step S102: calculating the deviation value of each area, wherein the calculation formula is as follows:wherein pl is a deviation value, A1, A2, A3, A4 are respectively the image contrast, image brightness, image definition, and the number of noise points, B1, B2, B3, B4 are respectively the standard values of the image contrast, the image brightness, the image definition, and the number of noise points, and w1, w2, w3, and w4 are respectivelyW1, w2, w3, w4 are all greater than 0, and the offset value is used for evaluating the degree of the offset standard of each shot picture;
step S103: calculating a deviation value of each area, calculating a deviation average value and a picture balance index, wherein the calculation formula of the deviation average value is as follows:the calculation formula of the deviation discrete value is: />Where ppj is the mean deviation value, pps is the discrete deviation value, i=1, 2, 3, … …, n.
Step S104: calculating a picture deviation index, wherein the calculation formula is as follows:in the formula, CV is a picture balance index, the picture balance index is used for reflecting the overall balance and stability of an image, and the larger picture balance index is used for indicating that the picture quality control of each subarea is unbalanced in a picture shot by a camera, so that the poor shooting quality control force of the camera is indicated, otherwise, the picture quality control is indicated to be relatively balanced, the shooting quality is indicated to be high, and the camera is in a controllable range.
The acquisition logic of the abnormality variable index is as follows:
step S111: the number and duration of the influence recognition in the operation process of the acquisition camera, such as jumping, snowflake and the like of a picture in the shooting process, are considered to be abnormal. Counting the abnormal accumulated time of each unit time when the camera operates, and generating a first ranking table according to the time sequence of collecting the abnormal accumulated time in the historical operation time; and generating a second sorting table according to the magnitude of the abnormal accumulated time, and sorting the second sorting table according to the acquisition time sequence if the difference value of two abnormal times of adjacent acquisition times is smaller than a comparison threshold value.
Step S112: the number of the reverse pairs between the first sorting table and the second sorting table is calculated, and the calculation method is as follows: 1. traversing each sample in the second sorting table; 2. for the abnormal accumulation time of the current sample, searching the sample corresponding to the abnormal accumulation time in the second sorting table, and calculating the difference value of the abnormal accumulation time of the samples; 3. if the difference is less than the comparison threshold and the index of the corresponding sample in the first sorted list is less than the index of the current sample, then an inverted pair is considered to be present;
Step S113: and calculating the fault variability between the first sorting table and the second sorting table, wherein the calculation formula is as follows:wherein KT is the failure variability and m is the orderingThe number of samples in the table, sion, is the number of pairs in reverse order.
When the fault variability is used for reflecting the consistency of the abnormal conditions and the time sequence of the operation of the camera, when the fault variability is larger, the abnormal conditions of the camera are indicated to have higher similarity between different time periods, which means that the times and the duration of the occurrence of the abnormality of the camera in the different time periods are not changed greatly, and the abnormal conditions are stable; on the contrary, when the fault variability is smaller, the abnormal condition of the camera is shown to have larger difference between different time periods, and larger fluctuation exists, which means that the times and duration of occurrence of the abnormality of the camera in different time periods are changed greatly, and the abnormal condition is unstable.
The picture equilibrium index and the abnormal mutation index are comprehensively calculated to obtain a visual cognition coefficient, wherein the calculation formula is as follows:wherein VCI is a visual cognitive coefficient, CV and KT are respectively a picture balance index and an abnormal mutation index, and +.>Preset proportional coefficients of the picture balance index and the abnormal mutation index respectively, and +. >Are all greater than 0.
The visual cognition coefficient is used for reflecting the contribution degree of the picture shot by the camera in the aspect of pedestrian recognition, namely the capability of the shot picture for pedestrian recognition, and reflects the recognition participation capability of the camera. The larger the visual cognition coefficient is, the better the picture quality of the camera is, the strong the controllability is, and the influence of the recognition fault in the running process is small, so that the camera can be used as an excellent pedestrian recognition carrier. This means that the picture quality of the camera is stable and suitable for high-quality pedestrian recognition tasks.
The step S200 specifically includes the following:
comparing the visual cognition coefficient with a first judgment threshold value and a second judgment threshold value;
if the visual cognition coefficient is larger than or equal to a judgment threshold value II, the participation degree of the camera in the pedestrian recognition work is high, the provided picture is good in recognition effect, and a heavy-duty signal is generated;
if the visual cognition coefficient is larger than or equal to the first judgment threshold and smaller than the second judgment threshold, the participation degree of the camera in the pedestrian recognition work is moderate, the provided picture is at a medium level in the aspect of recognition effect, which means that the picture of the camera has a certain contribution to the pedestrian recognition, but the sustainable pressure on the pedestrian recognition is limited, and a medium participation degree signal is generated;
If the visual cognition coefficient is smaller than the first judgment threshold value, the participation degree of the camera in the pedestrian recognition work is lower, the provided image is poor in recognition effect, the image is difficult to be used as a recognition carrier, maintenance is needed, a low participation degree signal is generated, and an early warning signal is sent.
According to the invention, the visual cognition coefficient is comprehensively calculated by collecting the picture balance index and the abnormal mutation index in the running process of the camera, so that the effect of the camera on identifying pedestrians is evaluated. The visual cognition coefficient reflects the picture quality and stability of the camera, and the contribution degree of the camera to the pedestrian recognition task. Further, the visual cognition coefficient is compared with a set visual cognition threshold, a heavy participation degree signal and a low participation degree signal are generated through a comparison result, whether the recognition capability of the camera reaches an expected level or not is facilitated to be judged automatically, if the recognition effect is excellent, the heavy participation degree signal is generated, the camera is suitable for playing an important role in a high-quality pedestrian recognition task, and if the recognition effect is poor, the low participation degree signal is generated, the camera is suitable for being used in a low-quality or non-critical pedestrian recognition task, and adjustment and optimization are needed. By the aid of the method, camera resources can be utilized more effectively, and overall performance and reliability of the pedestrian recognition system are improved.
The cameras at the edge positions are firstly contacted with pedestrians, so that pedestrians are firstly identified, then the pedestrian identification results are sent to other cameras in the same area, so that the pedestrians are known to be identified, then the other cameras can avoid repeated identification of the pedestrians, and the pedestrians are continuously tracked based on the current identification results, so that the burden and resource effect of a system are reduced.
The medium engagement signal indicates that the state of the camera has changed, and at this time, the standard of the design is required to be automatically reduced according to the state change degree so as to avoid starting overload operation.
The step S300 specifically includes the following:
counting all cameras, counting the cameras which are positioned at the monitoring edge positions according to the mounting positions of the cameras and generating the mid-participation signals, and marking the cameras as key cameras, collecting the identification ability indexes of the key cameras, namely stably identifying the maximum number of people in unit time, combining the identification ability indexes with the visual cognition coefficient, the first judgment threshold value and the second judgment threshold value to obtain a comprehensive identification index, wherein the calculation formula is as follows:in the formula, sr and rci are respectively an identification capacity index and a comprehensive identification index, the comprehensive identification index is used for replacing the identification capacity index, and VCI, AT1 and AT2 are respectively a visual cognition coefficient, a judgment threshold value I and a judgment threshold value II.
The position at the monitoring edge means that the mounting position of the cameras is at the outermost periphery of the whole monitoring area, and when a pedestrian wants to enter the area, the cameras firstly do the pictures shot by monitoring.
The invention aims at the camera which is positioned at the monitoring edge position and generates the mid-participation degree signal, acquires the identification capability index, combines the identification capability index with the visual cognition coefficient, the first judgment threshold value and the second judgment threshold value to obtain the comprehensive identification index, can actively reduce the identification capability of the camera under the condition of poor identification capability of the camera, and can dynamically adjust the identification capability of the camera according to the comprehensive identification index. When the camera is positioned at the monitoring edge position and the identification capacity is poor, the waste of resources in the inefficient identification process can be avoided by reducing the identification capacity, so that the overall performance and efficiency of the system are improved.
Step S400 specifically includes the following:
collecting the number of pedestrians in unit time in each key camera shooting picture, marking the number as the actual number, calculating the actual number and the comprehensive identification index to obtain the identification pressure index, wherein the calculation formula is as follows:wherein pr is the recognition pressure, sr and an are the comprehensive recognition index and the actual number respectively.
The recognition pressure is used for reflecting the pressure of the camera for recognizing pedestrians, and the recognition pressure is compared with a recognition pressure threshold range;
if the identification pressure belongs to the identification pressure threshold range, the actual number of the camera in the shooting picture is smaller than the comprehensive identification capacity, the identification picture of the camera is the size of the shooting picture, and no signal is generated;
if the identification pressure does not belong to the identification pressure threshold range, the actual number in the shooting picture of the camera is larger than the comprehensive identification capacity, and a trigger signal is generated.
The step S500 specifically includes the following:
when the identification pressure does not belong to the identification pressure threshold range, the camera reduces the shooting picture in an equal proportion by taking the center of the shooting picture as a base point to serve as a new identification picture, then the camera triggers the adjacent camera to start the pedestrian identification function, if the identification pressure of the adjacent camera still does not belong to the identification pressure threshold range, similarly, the camera reduces the shooting picture in an equal proportion by taking the center of the shooting picture as the base point, triggers the adjacent camera to start the pedestrian identification function again until the identification pressure of the camera after starting belongs to the identification pressure threshold range, and when the identification pressure of the camera belongs to the identification pressure threshold range, the identification picture and the shooting picture are equal, the surrounding cameras triggered and started by the camera automatically close the pedestrian identification function.
According to the invention, the actual number and the comprehensive identification index are calculated to obtain the identification pressure index, and the actual identification capacity and pressure of the camera can be judged by comparing the identification pressure with the set identification pressure threshold range. If the recognition pressure is within the threshold range, the camera can normally recognize pedestrians, the pressure is small, signals do not need to be generated, and the original recognition picture size is maintained. If the recognition pressure is not in the threshold range, the recognition capability of the camera is limited, and the pressure is high, at the moment, an adaptive recognition picture reduction strategy is triggered, and unidentified pedestrians are identified by the adjacent camera until the recognition pressure is restored in the threshold range, so that the efficient utilization of camera resources is facilitated. When the camera can normally identify pedestrians and the pressure is small, the original picture range is maintained, and the resource waste is reduced. When the recognition capability of the camera is limited and the pressure is high, the system load can be effectively reduced, the overall recognition efficiency is improved, and the accuracy and the continuity of pedestrian recognition are ensured by automatically narrowing the picture range and utilizing the adjacent camera to assist in recognition. The dynamic adjustment mechanism can enable the camera to keep good performance under different scenes and loads, so that the utilization of system resources and the pedestrian recognition effect are optimized.
Example 2
FIG. 2 shows a multi-camera pedestrian recognition system of the present invention, including an initial acquisition unit, a preliminary judgment unit, a comprehensive analysis unit, a pressure recognition unit, and a pressure distribution unit;
the initial acquisition unit is used for counting all available cameras, acquiring original information and abnormal information of the camera in the application process, generating a visual cognition coefficient signal and sending the visual cognition coefficient signal to the initial judgment unit;
the primary judging unit is used for comprehensively analyzing all cameras according to the visual cognition coefficient, generating a re-participation degree signal, a middle-participation degree signal and a low-participation degree signal according to the analysis result, generating a re-participation degree signal, a middle-participation degree signal and a low-participation degree signal, and transmitting the re-participation degree signal, the middle-participation degree signal and the low-participation degree signal to the comprehensive analyzing unit;
the comprehensive analysis unit collects the identification capacity index of the camera which is positioned at the monitoring edge position and generates the mid-participation degree signal, combines the identification capacity index and the visual cognition coefficient to obtain a comprehensive identification index, generates sum and identification index signals and sends the sum and identification index signals to the pressure cognition unit;
the pressure cognition unit is used for collecting the number of pedestrians in unit time in each key camera shooting picture, calculating the number of pedestrians and the comprehensive recognition index to obtain a recognition pressure index, comprehensively analyzing the recognition pressure index, generating a trigger signal and a non-generated signal, and generating the trigger signal and sending the trigger signal to the pressure distribution unit;
The pressure distribution unit reduces the shooting picture and triggers the adjacent cameras to start the recognition function under the condition of generating the trigger signal, if the recognition pressure of the adjacent cameras still does not belong to the recognition pressure threshold range, the shooting picture is continuously reduced and the adjacent cameras around the adjacent cameras triggered after the trigger signal is generated are triggered until the recognition pressure of the cameras belongs to the recognition pressure threshold range and the recognition function of pedestrians is automatically closed by the cameras around the cameras triggered and started by the cameras when the recognition pressure of the cameras does not need to be reduced.
The above formulas are all formulas with dimensionality removed and numerical calculation, the formulas are formulas with the latest real situation obtained by software simulation through collecting a large amount of data, and preset parameters and threshold selection in the formulas are set by those skilled in the art according to the actual situation.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any other combination. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website site, computer, server, or data center to another website site, computer, server, or data center by wired (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more sets of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Finally: the foregoing description of the preferred embodiments of the application is not intended to limit the application to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and principles of the application are intended to be included within the scope of the application.
Claims (10)
1. The pedestrian recognition method with multiple cameras is characterized by comprising the following steps of:
step S100, counting all available cameras, collecting original information and abnormal information of a camera application process, and generating a visual cognition coefficient;
step S200, comprehensively analyzing all cameras according to the visual cognition coefficient, and generating a re-participation signal, a medium-participation signal and a low-participation signal according to analysis results;
step S300, acquiring an identification capacity index of a camera which is positioned at a monitoring edge position and generates a mid-participation degree signal, and combining the identification capacity index and a visual cognition coefficient to obtain a comprehensive identification index;
Step S400, collecting the number of pedestrians in unit time in a shooting picture of each key camera, calculating the number of pedestrians and the comprehensive recognition index to obtain a recognition pressure index, comprehensively analyzing the recognition pressure index, and generating a trigger signal and a non-generated signal;
step S500, aiming at the condition of generating a trigger signal, reducing a shooting picture and triggering an adjacent camera to start a recognition function, if the recognition pressure of the adjacent camera still does not belong to the recognition pressure threshold range, continuing reducing the shooting picture and triggering cameras adjacent to the periphery of the adjacent camera triggered after the trigger signal is generated until the recognition pressure of the camera belongs to the recognition pressure threshold range and the surrounding camera triggered and started by the camera is automatically closed to the pedestrian recognition function when the recognition pressure of the camera does not belong to the recognition pressure threshold range and the shooting picture is not required to be reduced.
2. The multi-camera pedestrian recognition method according to claim 1, wherein:
the step S100 specifically includes the following:
the method comprises the steps of collecting original information and abnormal information of a camera, wherein the original information comprises a picture balance index and the abnormal information comprises an abnormal variable index.
3. The multi-camera pedestrian recognition method according to claim 2, wherein:
The acquisition logic of the picture balance index is as follows:
step S101: dividing a picture shot by a camera into n areas, and collecting the image contrast, the image brightness, the image definition and the noise number of each area;
step S102: calculating the deviation value of each area, wherein the calculation formula is as follows:wherein pl is a deviation value, A1, A2, A3, A4 are respectively the image contrast, image brightness, image definition, and the number of noise points, B1, B2, B3, B4 are respectively the standard values of the image contrast, the image brightness, the image definition, and the number of noise points, and w1, w2, w3, and w4 are respectivelyW1, w2, w3, w4 are all greater than 0, and the offset value is used for evaluating the degree of the offset standard of each shot picture;
step S103: calculating a deviation value of each area, calculating a deviation average value and a picture balance index, wherein the calculation formula of the deviation average value is as follows:the calculation formula of the deviation discrete value is: />Wherein ppj is a deviation average value, pps is a deviation discrete value, i=1, 2, 3, … …, n;
step S104: calculating a picture deviation index, wherein the calculation formula is as follows:where CV is the picture balance index.
4. A multi-camera pedestrian recognition method in accordance with claim 3, wherein:
The acquisition logic of the abnormality variable index is as follows:
step S111: collecting the times and duration of influence identification in the running process of the camera, counting the abnormal accumulated time in each unit time when the camera runs, and generating a first ranking table according to the time sequence of collecting the abnormal accumulated time in the historical running time; generating a second sorting table according to the magnitude of the abnormal accumulated time, and sorting the second sorting table according to the acquisition time sequence if the difference value of two abnormal times of adjacent acquisition times is smaller than a comparison threshold value;
step S112: the number of the reverse pairs between the first sorting table and the second sorting table is calculated, and the calculation method is as follows: 1. traversing each sample in the second sorting table; 2. for the abnormal accumulation time of the current sample, searching the sample corresponding to the abnormal accumulation time in the second sorting table, and calculating the difference value of the abnormal accumulation time of the samples; 3. if the difference is less than the comparison threshold and the index of the corresponding sample in the first sorted list is less than the index of the current sample, then an inverted pair is considered to be present;
step S113: and calculating the fault variability between the first sorting table and the second sorting table, wherein the calculation formula is as follows:where KT is the failure variability, m is the number of samples in the sorted list, and sion is the number of pairs in reverse order.
5. The multi-camera pedestrian recognition method of claim 4, wherein:
the picture equilibrium index and the abnormal mutation index are comprehensively calculated to obtain a visual cognition coefficient, wherein the calculation formula is as follows:wherein VCI is a visual cognitive coefficient, CV and KT are respectively a picture balance index and an abnormal mutation index, and +.>Preset proportional coefficients of the picture balance index and the abnormal mutation index respectively, and +.>Are all greater than 0.
6. The multi-camera pedestrian recognition method of claim 5, wherein:
the step S200 specifically includes the following:
comparing the visual cognition coefficient with a first judgment threshold value and a second judgment threshold value;
if the visual cognition coefficient is larger than or equal to a judgment threshold value II, the participation degree of the camera in the pedestrian recognition work is high, and a heavy-duty signal is generated;
if the visual cognition coefficient is larger than or equal to the first judgment threshold value and smaller than the second judgment threshold value, the participation degree of the camera in the pedestrian recognition work is moderate, and a middle participation degree signal is generated;
if the visual cognition coefficient is smaller than the first judgment threshold value, the maintenance is needed, a low-participation-degree signal is generated, and an early warning signal is sent out.
7. The multi-camera pedestrian recognition method of claim 6, wherein:
The step S300 specifically includes the following:
counting all cameras, counting the cameras which are positioned at the monitoring edge positions according to the mounting positions of the cameras and generating the mid-participation signals, and marking the cameras as key cameras, collecting the identification ability indexes of the key cameras, namely stably identifying the maximum number of people in unit time, combining the identification ability indexes with the visual cognition coefficient, the first judgment threshold value and the second judgment threshold value to obtain a comprehensive identification index, wherein the calculation formula is as follows:in the formula, sr and rci are respectively an identification capacity index and a comprehensive identification index, the comprehensive identification index is used for replacing the identification capacity index, and VCI, AT1 and AT2 are respectively a visual cognition coefficient, a judgment threshold value I and a judgment threshold value II.
8. The multi-camera pedestrian recognition method of claim 7, wherein:
step S400 specifically includes the following:
collecting the number of pedestrians in unit time in each key camera shooting picture, marking the number as the actual number, calculating the actual number and the comprehensive identification index to obtain the identification pressure index, wherein the calculation formula is as follows:wherein pr is the identification pressure, sr and an are the comprehensive identification index and the actual number respectively;
comparing the identification pressure with an identification pressure threshold range;
If the identification pressure belongs to the identification pressure threshold range, the actual number in the shooting picture of the camera is smaller than the comprehensive identification capacity, and no signal is generated;
if the identification pressure does not belong to the identification pressure threshold range, the actual number in the shooting picture of the camera is larger than the comprehensive identification capacity, and a trigger signal is generated.
9. The multi-camera pedestrian recognition method of claim 8, wherein:
the step S500 specifically includes the following:
when the identification pressure does not belong to the identification pressure threshold range, the camera reduces the shooting picture in an equal proportion by taking the center of the shooting picture as a base point to serve as a new identification picture, then the camera triggers the adjacent camera to start the pedestrian identification function, if the identification pressure of the adjacent camera still does not belong to the identification pressure threshold range, similarly, the camera reduces the shooting picture in an equal proportion by taking the center of the shooting picture as the base point, triggers the adjacent camera to start the pedestrian identification function again until the identification pressure of the camera after starting belongs to the identification pressure threshold range, and when the identification pressure of the camera belongs to the identification pressure threshold range, the identification picture and the shooting picture are equal, the surrounding cameras triggered and started by the camera automatically close the pedestrian identification function.
10. A multi-camera pedestrian recognition system for implementing the recognition method of any one of claims 1-9, comprising an initial acquisition unit, a preliminary judgment unit, a comprehensive analysis unit, a pressure cognition unit, and a pressure distribution unit;
the initial acquisition unit is used for counting all available cameras, acquiring original information and abnormal information of the camera in the application process, generating a visual cognition coefficient signal and sending the visual cognition coefficient signal to the initial judgment unit;
the primary judging unit is used for comprehensively analyzing all cameras according to the visual cognition coefficient, generating a re-participation degree signal, a middle-participation degree signal and a low-participation degree signal according to the analysis result, generating a re-participation degree signal, a middle-participation degree signal and a low-participation degree signal, and transmitting the re-participation degree signal, the middle-participation degree signal and the low-participation degree signal to the comprehensive analyzing unit;
the comprehensive analysis unit collects the identification capacity index of the camera which is positioned at the monitoring edge position and generates the mid-participation degree signal, combines the identification capacity index and the visual cognition coefficient to obtain a comprehensive identification index, generates sum and identification index signals and sends the sum and identification index signals to the pressure cognition unit;
the pressure cognition unit is used for collecting the number of pedestrians in unit time in each key camera shooting picture, calculating the number of pedestrians and the comprehensive recognition index to obtain a recognition pressure index, comprehensively analyzing the recognition pressure index, generating a trigger signal and a non-generated signal, and generating the trigger signal and sending the trigger signal to the pressure distribution unit;
The pressure distribution unit reduces the shooting picture and triggers the adjacent cameras to start the recognition function under the condition of generating the trigger signal, if the recognition pressure of the adjacent cameras still does not belong to the recognition pressure threshold range, the shooting picture is continuously reduced and the adjacent cameras around the adjacent cameras triggered after the trigger signal is generated are triggered until the recognition pressure of the cameras belongs to the recognition pressure threshold range and the recognition function of pedestrians is automatically closed by the cameras around the cameras triggered and started by the cameras when the recognition pressure of the cameras does not need to be reduced.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310993574.2A CN116704448B (en) | 2023-08-09 | 2023-08-09 | Pedestrian recognition method and recognition system with multiple cameras |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310993574.2A CN116704448B (en) | 2023-08-09 | 2023-08-09 | Pedestrian recognition method and recognition system with multiple cameras |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116704448A CN116704448A (en) | 2023-09-05 |
CN116704448B true CN116704448B (en) | 2023-10-24 |
Family
ID=87831580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310993574.2A Active CN116704448B (en) | 2023-08-09 | 2023-08-09 | Pedestrian recognition method and recognition system with multiple cameras |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116704448B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109086726A (en) * | 2018-08-10 | 2018-12-25 | 陈涛 | A kind of topography's recognition methods and system based on AR intelligent glasses |
CN110298278A (en) * | 2019-06-19 | 2019-10-01 | 中国计量大学 | A kind of underground parking garage Pedestrians and vehicles monitoring method based on artificial intelligence |
CN111079600A (en) * | 2019-12-06 | 2020-04-28 | 长沙海格北斗信息技术有限公司 | Pedestrian identification method and system with multiple cameras |
CN111385484A (en) * | 2018-12-28 | 2020-07-07 | 北京字节跳动网络技术有限公司 | Information processing method and device |
CN112270241A (en) * | 2020-10-22 | 2021-01-26 | 珠海大横琴科技发展有限公司 | Pedestrian re-identification method and device, electronic equipment and computer readable storage medium |
CN112800950A (en) * | 2021-01-21 | 2021-05-14 | 合肥品恩智能科技有限公司 | Large security activity face searching method based on deep learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6132452B2 (en) * | 2014-05-30 | 2017-05-24 | 株式会社日立国際電気 | Wireless communication apparatus and wireless communication system |
-
2023
- 2023-08-09 CN CN202310993574.2A patent/CN116704448B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109086726A (en) * | 2018-08-10 | 2018-12-25 | 陈涛 | A kind of topography's recognition methods and system based on AR intelligent glasses |
CN111385484A (en) * | 2018-12-28 | 2020-07-07 | 北京字节跳动网络技术有限公司 | Information processing method and device |
CN110298278A (en) * | 2019-06-19 | 2019-10-01 | 中国计量大学 | A kind of underground parking garage Pedestrians and vehicles monitoring method based on artificial intelligence |
CN111079600A (en) * | 2019-12-06 | 2020-04-28 | 长沙海格北斗信息技术有限公司 | Pedestrian identification method and system with multiple cameras |
CN112270241A (en) * | 2020-10-22 | 2021-01-26 | 珠海大横琴科技发展有限公司 | Pedestrian re-identification method and device, electronic equipment and computer readable storage medium |
CN112800950A (en) * | 2021-01-21 | 2021-05-14 | 合肥品恩智能科技有限公司 | Large security activity face searching method based on deep learning |
Non-Patent Citations (2)
Title |
---|
A Stride Detection Algorithm Based on Triaxial Acceleration Characteristics of Pedestrians;Hongyu Zhao .etal;IEEE;17-21 * |
基于多路图像融合的目标跟踪系统设计;梁兴建 等;四川理工学院学报(自然科学版)(第06期);全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN116704448A (en) | 2023-09-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111294812B (en) | Resource capacity-expansion planning method and system | |
CN111325451B (en) | Intelligent building multistage scheduling method, intelligent building scheduling center and system | |
CN110955586A (en) | System fault prediction method, device and equipment based on log | |
CN110969215A (en) | Clustering method and device, storage medium and electronic device | |
CN109150565B (en) | Network situation perception method, device and system | |
CN107305611A (en) | The corresponding method for establishing model of malice account and device, the method and apparatus of malice account identification | |
CN106803815B (en) | Flow control method and device | |
CN115495231B (en) | Dynamic resource scheduling method and system under high concurrency task complex scene | |
CN112101692A (en) | Method and device for identifying poor-quality users of mobile Internet | |
CN114584758A (en) | City-level monitoring video quality assessment method and system | |
CN117493067B (en) | Fusing control method and system based on data service protection | |
CN114338351B (en) | Network anomaly root cause determination method and device, computer equipment and storage medium | |
CN110139278B (en) | Method of safety type collusion attack defense system under Internet of vehicles | |
CN116704448B (en) | Pedestrian recognition method and recognition system with multiple cameras | |
CN112995287B (en) | Keyword detection task scheduling method facing edge calculation | |
CN112104730B (en) | Scheduling method and device of storage tasks and electronic equipment | |
CN109120424A (en) | A kind of bandwidth scheduling method and device | |
KR102525491B1 (en) | Method of providing structure damage detection report | |
CN116915432A (en) | Method, device, equipment and storage medium for arranging calculation network security | |
KR101537723B1 (en) | Video analysis system for using priority of video analysis filter and method thereof | |
CN114640841A (en) | Abnormity determining method and device, electronic equipment and storage medium | |
CN114884969A (en) | Cluster instance quantity regulation and control method, device, terminal and storage medium | |
CN114553964A (en) | Control method, device and equipment of simulcast system and simulcast system | |
Machida et al. | Optimizing resiliency of distributed video surveillance system for safer city | |
CN112632411A (en) | Target object data query method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |