CN117319809B - Intelligent adjusting method for monitoring visual field - Google Patents

Intelligent adjusting method for monitoring visual field

Info

Publication number
CN117319809B
Authority
CN
China
Prior art keywords
value
visual field
grid
monitoring
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311578066.4A
Other languages
Chinese (zh)
Other versions
CN117319809A (en)
Inventor
田静
冯彬杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Jinyuan Technology Development Co., Ltd.
Original Assignee
Guangzhou Jinyuan Technology Development Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Jinyuan Technology Development Co., Ltd.
Priority to CN202311578066.4A
Publication of CN117319809A
Application granted
Publication of CN117319809B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention discloses an intelligent adjusting method for a monitoring visual field, relating to the technical field of monitoring systems. The method is carried out by a system comprising a video shooting module, a visual field judging module and a center default module. The visual field judging module judges the center of the monitoring visual field in real time and thereby completes adjustment of the monitoring visual field, ensuring that the monitoring visual field covers the production activity of personnel to the greatest extent. The center default module sets, from the monitoring visual field adjustment records, the default center of the monitoring visual field used when monitoring starts, which greatly reduces the number of adjustments needed after startup while ensuring the most reasonable visual field at startup.

Description

Intelligent adjusting method for monitoring visual field
Technical Field
The invention relates to the technical field of monitoring systems, in particular to an intelligent adjusting method for monitoring a visual field.
Background
Video monitoring is an important part of a security system and plays an important role in daily life. In current production workshops, video monitoring is installed to keep watch over the production safety of operators at all times. At present, however, supervisory personnel must adjust the monitoring visual field manually as needed; if the visual field is not adjusted in time, blind spots in production safety easily appear as operators move around.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention aims to provide an intelligent adjusting method for monitoring a visual field.
In order to achieve the above purpose, the present invention provides the following technical solutions:
An intelligent adjusting method for a monitoring visual field comprises the following steps:
step one: shooting a video of the monitoring visual field, converting the video into video image frames, dividing each video image frame into a plurality of grids by a plurality of equidistant transverse lines and a plurality of equidistant longitudinal lines, and marking each grid as a field of view grid;
step two: judging the default center of the monitoring visual field at monitoring startup according to all adjustment records of the monitoring visual field before the current system time;
step three: judging the center of the monitoring visual field and adjusting the monitoring visual field accordingly.
Further, the system comprises a video shooting module, a visual field judging module and a center default module;
the video shooting module is used for shooting a video of the monitoring visual field, converting the video into video image frames, dividing each video image frame into a plurality of grids by a plurality of equidistant transverse lines and a plurality of equidistant longitudinal lines, and marking each grid as a field of view grid; it also builds an image analysis model, takes each field of view grid as input data of the image analysis model, acquires the image label output by the image analysis model, and sends the image label to a server for storage;
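As an illustration of the grid division performed by the video shooting module, the following Python sketch splits one video image frame into equally sized field of view grids. The frame size, the grid counts and the function name split_into_grids are placeholder assumptions and are not taken from the patent.

```python
# Illustrative sketch only: divide one frame into equal field of view grids.
import numpy as np

def split_into_grids(frame: np.ndarray, rows: int, cols: int):
    """Divide a frame (H x W x C) into rows * cols field of view grids."""
    h, w = frame.shape[:2]
    cell_h, cell_w = h // rows, w // cols
    grids = []
    for r in range(rows):
        for c in range(cols):
            cell = frame[r * cell_h:(r + 1) * cell_h,
                         c * cell_w:(c + 1) * cell_w]
            grids.append(((r, c), cell))   # keep the grid position with the pixels
    return grids

# Example with a dummy 480 x 640 frame and a 6 x 8 grid
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cells = split_into_grids(frame, rows=6, cols=8)
print(len(cells))  # 48 field of view grids, each later fed to the image analysis model
```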
the visual field judging module is used for judging the center of the monitoring visual field and then adjusting the monitoring visual field, specifically as follows:
for the same field of view grid, the image labels of the 12 frames preceding the current system time are acquired and sorted in frame order. In each pair of adjacent frames, the image label of the later frame is marked as the back-frame label and the image label of the earlier frame is marked as the front-frame label; the difference between the back-frame label and the front-frame label is calculated to obtain an image label difference. Each image label difference corresponds to a preset reference label difference, and the two are compared. When the image label difference is smaller than the reference label difference, it is marked as an offset label difference, and an offset reference value Dt of the field of view grid is obtained; when the image label difference is larger than the reference label difference, it is marked as a concentrated label difference, and a concentrated reference value Tb of the field of view grid is obtained;
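The classification of image label differences can be sketched as below. The 12-frame window and the offset/concentrated rule follow the paragraph above; the plain-list data layout and the helper name classify_label_differences are assumptions.

```python
# Hedged sketch: classify label differences for one field of view grid.
def classify_label_differences(labels_12, reference_diffs):
    """labels_12: image labels of the 12 frames before the current system time, oldest first.
    reference_diffs: one preset reference label difference per adjacent frame pair (11 values)."""
    offsets, concentrations = [], []            # (pair index, difference) records
    for i, ref in enumerate(reference_diffs):
        diff = labels_12[i + 1] - labels_12[i]  # back-frame label minus front-frame label
        if diff < ref:
            offsets.append((i, diff))           # offset label difference
        elif diff > ref:
            concentrations.append((i, diff))    # concentrated label difference
    return offsets, concentrations  # later used for the Dt and Tb calculations
```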
using a preset formula, the fixed point value Bw of the field of view grid is obtained, where c1 and c2 are preset proportionality coefficients. A fixed point value threshold Ze is set: when the fixed point value Bw of a field of view grid is greater than or equal to Ze, the grid is marked as a concentrated field of view grid; when Bw is smaller than Ze, the grid is marked as a deviation field of view grid. Each concentrated field of view grid is marked as a preselected field of view grid. Taking the preselected field of view grid as the circle center, a circle with a preset radius is drawn to obtain a judging range; the number of other concentrated field of view grids located within the judging range is obtained and marked as Lw; the fixed point values Bw of the deviation field of view grids located within the judging range are summed and the absolute value is taken to obtain a deviation fixed point value Fb; and the fixed point value Bw of the preselected field of view grid is obtained. Using a preset formula, the visual field judgment value Rc is obtained, where d1, d2 and d3 are preset proportionality coefficients. The preselected field of view grid with the largest visual field judgment value Rc is marked as the selected field of view grid, and the selected field of view grid is used as the center of the monitoring visual field for adjustment;
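A minimal sketch of the field-of-view judging step follows. The patent gives Bw and Rc only as preset formulas that are not reproduced in this text, so the linear combinations c1*Tb - c2*Dt and d1*Bw + d2*Lw - d3*Fb below are assumptions; the thresholding against Ze, the circular judging range, the quantities Lw and Fb, and the rule of selecting the grid with the largest Rc follow the description.

```python
# Hedged sketch of choosing the selected field of view grid (assumed formula shapes flagged below).
from dataclasses import dataclass

@dataclass
class Grid:
    pos: tuple            # (row, col) position of the field of view grid
    Dt: float             # offset reference value
    Tb: float             # concentrated reference value
    Bw: float = 0.0       # fixed point value
    concentrated: bool = False

def judge_field_center(grids, c1, c2, Ze, d1, d2, d3, radius):
    # 1) fixed point value and concentrated / deviation labelling
    for g in grids:
        g.Bw = c1 * g.Tb - c2 * g.Dt                    # assumed form of the Bw formula
        g.concentrated = g.Bw >= Ze
    best, best_Rc = None, float("-inf")
    # 2) visual field judgment value Rc for every preselected (concentrated) grid
    for g in (x for x in grids if x.concentrated):
        near = [o for o in grids if o is not g and
                (o.pos[0] - g.pos[0]) ** 2 + (o.pos[1] - g.pos[1]) ** 2 <= radius ** 2]
        Lw = sum(1 for o in near if o.concentrated)              # other concentrated grids in range
        Fb = abs(sum(o.Bw for o in near if not o.concentrated))  # deviation fixed point value
        Rc = d1 * g.Bw + d2 * Lw - d3 * Fb                       # assumed form of the Rc formula
        if Rc > best_Rc:
            best, best_Rc = g, Rc
    return best   # selected field of view grid, used as the new center of the monitoring visual field
```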
the center default module is used for judging a default monitoring visual field center during monitoring startup, and specifically comprises the following steps:
all adjustment records of the monitoring visual field before the current system time are acquired; the adjustment records corresponding to the same field of view grid are sorted in order of adjustment moment; the time difference between the adjustment moments of two adjacent records after sorting is calculated to obtain a same-grid adjustment interval; and all adjustment intervals of the same field of view grid are summed and averaged to obtain the same-grid average adjustment interval Rs;
acquiring the total number of adjustment records corresponding to the same field of view grid before the current time of the system, and marking the total number as Nw;
setting each same-grid adjustment interval to correspond to a standard adjustment interval and comparing the two; when the same-grid adjustment interval is smaller than the standard adjustment interval, marking it as a demand adjustment interval and obtaining the required value Ws of the field of view grid; when the same-grid adjustment interval is larger than the standard adjustment interval, no processing is performed;
using a preset formula, the default central value Bg of the field of view grid is obtained, where n1, n2 and n3 are preset proportionality coefficients, and the field of view grid with the largest default central value Bg is marked as the default center of the monitoring visual field at monitoring startup.
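A hedged sketch of the center default module follows. Rs and Nw are computed from the adjustment records as described above; the Bg formula itself is not reproduced in the text, so the combination n1*Nw + n2*Ws - n3*Rs is an assumption, and compute_Ws stands for the required-value helper sketched after the paragraph on Ws further below.

```python
# Hedged sketch: pick the default center of the monitoring visual field at startup.
def pick_default_center(adjustment_records, standard_interval, coeffs, compute_Ws):
    """adjustment_records: {grid position: [adjustment moments in seconds]}."""
    n1, n2, n3 = coeffs
    best, best_Bg = None, float("-inf")
    for pos, times in adjustment_records.items():
        times = sorted(times)
        intervals = [b - a for a, b in zip(times, times[1:])]       # same-grid adjustment intervals
        Rs = sum(intervals) / len(intervals) if intervals else 0.0  # same-grid average adjustment interval
        Nw = len(times)                                             # total adjustment records of the grid
        Ws = compute_Ws(intervals, standard_interval)               # required value (see sketch below)
        Bg = n1 * Nw + n2 * Ws - n3 * Rs                            # assumed form of the Bg formula
        if Bg > best_Bg:
            best, best_Bg = pos, Bg
    return best   # field of view grid used as the default center at monitoring startup
```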
Further, the image analysis model is obtained by the following steps: n field of view grids are obtained and marked as training images; image labels are assigned to the training images; the training images are divided into a training set and a verification set according to a set proportion; a neural network model is constructed and iteratively trained with the training set and the verification set; when the number of training iterations exceeds an iteration-count threshold, the neural network model is judged to be trained, and the trained neural network model is marked as the image analysis model. The value range of the image labels is [0, 5], and a larger image label value represents a larger number of people in the field of view grid.
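The image analysis model can be illustrated, under assumptions, as a small classifier that maps a field of view grid image to an integer label in [0, 5]. The patent only specifies a neural network model trained until an iteration-count threshold is exceeded; the PyTorch framework, the architecture, the 64 x 64 input size, the split ratio and the placeholder data below are all assumptions.

```python
# Hedged sketch of an image analysis model: grid image -> label 0..5 (more people -> larger label).
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

class GridLabelNet(nn.Module):
    def __init__(self, num_labels: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, num_labels)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Placeholder training images (300 field of view grids, 64 x 64 RGB) and labels 0..5
images = torch.rand(300, 3, 64, 64)
labels = torch.randint(0, 6, (300,))
train_set, val_set = random_split(TensorDataset(images, labels), [200, 100])  # placeholder split

model, loss_fn = GridLabelNet(), nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
iteration_threshold = 50   # stands in for the patent's iteration-count threshold
for _ in range(iteration_threshold):
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Quick check on the verification set
model.eval()
with torch.no_grad():
    correct = sum((model(x).argmax(1) == y).sum().item()
                  for x, y in DataLoader(val_set, batch_size=64))
print("verification accuracy:", correct / len(val_set))
```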
Further, the offset reference value Dt is obtained by the following steps: the difference between the reference label difference and the offset label difference is calculated to obtain an offset judgment value Ei; an offset judgment value coefficient Fg is set; using a preset formula, an offset value Pr is obtained, where i = 1, 2, 3, ..., n and n is the number of times an image label difference is marked as an offset label difference. The offset label differences are sorted by the frame number of the corresponding front-frame label, the difference between the frame numbers of two adjacent front-frame labels is calculated to obtain an offset interval, and all offset intervals are summed and averaged to obtain the offset average interval Mk. Using a preset formula, the offset reference value Dt is obtained, where a1 and a2 are preset proportionality coefficients.
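A sketch of the offset reference value Dt follows. The formulas for Pr and Dt are not reproduced in the text, so Pr = sum(Fg(Ei) * Ei) and Dt = a1*Pr + a2*Mk are assumptions; the offset judgment values Ei, the range-based coefficient lookup Fg, the offset intervals and their average Mk, and the coefficient values a1 = 0.68 and a2 = 0.53 (from Example 1 below) follow the description.

```python
# Hedged sketch of the offset reference value Dt (assumed formula shapes flagged below).
def offset_reference(offset_events, reference_diff, bounds, coeffs, a1=0.68, a2=0.53):
    """offset_events: (front-frame index, offset label difference) pairs for one grid.
    bounds: [E1, E2, ...] upper ends of the ranges (0, E1], (E1, E2], ...
    coeffs: [F1, F2, ...] with F1 < F2 < ..., one coefficient per range."""
    def Fg(Ei):                                   # range-based coefficient lookup
        for upper, F in zip(bounds, coeffs):
            if Ei <= upper:
                return F
        return coeffs[-1]
    offset_events = sorted(offset_events)                     # order by front-frame index
    Ei_list = [reference_diff - d for _, d in offset_events]  # offset judgment values
    Pr = sum(Fg(e) * e for e in Ei_list)                      # assumed form of the Pr formula
    frames = [f for f, _ in offset_events]
    gaps = [b - a for a, b in zip(frames, frames[1:])]        # offset intervals
    Mk = sum(gaps) / len(gaps) if gaps else 0.0               # offset average interval
    return a1 * Pr + a2 * Mk                                  # assumed form of the Dt formula
```

The concentrated reference value Tb in the next paragraph can be sketched in the same way, with the reference judgment values Yj, the coefficients Cs, the concentration average interval Hn and the Example 1 values b1 = 0.65 and b2 = 0.52.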
Further, the concentrated reference value Tb is obtained by the following steps: the difference between the concentrated label difference and the reference label difference is calculated to obtain a reference judgment value Yj; a reference judgment value coefficient Cs is set; using a preset formula, a concentration value Qg is obtained, where i = 1, 2, 3, ..., n and n is the number of times an image label difference is marked as a concentrated label difference. The concentrated label differences are sorted by the frame number of the corresponding front-frame label, the difference between the frame numbers of two adjacent front-frame labels is calculated to obtain a concentration interval, and all concentration intervals are summed and averaged to obtain the concentration average interval Hn. Using a preset formula, the concentrated reference value Tb is obtained, where b1 and b2 are preset proportionality coefficients.
Further, the adjustment record comprises the position of the field of view grid and the adjustment moment.
Further, the required value Ws of the field of view grid is obtained by the following steps: the difference between the standard adjustment interval and the demand adjustment interval is calculated to obtain a demand adjustment difference; all demand adjustment differences are summed to obtain the total demand adjustment difference Bd; the total number of same-grid adjustment intervals marked as demand adjustment intervals is obtained as Uh; and using a preset formula, the required value Ws of the field of view grid is obtained, where m1 and m2 are preset proportionality coefficients.
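A short sketch of the required value Ws, assuming the combination m1*Bd + m2*Uh because the exact formula is not reproduced in the text; the coefficient values 0.37 and 0.39 come from Example 2 below.

```python
# Hedged sketch of the required value Ws of a field of view grid.
def compute_Ws(intervals, standard_interval, m1=0.37, m2=0.39):
    demand_diffs = [standard_interval - iv for iv in intervals if iv < standard_interval]
    Bd = sum(demand_diffs)     # total demand adjustment difference
    Uh = len(demand_diffs)     # number of same-grid intervals marked as demand adjustment intervals
    return m1 * Bd + m2 * Uh   # assumed combination
```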
Compared with the prior art, the invention has the following beneficial effects:
1. By providing the visual field judging module, the center of the monitoring visual field can be judged in real time and the monitoring visual field can be adjusted accordingly, ensuring that the monitoring visual field covers the production activity of personnel to the greatest extent;
2. By providing the center default module, the default center of the monitoring visual field at monitoring startup is set from the monitoring visual field adjustment records, which greatly reduces the number of adjustments needed after startup while ensuring the most reasonable visual field at startup.
Drawings
FIG. 1 is a schematic block diagram of a view determination module of the present invention;
FIG. 2 is a schematic block diagram of a central default module of the present invention;
fig. 3 is a flow chart of the present invention.
Detailed Description
Example 1
Referring to FIG. 1, an intelligent adjusting method for a monitoring visual field is applied by a system comprising a video shooting module and a visual field judging module;
the video shooting module is used for shooting videos in a monitoring view field, converting the videos into video image frames, dividing the video image frames into a plurality of grids through a plurality of equidistant transverse lines and a plurality of equidistant longitudinal lines, marking each grid as a view field grid, manufacturing an image analysis model, and obtaining the image analysis model through the following steps: obtaining n visual field grids, marking the visual field grids as training images, giving image labels to the training images, dividing the training image acquisition into a training set and a verification set according to a set proportion, wherein the set proportion of the training set and the verification set comprises but is not limited to 1:2,1:3 and 1:4, constructing a neural network model, carrying out iterative training on the neural network model through the training set and the verification set, judging that the neural network model is completed to train when the iterative training times are larger than an iterative time threshold, marking the trained neural network model as an image analysis model, and the value range of the image labels is [0-5], wherein the larger the numerical value of the image labels is, the more the number of people in the visual field grids is represented. The number of field of view bin for image tag 4 is greater than the number of field of view bin for image tag 3. And taking the field of view grid as input data of the image analysis model, acquiring an image tag of output data of the image analysis model, and sending the image tag to a server for storage.
The visual field judging module is used for judging the center of the monitoring visual field, and then adjusting the monitoring visual field, and specifically comprises the following steps:
For the same field of view grid, the image labels of the 12 frames preceding the current system time are acquired and sorted in frame order. In each pair of adjacent frames, the image label of the later frame is marked as the back-frame label and the image label of the earlier frame is marked as the front-frame label; the difference between the back-frame label and the front-frame label is calculated to obtain an image label difference. Each image label difference corresponds to a preset reference label difference, and the two are compared. When the image label difference is smaller than the reference label difference, it is marked as an offset label difference; for example, with a reference label difference of 0.2, an image label difference of 0.1 is marked as an offset label difference. The offset reference value Dt of the field of view grid is then obtained by the following steps: the difference between the reference label difference and the offset label difference is calculated to obtain an offset judgment value Ei; offset judgment value coefficients Fg are set, g = 1, 2, 3, ..., with F1 < F2 < F3 < ... < Fg, and each coefficient corresponds to a range of the offset judgment value, namely (0, E1], (E1, E2], ..., (Ei-1, Ei]; for example, when Ei ∈ (0, E1], the corresponding offset judgment value coefficient takes the value F1. Using a preset formula, an offset value Pr is obtained, where i = 1, 2, 3, ..., n and n is the number of times an image label difference is marked as an offset label difference. The offset label differences are sorted by the frame number of the corresponding front-frame label, the difference between the frame numbers of two adjacent front-frame labels is calculated to obtain an offset interval, and all offset intervals are summed and averaged to obtain the offset average interval Mk. Using a preset formula, the offset reference value Dt is obtained, where a1 and a2 are preset proportionality coefficients; here a1 takes the value 0.68 and a2 takes the value 0.53.
When the image label difference is larger than the reference label difference, it is marked as a concentrated label difference; for example, with a reference label difference of 0.2, an image label difference of 0.3 is marked as a concentrated label difference, and the concentrated reference value Tb of the field of view grid is obtained. The concentrated reference value Tb is obtained by the following steps: the difference between the concentrated label difference and the reference label difference is calculated to obtain a reference judgment value Yj; reference judgment value coefficients Cs are set, s = 1, 2, 3, ..., with C1 < C2 < C3 < ... < Cs, and each coefficient corresponds to a range of the reference judgment value in the same manner as described above for the offset judgment value coefficients. Using a preset formula, a concentration value Qg is obtained, where i = 1, 2, 3, ..., n and n is the number of times an image label difference is marked as a concentrated label difference. The concentrated label differences are sorted by the frame number of the corresponding front-frame label, the difference between the frame numbers of two adjacent front-frame labels is calculated to obtain a concentration interval, and all concentration intervals are summed and averaged to obtain the concentration average interval Hn. Using a preset formula, the concentrated reference value Tb is obtained, where b1 and b2 are preset proportionality coefficients; here b1 takes the value 0.65 and b2 takes the value 0.52.
Using a preset formula, the fixed point value Bw of the field of view grid is obtained, where c1 and c2 are preset proportionality coefficients; here c1 takes the value 0.75 and c2 takes the value 0.73. A fixed point value threshold Ze is set: when the fixed point value Bw of a field of view grid is greater than or equal to Ze, the grid is marked as a concentrated field of view grid; when Bw is smaller than Ze, the grid is marked as a deviation field of view grid. For example, with a fixed point value threshold Ze of 8, a field of view grid with Bw of 9 is marked as a concentrated field of view grid and a field of view grid with Bw of 6 is marked as a deviation field of view grid. Each concentrated field of view grid is marked as a preselected field of view grid. Taking the preselected field of view grid as the circle center, a circle with a preset radius is drawn to obtain a judging range; the number of other concentrated field of view grids located within the judging range is obtained and marked as Lw; the fixed point values Bw of the deviation field of view grids located within the judging range are summed and the absolute value is taken to obtain the deviation fixed point value Fb; and the fixed point value Bw of the preselected field of view grid is obtained. Using a preset formula, the visual field judgment value Rc is obtained, where d1, d2 and d3 are preset proportionality coefficients; here d1 takes the value 0.82, d2 takes the value 0.94 and d3 takes the value 0.38. The preselected field of view grid with the largest visual field judgment value Rc is marked as the selected field of view grid, and the selected field of view grid is used as the center of the monitoring visual field for adjustment. For example, when the visual field judgment value Rc of preselected field of view grid x is 5, that of preselected field of view grid y is 5.6 and that of preselected field of view grid z is 7, preselected field of view grid z is marked as the selected field of view grid. By providing the visual field judging module, the center of the monitoring visual field can be judged in real time and the monitoring visual field can be adjusted accordingly, ensuring that the monitoring visual field covers the production activity of personnel to the greatest extent.
Example 2
Referring to FIG. 2 and FIG. 3, on the basis of Example 1, the system further includes a center default module, which is configured to determine the default center of the monitoring visual field at monitoring startup, specifically:
All adjustment records of the monitoring visual field before the current system time are acquired, where an adjustment record includes the position of the field of view grid and the adjustment moment (the time at which adjustment of the monitoring visual field starts). The adjustment records corresponding to the same field of view grid are sorted in order of adjustment moment; the time difference between the adjustment moments of two adjacent records after sorting is calculated to obtain a same-grid adjustment interval; and all adjustment intervals of the same field of view grid are summed and averaged to obtain the same-grid average adjustment interval Rs;
acquiring the total number of adjustment records corresponding to the same field of view grid before the current time of the system, and marking the total number as Nw;
Each same-grid adjustment interval corresponds to a standard adjustment interval, and the two are compared. When the same-grid adjustment interval is smaller than the standard adjustment interval, it is marked as a demand adjustment interval, and the required value Ws of the field of view grid is obtained by the following steps: the difference between the standard adjustment interval and the demand adjustment interval is calculated to obtain a demand adjustment difference; all demand adjustment differences are summed to obtain the total demand adjustment difference Bd; the total number of same-grid adjustment intervals marked as demand adjustment intervals is obtained as Uh; and using a preset formula, the required value Ws of the field of view grid is obtained, where m1 and m2 are preset proportionality coefficients; here m1 takes the value 0.37 and m2 takes the value 0.39. When the same-grid adjustment interval is larger than the standard adjustment interval, no processing is performed;
Using a preset formula, the default central value Bg of the field of view grid is obtained, where n1, n2 and n3 are preset proportionality coefficients; here n1 takes the value 0.69, n2 takes the value 0.58 and n3 takes the value 0.71. The field of view grid with the largest default central value Bg is marked as the default center of the monitoring visual field at monitoring startup. For example, if the default central value of field of view grid a is 11, that of field of view grid b is 12 and that of field of view grid c is 14, field of view grid c is marked as the default center of the monitoring visual field at startup, and monitoring begins with field of view grid c as the center of the monitoring visual field. By providing the center default module, the default center of the monitoring visual field at startup is set from the monitoring visual field adjustment records, which greatly reduces the number of adjustments needed after startup while ensuring the most reasonable visual field at startup.
Working principle:
By providing the visual field judging module, the center of the monitoring visual field can be judged in real time and the monitoring visual field adjusted accordingly, ensuring that the monitoring visual field captures personnel activity to the greatest extent. By providing the center default module, the default center of the monitoring visual field at monitoring startup is set from the monitoring visual field adjustment records, which greatly reduces the number of adjustments needed after startup while ensuring the most reasonable visual field at startup.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples; all technical solutions falling within the concept of the present invention belong to the protection scope of the present invention. It should be noted that modifications and adaptations made by those skilled in the art without departing from the principles of the present invention shall also be regarded as falling within the protection scope of the present invention.
The foregoing describes one embodiment of the present invention in detail, but the description is only a preferred embodiment of the present invention and should not be construed as limiting the scope of the invention. All equivalent changes and modifications within the scope of the present invention are intended to be covered by the present invention.

Claims (1)

1. An intelligent adjusting method for a monitoring visual field, characterized by comprising the following steps:
step one: shooting a video of the monitoring visual field, converting the video into video image frames, dividing each video image frame into a plurality of grids by a plurality of equidistant transverse lines and a plurality of equidistant longitudinal lines, and marking each grid as a field of view grid;
step two: judging the default center of the monitoring visual field at monitoring startup according to all adjustment records of the monitoring visual field before the current system time;
step three: judging the center of the monitoring visual field and adjusting the monitoring visual field accordingly;
the application system of the intelligent regulation method comprises a video shooting module, a visual field judging module and a center default module;
the video shooting module is used for shooting a video of the monitoring visual field, converting the video into video image frames, dividing each video image frame into a plurality of grids by a plurality of equidistant transverse lines and a plurality of equidistant longitudinal lines, marking each grid as a field of view grid, building an image analysis model, taking the field of view grid as input data of the image analysis model, acquiring the image label output by the image analysis model, and sending the image label to a server for storage;
the visual field judging module is used for judging the center of the monitoring visual field and then adjusting the monitoring visual field, specifically as follows:
for the same field of view grid, the image labels of the 12 frames preceding the current system time are acquired and sorted in frame order; in each pair of adjacent frames, the image label of the later frame is marked as the back-frame label and the image label of the earlier frame is marked as the front-frame label; the difference between the back-frame label and the front-frame label is calculated to obtain an image label difference; each image label difference corresponds to a preset reference label difference, and the two are compared; when the image label difference is smaller than the reference label difference, it is marked as an offset label difference, and an offset reference value Dt of the field of view grid is obtained; when the image label difference is larger than the reference label difference, it is marked as a concentrated label difference, and a concentrated reference value Tb of the field of view grid is obtained;
using a preset formula, the fixed point value Bw of the field of view grid is obtained, where c1 and c2 are preset proportionality coefficients; a fixed point value threshold Ze is set: when the fixed point value Bw of a field of view grid is greater than or equal to Ze, the grid is marked as a concentrated field of view grid; when Bw is smaller than Ze, the grid is marked as a deviation field of view grid; each concentrated field of view grid is marked as a preselected field of view grid; taking the preselected field of view grid as the circle center, a circle with a preset radius is drawn to obtain a judging range; the number of other concentrated field of view grids located within the judging range is obtained and marked as Lw; the fixed point values Bw of the deviation field of view grids located within the judging range are summed and the absolute value is taken to obtain a deviation fixed point value Fb; the fixed point value Bw of the preselected field of view grid is obtained; using a preset formula, the visual field judgment value Rc is obtained, where d1, d2 and d3 are preset proportionality coefficients; the preselected field of view grid with the largest visual field judgment value Rc is marked as the selected field of view grid, and the selected field of view grid is used as the center of the monitoring visual field for adjustment;
the center default module is used for judging a default monitoring visual field center during monitoring startup, and specifically comprises the following steps:
all adjustment records of the monitoring visual field before the current system time are acquired; the adjustment records corresponding to the same field of view grid are sorted in order of adjustment moment; the time difference between the adjustment moments of two adjacent records after sorting is calculated to obtain a same-grid adjustment interval; and all adjustment intervals of the same field of view grid are summed and averaged to obtain the same-grid average adjustment interval Rs;
acquiring the total number of adjustment records corresponding to the same field of view grid before the current time of the system, and marking the total number as Nw;
setting each same-grid adjustment interval to correspond to a standard adjustment interval and comparing the two; when the same-grid adjustment interval is smaller than the standard adjustment interval, marking it as a demand adjustment interval and obtaining the required value Ws of the field of view grid; when the same-grid adjustment interval is larger than the standard adjustment interval, no processing is performed;
using a preset formula, the default central value Bg of the field of view grid is obtained, where n1, n2 and n3 are preset proportionality coefficients, and the field of view grid with the largest default central value Bg is marked as the default center of the monitoring visual field at monitoring startup;
the image analysis model is obtained by the following steps: n field of view grids are obtained and marked as training images; image labels are assigned to the training images; the training images are divided into a training set and a verification set according to a set proportion; a neural network model is constructed and iteratively trained with the training set and the verification set; when the number of training iterations exceeds an iteration-count threshold, the neural network model is judged to be trained, and the trained neural network model is marked as the image analysis model; the value range of the image labels is [0, 5], and a larger image label value represents a larger number of people in the field of view grid;
the offset reference value Dt is obtained by the following steps: the difference between the reference label difference and the offset label difference is calculated to obtain an offset judgment value Ei; an offset judgment value coefficient Fg is set; using a preset formula, an offset value Pr is obtained, where i = 1, 2, 3, ..., n and n is the number of times an image label difference is marked as an offset label difference; the offset label differences are sorted by the frame number of the corresponding front-frame label, the difference between the frame numbers of two adjacent front-frame labels is calculated to obtain an offset interval, and all offset intervals are summed and averaged to obtain the offset average interval Mk; using a preset formula, the offset reference value Dt is obtained, where a1 and a2 are preset proportionality coefficients;
the concentrated reference value Tb is obtained by the following steps: the difference between the concentrated label difference and the reference label difference is calculated to obtain a reference judgment value Yj; a reference judgment value coefficient Cs is set; using a preset formula, a concentration value Qg is obtained, where i = 1, 2, 3, ..., n and n is the number of times an image label difference is marked as a concentrated label difference; the concentrated label differences are sorted by the frame number of the corresponding front-frame label, the difference between the frame numbers of two adjacent front-frame labels is calculated to obtain a concentration interval, and all concentration intervals are summed and averaged to obtain the concentration average interval Hn; using a preset formula, the concentrated reference value Tb is obtained, where b1 and b2 are preset proportionality coefficients;
the adjustment record comprises the position of the field of view grid and the adjustment moment;
the required value Ws of the field of view grid is obtained by the following steps: the difference between the standard adjustment interval and the demand adjustment interval is calculated to obtain a demand adjustment difference; all demand adjustment differences are summed to obtain the total demand adjustment difference Bd; the total number of same-grid adjustment intervals marked as demand adjustment intervals is obtained as Uh; and using a preset formula, the required value Ws of the field of view grid is obtained, where m1 and m2 are preset proportionality coefficients.
CN202311578066.4A 2023-11-24 2023-11-24 Intelligent adjusting method for monitoring visual field Active CN117319809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311578066.4A CN117319809B (en) 2023-11-24 2023-11-24 Intelligent adjusting method for monitoring visual field

Publications (2)

Publication Number Publication Date
CN117319809A CN117319809A (en) 2023-12-29
CN117319809B 2024-03-01

Family

ID=89288615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311578066.4A Active CN117319809B (en) 2023-11-24 2023-11-24 Intelligent adjusting method for monitoring visual field

Country Status (1)

Country Link
CN (1) CN117319809B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146216A (en) * 2006-09-14 2008-03-19 黄柏霞 Video positioning and parameter computation method based on picture sectioning
KR20150102433A (en) * 2014-02-28 2015-09-07 동의대학교 산학협력단 System and Method for Monitoring Around View with Multiple Scopes
CN109996042A (en) * 2019-04-09 2019-07-09 昆山古鳌电子机械有限公司 A kind of intelligent monitor system
CN111696128A (en) * 2020-05-27 2020-09-22 南京博雅集智智能技术有限公司 High-speed multi-target detection tracking and target image optimization method and storage medium
CN116168233A (en) * 2022-12-15 2023-05-26 上海师范大学 Blackboard writing restoration method based on grid image patch classification
CN116647643A (en) * 2023-06-01 2023-08-25 西藏康发电子工程有限公司 Intelligent security monitoring system
CN117079397A (en) * 2023-09-27 2023-11-17 青海民族大学 Wild human and animal safety early warning method based on video monitoring


Also Published As

Publication number Publication date
CN117319809A (en) 2023-12-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant