CN112115804B - Method, system, intelligent terminal and storage medium for controlling monitoring video of key area - Google Patents
Info
- Publication number
- CN112115804B (Application CN202010872405.XA)
- Authority
- CN
- China
- Prior art keywords
- image information
- information
- selection signal
- frame selection
- amplified
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/30—Writer recognition; Reading and verifying signatures
- G06V40/37—Writer recognition; Reading and verifying signatures based only on signature signals such as velocity or pressure, e.g. dynamic signature recognition
- G06V40/394—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Geometry (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Closed-Circuit Television Systems (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The invention relates to a method, a system, an intelligent terminal and a storage medium for controlling a monitoring video of a key area. The method comprises the following steps: acquiring multi-channel monitoring image information and feeding it back to a display device for display; judging whether a first frame selection signal for selecting a display area on the display device is received; if the first frame selection signal is received, acquiring the monitoring image information corresponding to the selected display area to form image information to be amplified; presetting a first corresponding relation between the image information to be amplified and the amplified image information, and processing the image information to be amplified according to the first corresponding relation to obtain the amplified image information; and feeding the amplified image information back to the display device. The invention provides convenience for a user to observe a specific area.
Description
Technical Field
The invention relates to the technical field of video monitoring, in particular to a monitoring video control method.
Background
Video monitoring is an important component of a safety protection system, and a traditional monitoring system comprises a front-end camera, a transmission cable and a video monitoring platform. Video monitoring is widely used in many occasions with visual, accurate, timely and rich information content. In recent years, with rapid development of computer, network, image processing and transmission technologies, video monitoring technologies have also been developed.
An existing video monitoring platform comprises a processing unit for processing the images shot by the front-end cameras and a display device for displaying the monitoring pictures. The processing unit processes the images shot by the front-end cameras and outputs monitoring pictures to the display device, and the display device plays multiple channels of monitoring pictures at the same time, thereby realizing real-time monitoring.
However, this video monitoring platform still has the following drawback in use: when the platform plays multiple channels of monitoring pictures, every channel occupies a display area of the same size. When a worker needs to focus on a particular channel, the limited image size of that channel prevents the worker from seeing its details, and the key information can only be obtained again afterwards through playback and similar means, which is inconvenient for the user.
Disclosure of Invention
The invention aims to provide a monitoring video control method which has the characteristic of providing convenience for a user to observe a specific area.
The first object of the present invention is achieved by the following technical solutions:
A method for controlling a monitoring video of a key area comprises the following steps:
acquiring multi-channel monitoring image information, feeding back the multi-channel monitoring image information to a display device and displaying the multi-channel monitoring image information;
Judging whether a first frame selection signal for selecting a display area on the display device is received or not;
If the first frame selection signal is received, acquiring monitoring image information corresponding to the display area selected by the frame selection signal according to the display area selected by the frame selection signal to form image information to be amplified;
Presetting a first corresponding relation between the image information to be amplified and the amplified image information, and processing the image information to be amplified according to the first corresponding relation to obtain the amplified image information;
the enlarged image information is fed back to the display device.
By adopting the above technical scheme, a user inputs the first frame selection signal to frame-select the monitoring image information in a specific area, so as to acquire the image information to be amplified; the information to be amplified is then amplified and output to the display device for display. Amplified monitoring of the specific area is thus realized, which makes it easier for the user to observe the activity of objects in that area and provides convenience for monitoring the specific area.
The present invention may be further configured in a preferred example to: the first frame selection signal for selecting the display area on the display device comprises an elliptical frame selection signal for selecting the elliptical display area in a frame manner and a rectangular frame selection signal for selecting the rectangular area in a frame manner; the method for forming the image information to be amplified comprises the following steps:
Judging whether the first frame selection signal is a rectangular frame selection signal or not;
if the first frame selection signal is a rectangular frame selection signal, extracting monitoring image information corresponding to a rectangular area selected by the rectangular frame selection signal to form image information to be amplified;
If the first frame selection signal is not a rectangular frame selection signal, judging whether the first frame selection signal is an elliptical frame selection signal or not;
and if the first frame selection signal is an oval frame selection signal, extracting monitoring image information contained in the circumscribed rectangle of the oval outer edge corresponding to the oval frame selection signal to form image information to be amplified.
By adopting the above technical scheme, when a user frame-selects a specific range, the user can choose between different ways of inputting the first frame selection signal according to personal operating habits, so as to frame-select the image to be amplified. The two input modes suit two groups of people with different operating habits, which optimizes the human-computer interaction process and widens the range of people to whom the monitoring video control method is applicable.
The present invention may be further configured in a preferred example to: the method further comprises the steps of:
defining a state in which the display device does not display the enlarged image information as a first operating state;
defining a state in which the display device displays the enlarged image information as a second operating state;
Judging whether the display device receives the amplified image information, wherein the amplified image information comprises amplified image size information;
if the amplified image information is received, switching the display device from the first working state to the second working state;
analyzing the size information of the enlarged image information to determine an enlarged image display area for displaying the enlarged image information by the display device;
and scaling other monitoring image information which is not selected by the first frame selection signal frame to form scaled image information, feeding the scaled image information back to the display equipment and displaying the scaled image information in a non-amplified image display area of the display equipment.
By adopting the above technical scheme, when the amplified image information is displayed, the otherwise unused display area of the display device is fully utilized to display the scaled image information. On the one hand this increases the utilization rate of the display device in the second working state; on the other hand the user can focus on the amplified image information while still keeping an eye on the other channels of monitoring image information, which provides convenience for the user.
The present invention may be further configured in a preferred example to: the method for determining the enlarged image display area comprises the following steps:
If the first frame selection signal is received, acquiring original image information containing image information of a main observer in front of a screen through image acquisition equipment;
Analyzing the original image information to calculate height information representing a line-of-sight height of the main observer and distance information representing a distance between the main observer and the display device;
Calculating a preferable horizontal observation band of the display equipment according to the height information;
calculating a preferable vertical observation band of the display equipment according to the distance information;
acquiring a superposition area of the preferable horizontal observation band and the preferable vertical observation band as a preferable observation area;
The method includes the steps of acquiring region size information of a preferred observation region, calculating a first scaling relationship between size information contained in the acquired magnified image information and the region size information, and scaling the preferred observation region according to the first scaling relationship to acquire a magnified image display region.
By adopting the above technical scheme, the priorities of the observers in the original image information are compared to determine the main observer and the position of the main observer, and the preferred observation area on the screen relative to the main observer is obtained from that position. The amplified image is played in the preferred observation area, which reduces how far the main observer must turn the head when observing the screen and thus provides a better viewing experience for the main observer.
The present invention may be further configured in a preferred example to: the main observer is a main reference object for acquiring a display area of a magnified image from one or more observers included in original image information, and the method for confirming the main observer comprises the following steps:
analyzing the original image information to extract a face image of each observer contained in the original image information;
Judging whether the number of face images of the observer is greater than or equal to two;
If the number of face images is one, the observer corresponding to that face image is regarded as the main observer.
By adopting the technical scheme, when an observer only has one person, the observer is directly taken as a main observer without carrying out face recognition, the time required by the face recognition can be saved, and the working efficiency is improved.
The present invention may be further configured in a preferred example to: the method of validating a primary observer further comprises:
Pre-storing reference face information of an observer, the face reference information including a face reference image and observer priority information;
If the number of the face images contained in the original image is not one, matching each face image with a pre-stored reference face image, and if the matching is successful, taking priority information corresponding to the face reference image information as the priority information of the face images successfully matched;
The priority information corresponding to the face images is arranged in descending order according to the priority, and the observer corresponding to the face image with the priority information arranged at the forefront position is taken as the main observer.
By adopting the above technical scheme, when the original image information contains a plurality of observers, the observers are identified by matching the pre-stored reference face information with the face information of the observers contained in the original image information. The observers are arranged in order of priority, the observer with the highest priority is taken as the main observer, and a better observation area is provided for the main observer, which provides convenience for the main observer to acquire the information in the display device.
The present invention may be further configured in a preferred example to: the method for feeding back the scaled image information to the display device and displaying the scaled image information in a non-magnified image display area of the display device comprises the following steps:
Analyzing to obtain a maximum rectangular display area in the non-magnified display area;
dividing the maximum rectangular display area into a plurality of scaled image display areas with equal size;
and feeding back and displaying the scaled image information in the scaled image display areas of the display device.
By adopting the above technical scheme, the non-amplified display area of the display device is used to display the monitoring images that were not frame-selected, so that the amplified image information can be observed while the scaled image information is still taken into account. On the one hand the display area of the display device is fully utilized and its utilization rate improved; on the other hand the main observer is provided with the scaled image information corresponding to the other channels of monitoring video signals, which reduces the probability that the main observer misses the image content contained in those channels.
The second object of the present invention is to provide a monitoring video display system, which is characterized by providing convenience for users to observe specific areas.
The second object of the present invention is achieved by the following technical solutions:
A surveillance video display system comprising:
The image acquisition module is used for acquiring multi-path monitoring image information;
a display device for displaying image information;
the triggering device is used for inputting a first frame selection signal; and
And the processing unit is used for carrying out logic operations such as judgment and the like and analyzing and processing the image information.
By adopting the above technical scheme, the user uses the trigger device to input the first frame selection signal and frame-select, on the display device, the monitoring image information of a specific area acquired by the image acquisition module, so as to acquire the image information to be amplified; the information to be amplified is then amplified and output to the display device for display. An amplified monitoring image of the specific area is thus obtained, which makes it easier for the user to observe the activity of objects in that area and provides convenience for monitoring the specific area.
The third object of the present invention is to provide an intelligent terminal for monitoring video display, which has the characteristic of providing convenience for a user to observe a specific area.
The third object of the present invention is achieved by the following technical solutions:
A monitoring video display intelligent terminal comprises a memory and a processor, wherein the memory stores a computer program which can be loaded by the processor and execute the monitoring video control method.
By adopting the technical scheme, the method flow is convenient to execute through the intelligent terminal.
The fourth object of the present invention is to provide a computer storage medium capable of storing a corresponding program, which has the characteristic of facilitating the realization of providing convenience for a user to observe a specific area.
The fourth object of the present invention is achieved by the following technical solutions:
A computer readable storage medium storing a computer program capable of being loaded by a processor and executing any one of the above-described surveillance video control methods.
By adopting the technical scheme, the current method program can be stored through the storage medium, the carrying and execution of the program are convenient, and the operation is simple and convenient.
In summary, the present invention includes at least one of the following beneficial technical effects:
The user frame-selects the monitoring image information in a specific area by inputting the first frame selection signal, so as to acquire the image information to be amplified; the information to be amplified is amplified and output to the display device for display, so that amplified monitoring of the specific area is obtained and convenience is provided for the user to monitor the specific area;
The priorities of the observers in the original image information are compared to obtain the position of the main observer, a preferred observation area corresponding to the main observer is obtained on the screen according to that position, and the amplified image is played in the preferred observation area, which reduces the rotation of the main observer's head when observing the screen and brings a better use experience to the main observer;
When an observer only has one person, the observer is directly taken as a main observer without carrying out face recognition, so that the time required by the face recognition can be saved, and the working efficiency is improved.
Drawings
Fig. 1 is a block diagram of one embodiment of the present invention.
FIG. 2 is a schematic flow chart of forming image information to be magnified according to an embodiment of the invention;
FIG. 3 is a schematic flow chart of feeding back the enlarged image information to the display device according to an embodiment of the invention;
FIG. 4 is a flowchart illustrating a method for determining an enlarged image display area according to an embodiment of the present invention;
FIG. 5 is a flowchart of an embodiment of the present invention for acquiring image information of a primary observer;
FIG. 6 is a flowchart of feeding back zoom image information to a zoom-in area according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to fig. 1-6.
The present embodiment is only intended to explain the invention and is not to be construed as limiting it. After reading this specification, those skilled in the art may, as required, make modifications to this embodiment that involve no creative contribution, and all such modifications are protected by patent law within the scope of the claims of the invention.
A method for controlling a monitoring video of a key area comprises the following steps:
acquiring multi-channel monitoring image information, feeding back the multi-channel monitoring image information to a display device and displaying the multi-channel monitoring image information;
Judging whether a first frame selection signal for selecting a display area on the display device is received or not;
If the first frame selection signal is received, acquiring monitoring image information corresponding to the display area selected by the frame selection signal according to the display area selected by the frame selection signal to form image information to be amplified;
Presetting a first corresponding relation between the image information to be amplified and the amplified image information, and processing the image information to be amplified according to the first corresponding relation to obtain the amplified image information;
the enlarged image information is fed back to the display device.
In the embodiment of the invention, the first frame selection signal is input in a preset way to frame-select a specific area from the multi-channel monitoring image information, so as to obtain the area to be amplified. The area to be amplified is amplified according to the preset first corresponding relation to obtain the amplified image information, and the obtained image information is fed back to the display device for display. The amplification of the image information of the specific area selected from one channel of the multi-channel monitoring image information is thus completed, monitoring of the specific area is realized, and a clearer, easier-to-observe monitoring picture is provided for the user, which provides convenience for the user.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In addition, the term "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In this context, unless otherwise specified, the term "/" generally indicates that the associated object is an "or" relationship.
Embodiments of the invention are described in further detail below with reference to the drawings.
The embodiment of the invention provides a monitoring video control method, and the main flow of the method is described as follows.
Referring to fig. 1, step 1000: and acquiring and feeding back the multi-channel monitoring image information to a display device for display.
The multi-channel monitoring image information is captured by different monitoring devices. A monitoring device is a terminal with the function of obtaining image information such as pictures and videos, for example a camera or a monitoring camera; in this embodiment the monitoring camera is selected as the monitoring device. The display device is a terminal with the function of outputting image information such as pictures and videos, and includes a display, an electronic display screen, the electronic screen of a handheld mobile terminal and the like; in this embodiment the electronic display screen is selected as the display device.
Referring to fig. 1, step 2000: it is determined whether a first frame selection signal is received for frame selecting a display area on a display device.
The first frame selection signal is input by an external input device, and the input device is an input end capable of performing frame selection operation, such as a mouse, an electronic display screen with a touch function, and the like.
Referring to fig. 1, step 3000: and if the first frame selection signal is received, acquiring monitoring image information corresponding to the display area selected by the frame selection signal according to the display area selected by the frame selection signal to form image information to be amplified.
The user clicks a mouse button and/or drags the mouse to frame-select a specific area of the display device, so that the image information in the specific area is frame-selected to form the image information to be amplified. If the first frame selection signal is not received, it is judged again whether a trigger signal is received.
The specific flow of forming the image information to be amplified is as follows:
referring to fig. 2, step 3100: and judging whether the first frame selection signal is a rectangular frame selection signal or not.
The rectangular frame selection signal is input as follows: the mouse button is pressed to select one right-angle vertex of the rectangular area in which the image information to be amplified is located, and this vertex is defined as the first vertex; without releasing the pressed button, the mouse is dragged within the display range of the same channel of monitoring image information, and the button is released at some position so as to determine the right-angle vertex diagonally opposite the first vertex, which is defined as the second vertex. Whether the signal input by the mouse is a rectangular frame selection signal is judged by checking whether it is a signal representing a long press of the mouse button.
Referring to fig. 2, step 3200: and if the first frame selection signal is a rectangular frame selection signal, extracting monitoring image information corresponding to the rectangular area selected by the rectangular frame selection signal to form image information to be amplified.
When the selection of the two diagonal vertices is completed through the mouse input, two straight lines are extended from each vertex along the horizontal and vertical directions to form a closed rectangular boundary. After the rectangular frame selection signal is received, the monitoring image information of that channel lying within the closed rectangular boundary is the image information to be amplified.
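A minimal sketch of this extraction step, assuming the frame of the selected monitoring channel is available as a NumPy array and the two diagonal vertices arrive as pixel coordinates; the function and variable names are illustrative and not taken from the patent:

```python
import numpy as np

def crop_rectangle(frame: np.ndarray, first_vertex, second_vertex) -> np.ndarray:
    """Extract the image information to be amplified from a rectangular frame selection.

    frame         -- one decoded frame of the selected monitoring channel (H x W x C)
    first_vertex  -- (x, y) where the mouse button was pressed
    second_vertex -- (x, y) where the mouse button was released
    """
    (x1, y1), (x2, y2) = first_vertex, second_vertex
    # The two vertices are diagonal corners; normalise them so the slice is valid
    # regardless of the drag direction.
    left, right = sorted((x1, x2))
    top, bottom = sorted((y1, y2))
    # Clamp to the frame boundary in case the drag left the channel's display range.
    left, right = max(0, left), min(frame.shape[1], right)
    top, bottom = max(0, top), min(frame.shape[0], bottom)
    return frame[top:bottom, left:right].copy()
```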
Referring to fig. 2, step 3300: if the first touch trigger signal is not a rectangular frame selection signal, judging whether the first frame selection signal is an elliptical frame selection signal or not.
The elliptical frame selection signal is input as follows: the mouse button is clicked once to determine the center point of the ellipse; a single click at any point in the horizontal direction from the center point then determines one end point of the major axis of the ellipse, and a single click at any point in the vertical direction from the center point determines one end point of the minor axis. Whether the first frame selection signal is an elliptical frame selection signal is judged by checking whether the input signal consists of three successive single mouse clicks within the display area of the same channel of monitoring image information.
Referring to fig. 2, step 3400: and if the first frame selection signal is an oval frame selection signal, extracting monitoring image information contained in the circumscribed rectangle of the oval outer edge corresponding to the oval frame selection signal to form image information to be amplified.
If the first frame selection signal consists of three successive single mouse clicks within the display area of the same channel of monitoring image information, the elliptical boundary is determined from the three clicked points, the elliptical boundary is analyzed to obtain the corresponding circumscribed rectangular boundary, and the monitoring image information contained in the circumscribed rectangular boundary is taken as the image information to be amplified. If the first frame selection signal does not consist of three such clicks, it is judged again whether a first frame selection signal is input.
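A corresponding sketch for the elliptical selection, assuming the three clicks give the center, one end of the horizontal axis and one end of the vertical axis as described above; the circumscribed rectangle of the axis-aligned ellipse is simply center ± semi-axis in each direction (names are illustrative, and crop_rectangle is the helper from the previous sketch):

```python
def ellipse_bounding_box(center, major_end, minor_end):
    """Return (left, top, right, bottom) of the rectangle circumscribing the
    axis-aligned ellipse defined by the three mouse clicks."""
    cx, cy = center
    a = abs(major_end[0] - cx)  # semi-axis along the horizontal direction
    b = abs(minor_end[1] - cy)  # semi-axis along the vertical direction
    return (cx - a, cy - b, cx + a, cy + b)

def crop_ellipse_selection(frame, center, major_end, minor_end):
    left, top, right, bottom = ellipse_bounding_box(center, major_end, minor_end)
    return crop_rectangle(frame, (left, top), (right, bottom))
```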
Referring to fig. 1, step 4000: the method comprises the steps of presetting a first corresponding relation between image information to be amplified and amplified image information, and processing the image information to be amplified according to the first corresponding relation to obtain amplified image information.
The first corresponding relation is a preset amplification factor. The preset amplification factor is determined by the maximum optical magnification of the monitor that acquires the monitoring image information: the monitor is adjusted to its maximum optical magnification so as to amplify the image information to be amplified and obtain the amplified image information.
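The embodiment realises the amplification optically through the camera. If the amplification were instead applied digitally to the captured frame (an assumption, not what the embodiment describes), the first corresponding relation reduces to a fixed resize factor, for example with OpenCV:

```python
import cv2

# Assumed value; the embodiment ties the factor to the camera's maximum optical magnification.
PRESET_MAGNIFICATION = 3.0

def magnify(image_to_amplify):
    """Apply the preset first corresponding relation to obtain the amplified image information."""
    return cv2.resize(
        image_to_amplify,
        None,                           # derive the output size from the scale factors
        fx=PRESET_MAGNIFICATION,
        fy=PRESET_MAGNIFICATION,
        interpolation=cv2.INTER_CUBIC,  # smoother result when upscaling
    )
```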
Referring to fig. 1, step 5000: the enlarged image information is fed back to the display device.
The state in which the display device does not display the amplified image information is pre-defined as the first working state, and the state in which the display device displays the amplified image information is pre-defined as the second working state.
The specific flow of feeding back the amplified image information to the display device is as follows:
Referring to fig. 3, step 5100: it is determined whether the display device has received the enlarged image information.
Wherein the magnified image information includes magnified image size information.
Referring to fig. 3, step 5200: and if the amplified image information is received, switching the display device from the first working state to the second working state.
Referring to fig. 3, step 5300: the size information of the enlarged image information is analyzed to determine an enlarged image display area of the display device for displaying the enlarged image information.
The method for determining the enlarged image display area comprises the following steps:
referring to fig. 4, step 5310: and if the first frame selection signal is received, acquiring original image information containing the image information of the main observer before the screen through the image acquisition equipment.
The image acquisition device is a terminal capable of acquiring image information, and includes a camera, a video camera, a mobile terminal with a photographing function and the like; the image acquisition device in this embodiment is a camera with a fixed focal length. The first frame selection signal includes the elliptical frame selection signal or the rectangular frame selection signal described above. When the first frame selection signal is received, the image acquisition device captures an image of the space in front of the display device to acquire the original image information, which contains the image information of the main observer and the image information of the other observers.
Referring to fig. 4, step 5320: the raw image information is analyzed to calculate primary observer image information.
The distance information between the main observer and the display device is extracted as follows. The focal length data of the camera is obtained, a reference object is set in the shooting area of the camera, and the size information of the reference object is pre-stored; the reference object can be a marker with a fixed geometric size arranged at a specific position within the shooting range of the camera, and in this embodiment it is a reference line arranged on the ground. The size information of the reference object image contained in the original image information and the image size information of the main observer are obtained, the ratio between the size information of the reference object image and the actual size information of the reference object is calculated as a first ratio coefficient, the actual size information of the main observer is estimated from the first ratio coefficient and the image size information of the main observer, and the distance information of the main observer is estimated from the actual size information of the main observer, the image size information of the main observer and the focal length data of the camera. The height information is extracted as follows: the position data of the eyes of the main observer in the main observer image information, that is, the position of the eyes of the main observer in the original image, is extracted, and the actual height of the eyes of the main observer is estimated from the distance information of the main observer and the position of the eyes in the original image.
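A literal sketch of the computation described above (note that, as described, the first ratio coefficient is only exact when the main observer stands near the reference line; the camera mounting height and all names are assumptions added for illustration):

```python
import math

def estimate_distance_and_eye_height(
    focal_length_px,           # fixed focal length of the image acquisition camera, in pixels
    ref_real_len_m,            # pre-stored actual length of the ground reference line
    ref_image_len_px,          # measured length of the reference line in the original image
    observer_image_height_px,  # measured pixel height of the main observer
    eye_y_px,                  # vertical pixel coordinate of the main observer's eyes
    principal_point_y_px,      # vertical pixel coordinate of the image center
    camera_height_m,           # assumed mounting height of the camera above the ground
):
    # First ratio coefficient: image size of the reference object / its actual size.
    first_ratio = ref_image_len_px / ref_real_len_m
    # Estimated actual height of the main observer.
    observer_real_height_m = observer_image_height_px / first_ratio
    # Pinhole model: distance = focal length * actual size / size in the image.
    distance_m = focal_length_px * observer_real_height_m / observer_image_height_px
    # Line-of-sight height from where the eyes appear relative to the optical axis.
    angle = math.atan2(principal_point_y_px - eye_y_px, focal_length_px)
    eye_height_m = camera_height_m + distance_m * math.tan(angle)
    return distance_m, eye_height_m
```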
The specific steps for acquiring the image information of the main observer comprise:
Referring to fig. 5, step 5321: pre-storing reference face information of the observer.
Wherein the face reference information includes a face reference image and observer priority information indicating the priority of the observer, the reference face information being pre-stored in a specific storage area.
Referring to fig. 5, step 5322: the original image information is analyzed to extract a face image of each observer contained in the original image information.
Step 5323: it is determined whether the number of face images of the observer is one.
Step 5324: if the number of face images is one, the observer corresponding to the face information is regarded as the main observer.
Step 5325: if the number of the face images contained in the original image is not one, each face image is matched with a pre-stored reference face image, and if the matching is successful, the priority information corresponding to the face reference image information is used as the priority information of the face images successfully matched.
Step 5326: the priority information corresponding to the face images is arranged in descending order according to the priority, and the observer corresponding to the face image with the priority information arranged at the forefront position is taken as the main observer.
Referring to fig. 4, step 5330: and calculating the preferable horizontal observation band of the display device according to the height information and the distance information.
The preferred horizontal observation band is acquired as follows: the height corresponding to the height information is taken as the height of the horizontal center line of the preferred horizontal observation band; the distance between the upper boundary of the band and this horizontal center line is defined as the half-width of the band, so the full width of the preferred horizontal observation band is twice this half-width; the half-width is equal to the product of the tangent of the preferred depression/elevation angle and the distance value corresponding to the distance information, and the preferred depression/elevation angle ranges from 10 degrees to 20 degrees.
Referring to fig. 4, step 5340: the preferred vertical viewing zone of the display device is calculated from the height information and the distance information.
The preferred vertical observation band is acquired as follows: the positional relation between the main observer and the actual reference line is estimated from the positional relation between the observer and the reference line in the original image, and this is combined with the preset positional relation between the reference line and the screen to determine the vertical straight line on the display screen that is closest to the observer; this line is taken as the vertical center line of the preferred vertical observation band. The distance between the left (or right) boundary of the band and the vertical center line is defined as the half-width of the band, so the full width of the preferred vertical observation band is twice this half-width; the half-width is equal to the product of the tangent of the preferred left/right viewing angle and the distance value corresponding to the distance information, and the preferred left/right viewing angle ranges from 20 degrees to 40 degrees.
Referring to fig. 4, step 5350: and acquiring the superposition area of the optimal horizontal observation area and the optimal vertical observation area as a preferable observation area.
The overlapping area of the preferred horizontal observation band and the preferred vertical observation band is a rectangular area, and this rectangular area is the preferred observation area.
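Steps 5330 to 5350 reduce to simple trigonometry. A sketch in screen coordinates, assuming heights and distances share one unit, the screen origin sits at its bottom-left corner, and the mid-range angles (15° depression/elevation, 30° left/right) are used as defaults; these conventions are assumptions, not taken from the embodiment:

```python
import math

def preferred_observation_area(eye_height, distance, eye_x_on_screen,
                               depression_deg=15.0, side_deg=30.0):
    """Return (left, bottom, right, top) of the preferred observation area on the screen.

    eye_height      -- line-of-sight height of the main observer (step 5320)
    distance        -- distance between the main observer and the display device
    eye_x_on_screen -- horizontal position of the vertical screen line closest to the observer
    """
    # Preferred horizontal observation band: centered on the line-of-sight height,
    # half-width = tan(depression/elevation angle) * distance.
    half_h = math.tan(math.radians(depression_deg)) * distance
    bottom, top = eye_height - half_h, eye_height + half_h

    # Preferred vertical observation band: centered on the vertical line closest
    # to the observer, half-width = tan(left/right viewing angle) * distance.
    half_v = math.tan(math.radians(side_deg)) * distance
    left, right = eye_x_on_screen - half_v, eye_x_on_screen + half_v

    # The overlap of the two bands is the rectangular preferred observation area
    # (clamping to the physical screen boundary is omitted here).
    return left, bottom, right, top
```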
Referring to fig. 4, step 5360: the method includes the steps of acquiring region size information of a preferred observation region, calculating a first scaling relationship between size information contained in the acquired magnified image information and the region size information, and scaling the preferred observation region according to the first scaling relationship to acquire a magnified image display region.
The area size information includes the horizontal side length of the preferred observation area and the vertical side length of the preferred observation area; the horizontal side length is equal to twice the half-width of the preferred vertical observation band, and the vertical side length is equal to twice the half-width of the preferred horizontal observation band. The size information contained in the amplified image information includes the horizontal length and the vertical length of the amplified image. The ratio of the horizontal length of the amplified image to the horizontal side length and the ratio of the vertical length of the amplified image to the vertical side length are obtained, the two ratios are compared, and the smaller ratio is taken as the scaling relation. The preferred observation area is scaled with its center point as the scaling center according to this scaling relation to obtain the amplified image display area, and the amplified image is output to the amplified image display area for display.
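A transcription of step 5360 as described, with illustrative names: the smaller of the two side-length ratios is taken as the first scaling relationship, and the preferred observation area is scaled about its center point.

```python
def magnified_image_display_area(preferred_area, image_w, image_h):
    """Scale the preferred observation area about its center point, as in step 5360.

    preferred_area   -- (left, bottom, right, top) of the preferred observation area
    image_w, image_h -- size information contained in the amplified image information
    """
    left, bottom, right, top = preferred_area
    area_w, area_h = right - left, top - bottom
    cx, cy = (left + right) / 2.0, (bottom + top) / 2.0

    # First scaling relationship: the smaller of the two side-length ratios.
    scale = min(image_w / area_w, image_h / area_h)

    half_w, half_h = area_w * scale / 2.0, area_h * scale / 2.0
    return cx - half_w, cy - half_h, cx + half_w, cy + half_h
```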
Referring to fig. 3, step 5400: and scaling other monitoring image information which is not selected by the first frame selection signal frame to form scaled image information, feeding the scaled image information back to the display equipment and displaying the scaled image information in a non-amplified image display area of the display equipment.
The specific flow of the scaled image information fed back to the display device and displayed in the non-enlarged image display area of the display device is as follows:
Referring to fig. 6, step 5410: the magnified image display area is analyzed to obtain a largest rectangular display area in the non-magnified display area.
The non-enlarged display area is the display area of the display device that is not used as the enlarged image display area, and the largest rectangular display area in the non-enlarged display area is a rectangular area adjacent to the rectangular enlarged image display area.
Referring to fig. 6, step 5420: the largest rectangular display area is divided into a plurality of scaled image display areas of equal size.
The largest rectangular display area is divided according to the length-width ratio of the monitoring image information to obtain a plurality of scaled image display areas, and the monitoring video information is then scaled so that its size equals that of a scaled image display area, thereby obtaining the scaled image information.
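One way to realise this division is a row/column grid that keeps the aspect ratio of the monitoring pictures; the grid search and the names below are assumptions, not the only possible layout:

```python
def divide_into_scaled_areas(max_rect, n_channels, channel_aspect):
    """Split the largest rectangular display area into equally sized cells whose
    aspect ratio matches the monitoring image information.

    max_rect       -- (left, top, width, height) of the largest rectangle in the non-magnified area
    n_channels     -- number of monitoring channels that were not frame-selected
    channel_aspect -- width / height of one monitoring picture
    """
    left, top, width, height = max_rect
    best = None
    for cols in range(1, n_channels + 1):
        rows = -(-n_channels // cols)        # ceiling division
        cell_w = width / cols
        cell_h = cell_w / channel_aspect     # keep the channel aspect ratio
        if cell_h * rows > height:           # too tall: size the cells from the height instead
            cell_h = height / rows
            cell_w = cell_h * channel_aspect
        if best is None or cell_w * cell_h > best[0]:
            best = (cell_w * cell_h, cols, cell_w, cell_h)

    _, cols, cell_w, cell_h = best
    cells = []
    for i in range(n_channels):
        r, c = divmod(i, cols)
        cells.append((left + c * cell_w, top + r * cell_h, cell_w, cell_h))
    return cells
```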
Referring to fig. 6, step 5430: and feeding back and displaying the zoom image information to a zoom image display area of the display device.
The scaled image information is fed back to the scaled image display areas of the display device; if the number of scaled image streams is larger than the number of scaled image display areas, the plurality of scaled image streams are played cyclically in the scaled image display areas with a preset cycle period.
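A sketch of one possible realisation of this cyclic playback; how the cycling is scheduled is an assumption, since the embodiment only states that the streams are played cyclically with a preset cycle period:

```python
def streams_for_cycle(stream_ids, n_cells, cycle_index):
    """Return the scaled streams shown in the cells during the given cycle step.

    When there are more streams than cells, the displayed window of streams is
    advanced by one full page each cycle period, wrapping around at the end.
    """
    if len(stream_ids) <= n_cells:
        return list(stream_ids)
    start = (cycle_index * n_cells) % len(stream_ids)
    rotated = stream_ids[start:] + stream_ids[:start]
    return rotated[:n_cells]
```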
Embodiments of the present application provide a computer readable storage medium comprising instructions capable of, when loaded and executed by a processor, implementing the steps as described in the flowcharts of fig. 1-6.
The computer-readable storage medium includes, for example: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Based on the same inventive concept, an embodiment of the application provides an intelligent terminal for controlling a monitoring video of a key area, which comprises a memory, a processor and a program stored on the memory and capable of running on the processor, wherein the program can be loaded and executed by the processor to realize the key area monitoring video control method described in the flows of figs. 1-6.
Based on the same inventive concept, an embodiment of the application provides a key area monitoring video control system, which comprises a memory, a processor and a program stored on the memory and capable of running on the processor, wherein the program can be loaded and executed by the processor to realize the key area monitoring video control method described in the flows of figs. 1-6. The key area monitoring video control system further comprises: the image acquisition module, used for acquiring multi-channel monitoring image information; a display device, used for displaying image information; and the triggering device, used for inputting the first frame selection signal. The triggering device is a hardware device capable of inputting a frame selection instruction, such as a mouse, a keyboard or a touch screen, and in this embodiment it comprises a mouse. The image acquisition device is a terminal capable of acquiring image information, and includes a camera, a video camera, a mobile terminal with a photographing function and the like; in this embodiment the image acquisition device is a camera with a fixed focal length.
Claims (7)
1. The method for controlling the monitoring video of the key area is characterized by comprising the following steps of:
acquiring multi-channel monitoring image information, feeding back the multi-channel monitoring image information to a display device and displaying the multi-channel monitoring image information;
Judging whether a first frame selection signal for selecting a display area on the display device is received or not;
defining a state in which the display device does not display the enlarged image information as a first operating state;
defining a state in which the display device displays the enlarged image information as a second operating state;
Judging whether the display device receives the amplified image information, wherein the amplified image information comprises amplified image size information;
if the amplified image information is received, switching the display device from the first working state to the second working state;
analyzing the size information of the enlarged image information to determine an enlarged image display area for displaying the enlarged image information by the display device;
The determining of an enlarged image display area for the display device to display the enlarged image information comprises:
if the first frame selection signal is received, acquiring original image information containing image information of a main observer in front of a screen;
Analyzing the original image information to calculate height information representing a line-of-sight height of the main observer and distance information representing a distance between the main observer and the display device;
Calculating a preferable horizontal observation band of the display equipment according to the height information;
calculating a preferable vertical observation band of the display equipment according to the distance information;
Acquiring a superposition area of the preferable horizontal observation band and the preferable vertical observation band as a preferable observation area;
Acquiring region size information of a preferred observation region, calculating a first scaling relationship between the size information contained in the acquired magnified image information and the region size information, and scaling the preferred observation region according to the first scaling relationship to acquire a magnified image display region;
scaling other monitoring image information which is not selected by the first frame selection signal frame to form scaled image information, feeding the scaled image information back to the display equipment and displaying the scaled image information in a non-amplified image display area of the display equipment;
If the first frame selection signal is received, acquiring monitoring image information corresponding to the display area selected by the frame selection signal according to the display area selected by the frame selection signal to form image information to be amplified;
Amplifying the image information to be amplified according to a preset amplification factor to obtain amplified image information;
the enlarged image information is fed back to the display device.
2. The method of claim 1, wherein the first frame selection signal for selecting a display area on the display device comprises an elliptical frame selection signal for frame-selecting an elliptical display area and a rectangular frame selection signal for frame-selecting a rectangular area; the method for forming the image information to be amplified comprises the following steps:
Judging whether the first frame selection signal is a rectangular frame selection signal or not;
if the first frame selection signal is a rectangular frame selection signal, extracting monitoring image information corresponding to a rectangular area selected by the rectangular frame selection signal to form image information to be amplified;
If the first frame selection signal is not a rectangular frame selection signal, judging whether the first frame selection signal is an elliptical frame selection signal or not;
and if the first frame selection signal is an oval frame selection signal, extracting monitoring image information contained in the circumscribed rectangle of the oval outer edge corresponding to the oval frame selection signal to form image information to be amplified.
3. The method of claim 1, wherein the main observer is the primary reference object, among one or more observers contained in the original image information, for acquiring the enlarged image display area, and wherein the step of acquiring the original image information containing the image information of the main observer in front of the screen further includes confirming the main observer, the step of confirming the main observer comprising:
analyzing the original image information to extract a face image of each observer contained in the original image information;
Judging whether the number of face images of the observer is greater than or equal to two;
if the number of the face images is one, the observer corresponding to the face image is taken as a main observer.
4. The method according to claim 3, wherein confirming the main observer further comprises:
Pre-storing reference face information of an observer, the face reference information including a face reference image and observer priority information;
If the number of the face images contained in the original image is greater than or equal to two, matching each face image with a pre-stored reference face image, and if the matching is successful, taking priority information corresponding to the face reference image information as the priority information of the face images successfully matched;
The priority information corresponding to the face images is arranged in descending order according to the priority, and the observer corresponding to the face image with the priority information arranged at the forefront position is taken as the main observer.
5. The method of claim 1, wherein feeding back the scaled image information to the display device and displaying the scaled image information in a non-magnified image display area of the display device, comprises:
analyzing the non-magnified image display area to obtain a maximum rectangular display area in the non-magnified display area;
dividing the maximum rectangular display area into a plurality of scaled image display areas with equal size;
and feeding back and displaying the zoom image information to a zoom image display area of the display device.
6. A monitoring video display intelligent terminal comprising a memory and a processor, the memory having stored thereon a computer program capable of being loaded by the processor and performing the method of any of claims 1 to 5.
7. A computer readable storage medium, characterized in that a computer program is stored which can be loaded by a processor and which performs the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010872405.XA CN112115804B (en) | 2020-08-26 | 2020-08-26 | Method, system, intelligent terminal and storage medium for controlling monitoring video of key area |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010872405.XA CN112115804B (en) | 2020-08-26 | 2020-08-26 | Method, system, intelligent terminal and storage medium for controlling monitoring video of key area |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112115804A CN112115804A (en) | 2020-12-22 |
CN112115804B true CN112115804B (en) | 2024-05-24 |
Family
ID=73804231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010872405.XA Active CN112115804B (en) | 2020-08-26 | 2020-08-26 | Method, system, intelligent terminal and storage medium for controlling monitoring video of key area |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112115804B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112446823B (en) * | 2021-02-01 | 2021-04-27 | 武汉中科通达高新技术股份有限公司 | Monitoring image display method and device |
CN113891040A (en) * | 2021-09-24 | 2022-01-04 | 深圳Tcl新技术有限公司 | Video processing method, video processing device, computer equipment and storage medium |
CN118096752B (en) * | 2024-04-25 | 2024-07-30 | 深圳市度申科技有限公司 | Image quality analysis method and system |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101383969A (en) * | 2008-10-27 | 2009-03-11 | 杭州华三通信技术有限公司 | Method, decoder and main control module for enlarging local region of image |
CN101950242A (en) * | 2010-09-19 | 2011-01-19 | 电子科技大学 | Multiple viewpoint scene imaging scaling display control method |
WO2011065822A1 (en) * | 2009-11-24 | 2011-06-03 | Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno | System for displaying surveillance images |
CN103076957A (en) * | 2013-01-17 | 2013-05-01 | 上海斐讯数据通信技术有限公司 | Display control method and mobile terminal |
CN103139592A (en) * | 2011-11-23 | 2013-06-05 | 韩国科学技术研究院 | 3d display system |
CN103260006A (en) * | 2012-02-17 | 2013-08-21 | 建腾创达科技股份有限公司 | Window dividing control method for channels displaying many surveillance videos |
CN105446687A (en) * | 2015-12-07 | 2016-03-30 | 广东威创视讯科技股份有限公司 | Method and apparatus for locally enlarging spliced wall window image signal |
CN107580228A (en) * | 2017-09-15 | 2018-01-12 | 赵立峰 | A kind of monitor video processing method, device and equipment |
CN107608646A (en) * | 2017-09-11 | 2018-01-19 | 威创集团股份有限公司 | A kind of combination image magnification display method and device |
CN107621932A (en) * | 2017-09-25 | 2018-01-23 | 威创集团股份有限公司 | The local amplification method and device of display image |
KR101897777B1 (en) * | 2018-03-29 | 2018-10-29 | 주식회사 에스오넷 | The image surveillance system for simply setting angle of image surveillance's view, and the method thereof |
CN109271981A (en) * | 2018-09-05 | 2019-01-25 | 广东小天才科技有限公司 | Image processing method and device and terminal equipment |
CN110072087A (en) * | 2019-05-07 | 2019-07-30 | 高新兴科技集团股份有限公司 | Video camera interlock method, device, equipment and storage medium based on 3D map |
CN111263117A (en) * | 2020-02-17 | 2020-06-09 | 北京金和网络股份有限公司 | Emergency command video linkage method, device and system |
-
2020
- 2020-08-26 CN CN202010872405.XA patent/CN112115804B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN112115804A (en) | 2020-12-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112115804B (en) | Method, system, intelligent terminal and storage medium for controlling monitoring video of key area | |
US20170102776A1 (en) | Information processing apparatus, method and program | |
CN112135046B (en) | Video shooting method, video shooting device and electronic equipment | |
US8581993B2 (en) | Information processing device and computer readable recording medium | |
US9704028B2 (en) | Image processing apparatus and program | |
CN112312016B (en) | Shooting processing method and device, electronic equipment and readable storage medium | |
JP4419768B2 (en) | Control device for electronic equipment | |
CN101765860A (en) | Method for manipulating regions of a digital image | |
KR20180018561A (en) | Apparatus and method for scaling video by selecting and tracking image regions | |
US10257436B1 (en) | Method for using deep learning for facilitating real-time view switching and video editing on computing devices | |
US7843512B2 (en) | Identifying key video frames | |
CN109194866B (en) | Image acquisition method, device, system, terminal equipment and storage medium | |
JP2011188297A (en) | Electronic zoom apparatus, electronic zoom method, and program | |
WO2022161260A1 (en) | Focusing method and apparatus, electronic device, and medium | |
KR20130088493A (en) | Method for providing user interface and video receving apparatus thereof | |
JP2011053587A (en) | Image processing device | |
CN113852756B (en) | Image acquisition method, device, equipment and storage medium | |
CN112449165B (en) | Projection method and device and electronic equipment | |
CN112286430B (en) | Image processing method, apparatus, device and medium | |
CN113055599A (en) | Camera switching method and device, electronic equipment and readable storage medium | |
CN113709565A (en) | Method and device for recording facial expressions of watching videos | |
JP2009010849A (en) | Control device for electronic apparatus | |
JP2008090570A (en) | Information processor and information processing method | |
JP2006277409A (en) | Image display device | |
CN111669504B (en) | Image shooting method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||