CN108632569B - Video monitoring method and device based on gun and ball linkage - Google Patents


Info

Publication number
CN108632569B
Authority
CN
China
Prior art keywords
video picture
spherical
pixel
point
target
Prior art date
Legal status
Active
Application number
CN201710167181.0A
Other languages
Chinese (zh)
Other versions
CN108632569A (en)
Inventor
蔡永锦
铁淑霞
周波
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710167181.0A
Publication of CN108632569A
Application granted
Publication of CN108632569B


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30232 - Surveillance
    • G06T 2207/30244 - Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application discloses a video monitoring method and device based on gun-ball linkage, belonging to the field of video analysis. The method includes: obtaining feature information of the central point of the area occupied by a target object in a first video picture; searching a second video picture for a pixel point whose feature information matches the obtained feature information; and determining from the found pixel point a first rotation angle, namely the angle through which the spherical IPC needs to rotate to move the central point of the second video picture onto the central point of the area occupied by the target object in the second video picture. After the spherical IPC rotates by the first rotation angle, the central point of the video picture it collects is the central point of the area occupied by the target object in the second video picture. This avoids the situation in which the target object is not located at the center of the video picture collected by the spherical IPC because of aging of the spherical IPC, and improves the effect of monitoring the details of the target object.

Description

Video monitoring method and device based on gun and ball linkage
Technical Field
The application relates to the field of video analysis, in particular to a video monitoring method and device based on gun and ball linkage.
Background
Video monitoring refers to sending collected video pictures to a client through an Internet Protocol Camera (IPC) so that the client can view them. At present, IPCs are mainly classified into gun-type (bullet) IPCs and ball-type (dome) IPCs: gun-type IPCs are mainly used to collect wide-range video pictures, while ball-type IPCs are mainly used to collect narrow-range, high-precision video pictures. For video monitoring of a target object that moves over a wide area, in order to monitor both the overall movement of the target object and its details, a gun-ball linkage video monitoring method is generally adopted: the server simultaneously sends the video pictures collected by a gun-type IPC and by the ball-type IPC configured for that gun-type IPC to the client, thereby realizing video monitoring of the target object based on gun-ball linkage.
At present, when video monitoring is performed based on gun-ball linkage, the server uniformly divides the video picture collected by the gun-type IPC into a grid of m rows and n columns. Based on the video picture collected by the gun-type IPC, the server determines the grid cell in which the target object is currently located and controls the ball-type IPC to collect the video picture within that cell. When the target object is not located at the center of the cell, the center of the video picture collected by the spherical IPC is not the target object, so the video monitoring effect on the target object through the spherical IPC is poor. The server therefore also determines 4 calibration points in the cell in advance and, for each of them, determines the offset by which the spherical IPC needs to rotate to move the central point of its collected video picture from the central point of the cell to that calibration point, obtaining the offsets of the 4 calibration points. Then, when the server detects that the target object has moved from one position to another within the cell, it selects, from the 4 calibration points, the 3 calibration points closest to the new position of the target object. Based on the offsets of these 3 calibration points, the server determines the offset by which the spherical IPC needs to rotate to move the central point of its collected video picture from the central point of the cell to the new position, and controls the spherical IPC to rotate by that offset, so that the target object is located at the center of the video picture collected by the spherical IPC and can be monitored in real time after its position changes.
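The grid step of this prior-art scheme can be sketched as follows (a minimal illustration; the frame size and the m x n grid dimensions used in the example are assumptions, not values from the application):

```python
def locate_grid_cell(x, y, frame_w, frame_h, m, n):
    """Return the (row, col) of the m x n grid cell containing point (x, y).

    A minimal sketch of the prior-art grid step; the frame size and grid
    dimensions passed in below are illustrative assumptions.
    """
    row = min(int(y * m / frame_h), m - 1)
    col = min(int(x * n / frame_w), n - 1)
    return row, col

# e.g. a 1920x1080 bullet-camera frame divided into 4 rows and 6 columns
print(locate_grid_cell(960, 540, 1920, 1080, 4, 6))  # prints (2, 3)
```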
In the above method, the offset by which the spherical IPC needs to rotate is determined by theoretical computation. In practice, however, as the service time of the spherical IPC increases, its rotating mechanism ages, so an error arises between the theoretical offset and the actual offset when the spherical IPC rotates according to the theoretical offset. As a result, when video monitoring is performed according to this method, the target object may not be located at the center of the video picture collected by the spherical IPC, which reduces the effect of monitoring the details of the target object through the spherical IPC.
Disclosure of Invention
In order to solve the prior-art problem that, due to aging and similar issues, the target object may not be located at the center of the video picture collected by the spherical IPC after the spherical IPC rotates according to a theoretical offset, the application provides a video monitoring method and device based on gun-ball linkage. The technical scheme is as follows:
in a first aspect, a video monitoring method based on gun and ball linkage is provided, and the method includes:
acquiring feature information of a central point of an area occupied by a target object in a first video picture, wherein the first video picture is a video picture acquired by a gun-shaped network camera IPC, the feature information comprises pixel difference values between a pixel value of the central point and pixel values of a plurality of neighborhood pixels, and the neighborhood pixels are pixels in the neighborhood of the central point in the first video picture;
searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture in pixel points included in a second video picture, wherein the second video picture is a video picture currently acquired by a spherical IPC configured for the gun-shaped IPC;
determining a first rotation angle of the spherical IPC according to the longitude and latitude corresponding to the searched pixel point in the spherical coordinate system of the spherical IPC, wherein the first rotation angle is an angle which the spherical IPC needs to rotate when the central point of the second video picture is rotated to the central point of the area occupied by the target object in the second video picture;
and sending a first rotation request to the spherical IPC, wherein the first rotation request carries the first rotation angle, and the first rotation request is used for indicating the spherical IPC to rotate according to the first rotation angle.
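The feature information described in the first step above can be sketched as follows (a minimal sketch; the 8-pixel neighbourhood and the grayscale input are illustrative assumptions, since the application does not fix the neighbourhood size):

```python
import numpy as np

def point_feature(gray, x, y):
    """Pixel-difference feature information of the point (x, y): the
    differences between its pixel value and the pixel values of its
    neighbourhood pixels.  The 8-pixel neighbourhood and grayscale input
    are illustrative assumptions."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    centre = int(gray[y, x])
    return np.array([centre - int(gray[y + dy, x + dx]) for dy, dx in offsets])
```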
In the embodiment of the invention, the pixel point whose feature information matches the feature information of the central point of the area occupied by the target object in the first video picture is searched for directly in the second video picture, and the spherical IPC is controlled to rotate the center of the second video picture onto the found pixel point, i.e. the central point of the area occupied by the target object in the second video picture. The central point of the area occupied by the target object is thus placed at the center of the second video picture, which avoids the situation in which the target object is not located at the center of the video picture collected by the spherical IPC because of aging of the spherical IPC, and improves the effect of monitoring the details of the target object through the spherical IPC.
Optionally, the searching for a pixel point whose feature information matches with the feature information of the central point of the region occupied by the target object in the first video picture from the pixel points included in the second video picture includes:
determining, in the second video picture, a first target area whose area is a preset area, centered on the central point of the second video picture, wherein the preset area is determined according to the aging degree of the spherical IPC;
and searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from all the pixel points included in the first target area.
Further, in order to improve efficiency of searching for a pixel point in the second video image, where the feature information matches with the feature information of the center point of the area occupied by the target object in the first video image, the first target area may be determined in the second video image, and only the pixel point in the first target area, where the feature information matches with the feature information of the center point of the area occupied by the target object in the first video image, may be searched for.
Optionally, the searching for a pixel point whose feature information matches with the feature information of the central point of the area occupied by the target object in the first video image from all pixel points included in the first target area includes:
acquiring characteristic information of all pixel points included in the first target area;
for each pixel point in all pixel points included in the first target area, performing average operation and range operation on pixel difference values included in the feature information of the pixel points to obtain a first average value and a first range value, and determining the product of the first average value and the first range value as the feature value of the pixel point;
carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as a target feature value;
and selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all pixel points included in the first target area, and determining the pixel point corresponding to the selected characteristic value as a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
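The averaging, range and matching operations above can be sketched as follows (a minimal sketch; the candidate-dictionary layout is an illustrative assumption):

```python
import numpy as np

def feature_value(diffs):
    """Feature value of a point: the average of its pixel difference
    values multiplied by their range (maximum minus minimum), as
    described in the steps above."""
    diffs = np.asarray(diffs, dtype=float)
    return diffs.mean() * (diffs.max() - diffs.min())

def best_match(candidates, target_diffs):
    """Among candidate points, return the one whose feature value differs
    least from the target feature value.  `candidates` maps (x, y) to that
    point's pixel difference values; this dictionary layout is an
    assumption, not part of the application."""
    target = feature_value(target_diffs)
    return min(candidates, key=lambda p: abs(feature_value(candidates[p]) - target))
```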
In the embodiment of the present invention, determining the pixel point in the first target region, where the feature information matches with the feature information of the central point of the region occupied by the target object in the first video image, may be implemented by determining feature values of all pixel points in the first target region, and searching for a feature value having a minimum difference with the target feature value from the feature values of all pixel points included in the first target region.
Optionally, the searching for a pixel point whose feature information matches with the feature information of the central point of the region occupied by the target object in the first video picture from the pixel points included in the second video picture includes:
downscaling the second video picture multiple times according to a preset rule to obtain a plurality of third video pictures;
for each third video picture of the plurality of third video pictures, determining, in the third video picture, a second target area whose area is a preset area, centered on the central point of the third video picture, wherein the preset area is determined according to the aging degree of the spherical IPC;
and searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from all the pixel points in the obtained second target areas.
Further, to avoid the situation in which no pixel point whose feature information matches the feature information of the central point of the area occupied by the target object in the first video picture can be found in the current second video picture, the second video picture may be downscaled multiple times to obtain a plurality of third video pictures, and the matching pixel point may then be searched for in the plurality of third video pictures.
Optionally, the searching for a pixel point whose feature information matches with the feature information of the central point of the region occupied by the target object in the first video picture from all pixel points included in the obtained plurality of second target regions includes:
for each second target area in the plurality of second target areas, acquiring feature information of all pixel points included in the second target area;
for each pixel point in all pixel points included in the second target region, performing average operation and range operation on pixel difference values included in the feature information of the pixel points to obtain a third average value and a third range value, and determining a product between the third average value and the third range value as a feature value of the pixel point;
carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as a target feature value;
for each second target area in the plurality of second target areas, selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all pixel points included in the second target area, and determining the pixel point corresponding to the selected characteristic value as a target pixel point;
and selecting a characteristic value with the minimum difference value with the target characteristic value from the obtained characteristic values of the plurality of target pixel points, and determining the target pixel point corresponding to the selected characteristic value as a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
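The multi-scale search described above can be sketched as follows (the scale factors, window size, striding downscale and scalar feature value are all illustrative assumptions; the application leaves the preset downscaling rule unspecified):

```python
import numpy as np

def _feature_value(gray, x, y):
    # Mean of the 8-neighbourhood pixel differences multiplied by their range.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    d = np.array([int(gray[y, x]) - int(gray[y + dy, x + dx]) for dy, dx in offs],
                 dtype=float)
    return d.mean() * (d.max() - d.min())

def _downscale(gray, k):
    # Crude downscale by integer striding; stands in for the unspecified rule.
    return gray[::k, ::k]

def multiscale_search(gray, target_value, factors=(1, 2, 4), half=8):
    """Scan a centred window of each downscaled copy of the picture and keep
    the point whose feature value is closest to target_value; return the
    overall best as ((x, y), factor)."""
    best = None
    for k in factors:
        img = _downscale(gray, k)
        h, w = img.shape
        cy, cx = h // 2, w // 2
        for y in range(max(1, cy - half), min(h - 1, cy + half)):
            for x in range(max(1, cx - half), min(w - 1, cx + half)):
                score = abs(_feature_value(img, x, y) - target_value)
                if best is None or score < best[0]:
                    best = (score, (x, y), k)
    return best[1], best[2]
```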
Searching the plurality of third video pictures for the pixel point whose feature information matches the feature information of the central point of the area occupied by the target object in the first video picture can be realized by determining a plurality of second target areas in the plurality of third video pictures and selecting, from the feature values of all pixel points included in those second target areas, the feature value with the minimum difference from the target feature value.
Optionally, before searching for a pixel point whose feature information matches with the feature information of the central point of the region occupied by the target object in the first video picture among the pixel points included in the second video picture, the method further includes:
determining the corresponding coordinate of the central point of the area occupied by the target object in the first video picture in the plane coordinate system of the gun type IPC;
determining longitude and latitude corresponding to the central point of the area occupied by the target object in the first video picture in the spherical coordinate system according to the determined coordinates and a preset coordinate conversion model, wherein the preset coordinate conversion model is used for converting the coordinates of the point in the plane coordinate system into the spherical coordinate system;
determining a second rotation angle of the spherical IPC according to the determined longitude and latitude;
sending a second rotation request to the spherical IPC, wherein the second rotation request carries the second rotation angle;
and receiving a second video picture acquired by the spherical IPC, wherein the second video picture is the video picture acquired after the spherical IPC rotates according to the second rotation angle when a second rotation request is received.
In the embodiment of the present invention, when a target object to be monitored is detected in the first video picture, the longitude and latitude corresponding to the center point of the area occupied by the target object in the first video picture in the spherical coordinate system may be determined according to the coordinate corresponding to the center point of the area occupied by the target object in the first video picture in the planar coordinate system and the preset coordinate conversion model, and the spherical IPC may be controlled to rotate according to the determined longitude and latitude, so that the center point of the area occupied by the target object in the second video picture is located at the center of the second video picture.
Optionally, the determining, according to the determined coordinate and a preset coordinate conversion model, a longitude and a latitude, corresponding to a central point of an area occupied by the target object in the first video picture, in the spherical coordinate system includes:
according to the determined coordinates, determining the distance x1 between the central point of the area occupied by the target object in the first video picture and the vertical intersection point, the distance x2 between that central point and the origin of the spherical coordinate system, and the distance x3 between that central point and any one of at least four calibration points, wherein the vertical intersection point is the point at which the straight line that passes through the origin of the spherical coordinate system and is perpendicular to the plane coordinate system intersects the plane coordinate system, and the at least four calibration points are randomly selected from the first video picture;
determining, according to x1, x2, x3 and the preset coordinate conversion model, the longitude and the latitude corresponding to the central point of the area occupied by the target object in the first video picture in the spherical coordinate system;
[the two formulas of the preset coordinate conversion model, and the longitude symbols, appear only as images in the original publication and are not reproduced here; the longitude symbols are written below as φ, φ1 and φ3]
wherein φ and θ are respectively the longitude and the latitude, in the spherical coordinate system, of the central point of the area occupied by the target object in the first video picture, φ1 and θ1 are respectively the longitude and the latitude of the vertical intersection point in the spherical coordinate system, and φ3 and θ3 are respectively the longitude and the latitude of any one of the at least four calibration points in the spherical coordinate system.
In the embodiment of the present invention, to determine the longitude and latitude of the central point of the area occupied by the target object in the first video picture in the spherical coordinate system according to its coordinates in the planar coordinate system and the preset coordinate conversion model, the parameters x1, x2 and x3 of the preset coordinate conversion model must first be determined.
Optionally, before the longitude and latitude corresponding to the central point of the area occupied by the target object in the first video picture in the spherical coordinate system are determined according to the determined coordinates and the preset coordinate conversion model, the method further includes:
randomly selecting at least four calibration points in the first video picture, wherein no three of the at least four calibration points are collinear;
determining coordinates of each of the at least four calibration points in the planar coordinate system and a longitude and latitude in the spherical coordinate system;
determining a position parameter of a vertical intersection point in the spherical coordinate system according to the coordinates of each of the at least four calibration points in the planar coordinate system and the longitude and latitude in the spherical coordinate system, wherein the position parameter comprises a distance between the vertical intersection point and an origin of the spherical coordinate system and the longitude and latitude of the vertical intersection point in the spherical coordinate system;
and establishing the preset coordinate conversion model according to the position parameter of the vertical intersection point in the spherical coordinate system and the longitude and latitude of any one of the at least four calibration points in the spherical coordinate system.
In the embodiment of the present invention, the establishing of the preset coordinate conversion model may be implemented by coordinates of each of the at least four calibration points in the first video frame in a planar coordinate system and longitude and latitude of each of the at least four calibration points in a spherical coordinate system.
Optionally, the determining the longitude and latitude of each of the at least four calibration points in the spherical coordinate system comprises:
for each of the at least four calibration points, controlling a central point of a video picture acquired by the spherical IPC to rotate to the calibration point from a position where the longitude and the latitude of the spherical coordinate system are both zero;
determining a rotation angle of the spherical IPC in the horizontal direction and a rotation angle of the spherical IPC in the vertical direction;
and determining the rotation angle of the spherical IPC in the horizontal direction as the longitude of the calibration point in the spherical coordinate system, and determining the rotation angle of the spherical IPC in the vertical direction as the latitude of the calibration point in the spherical coordinate system.
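The calibration procedure above reduces to reading the pan and tilt angles after a rotation; a minimal sketch (the `rotate_to_point` callback is a hypothetical stand-in for the real PTZ control interface, which the application does not specify):

```python
def calibrate_point(rotate_to_point):
    """Longitude and latitude of one calibration point in the dome camera's
    spherical coordinate system.  Per the text above: starting from the
    position where longitude and latitude are both zero, rotate the camera
    until the picture centre sits on the calibration point; the horizontal
    rotation angle is the longitude and the vertical rotation angle is the
    latitude.  `rotate_to_point` is a hypothetical callback returning
    (pan_degrees, tilt_degrees) after that rotation."""
    pan_deg, tilt_deg = rotate_to_point()
    return pan_deg, tilt_deg  # (longitude, latitude)
```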
In the embodiment of the invention, the longitude and latitude of each of the at least four calibration points in the spherical coordinate system are determined, that is, when the central point of the video picture acquired by the spherical IPC rotates to the calibration point from the position where the longitude and latitude of the spherical coordinate system are both zero, the rotation angle of the spherical IPC in the horizontal direction and the rotation angle of the spherical IPC in the vertical direction are determined.
In a second aspect, a video monitoring device based on gun and ball linkage is provided, and the video monitoring device based on gun and ball linkage has a function of realizing the behavior of the video monitoring method based on gun and ball linkage in the first aspect. The video monitoring device based on the gun-ball linkage comprises at least one module, and the at least one module is used for realizing the video monitoring method based on the gun-ball linkage provided by the first aspect.
In a third aspect, a video monitoring device based on gun and ball linkage is provided, where the structure of the video monitoring device based on gun and ball linkage includes a processor and a memory, and the memory is used to store a program for supporting the video monitoring device based on gun and ball linkage to execute the video monitoring method based on gun and ball linkage provided in the first aspect, and to store data for implementing the video monitoring method based on gun and ball linkage provided in the first aspect. The processor is configured to execute programs stored in the memory.
In a fourth aspect, a computer storage medium is provided, in which instructions are stored, and when the instructions are executed on a computer, the instructions cause the computer to execute the method for monitoring video based on gun and ball linkage according to the first aspect.
In a fifth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method for video surveillance based on gun-and-ball linkage of the first aspect.
The technical effects obtained by the above second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
The technical scheme provided by the embodiment of the invention has the following beneficial effects. The feature information of the central point of the area occupied by the target object in the first video picture is obtained; since the feature information of each pixel point is unique, the pixel point whose feature information matches the obtained feature information can be searched for in the second video picture collected by the spherical IPC, and the found pixel point represents the central point of the area occupied by the target object in the second video picture. The first rotation angle of the spherical IPC is then determined according to the longitude and latitude of the found pixel point in the spherical coordinate system of the spherical IPC, the first rotation angle being the angle through which the spherical IPC needs to rotate to move the central point of the second video picture onto the central point of the area occupied by the target object in the second video picture. After the spherical IPC receives the first rotation request and rotates by the first rotation angle, the central point of the video picture it collects is the central point of the area occupied by the target object in the second video picture. This avoids the situation in which the target object is not located at the center of the video picture collected by the spherical IPC because of aging of the spherical IPC, and improves the effect of monitoring the details of the target object through the spherical IPC.
Drawings
FIG. 1 is a schematic view of a video surveillance system based on gun and ball linkage according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an IPC according to an embodiment of the present invention;
FIG. 3A is a flowchart of a video monitoring method based on gun and ball linkage according to an embodiment of the present invention;
fig. 3B is a schematic diagram illustrating distribution of pixel points in a neighborhood of a central point according to an embodiment of the present invention;
FIG. 3C is a schematic diagram of a plane coordinate system and a spherical coordinate system according to an embodiment of the present invention;
FIG. 3D is a schematic diagram of a geometric model provided by an embodiment of the invention;
FIG. 3E is a schematic diagram of another geometric model provided by an embodiment of the invention;
FIG. 4A is a block diagram of a video monitoring apparatus based on gun and ball linkage according to an embodiment of the present invention;
FIG. 4B is a block diagram of a lookup module according to an embodiment of the present invention;
FIG. 4C is a block diagram of another lookup module provided by embodiments of the present invention;
FIG. 4D is a block diagram of another video monitoring apparatus based on gun and ball linkage according to an embodiment of the present invention;
FIG. 4E is a block diagram of a third determining module according to an embodiment of the present invention;
FIG. 4F is a block diagram of another video monitoring apparatus based on gun and ball linkage according to an embodiment of the present invention;
fig. 4G is a block diagram of a fifth determining module according to an embodiment of the present invention.
Detailed Description
Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before the embodiments of the present invention are explained in detail, an application scenario is described. For video monitoring based on a single gun-type IPC, that is, video monitoring realized with only one gun-type IPC, a target object entering the monitoring range can be detected by the gun-type IPC. To determine the details of the target object, the video picture collected by the gun-type IPC must be zoomed in, and the details determined in the zoomed picture. However, since the gun-type IPC is used to capture a wide-range video picture, the resolution of the picture it collects is limited, so the effect of determining the details of the target object in the zoomed picture is not ideal. Therefore, for video monitoring of a target object moving over a wide area, in order to monitor both the overall movement of the target object and its details, a video monitoring method based on gun-ball linkage is generally adopted: the gun-type IPC detects whether a target object to be monitored exists, and when one is detected, the details of the target object are monitored through the dynamic rotation of the ball-type IPC. For example, in a residential-community monitoring scene, when a stranger intrudes into the monitoring range, the stranger can be detected through the gun-type IPC, and the spherical IPC is then controlled to rotate dynamically so that the video picture collected after the rotation includes the stranger.
Because the video picture collected by the spherical IPC is the video picture after zooming, the details of the stranger, such as face close-up, can be determined through the video picture collected by the spherical IPC, so that the safety degree of cell monitoring is improved. The video monitoring method based on gun-ball linkage provided by the embodiment of the invention is applied to a scene for monitoring the details of a target object through the dynamic rotation of the spherical IPC when the target object to be monitored is detected through the gun-type IPC.
Fig. 1 is a video monitoring system 100 based on gun and ball linkage according to an embodiment of the present invention, and as shown in fig. 1, the video monitoring system 100 based on gun and ball linkage includes a server 101, a client 102, and an IPC 103. The IPC103 is used for acquiring a video picture to be monitored and sending the acquired video picture to the server 101. When the server 101 receives a video picture collected by the IPC103, the server 101 sends the received video picture to the client 102. When the client 102 receives the video picture sent by the server 101, the received video picture is displayed, so that the user can conveniently perform video monitoring according to the video picture displayed by the client 102. In addition, when the server 101 receives the video picture collected by the IPC103 and determines that the video picture collected by the IPC103 is to be adjusted, the server 101 can also control the IPC103 to dynamically rotate and receive the video picture collected after the IPC103 dynamically rotates.
The server 101 and the IPC103 may communicate with each other through a wireless network or a wired network, and the server 101 and the client 102 may also communicate with each other through a wireless network or a wired network. In addition, the video monitoring system based on gun and ball linkage may include a plurality of IPCs 103, that is, the video monitoring system based on gun and ball linkage may deploy a plurality of IPCs, and only 3 IPCs 103 are illustrated in fig. 1 as an example.
It should be noted that, as shown in fig. 1, the IPC103 includes a gun type IPC1031 and a ball type IPC1032 to implement the video monitoring method based on gun and ball linkage provided by the embodiment of the present invention. That is, the server configures a corresponding spherical IPC for each gun type IPC, specifically, when the user determines that the gun type IPC1031 and the spherical IPC1032 in the IPC103 are installed, the client 102 sends a configuration request to the server 101, where the configuration request carries an identifier of the gun type IPC1031 and an identifier of the spherical IPC1032 in the IPC 103. When the server 101 receives the configuration request, the identifier of the gun type IPC1031 in the IPC103 and the identifier of the ball type IPC1032 are stored in the corresponding relationship of the gun type IPC and the ball type IPC, so that the ball type IPC1032 is configured for the gun type IPC1031 in the IPC 103.
The identification of the gun type IPC1031 is used for uniquely identifying the gun type IPC1031, and the identification of the ball type IPC1032 is used for uniquely identifying the ball type IPC 1032. It should be noted that the IPC103 may include a gun-type IPC and a ball-type IPC, and may also include a gun-type IPC and a plurality of ball-type IPCs, which is not limited in the embodiments of the present invention.
Optionally, the IPC103 shown in fig. 1 may also communicate directly with the client 102 through a wireless or wired network; that is, the IPC103 sends the acquired video picture directly to the client 102 without going through the server 101. In this case, the IPC103 is further configured to perform, by itself, the operations of dynamically rotating and reacquiring the video picture when the acquired video picture is to be adjusted; that is, when the IPC103 communicates directly with the client 102 through a wireless or wired network, the method provided in the embodiments of the present invention may also be applied to the IPC103. In the embodiments of the present invention, the video monitoring system based on gun and ball linkage shown in fig. 1 is taken as an example for description.
Fig. 2 is a schematic structural diagram of an IPC according to an embodiment of the present invention, where the IPC may be a gun-type IPC1031 or a ball-type IPC1032 included in the IPC103 shown in fig. 1. Referring to fig. 2, the IPC includes: a video collector 201, a transmitter 202 and a receiver 203.
The video collector 201 is configured to collect video pictures, and the transmitter 202 may be configured to transmit data and/or signaling. Receiver 203 may be used to receive data and/or signaling, etc.
Optionally, the IPC further comprises a memory and a processor. The memory may be used to store one or more software programs and/or modules. The memory may be, but is not limited to, a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM), a magnetic disk storage medium, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The processor may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling execution of the programs in the solutions of the present application. In particular, when the IPC communicates directly with the client through a wireless or wired network, the processor of the IPC can implement the video monitoring method based on gun-ball linkage provided by the embodiments of the present invention by running the software programs and/or modules stored in the memory and calling the data stored in the memory.
Fig. 3A is a video monitoring method based on gun and ball linkage according to an embodiment of the present invention, and the method is applied to the video monitoring system based on gun and ball linkage shown in fig. 1. As shown in fig. 3A, the video monitoring method based on gun and ball linkage includes the following steps.
Step 301: the gun-shaped IPC collects a first video picture and sends the first video picture to the server, and the ball-shaped IPC collects a second video picture and sends the second video picture to the server.
As shown in fig. 1, after the server configures the gun-type IPC with the ball-type IPC, the gun-type IPC and the ball-type IPC start to acquire video pictures and send the acquired video pictures to the server. For the following convenience, the video picture acquired by the gun-type IPC is referred to as a first video picture, and the video picture acquired by the spherical IPC is referred to as a second video picture, that is, the first video picture is the video picture acquired by the gun-type IPC, and the second video picture is the video picture currently acquired by the spherical IPC configured by the server for the gun-type IPC. When the server receives a first video picture acquired by the gun-shaped IPC and a second video picture acquired by the ball-shaped IPC, the first video picture and the second video picture are sent to the client. The client receives the first video picture and the second video picture sent by the server, so that a user can conveniently check the first video picture collected by the gun type IPC and the second video picture collected by the ball type IPC through the client.
It should be noted that the second video picture acquired by the spherical IPC is only a partial picture of the video picture acquired by the gun-type IPC, so the target object may not be included in the current second video picture, or may not be located at the center of the second video picture. In this case, to improve the effect of video monitoring of the target object, the server needs to first determine the target object to be monitored in the first video picture, and then control the spherical IPC through steps 302 to 305 to reacquire the video picture so that the target object is located at the center of the second video picture. The server may determine the target object to be monitored in the first video picture as follows: the server detects in real time whether a target object to be monitored exists in the first video picture, and when such a target object exists, determines the target object in the first video picture. Alternatively, when a user views a target object to be monitored in the first video picture through the client, the client sends a monitoring request to the server, the monitoring request carrying the position of the target object in the first video picture; when the server receives the monitoring request, it determines the target object in the first video picture according to the position of the target object in the first video picture.
Step 302: the method comprises the steps of obtaining feature information of a central point of a region occupied by a target object in a first video picture, wherein the feature information comprises pixel difference values between pixel values of the central point and pixel values of a plurality of neighborhood pixels, and the neighborhood pixels are pixels in the neighborhood of the central point in the first video picture.
When the server detects a target object to be monitored in the first video picture, in order to monitor the details of the target object through the spherical IPC, the server needs to control the spherical IPC to rotate the central point of the area occupied by the target object in the second video picture to the central point of the second video picture, that is, the server needs to search the central point of the area occupied by the target object in the second video picture. Since the feature information of the central point of the area occupied by the target object in the second video picture is the same as the feature information of the central point of the area occupied by the target object in the first video picture, when the server detects the target object to be monitored in the first video picture, the server needs to first acquire the feature information of the central point of the area occupied by the target object in the first video picture.
The characteristic information includes pixel difference values between the pixel value of the central point of the area occupied by the target object in the first video picture and the pixel values of the plurality of neighborhood pixels. That is, after determining the pixel value of the central point, the server obtains the pixel value of each pixel point in the neighborhood of the central point and, for each such pixel point, determines the difference between the pixel value of the central point and the pixel value of that pixel point, obtaining a plurality of pixel difference values; these pixel difference values are the feature information of the central point, that is, the feature information of the central point is a set of data. In addition, the server may obtain the pixel value of the central point and the pixel value of each pixel point in its neighborhood in an optical-flow manner, which is not described in detail herein. As shown in fig. 3B, point P is the central point of the area occupied by the target object in the first video picture, P1 to P8 are the pixel points in the neighborhood of P, G1 is the difference between the pixel value of P and the pixel value of P1, G2 is the difference between the pixel value of P and the pixel value of P2, and so on, up to G8, which is the difference between the pixel value of P and the pixel value of P8. The feature information of P can therefore be expressed as the array (G1, G2, G3, G4, G5, G6, G7, G8).
In addition, in the embodiment of the present invention, to facilitate comparison by the server of the feature information of two pixel points, the feature information of a pixel point is statistically processed to obtain a single value that can represent it; for convenience of description, this value is referred to as the feature value of the pixel point. In one possible implementation, because the average value indicates the overall magnitude of a group of data and the range indicates its degree of dispersion, the product of the average value and the range of the feature information may be determined as the feature value of the pixel point. For example, for the feature information (G1, G2, G3, G4, G5, G6, G7, G8) of point P above, the feature value of P can be expressed as

px = (Gmax − Gmin) × (1/8) × (G1 + G2 + G3 + G4 + G5 + G6 + G7 + G8)

where px is the feature value of point P, Gmax is the maximum value in the feature information (G1, G2, G3, G4, G5, G6, G7, G8) of point P, Gmin is the minimum value in that feature information, and (1/8) × (G1 + G2 + … + G8) is the average of the eight data values in the feature information of point P.
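The feature-information and feature-value computation described above can be sketched in a few lines of Python (the image, function names and neighborhood ordering are illustrative assumptions; the embodiment does not prescribe an implementation):

```python
def neighborhood_differences(img, row, col):
    """Feature information of point P at (row, col): the differences G1..G8
    between the pixel value of P and the values of its eight neighbors."""
    p = img[row][col]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    return [p - img[row + dr][col + dc] for dr, dc in offsets]

def feature_value(diffs):
    """Feature value of a pixel point: product of the average value
    and the range (max - min) of its feature information."""
    mean = sum(diffs) / len(diffs)
    return mean * (max(diffs) - min(diffs))

# toy 3x3 grayscale picture; P is the center pixel
img = [[12, 20, 31],
       [40, 55, 60],
       [70, 83, 98]]
g = neighborhood_differences(img, 1, 1)
px = feature_value(g)   # mean 3.25, range 86 -> 279.5
```

The feature value collapses the eight-element array into a single number, so two candidate pixel points can be compared with one subtraction instead of an element-wise comparison of two arrays.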
It should be noted that, when the server determines the target object to be monitored in the first video frame, the video frame acquired by the spherical IPC may or may not include the target object. When the video pictures acquired by the spherical IPC include the target object, the server directly controls the spherical IPC to rotate the central point of the target object in the area occupied by the second video picture to the central point of the second video picture according to the steps 303 to 305. When the video frame acquired by the spherical IPC does not include the target object, the server needs to control the spherical IPC to acquire the video frame including the target object first, and then control the spherical IPC to rotate the central point of the target object in the area occupied by the second video frame to the central point of the second video frame through the steps 303 to 305.
Specifically, the implementation manner of the server controlling the spherical IPC to collect the video picture including the target object may be: the server determines the corresponding coordinates of the central point of the area occupied by the target object in the first video picture in the plane coordinate system of the gun type IPC; according to the determined coordinates and a preset coordinate conversion model, the server determines the longitude and the latitude, corresponding to the central point of the area occupied by the target object in the first video picture, in the spherical coordinate system; determining a second rotation angle of the spherical IPC according to the determined longitude and latitude; the server sends a second rotation request to the spherical IPC, wherein the second rotation request carries a second rotation angle; when the spherical IPC receives the second rotation request, the spherical IPC rotates according to the second rotation angle, collects the video pictures after the rotation and sends the video pictures collected after the rotation to the server; when the server receives the video picture collected by the spherical IPC, the server determines to receive a second video picture collected by the spherical IPC, namely the second video picture is the video picture collected by the spherical IPC after the spherical IPC rotates according to a second rotation angle when receiving a second rotation request.
The plane coordinate system of the gun-shaped IPC is a plane coordinate system determined by the server according to the first video picture collected by the gun-shaped IPC in advance, the origin of the plane coordinate system is a preset origin, and the corresponding coordinates of any point in the first video picture in the plane coordinate system are also determined. The spherical coordinate system is determined by the server according to all video pictures collected by the spherical IPC in the full view angle range in advance, the position with 0 longitude and latitude is preset in the spherical coordinate system, namely for the second video picture collected by the spherical IPC, any point in the second video picture has corresponding longitude and latitude in the spherical coordinate system. Since the spherical coordinate system is determined by the server according to all the video pictures collected by the spherical IPC in the full view angle range in advance, any point in the first video picture has a corresponding point in the spherical coordinate system, that is, any point in the first video picture has a corresponding longitude and latitude in the spherical coordinate system.
Therefore, when the server determines the center point of the area occupied by the target object in the first video picture, the longitude and latitude corresponding to the center point in the spherical coordinate system can be determined, and according to the longitude and latitude of the center point of the current second video picture, the difference between the longitude of the center point of the current second video picture and the longitude corresponding to the center point in the spherical coordinate system is determined to obtain a first angle, and the first angle is determined as the angle of the second rotation angle in the horizontal direction. And simultaneously, the server determines the difference between the latitude of the central point of the current second video picture and the corresponding latitude of the central point in the spherical coordinate system to obtain a second angle, determines the second angle as the angle of the second rotation angle in the vertical direction, and determines the second rotation angle when the server determines the angle of the second rotation angle in the horizontal direction and the angle of the second rotation angle in the vertical direction.
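The angle computation described in this paragraph amounts to two subtractions; a minimal sketch follows (the wrap of the horizontal component into [-180, 180) is an added assumption so the spherical IPC takes the shorter path, and is not stated in the embodiment):

```python
def second_rotation_angle(current, target):
    """current, target: (longitude, latitude) pairs in degrees.
    Returns the (horizontal, vertical) components of the second rotation angle."""
    d_lon = (target[0] - current[0] + 180.0) % 360.0 - 180.0   # wrapped to [-180, 180)
    d_lat = target[1] - current[1]
    return d_lon, d_lat

# dome currently at longitude 350, latitude 10; target at longitude 10, latitude 5
h, v = second_rotation_angle((350.0, 10.0), (10.0, 5.0))   # (20.0, -5.0)
```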
It should be noted that, after the spherical IPC rotates according to the second rotation angle, due to the aging problem of the spherical IPC, the center point of the target object in the area occupied by the second video screen is still not located at the center point of the second video screen, and at this time, the server still needs to control the spherical IPC to rotate the center point of the target object in the area occupied by the second video screen to the center point of the second video screen through steps 303 to 305.
In addition, the preset coordinate conversion model is used for converting the coordinates of the points in the planar coordinate system into the spherical coordinate system, that is, the server can determine the longitude and the latitude of any point in the first video picture in the spherical coordinate system according to the preset coordinate conversion model. The preset coordinate conversion model is a coordinate conversion model pre-established by the server, and specifically, the establishment of the preset coordinate conversion model by the server can be realized through the following steps:
(1) the server determines coordinates of each of the at least four calibration points in a planar coordinate system and a longitude and latitude in a spherical coordinate system.
The at least four calibration points are randomly selected by the server in the first video picture. It should be noted that, because the at least four calibration points are used to indicate the plane coordinate system, any three of them are not collinear; meanwhile, to improve the accuracy of the preset coordinate conversion model, the relative positions of the at least four calibration points in the first video picture are relatively dispersed. Optionally, the at least four calibration points may also be selected by the user in the first video picture through the client; that is, when the client determines the calibration points selected by the user in the first video picture, it sends a calibration request to the server, the calibration request carrying the coordinates of the at least four calibration points in the plane coordinate system, and when the server receives the calibration request sent by the client, it determines the coordinates of the at least four calibration points in the plane coordinate system.
In particular, the implementation of the server to determine the longitude and latitude of the at least four calibration points in the spherical coordinate system may be: aiming at each of at least four calibration points, the server controls the spherical IPC to rotate the central point of the collected video picture to the calibration point from the position where the longitude and the latitude of the spherical coordinate system are both zero; the server determines the rotation angle of the spherical IPC in the horizontal direction and the rotation angle of the spherical IPC in the vertical direction; then, the rotation angle of the spherical IPC in the horizontal direction is determined as the longitude of the calibration point in the spherical coordinate system, and the rotation angle of the spherical IPC in the vertical direction is determined as the latitude of the calibration point in the spherical coordinate system.
As shown in fig. 3C, the plane T is the plane represented by the plane coordinate system, the sphere O represents the space of the spherical coordinate system, O' is the intersection point at which a straight line passing through the origin O of the spherical coordinate system and perpendicular to the plane coordinate system T intersects the plane coordinate system T, and A, B, C and D are four calibration points randomly selected by the server in the plane coordinate system T. Because A, B, C and D are four calibration points randomly selected by the server in the plane coordinate system T, the server can directly determine the coordinates of A, B, C and D in the plane coordinate system as (xA, yA), (xB, yB), (xC, yC) and (xD, yD). Meanwhile, according to the above-described method for determining the longitude and latitude of the at least four calibration points in the spherical coordinate system, the server determines the longitudes and latitudes of A, B, C and D in the spherical coordinate system as (θA, φA), (θB, φB), (θC, φC) and (θD, φD), respectively, where θ denotes longitude and φ denotes latitude.
(2) and the server determines the position parameters of the vertical intersection point in the spherical coordinate system according to the coordinates of each of the at least four calibration points in the plane coordinate system and the longitude and the latitude in the spherical coordinate system, wherein the position parameters comprise the distance between the vertical intersection point and the origin of the spherical coordinate system and the longitude and the latitude of the vertical intersection point in the spherical coordinate system, and the vertical intersection point is the intersection point of a straight line which passes through the origin of the spherical coordinate system and is perpendicular to the plane coordinate system and intersects with the plane coordinate system in the plane coordinate system.
Specifically, as shown in fig. 3C, three calibration points A, B and C are selected from the four calibration points; these three calibration points, the vertical intersection O' and the origin of the spherical coordinate system constitute the geometric model shown in fig. 3D. From the coordinates of A, B and C in the plane coordinate system, the distance AB between A and B, the distance BC between B and C, and the distance AC between A and C can be determined according to the following formula (1):

AB = √((xA − xB)² + (yA − yB)²)
BC = √((xB − xC)² + (yB − yC)²)
AC = √((xA − xC)² + (yA − yC)²)    (1)
In the spherical coordinate system, once the longitudes and latitudes of A, B and C in the spherical coordinate system are determined as (θA, φA), (θB, φB) and (θC, φC), respectively, the spatial coordinates of the three points A, B and C in the spherical coordinate system can be determined according to the following formula (2):

x = r · cos φ · cos θ
y = r · cos φ · sin θ
z = r · sin φ    (2)

where θ and φ are respectively the longitude and latitude of a point in the spherical coordinate system, (x, y, z) are the spatial coordinates of the point in the spherical coordinate system, and r is the distance from the point to the origin. In particular, when r = 1, (x, y, z) is the unit vector of the vector formed by the point and the origin.
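Formula (2) can be written as a small helper; setting r = 1 yields the unit vectors used in the derivation (names are illustrative, and longitude and latitude are taken in degrees here for convenience):

```python
import math

def lonlat_to_xyz(lon_deg, lat_deg, r=1.0):
    """Formula (2): spatial coordinates of a point with longitude lon_deg and
    latitude lat_deg at distance r from the origin of the spherical coordinate system."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

# r = 1 gives the unit vector of the vector from the origin to the point
origin_axis = lonlat_to_xyz(0.0, 0.0)   # (1.0, 0.0, 0.0)
```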
According to the above formula (2) with r = 1, the unit vector of the vector OA can be determined as OA', the unit vector of the vector OB as OB', and the unit vector of the vector OC as OC'. The angle ∠AOB between the vectors OA and OB is also the angle between the unit vectors OA' and OB'; the angle ∠BOC between the vectors OB and OC is also the angle between the unit vectors OB' and OC'; and the angle ∠AOC between the vectors OA and OC is also the angle between the unit vectors OA' and OC'.
Meanwhile, the server may determine the lengths of A'B', B'C' and A'C' according to the following formula (3), where A', B' and C' are the end points of the unit vectors OA', OB' and OC', whose spatial coordinates are obtained from formula (2) with r = 1:

A'B' = √((xA' − xB')² + (yA' − yB')² + (zA' − zB')²)
B'C' = √((xB' − xC')² + (yB' − yC')² + (zB' − zC')²)
A'C' = √((xA' − xC')² + (yA' − yC')² + (zA' − zC')²)    (3)
According to the law of cosines,

cos ∠AOB = (OA'² + OB'² − A'B'²) / (2 · OA' · OB')

Because OA' and OB' both have a length of 1, substituting the value of A'B' determined according to formula (3) determines ∠AOB; ∠BOC and ∠AOC can be determined in the same way.
Assuming that AB = m, AC = n, BC = l, ∠AOB = α, ∠BOC = β, ∠AOC = γ, OA = x, OB = y and OC = z, the following formula (4) can be determined according to the law of cosines:

m² = x² + y² − 2xy · cos α
l² = y² + z² − 2yz · cos β
n² = x² + z² − 2xz · cos γ    (4)
Because m, n, l, cos α, cos β and cos γ in the above formula (4) are already determined data, the lengths of OA, OB and OC can be determined from formula (4). It should be noted that solving formula (4) may yield more than one set of solutions for the lengths of OA, OB and OC; in that case, another set of solutions for the lengths of OA, OB and OD is determined with the fourth calibration point D according to the same method, and the common solution of the two sets is taken, so that unique solutions for OA and OB are determined. A unique solution for OC can be determined likewise.
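Formula (4) can be sanity-checked on any synthetic configuration: place O at the origin, pick A, B and C freely, and verify that the measured edge lengths and angles satisfy all three relations (the coordinates below are arbitrary illustrative values):

```python
import math

def angle_at_origin(p, q):
    """Angle between the vectors OP and OQ, O being the origin."""
    dot = sum(a * b for a, b in zip(p, q))
    return math.acos(dot / (math.hypot(*p) * math.hypot(*q)))

O = (0.0, 0.0, 0.0)
A, B, C = (1.0, 2.0, 2.0), (3.0, 0.0, 1.0), (0.5, 1.5, 3.0)

x, y, z = math.dist(O, A), math.dist(O, B), math.dist(O, C)   # OA, OB, OC
m, l, n = math.dist(A, B), math.dist(B, C), math.dist(A, C)   # AB, BC, AC
alpha = angle_at_origin(A, B)
beta = angle_at_origin(B, C)
gamma = angle_at_origin(A, C)

# residuals of the three relations of formula (4); all vanish
res_ab = m ** 2 - (x * x + y * y - 2 * x * y * math.cos(alpha))
res_bc = l ** 2 - (y * y + z * z - 2 * y * z * math.cos(beta))
res_ac = n ** 2 - (x * x + z * z - 2 * x * z * math.cos(gamma))
```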
So far, for the triangular pyramid OABC in the spherical coordinate system, because the longitudes and latitudes of A, B and C are known and the lengths of OA, OB and OC have been determined, the spatial coordinates of the three points A, B and C in the spherical coordinate system can be determined according to formula (2); that is, the spatial position of the triangular pyramid OABC in the spherical coordinate system is determined. Therefore, the spatial coordinates of the vertical intersection point O' in the triangular pyramid OABC, namely the length of OO' and the longitude and latitude of O' in the spherical coordinate system, that is, the position parameters of the vertical intersection point, can be determined according to space vector geometry.
(3) And establishing a preset coordinate conversion model according to the position parameters of the vertical intersection points in the spherical coordinate system and the longitude and latitude of any one of the at least four calibration points in the spherical coordinate system.
If any one of the at least four calibration points is the point A shown in fig. 3C or fig. 3D, then for any point X in the plane coordinate system, the geometric model shown in fig. 3E can be constructed from the point X, the vertical intersection point O' and the calibration point A. As shown in fig. 3E, the coordinates of X in the plane coordinate system are known, so the lengths of O'X and XA can be determined directly; and because the triangle OO'X is a right triangle, the length of OX can be determined directly once OO' and O'X are known. At this point, all side lengths of the triangle OO'X and the triangle OAX are determined, so cos ∠AOX and cos ∠O'OX can be determined by the following formula (5) according to the law of cosines:

cos ∠AOX = (OA² + OX² − AX²) / (2 · OA · OX)
cos ∠O'OX = (OO'² + OX² − O'X²) / (2 · OO' · OX)    (5)
Let the longitude and latitude of X in the spherical coordinate system be (θ, φ), and the longitude and latitude of O' be (θ1, φ1). According to formula (2) with r = 1, the unit vector of the vector OX can be determined as OX', the unit vector of the vector OO' as OO'', and the unit vector of the vector OA as OA'. The angle ∠AOX between the vectors OA and OX is also the angle between the unit vectors OA' and OX', and the angle ∠O'OX between the vectors OO' and OX is also the angle between the unit vectors OO'' and OX'.
Meanwhile, the server may determine the length of A'X' and the length of O''X' according to the following formula (6), in which the spatial coordinates of the end points A', O'' and X' of the unit vectors are expressed through formula (2) with r = 1 in terms of the longitudes and latitudes (θ3, φ3), (θ1, φ1) and (θ, φ) of A, O' and X, respectively:

O''X'² = (cos φ1 · cos θ1 − cos φ · cos θ)² + (cos φ1 · sin θ1 − cos φ · sin θ)² + (sin φ1 − sin φ)²
A'X'² = (cos φ3 · cos θ3 − cos φ · cos θ)² + (cos φ3 · sin θ3 − cos φ · sin θ)² + (sin φ3 − sin φ)²    (6)

The above formula (6) can be abbreviated as

O''X'² = f2(θ1, φ1, θ, φ), A'X'² = f2(θ3, φ3, θ, φ)
Further, the length of O''X' and the length of A'X' can also be obtained by the law of cosines, as shown in the following formula (7):

O''X'² = OO''² + OX'² − 2 · OO'' · OX' · cos ∠O'OX = 2 − 2 · cos ∠O'OX
A'X'² = OA'² + OX'² − 2 · OA' · OX' · cos ∠AOX = 2 − 2 · cos ∠AOX    (7)

Substituting formula (5) into formula (7) gives the transformed formula (7):

O''X'² = 2 − (OO'² + OX² − O'X²) / (OO' · OX)
A'X'² = 2 − (OA² + OX² − AX²) / (OA · OX)

In the transformed formula (7), X is an arbitrary point in the plane coordinate system, that is, OX, O'X and AX are undetermined parameters, so the transformed formula (7) can be abbreviated as O''X'² = f1(O'X, OX) and A'X'² = f1(OX, AX).
From the abbreviated formula (6) and the abbreviated formula (7), the following formula (8) can be obtained:

f2(θ1, φ1, θ, φ) = f1(O'X, OX)
f2(θ3, φ3, θ, φ) = f1(OX, AX)    (8)

In formula (8), for any point X in the plane coordinate system, once the coordinates of X in the plane coordinate system are determined, the three undetermined parameters OX, O'X and AX can be obtained. Because the longitude and latitude (θ1, φ1) of the vertical intersection point O' and the longitude and latitude (θ3, φ3) of the calibration point A in formula (8) are already determined parameters, the longitude and latitude (θ, φ) of the point X in the spherical coordinate system can be determined according to formula (8).
Therefore, in order to determine the longitude and latitude corresponding, in the spherical coordinate system, to the central point of the area occupied by the target object in the first video picture, the server may establish the following preset coordinate conversion model according to formula (8):
[preset coordinate conversion model — two equations, shown as images in the original document]
where x₁ is the distance between the central point of the area occupied by the target object in the first video picture and the vertical intersection point, x₂ is the distance between the central point of the area occupied by the target object in the first video picture and the origin of the spherical coordinate system, x₃ is the distance between the central point of the area occupied by the target object in the first video picture and any one of the at least four calibration points, φ and θ are respectively the longitude and latitude, in the spherical coordinate system, of the central point of the area occupied by the target object in the first video picture, φ₁ and θ₁ are respectively the longitude and latitude of the vertical intersection point in the spherical coordinate system, and φ₃ and θ₃ are respectively the longitude and latitude of any one of the at least four calibration points in the spherical coordinate system.
According to the preset coordinate conversion model, the implementation manner of determining, in the spherical coordinate system, the longitude and latitude corresponding to the central point of the area occupied by the target object in the first video picture according to the determined coordinates and the preset coordinate conversion model may be as follows: determining, according to the determined coordinates, the distance x₁ between the central point of the area occupied by the target object in the first video picture and the vertical intersection point, the distance x₂ between the central point of the area occupied by the target object in the first video picture and the origin of the spherical coordinate system, and the distance x₃ between the central point of the area occupied by the target object in the first video picture and any one of the at least four calibration points, where the vertical intersection point is the intersection point between the plane coordinate system and a straight line that passes through the origin of the spherical coordinate system and is perpendicular to the plane coordinate system, and the at least four calibration points are randomly selected from the first video picture; and determining, according to x₁, x₂, x₃, and the preset coordinate conversion model, the longitude and latitude corresponding, in the spherical coordinate system, to the central point of the area occupied by the target object in the first video picture.
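As a hedged illustration of the distance computation just described, the following Python sketch computes x₁, x₂, and x₃ for a point. It assumes, beyond what the patent states explicitly, that the origin O of the spherical coordinate system sits at a known height h directly above the vertical intersection point O′ (which follows from O′ being the foot of the perpendicular from O); the coordinates and function name are hypothetical.

```python
import math

def distances_for_point(x, o_prime, a, h):
    """Compute the three undetermined parameters (x1, x2, x3) for the
    preset coordinate conversion model.

    x, o_prime, a: (x, y) plane coordinates of the target center point X,
    the vertical intersection point O', and one calibration point A.
    h: assumed height of the spherical coordinate system's origin O
    above the plane (O-O' is perpendicular to the plane by definition).
    """
    x1 = math.dist(x, o_prime)   # in-plane distance X-O'
    x2 = math.hypot(x1, h)       # distance X-O via the right triangle O-O'-X
    x3 = math.dist(x, a)         # in-plane distance X-A
    return x1, x2, x3
```

Since OO′ is perpendicular to the plane coordinate system, x₂ follows from x₁ and h by the Pythagorean theorem; the model itself (images in the original) would then map (x₁, x₂, x₃) to (φ, θ).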
Step 303: and the server searches pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from the pixel points included in the second video picture.
Specifically, step 303 can be implemented by the following two possible implementations:
in a first possible implementation manner, in the second video picture, a first target area with an area as a preset area is determined by taking a central point of the second video picture as a center, the preset area is determined according to the aging degree of the spherical IPC, and pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture are searched from all pixel points included in the first target area.
In order to improve the efficiency of searching for the pixel point with the characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture in the second video picture, the pixel point with the characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture can be searched only in all the pixel points included in the first target area in the second video picture. The preset area is determined by the server according to the aging degree of the spherical IPC, namely, the corresponding relation between the aging degree of the spherical IPC and the preset area is stored in the server, and when the server determines the aging degree of the spherical IPC, the preset area of the first target area can be determined according to the corresponding relation between the aging degree of the spherical IPC and the preset area. In addition, the aging degree is determined by the server according to the factory service time of the spherical IPC, namely for each spherical IPC, the factory service time of the spherical IPC is stored in the server, and the factory service time of the spherical IPC can be determined according to the factory service time of the spherical IPC; the server can determine the aging degree of the spherical IPC according to the corresponding relation between the factory service time and the aging degree. It should be noted that, in the embodiment of the present invention, the shape of the first target area may be a rectangle or a circle, and the embodiment of the present invention is not limited specifically herein.
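The two stored correspondences described above (factory service time → aging degree, aging degree → preset area) amount to simple table lookups. The following sketch illustrates them; the thresholds, degree labels, and area values are illustrative assumptions, not values from the patent.

```python
# Hypothetical correspondence tables stored on the server.
AGING_BY_SERVICE_YEARS = [(1, "low"), (3, "medium"), (float("inf"), "high")]
PRESET_AREA_BY_AGING = {"low": 900, "medium": 2500, "high": 4900}  # pixels^2

def aging_degree(service_years):
    """Map the spherical IPC's factory service time to an aging degree."""
    for max_years, degree in AGING_BY_SERVICE_YEARS:
        if service_years <= max_years:
            return degree

def preset_area(service_years):
    """Preset area of the first target region: a more aged dome camera
    drifts more, so a larger search window is used."""
    return PRESET_AREA_BY_AGING[aging_degree(service_years)]
```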
Particularly, the implementation manner of searching for the pixel point whose feature information matches with the feature information of the central point of the region occupied by the target object in the first video picture from all the pixel points included in the first target region may be: acquiring characteristic information of all pixel points included in a first target area; for each pixel point in all pixel points included in the first target area, carrying out average operation and range operation on pixel difference values included in the characteristic information of the pixel point to obtain a first average value and a first range value, and determining the product of the first average value and the first range value as the characteristic value of the pixel point; carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as a target feature value; and selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all the pixel points included in the first target area, and determining the pixel point corresponding to the selected characteristic value as a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
For example, the first target area includes 50 pixel points, and the server obtains the feature information of the 50 pixel points by an optical flow method, namely P1(G1, G2, G3, G4, G5, G6, G7, G8), P2(G1, G2, G3, G4, G5, G6, G7, G8), …, P50(G1, G2, G3, G4, G5, G6, G7, G8). For the feature information of each of the 50 pixel points, the feature value of the pixel point is determined according to the formula p = ((G1 + G2 + … + G8)/8) × (max{G1, …, G8} − min{G1, …, G8}), that is, the product of the average value and the range of the eight pixel difference values, so as to obtain the feature values p1, p2, …, p50 of the 50 pixel points. The feature value p of the central point of the area occupied by the target object in the first video picture is also determined according to this formula, and p is determined as the target feature value. From the feature values p1, p2, …, p50 of the 50 pixel points, the feature value closest to the target feature value p, that is, the one with the smallest difference, is selected; assuming the selected feature value is p30, the pixel point corresponding to the feature value p30 is the pixel point whose feature information matches the feature information of the central point of the area occupied by the target object in the first video picture.
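The matching rule used in this example — feature value = (mean of the eight pixel difference values) × (their range), nearest value wins — can be sketched as follows. The representation of each pixel point's feature information as a tuple of eight numbers is an assumption for illustration.

```python
def feature_value(diffs):
    """Product of the average and the range (max - min) of a pixel
    point's pixel difference values, as described in the text."""
    return (sum(diffs) / len(diffs)) * (max(diffs) - min(diffs))

def best_match(candidates, target_info):
    """Return the index of the candidate pixel point whose feature value
    differs least from the target feature value."""
    target = feature_value(target_info)
    return min(range(len(candidates)),
               key=lambda i: abs(feature_value(candidates[i]) - target))
```

Note that because the comparison is on a single scalar per point, two different feature-information tuples can collide on the same feature value; the patent relies on the claimed uniqueness of per-point feature information to make this unlikely in practice.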
Optionally, the central point of the area occupied by the target object in the second video image may not be located in the first target area, that is, at this time, a pixel point whose feature information matches with the feature information of the central point of the area occupied by the target object in the first video image is searched in the first target area, and the searched pixel point may not be the central point of the area occupied by the target object in the second video image, so the server needs to reduce the current second video image, and re-search the reduced video image for a pixel point whose feature information matches with the feature information of the central point of the area occupied by the target object in the first video image. Specifically, the server may reduce the second video image for multiple times according to a preset rule to obtain multiple third video images; for each third video picture in the plurality of third video pictures, determining a second target region with the area as a preset area by taking the center point of the third video picture as the center in the third video picture, thereby obtaining a plurality of second target regions; and searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from all pixel points included in the plurality of second target areas.
The implementation process of the server searching for the pixel point with the characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from all the pixel points included in the plurality of second target areas may be as follows: for each second target area in the plurality of second target areas, acquiring characteristic information of all pixel points included in the second target area; for each pixel point in all pixel points included in the second target area, carrying out average operation and range operation on pixel difference values included in the characteristic information of the pixel point to obtain a third average value and a third range value, and determining the product of the third average value and the third range value as the characteristic value of the pixel point; carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as a target feature value; for each second target area in the plurality of second target areas, selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all pixel points included in the second target area, and determining the pixel point corresponding to the selected characteristic value as a target pixel point, thereby obtaining a plurality of target pixel points; and selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of the plurality of target pixel points, and determining the target pixel point corresponding to the selected characteristic value as a
pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
It should be noted that the area of the second target area is also a preset area, and the preset area is determined according to the aging degree of the spherical IPC. The implementation manner of the server determining the preset area of the second target area according to the aging degree of the spherical IPC is the same as the implementation manner of the server determining the preset area of the first target area according to the aging degree of the spherical IPC, and is not described in detail herein. In addition, the shape of the second target area may be a rectangle or a circle, and the embodiment of the present invention is not limited specifically herein. In particular, the area and shape of the second target region may be the same as the area and shape of the first target region.
In addition, the process of reducing the second video picture multiple times according to the preset rule to obtain a plurality of third video pictures may be as follows: the current second video picture is reduced according to the preset rule to obtain the first of the third video pictures; the first third video picture is further reduced according to the preset rule to obtain the second third video picture; the second third video picture is further reduced according to the preset rule to obtain the third one, and so on, so as to obtain a plurality of third video pictures. That is, each reduction of the current picture according to the preset rule yields one third video picture. The preset rule may be to remove one pixel point from each row and each column of the current video picture at each reduction, that is, each row of pixel points and each column of pixel points of the current video picture loses one pixel point each time.
In addition, when the server performs multiple reductions on the second video frame according to a preset rule, the number of times that the server performs the video frame reduction may be a preset number of times, and the preset number of times may be 5, 10, or 15, and the like. In particular, the server may reduce the second video screen a plurality of times until the scale of the reduced video screen coincides with the scale of the first video screen.
For example, the server reduces each pixel row and each pixel column in the current second video picture by one pixel point to obtain the first third video picture; each pixel row and pixel column in the first third video picture is further reduced by one pixel point to obtain the second third video picture; and so on, until the scale of the reduced video picture is consistent with that of the first video picture, at which point 20 third video pictures are obtained. The server then determines 20 second target areas in the 20 third video pictures according to the method described above. For each of the 20 second target areas, the server determines, among all pixel points included in that second target area, the target pixel point whose feature value has the smallest difference from the target feature value; after the server performs this operation on all 20 second target areas, 20 target pixel points are obtained. The server then selects, from the 20 target pixel points, the target pixel point whose feature value has the smallest difference from the target feature value, and determines the selected target pixel point as the pixel point whose feature information matches the feature information of the central point of the area occupied by the target object in the first video picture.
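The multi-scale reduction described above can be sketched as follows. The shrink step (one pixel point per row and per column per iteration) follows the preset rule in the text, while representing a picture as a list of rows and dropping the last row/column are illustrative assumptions — a real implementation would resample rather than crop.

```python
def shrink_once(image):
    """Preset rule: remove one pixel point from every row and every
    column, reducing an H x W picture to (H-1) x (W-1). Dropping the
    last row and column is an illustrative choice."""
    return [row[:-1] for row in image[:-1]]

def pyramid(image, target_height):
    """Repeatedly shrink the second video picture until its scale
    matches the first video picture's (approximated here by a target
    height), collecting each intermediate third video picture."""
    frames = []
    while len(image) > target_height:
        image = shrink_once(image)
        frames.append(image)
    return frames
```

A second target region of the preset area would then be cut around the center of each frame in `frames` and searched with the same feature-value comparison as in the first implementation.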
In a second possible implementation manner, a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture is searched in all pixel points included in the second video picture.
Of course, the server may not determine the first target region in the second video picture, and directly search, among all the pixel points included in the second video picture, for the pixel point whose feature information matches the feature information of the center point of the region occupied by the target object in the first video picture, where an implementation manner of searching, among all the pixel points included in the second video picture, for the pixel point whose feature information matches the feature information of the center point of the region occupied by the target object in the first video picture is substantially the same as an implementation manner of searching, among all the pixel points included in the first target region, for the pixel point whose feature information matches the feature information of the center point of the region occupied by the target object in the first video picture, and detailed description is omitted here.
Step 304: and the server determines a first rotation angle of the spherical IPC according to the longitude and the latitude of the searched pixel point corresponding to the spherical coordinate system of the spherical IPC, wherein the first rotation angle is an angle which the spherical IPC needs to rotate when the central point of the second video picture is rotated to the central point of the area occupied by the target object in the second video picture.
It should be noted that, since the found pixel point is a pixel point in the video image acquired by the spherical IPC, and the found pixel point is a pixel point whose characteristic information matches with the characteristic information of the central point of the area occupied by the target object in the first video image, the found pixel point can represent the central point of the area occupied by the target object in the second video image, and therefore the first rotation angle is also an angle at which the spherical IPC needs to be rotated when the central point of the second video image is rotated to the found pixel point.
Thus, the server determines the first rotation angle of the spherical IPC may be: the server determines the corresponding longitude and latitude of the searched pixel point in the spherical coordinate system; according to the longitude and the latitude of the center point of the video picture acquired by the current spherical IPC, determining the difference between the longitude of the center point of the video picture acquired by the current spherical IPC and the longitude of the searched pixel point in the spherical coordinate system to obtain a third angle, and determining the third angle as the angle of the first rotating angle in the horizontal direction; and simultaneously, the server determines the difference between the latitude of the central point of the video picture acquired by the current spherical IPC and the latitude corresponding to the searched pixel point in the spherical coordinate system to obtain a fourth angle, the fourth angle is determined as the angle of the first rotation angle in the vertical direction, and the first rotation angle is also determined when the server determines the angle of the first rotation angle in the horizontal direction and the angle of the first rotation angle in the vertical direction.
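The computation above reduces to two subtractions in the spherical coordinate system — one for the horizontal (longitude) component and one for the vertical (latitude) component. A minimal sketch follows; the wrap-around normalization of the horizontal angle is an added assumption beyond the patent text.

```python
def first_rotation_angle(center, target):
    """center, target: (longitude, latitude) in degrees of the current
    picture's center point and of the matched pixel point.
    Returns (horizontal, vertical) rotation angles for the spherical IPC."""
    pan = target[0] - center[0]
    # Keep the horizontal rotation within (-180, 180] so the dome turns
    # the shorter way around -- an assumption, not stated in the patent.
    pan = (pan + 180) % 360 - 180
    tilt = target[1] - center[1]
    return pan, tilt
```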
Step 305: the server sends a first rotation request to the spherical IPC, the first rotation request carries a first rotation angle, and the first rotation request is used for indicating the spherical IPC to rotate according to the first rotation angle.
When the spherical IPC receives a first rotation request sent by a server, rotating according to a first rotation angle carried in the first rotation request, re-acquiring a video picture after rotating, and sending the video picture acquired after rotating according to the first rotation angle to the server; when the server receives the video pictures acquired after the spherical IPC rotates according to the first rotation angle, the server sends the received video pictures to the client, so that a user can view the video pictures acquired after the spherical IPC rotates according to the first rotation angle through the client. It should be noted that, since the first rotation angle is an angle at which the spherical IPC needs to rotate when the center point of the second video frame is rotated to the center point of the area occupied by the target object in the second video frame, in the video frame acquired after the spherical IPC is rotated according to the first rotation angle, the center point of the area occupied by the target object in the video frame is the center point of the video frame.
In the embodiment of the invention, when the server determines the target object to be monitored in the first video picture acquired by the gun-shaped IPC, the characteristic information of the central point of the area occupied by the target object in the first video picture is acquired, and because the characteristic information of each pixel point has uniqueness, the pixel point with the characteristic information matched with the acquired characteristic information can be searched in the second video picture acquired by the ball-shaped IPC, and the searched pixel point can represent the central point of the area occupied by the target object in the second video picture. And then determining a first rotation angle of the spherical IPC according to the longitude and the latitude of the searched pixel point corresponding to the spherical coordinate system of the spherical IPC, wherein the first rotation angle is an angle which the spherical IPC needs to rotate when the central point of the second video picture is rotated to the central point of the area occupied by the target object in the second video picture, so that after the spherical IPC receives the first rotation request and rotates according to the first rotation angle, the central point of the video picture collected by the spherical IPC is the central point of the area occupied by the target object in the second video picture, the situation that the target object possibly does not exist in the center of the video picture collected by the spherical IPC due to the aging problem of the spherical IPC is avoided, and the effect of monitoring the details of the target object through the spherical IPC is improved. 
In addition, the server also establishes a preset coordinate conversion model in advance, the preset coordinate conversion model is used for indicating the position of a point in a plane coordinate system of a video picture acquired by the gun-type IPC in a spherical coordinate system of a video picture acquired by the spherical IPC, namely for any point in the video picture acquired by the gun-type IPC, the server can determine the longitude and the latitude of the any point in the spherical coordinate system through the preset coordinate conversion model, and the spherical IPC is controlled to rotate according to the longitude and the latitude of the any point so as to realize video monitoring of the any point.
Fig. 4A is a schematic diagram of a video monitoring apparatus 400 based on a gun and ball linkage according to an embodiment of the present invention, which may be implemented by software, hardware, or a combination of the two as part or all of a server. When the IPC shown in fig. 1 directly communicates with the client via a wireless network or a wired network, the video monitoring apparatus based on gun-ball linkage may be implemented as part or all of the IPC by software, hardware, or a combination of the two, and the IPC may be the IPC shown in fig. 2. Referring to fig. 4A, the apparatus includes an obtaining module 401, a searching module 402, a first determining module 403, and a first transmitting module 404.
An obtaining module 401, configured to perform step 302 in the embodiment of fig. 3A;
a searching module 402, configured to execute step 303 in the embodiment of fig. 3A, where the second video picture is a video picture currently acquired by a spherical IPC configured for a gun-type IPC;
a first determining module 403, configured to perform step 304 in the embodiment of fig. 3A;
a first sending module 404, configured to perform step 305 in the embodiment of fig. 3A.
Optionally, referring to fig. 4B, the search module 402 includes a first determining unit 4021 and a first searching unit 4022:
a first determining unit 4021, configured to determine, in a second video frame, a first target region having an area as a preset area, where the first target region is determined by using a center point of the second video frame as a center, and the preset area is determined according to an aging degree of a spherical IPC;
the first searching unit 4022 is configured to search, from all the pixel points included in the first target area, a pixel point whose feature information matches with the feature information of the center point of the area occupied by the target object in the first video image.
Optionally, the first search unit 4021 is specifically configured to:
acquiring characteristic information of all pixel points included in a first target area;
for each pixel point in all pixel points included in the first target area, carrying out average operation and range operation on pixel difference values included in the characteristic information of the pixel point to obtain a first average value and a first range value, and determining the product of the first average value and the first range value as the characteristic value of the pixel point;
carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as a target feature value;
and selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all the pixel points included in the first target area, and determining the pixel point corresponding to the selected characteristic value as a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
Optionally, referring to fig. 4C, the search module 402 includes a reduction unit 4023, a second determination unit 4024, and a second search unit 4025:
a reducing unit 4023, configured to reduce the second video picture for multiple times according to a preset rule to obtain multiple third video pictures;
a second determining unit 4024, configured to determine, for each of the plurality of third video pictures, a second target area having an area as a preset area in the third video picture with a center point of the third video picture as a center, where the preset area is determined according to an aging degree of the spherical IPC;
the second searching unit 4025 is configured to search, from all pixel points included in the obtained plurality of second target regions, pixel points whose feature information matches with the feature information of the center point of the region occupied by the target object in the first video image.
Optionally, the second search unit 4025 is specifically configured to:
for each second target area in the plurality of second target areas, acquiring characteristic information of all pixel points included in the second target area;
for each pixel point in all pixel points included in the second target region, performing average operation and range operation on pixel difference values included in the feature information of the pixel point to obtain a third average value and a third range value, and determining the product of the third average value and the third range value as the feature value of the pixel point;
carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as a target feature value;
for each second target area in the plurality of second target areas, selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all pixel points included in the second target area, and determining the pixel point corresponding to the selected characteristic value as a target pixel point;
and selecting a characteristic value with the minimum difference value with the target characteristic value from the obtained characteristic values of the plurality of target pixel points, and determining the target pixel point corresponding to the selected characteristic value as a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
Optionally, referring to fig. 4D, the apparatus 400 further includes a second determining module 405, a third determining module 406, a fourth determining module 407, a second sending module 408, and a receiving module 409:
a second determining module 405, configured to determine a coordinate of a central point of an area occupied by the target object in the first video frame, in a plane coordinate system of the gun type IPC;
a third determining module 406, configured to determine, according to the determined coordinate and a preset coordinate conversion model, a longitude and a latitude, which correspond to a central point of an area occupied by the target object in the first video frame, in a spherical coordinate system, where the preset coordinate conversion model is used to convert coordinates of a point in a planar coordinate system into the spherical coordinate system;
a fourth determining module 407, configured to determine a second rotation angle of the spherical IPC according to the determined longitude and latitude;
a second sending module 408, configured to send a second rotation request to the spherical IPC, where the second rotation request carries a second rotation angle;
the receiving module 409 is configured to receive a second video picture acquired by the spherical IPC, where the second video picture is a video picture acquired after the spherical IPC rotates according to a second rotation angle when receiving a second rotation request.
Optionally, referring to fig. 4E, the third determining module 406 includes a third determining unit 4061 and a fourth determining unit 4062:
a third determining unit 4061, configured to determine, according to the determined coordinates, a distance x1 between the central point of the area occupied by the target object in the first video picture and the vertical intersection point, a distance x2 between the central point of the area occupied by the target object in the first video picture and the origin of the spherical coordinate system, and a distance x3 between the central point of the area occupied by the target object in the first video picture and any one of the at least four calibration points, where the vertical intersection point is the point at which a straight line passing through the origin of the spherical coordinate system and perpendicular to the plane coordinate system intersects the plane coordinate system, and the at least four calibration points are randomly selected from the first video picture;
a fourth determining unit 4062, configured to determine, according to x1, x2, x3, and the preset coordinate conversion model, the longitude and the latitude corresponding to the central point of the area occupied by the target object in the first video picture in the spherical coordinate system;
where the preset coordinate conversion model consists of two formulas (rendered only as images in the original publication) that compute the longitude φ and the latitude θ from x1, x2, x3 and the calibration parameters; φ and θ are respectively the longitude and the latitude, in the spherical coordinate system, corresponding to the central point of the area occupied by the target object in the first video picture; φ1 and θ1 are respectively the longitude and the latitude of the vertical intersection point in the spherical coordinate system; and φ3 and θ3 are respectively the longitude and the latitude of any one of the at least four calibration points in the spherical coordinate system.
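The two conversion formulas themselves survive only as images, but the distances they consume are recoverable from plane geometry. The sketch below is an illustrative assumption, not text from the patent: it treats the vertical intersection point as the foot of the perpendicular dropped from the spherical origin onto the picture plane (consistent with the definition above), so x2 follows from x1 and that perpendicular distance by the Pythagorean theorem. All function and parameter names are hypothetical.

```python
import math

def plane_distances(point, foot, calib, h):
    """Distances consumed by the preset coordinate conversion model.

    point -- (u, v) plane coordinates of the target's central point
    foot  -- (u, v) plane coordinates of the vertical intersection point,
             assumed to be the foot of the perpendicular from the spherical origin
    calib -- (u, v) plane coordinates of one of the calibration points
    h     -- distance between the vertical intersection point and the spherical
             origin (part of the position parameter)
    """
    x1 = math.dist(point, foot)    # central point to vertical intersection point
    x2 = math.hypot(x1, h)         # central point to spherical origin (Pythagoras)
    x3 = math.dist(point, calib)   # central point to the chosen calibration point
    return x1, x2, x3
```

For example, a point at plane coordinates (3, 4) with the foot at the origin and a perpendicular distance of 12 yields x1 = 5 and x2 = 13.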
Optionally, referring to fig. 4F, the apparatus 400 further comprises:
a selecting module 410, configured to randomly select at least four calibration points in the first video picture, where any three of the at least four calibration points are not collinear;
a fifth determining module 411, configured to determine coordinates of each of the at least four calibration points in the planar coordinate system and longitude and latitude in the spherical coordinate system;
a sixth determining module 412, configured to determine, according to coordinates of each of the at least four calibration points in the planar coordinate system and longitude and latitude in the spherical coordinate system, a position parameter of the vertical intersection point in the spherical coordinate system, where the position parameter includes a distance between the vertical intersection point and an origin of the spherical coordinate system, and longitude and latitude of the vertical intersection point in the spherical coordinate system;
the establishing module 413 is configured to establish the preset coordinate transformation model according to the position parameter of the vertical intersection point in the spherical coordinate system and the longitude and latitude of any one of the at least four calibration points in the spherical coordinate system.
Optionally, referring to fig. 4G, the fifth determining module 411 includes a control unit 4111, a fifth determining unit 4112, and a sixth determining unit 4113:
a control unit 4111, configured to control, for each of the at least four calibration points, a center point of a video frame acquired by the spherical IPC to rotate to the calibration point from a position where longitude and latitude of a spherical coordinate system are both zero;
a fifth determining unit 4112, configured to determine a rotation angle of the spherical IPC in the horizontal direction and a rotation angle of the spherical IPC in the vertical direction;
a sixth determining unit 4113, configured to determine a rotation angle of the spherical IPC in the horizontal direction as a longitude of the calibration point in the spherical coordinate system, and determine a rotation angle of the spherical IPC in the vertical direction as a latitude of the calibration point in the spherical coordinate system.
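The calibration loop performed by the control unit and the fifth and sixth determining units can be sketched as follows. The patent does not define any control API for the spherical IPC, so the `DomeStub` class and its `move_to`, `center_on`, and `angles` methods are hypothetical stand-ins for whatever pan-tilt control channel the camera exposes.

```python
class DomeStub:
    """Minimal stand-in for a spherical-IPC PTZ controller (illustrative only)."""
    def __init__(self, angle_for):
        self._angle_for = angle_for          # plane point -> (pan, tilt) it lands on
        self.pan = self.tilt = 0.0
    def move_to(self, pan, tilt):            # drive to absolute angles
        self.pan, self.tilt = pan, tilt
    def center_on(self, plane_xy):           # rotate until the point is at frame centre
        self.pan, self.tilt = self._angle_for[plane_xy]
    def angles(self):                        # accumulated rotation in each direction
        return self.pan, self.tilt

def calibrate(ptz, points):
    """For each calibration point: start from longitude = latitude = 0, rotate the
    frame centre onto the point, and record the horizontal/vertical rotation
    angles as the point's longitude/latitude in the spherical coordinate system."""
    table = []
    for p in points:
        ptz.move_to(0.0, 0.0)
        ptz.center_on(p)
        pan, tilt = ptz.angles()
        table.append((p, pan, tilt))
    return table
```

With a real camera, `center_on` would be a closed-loop operation (rotate, re-detect the point, repeat); the stub simply looks the angles up.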
In the embodiment of the invention, when the server determines the target object to be monitored in the first video picture acquired by the gun-shaped IPC, the server acquires the feature information of the central point of the area occupied by the target object in the first video picture. Because the feature information of each pixel point is unique, a pixel point whose feature information matches the acquired feature information can be searched for in the second video picture acquired by the spherical IPC, and the found pixel point represents the central point of the area occupied by the target object in the second video picture. The first rotation angle of the spherical IPC is then determined according to the longitude and the latitude corresponding to the found pixel point in the spherical coordinate system of the spherical IPC, where the first rotation angle is the angle through which the spherical IPC needs to rotate to move the central point of the second video picture onto the central point of the area occupied by the target object in the second video picture. Therefore, after the spherical IPC receives the first rotation request and rotates by the first rotation angle, the central point of the video picture collected by the spherical IPC is the central point of the area occupied by the target object. This avoids the situation in which, owing to aging of the spherical IPC, the target object is absent from the center of the collected video picture, and improves the effect of monitoring the details of the target object through the spherical IPC.
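The matching scheme summarized above (feature information as pixel differences against a neighborhood; feature value as the product of an average operation and a range operation; match by minimum feature-value difference) can be sketched as follows. The 3×3 neighborhood, grayscale input, and exhaustive scan are illustrative assumptions not fixed by the patent.

```python
import numpy as np

def neighborhood_diffs(img, y, x, r=1):
    """Pixel differences between (y, x) and its (2r+1)^2 - 1 neighbourhood pixels."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
    diffs = patch - int(img[y, x])
    return np.delete(diffs.ravel(), diffs.size // 2)  # drop the centre pixel itself

def feature_value(diffs):
    """Average operation times range operation: mean(diffs) * (max - min)."""
    return float(np.mean(diffs)) * float(np.ptp(diffs))

def best_match(target_fv, img, r=1):
    """Pixel in `img` whose feature value differs least from `target_fv`."""
    h, w = img.shape
    best_diff, best_pt = None, None
    for y in range(r, h - r):
        for x in range(r, w - r):
            fv = feature_value(neighborhood_diffs(img, y, x, r))
            if best_diff is None or abs(fv - target_fv) < best_diff:
                best_diff, best_pt = abs(fv - target_fv), (y, x)
    return best_pt
```

In practice the scan would be restricted to the first target area around the picture center, as the optional refinements describe, rather than covering the whole second video picture.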
In addition, the server establishes the preset coordinate conversion model in advance. The preset coordinate conversion model indicates, for a point in the plane coordinate system of the video picture acquired by the gun-type IPC, its position in the spherical coordinate system of the video picture acquired by the spherical IPC. That is, for any point in the video picture acquired by the gun-type IPC, the server can determine the longitude and the latitude of that point in the spherical coordinate system through the preset coordinate conversion model, and control the spherical IPC to rotate according to that longitude and latitude, so as to realize video monitoring of the point.
It should be noted that, when the video monitoring device based on gun and ball linkage provided by the above embodiment performs video monitoring, the division into the functional modules described above is merely used as an example. In practical applications, the functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the video monitoring device based on gun and ball linkage provided by the above embodiment and the video monitoring method based on gun and ball linkage provided in the embodiment of fig. 3A belong to the same concept; for the specific implementation process, refer to the embodiment of fig. 3A, and details are not repeated here.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (e.g., by infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid-state disk (SSD)).
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above-mentioned embodiments are not intended to limit the present application; any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application shall fall within the protection scope of the present application.

Claims (18)

1. A video monitoring method based on gun and ball linkage is characterized by comprising the following steps:
acquiring feature information of a central point of an area occupied by a target object in a first video picture, wherein the first video picture is a video picture acquired by a gun-shaped network camera IPC, the feature information comprises pixel difference values between a pixel value of the central point and pixel values of a plurality of neighborhood pixels, and the neighborhood pixels are pixels in the neighborhood of the central point in the first video picture;
searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture in pixel points included in a second video picture, wherein the second video picture is a video picture currently acquired by a spherical IPC configured for the gun-shaped IPC;
determining a first rotation angle of the spherical IPC according to the longitude and latitude corresponding to the searched pixel point in the spherical coordinate system of the spherical IPC, wherein the first rotation angle is an angle which the spherical IPC needs to rotate when the central point of the second video picture is rotated to the central point of the area occupied by the target object in the second video picture;
sending a first rotation request to the spherical IPC, wherein the first rotation request carries the first rotation angle, and the first rotation request is used for indicating the spherical IPC to rotate according to the first rotation angle;
the searching for the pixel point with the characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from the pixel points included in the second video picture comprises the following steps:
searching, according to a target feature value and feature values of the pixel points included in the second video picture, for a pixel point whose feature information matches the feature information of the central point of the area occupied by the target object in the first video picture among the pixel points included in the second video picture, wherein the feature value of a pixel point is related to the average size and the degree of dispersion of the pixel difference values included in the feature information of the pixel point, and the target feature value is related to the average size and the degree of dispersion of the pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture.
2. The method of claim 1,
the searching, according to the target feature value and the feature value of the pixel point included in the second video image, for the pixel point whose feature information matches with the feature information of the center point of the area occupied by the target object in the first video image from the pixel point included in the second video image includes:
in the second video picture, a first target area with the area as a preset area is determined by taking the central point of the second video picture as a center, and the preset area is determined according to the aging degree of the spherical IPC;
and searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from all the pixel points included in the first target area according to the target characteristic value and the characteristic values of all the pixel points included in the first target area.
3. The method according to claim 2, wherein the searching for a pixel point whose feature information matches with the feature information of the center point of the area occupied by the target object in the first video frame from all the pixel points included in the first target area according to the target feature value and the feature values of all the pixel points included in the first target area comprises:
acquiring characteristic information of all pixel points included in the first target area;
for each pixel point in all pixel points included in the first target area, performing average operation and range operation on pixel difference values included in the feature information of the pixel points to obtain a first average value and a first range value, and determining the product of the first average value and the first range value as the feature value of the pixel point;
carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as the target feature value;
and selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all pixel points included in the first target area, and determining the pixel point corresponding to the selected characteristic value as a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
4. The method according to claim 2, wherein the searching for the pixel point whose feature information matches with the feature information of the center point of the region occupied by the target object in the first video frame among the pixel points included in the second video frame further comprises:
if the pixel point found in the first target area is not the central point of the area occupied by the target object in the second video picture, reducing the second video picture multiple times according to a preset rule to obtain a plurality of third video pictures;
for each third video picture in the plurality of third video pictures, determining a second target area with the area as a preset area in the third video picture by taking the center point of the third video picture as the center, wherein the preset area is determined according to the aging degree of the spherical IPC;
and searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from all the pixel points in the obtained second target areas.
5. The method according to claim 4, wherein the searching for a pixel point whose feature information matches with the feature information of the center point of the region occupied by the target object in the first video frame from all the pixel points included in the obtained plurality of second target regions comprises:
for each second target area in the plurality of second target areas, acquiring feature information of all pixel points included in the second target area;
for each pixel point in all pixel points included in the second target region, performing average operation and range operation on pixel difference values included in the feature information of the pixel points to obtain a third average value and a third range value, and determining a product between the third average value and the third range value as a feature value of the pixel point;
carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as the target feature value;
for each second target area in the plurality of second target areas, selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all pixel points included in the second target area, and determining the pixel point corresponding to the selected characteristic value as a target pixel point;
and selecting a characteristic value with the minimum difference value with the target characteristic value from the obtained characteristic values of the plurality of target pixel points, and determining the target pixel point corresponding to the selected characteristic value as a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
6. The method according to claim 1, wherein before searching for a pixel point whose feature information matches with the feature information of the center point of the region occupied by the target object in the first video frame among the pixel points included in the second video frame, the method further comprises:
determining the corresponding coordinate of the central point of the area occupied by the target object in the first video picture in the plane coordinate system of the gun type IPC;
determining longitude and latitude corresponding to the central point of the area occupied by the target object in the first video picture in the spherical coordinate system according to the determined coordinates and a preset coordinate conversion model, wherein the preset coordinate conversion model is used for converting the coordinates of the point in the plane coordinate system into the spherical coordinate system;
determining a second rotation angle of the spherical IPC according to the determined longitude and latitude;
sending a second rotation request to the spherical IPC, wherein the second rotation request carries the second rotation angle;
and receiving a second video picture acquired by the spherical IPC, wherein the second video picture is the video picture acquired after the spherical IPC rotates according to the second rotation angle when a second rotation request is received.
7. The method of claim 6, wherein determining the longitude and latitude of the center point of the area occupied by the target object in the first video frame in the spherical coordinate system according to the determined coordinates and a preset coordinate transformation model comprises:
determining, according to the determined coordinates, a distance x1 between the central point of the area occupied by the target object in the first video picture and a vertical intersection point, a distance x2 between the central point of the area occupied by the target object in the first video picture and the origin of the spherical coordinate system, and a distance x3 between the central point of the area occupied by the target object in the first video picture and any one of at least four calibration points, wherein the vertical intersection point is the point at which a straight line passing through the origin of the spherical coordinate system and perpendicular to the plane coordinate system intersects the plane coordinate system, and the at least four calibration points are randomly selected from the first video picture;
determining, according to x1, x2, x3, and a preset coordinate conversion model, the longitude and the latitude corresponding to the central point of the area occupied by the target object in the first video picture in the spherical coordinate system;
wherein the preset coordinate conversion model consists of two formulas (rendered only as images in the original publication) that compute the longitude φ and the latitude θ from x1, x2, x3 and the calibration parameters; φ and θ are respectively the longitude and the latitude, in the spherical coordinate system, corresponding to the central point of the area occupied by the target object in the first video picture; φ1 and θ1 are respectively the longitude and the latitude of the vertical intersection point in the spherical coordinate system; and φ3 and θ3 are respectively the longitude and the latitude of any one of the at least four calibration points in the spherical coordinate system.
8. The method according to claim 6 or 7, wherein the determining, according to the determined coordinates and a preset coordinate transformation model, before the longitude and the latitude of the center point of the area occupied by the target object in the first video frame in the spherical coordinate system, further comprises:
randomly selecting at least four calibration points in the first video picture, wherein any three of the at least four calibration points are not collinear;
determining coordinates of each of the at least four calibration points in the planar coordinate system and a longitude and latitude in the spherical coordinate system;
determining a position parameter of a vertical intersection point in the spherical coordinate system according to the coordinates of each of the at least four calibration points in the planar coordinate system and the longitude and latitude in the spherical coordinate system, wherein the position parameter comprises a distance between the vertical intersection point and an origin of the spherical coordinate system and the longitude and latitude of the vertical intersection point in the spherical coordinate system;
and establishing the preset coordinate conversion model according to the position parameter of the vertical intersection point in the spherical coordinate system and the longitude and latitude of any one of the at least four calibration points in the spherical coordinate system.
9. The method of claim 8, wherein said determining the longitude and latitude in the spherical coordinate system of each of the at least four calibration points comprises:
for each of the at least four calibration points, controlling a central point of a video picture acquired by the spherical IPC to rotate to the calibration point from a position where the longitude and the latitude of the spherical coordinate system are both zero;
determining a rotation angle of the spherical IPC in the horizontal direction and a rotation angle of the spherical IPC in the vertical direction;
and determining the rotation angle of the spherical IPC in the horizontal direction as the longitude of the calibration point in the spherical coordinate system, and determining the rotation angle of the spherical IPC in the vertical direction as the latitude of the calibration point in the spherical coordinate system.
10. A video monitoring device based on rifle ball linkage, its characterized in that, the device includes:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring characteristic information of a central point of an area occupied by a target object in a first video picture, the first video picture is a video picture acquired by a gun-shaped network camera IPC, the characteristic information comprises pixel difference values between a pixel value of the central point and pixel values of a plurality of neighborhood pixels, and the neighborhood pixels are pixels in the neighborhood of the central point in the first video picture;
the searching module is used for searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture in pixel points included in a second video picture, wherein the second video picture is a video picture currently acquired by the spherical IPC configured for the gun-shaped IPC;
the first determining module is used for determining a first rotation angle of the spherical IPC according to the longitude and the latitude of the searched pixel point corresponding to the spherical coordinate system of the spherical IPC, wherein the first rotation angle is an angle which is required to rotate the spherical IPC when the central point of the second video picture is rotated to the central point of the area occupied by the target object in the second video picture;
a first sending module, configured to send a first rotation request to the spherical IPC, where the first rotation request carries the first rotation angle, and the first rotation request is used to instruct the spherical IPC to rotate according to the first rotation angle;
the searching module is configured to search, according to a target feature value and feature values of the pixel points included in the second video picture, for a pixel point whose feature information matches the feature information of the central point of the area occupied by the target object in the first video picture among the pixel points included in the second video picture, wherein the feature value of a pixel point is related to the average size and the degree of dispersion of the pixel difference values included in the feature information of the pixel point, and the target feature value is related to the average size and the degree of dispersion of the pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture.
11. The apparatus of claim 10, wherein the lookup module comprises:
a first determining unit, configured to determine, in the second video picture, a first target area having an area as a preset area with a center point of the second video picture as a center, where the preset area is determined according to an aging degree of the spherical IPC;
and the first searching unit is used for searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from all the pixel points included in the first target area according to the target characteristic value and the characteristic values of all the pixel points included in the first target area.
12. The apparatus of claim 11, wherein the first lookup unit is specifically configured to:
acquiring characteristic information of all pixel points included in the first target area;
for each pixel point in all pixel points included in the first target area, performing average operation and range operation on pixel difference values included in the feature information of the pixel points to obtain a first average value and a first range value, and determining the product of the first average value and the first range value as the feature value of the pixel point;
carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as the target feature value;
and selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all pixel points included in the first target area, and determining the pixel point corresponding to the selected characteristic value as a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
13. The apparatus of claim 11, wherein the lookup module further comprises:
a reduction unit, configured to reduce the second video frame multiple times according to a preset rule to obtain multiple third video frames if the pixel point found in the first target region is not a central point of the region occupied by the target object in the second video frame;
a second determining unit, configured to determine, for each of the plurality of third video pictures, a second target area having an area as a preset area in the third video picture with a center point of the third video picture as a center, where the preset area is determined according to an aging degree of the spherical IPC;
and the second searching unit is used for searching pixel points with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture from all the pixel points included in the obtained second target areas.
14. The apparatus of claim 13, wherein the second lookup unit is specifically configured to:
for each second target area in the plurality of second target areas, acquiring feature information of all pixel points included in the second target area;
for each pixel point in all pixel points included in the second target region, performing average operation and range operation on pixel difference values included in the feature information of the pixel points to obtain a third average value and a third range value, and determining a product between the third average value and the third range value as a feature value of the pixel point;
carrying out average operation and range operation on pixel difference values included in the feature information of the central point of the area occupied by the target object in the first video picture to obtain a second average value and a second range value, and determining the product of the second average value and the second range value as the target feature value;
for each second target area in the plurality of second target areas, selecting a characteristic value with the minimum difference value with the target characteristic value from the characteristic values of all pixel points included in the second target area, and determining the pixel point corresponding to the selected characteristic value as a target pixel point;
and selecting a characteristic value with the minimum difference value with the target characteristic value from the obtained characteristic values of the plurality of target pixel points, and determining the target pixel point corresponding to the selected characteristic value as a pixel point with characteristic information matched with the characteristic information of the central point of the area occupied by the target object in the first video picture.
15. The apparatus of claim 10, wherein the apparatus further comprises:
the second determination module is used for determining the corresponding coordinate of the central point of the area occupied by the target object in the first video picture in the plane coordinate system of the gun type IPC;
a third determining module, configured to determine, according to the determined coordinate and a preset coordinate conversion model, a longitude and a latitude, in the spherical coordinate system, of a center point of an area occupied by the target object in the first video frame, where the preset coordinate conversion model is used to convert coordinates of a point in the planar coordinate system into the spherical coordinate system;
the fourth determining module is used for determining a second rotation angle of the spherical IPC according to the determined longitude and latitude;
the second sending module is used for sending a second rotation request to the spherical IPC, and the second rotation request carries the second rotation angle;
and the receiving module is used for receiving a second video picture acquired by the spherical IPC, and the second video picture is a video picture acquired after the spherical IPC rotates according to the second rotation angle when receiving a second rotation request.
16. The apparatus of claim 15, wherein the third determining module comprises:
a third determining unit, configured to determine, according to the determined coordinates, a distance x1 between the central point of the area occupied by the target object in the first video picture and a vertical intersection point, a distance x2 between that central point and the origin of the spherical coordinate system, and a distance x3 between that central point and any one of at least four calibration points, wherein the vertical intersection point is the point at which a straight line passing through the origin of the spherical coordinate system and perpendicular to the planar coordinate system intersects the planar coordinate system, and the at least four calibration points are randomly selected from the first video picture;
a fourth determining unit, configured to determine, according to x1, x2, x3 and a preset coordinate conversion model, the longitude and latitude, in the spherical coordinate system, corresponding to the central point of the area occupied by the target object in the first video picture:
[The two formulas of the preset coordinate conversion model are rendered only as images in the original (FDA0002504766690000061, FDA0002504766690000062) and are not reproduced here.]
wherein φ2 and θ2 are respectively the longitude and latitude, in the spherical coordinate system, corresponding to the central point of the area occupied by the target object in the first video picture; φ1 and θ1 are respectively the longitude and latitude of the vertical intersection point in the spherical coordinate system; and φ3 and θ3 are respectively the longitude and latitude, in the spherical coordinate system, of any one of the at least four calibration points. (The longitude symbols are rendered as images FDA0002504766690000063–FDA0002504766690000065 in the original and are written here as φ1, φ2, φ3.)
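Since the conversion formulas of this claim appear only as image references in the text, the sketch below is a generic plane-to-sphere conversion rather than the patented model. It assumes the picture plane lies at perpendicular distance h from the dome origin, and that (dx, dy) is the planar offset of the point from the vertical intersection point; the function name and conventions are illustrative.

```python
import math

def plane_to_sphere(dx, dy, h):
    """Generic sketch: convert a point's planar offset (dx, dy) from the
    vertical intersection point into (longitude, latitude) angles in the
    dome's spherical coordinate system, where h is the perpendicular
    distance from the dome origin to the picture plane.  The longitude is
    the pan angle in the plane, the latitude the tilt away from the
    perpendicular; both are returned in degrees."""
    longitude = math.degrees(math.atan2(dy, dx))                 # pan
    latitude = math.degrees(math.atan2(math.hypot(dx, dy), h))   # tilt
    return longitude, latitude
```

For example, a point one unit to the side of the intersection, with h = 1, maps to a tilt of 45 degrees.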
17. The apparatus of claim 15 or 16, wherein the apparatus further comprises:
a selection module, configured to randomly select at least four calibration points in the first video picture, where any three calibration points of the at least four calibration points are not collinear;
a fifth determining module for determining coordinates of each of the at least four calibration points in the planar coordinate system and longitude and latitude in the spherical coordinate system;
a sixth determining module, configured to determine, according to the coordinates of each of the at least four calibration points in the planar coordinate system and their longitude and latitude in the spherical coordinate system, position parameters of the vertical intersection point in the spherical coordinate system, wherein the position parameters include the distance between the vertical intersection point and the origin of the spherical coordinate system, and the longitude and latitude of the vertical intersection point in the spherical coordinate system; and
an establishing module, configured to establish the preset coordinate conversion model according to the position parameters of the vertical intersection point in the spherical coordinate system and the longitude and latitude of any one of the at least four calibration points in the spherical coordinate system.
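The claim does not disclose how the vertical intersection point is computed from the calibration points, so the following is only one possible geometric sketch under an assumed model: each calibration point at planar coordinates (x, y) with latitude lat satisfies (x − u)² + (y − v)² = h²·tan²(lat), where (u, v) is the planar position of the vertical intersection point and h its distance from the dome origin. Subtracting pairs of these equations yields a system that is linear in (u, v, h²); all names are illustrative.

```python
import math

def locate_vertical_intersection(points):
    """From >= 4 calibration points, each given as (x, y, lat) -- planar
    coordinates plus latitude in the spherical system -- recover the
    planar position (u, v) of the vertical intersection point and the
    distance h from the dome origin to the picture plane, by solving the
    pairwise-differenced linear system in (u, v, h^2)."""
    x0, y0, lat0 = points[0]
    t0 = math.tan(lat0) ** 2
    rows, rhs = [], []
    for x, y, lat in points[1:4]:
        t = math.tan(lat) ** 2
        # 2u(x0-x) + 2v(y0-y) + h^2 (t0-t) = (x0^2+y0^2) - (x^2+y^2)
        rows.append([2 * (x0 - x), 2 * (y0 - y), t0 - t])
        rhs.append((x0 ** 2 + y0 ** 2) - (x ** 2 + y ** 2))
    u, v, h2 = _solve3(rows, rhs)
    return u, v, math.sqrt(h2)

def _solve3(a, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [mr - f * mc for mr, mc in zip(m[r], m[col])]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x
```

The calibration points must not be chosen degenerately (no three collinear, as the claim already requires, and with differing latitudes so that h² is determined).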
18. The apparatus of claim 17, wherein the fifth determining module comprises:
a control unit, configured to, for each of the at least four calibration points, control the central point of the video picture acquired by the spherical IPC to rotate, from the position at which both the longitude and the latitude of the spherical coordinate system are zero, to the calibration point;
a fifth determining unit, configured to determine the rotation angle of the spherical IPC in the horizontal direction and the rotation angle of the spherical IPC in the vertical direction; and
a sixth determining unit, configured to determine the rotation angle of the spherical IPC in the horizontal direction as the longitude of the calibration point in the spherical coordinate system, and determine the rotation angle of the spherical IPC in the vertical direction as the latitude of the calibration point in the spherical coordinate system.
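The per-point calibration loop of this claim can be sketched as follows. `rotate_to` is a hypothetical controller callback, not part of the patent text: it is assumed to drive the dome from the zero position until its picture centre sits on the given calibration point and to report the pan/tilt angles actually executed.

```python
def calibrate_points(rotate_to, calibration_points):
    """For each calibration point, drive the spherical IPC (starting from
    longitude = latitude = 0) until its picture centre lies on the point;
    the executed horizontal/vertical rotation angles are then taken
    directly as the point's longitude/latitude in the spherical system.
    `rotate_to` is an assumed controller callback returning (pan, tilt)."""
    spherical = {}
    for point in calibration_points:
        pan, tilt = rotate_to(point)    # horizontal / vertical rotation
        spherical[point] = (pan, tilt)  # recorded as longitude, latitude
    return spherical
```

With a stubbed controller this just records whatever angles the hardware reports, which is exactly the mapping the claim describes.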
CN201710167181.0A 2017-03-20 2017-03-20 Video monitoring method and device based on gun and ball linkage Active CN108632569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710167181.0A CN108632569B (en) 2017-03-20 2017-03-20 Video monitoring method and device based on gun and ball linkage

Publications (2)

Publication Number Publication Date
CN108632569A (en) 2018-10-09
CN108632569B (en) 2020-09-29

Family

ID=63687749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710167181.0A Active CN108632569B (en) 2017-03-20 2017-03-20 Video monitoring method and device based on gun and ball linkage

Country Status (1)

Country Link
CN (1) CN108632569B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111565299A (en) * 2020-05-06 2020-08-21 苏州新舟锐视信息技术科技有限公司 Method for capturing targets through linkage of multiple vehicle-mounted guns and one dome camera
CN112330726B (en) * 2020-10-27 2022-09-09 天津天瞳威势电子科技有限公司 Image processing method and device

Citations (9)

Publication number Priority date Publication date Assignee Title
CN103607569A (en) * 2013-11-22 2014-02-26 广东威创视讯科技股份有限公司 Method and system for tracking monitored target in process of video monitoring
CN103747207A (en) * 2013-12-11 2014-04-23 深圳先进技术研究院 Positioning and tracking method based on video monitor network
CN104125433A (en) * 2014-07-30 2014-10-29 西安冉科信息技术有限公司 Moving object video surveillance method based on multi-PTZ (pan-tilt-zoom)-camera linkage structure
CN104184995A (en) * 2014-08-26 2014-12-03 天津市亚安科技股份有限公司 Method and system for achieving real-time linkage monitoring of networking video monitoring system
CN104185078A (en) * 2013-05-20 2014-12-03 华为技术有限公司 Video monitoring processing method, device and system thereof
CN104537659A (en) * 2014-12-23 2015-04-22 金鹏电子信息机器有限公司 Automatic two-camera calibration method and system
CN105338248A (en) * 2015-11-20 2016-02-17 成都因纳伟盛科技股份有限公司 Intelligent multi-target active tracking monitoring method and system
CN105635657A (en) * 2014-11-03 2016-06-01 航天信息股份有限公司 Camera holder omnibearing interaction method and device based on face detection
CN106408551A (en) * 2016-05-31 2017-02-15 北京格灵深瞳信息技术有限公司 Monitoring device controlling method and device


Similar Documents

Publication Publication Date Title
US20220078349A1 (en) Gimbal control method and apparatus, control terminal and aircraft system
US9667862B2 (en) Method, system, and computer program product for gamifying the process of obtaining panoramic images
WO2016141744A1 (en) Target tracking method, apparatus and system
CN111461994A (en) Method for obtaining coordinate transformation matrix and positioning target in monitoring picture
US10210728B2 (en) Method, server, system, and image capturing device for surveillance
CN205693769U (en) A kind of motion cameras positioning capturing quick to panorama target system
CN109074657B (en) Target tracking method and device, electronic equipment and readable storage medium
CN110099220B (en) Panoramic stitching method and device
CN101794316A (en) Real-scene status consulting system and coordinate offset method based on GPS location and direction identification
CN111815672B (en) Dynamic tracking control method, device and control equipment
WO2018040480A1 (en) Method and device for adjusting scanning state
WO2017133147A1 (en) Live-action map generation method, pushing method and device for same
CN108632569B (en) Video monitoring method and device based on gun and ball linkage
CN103262561B (en) Video distribution system and method for video distribution
CN113850137A (en) Power transmission line image online monitoring method, system and equipment
EP4220547A1 (en) Method and apparatus for determining heat data of global region, and storage medium
US20170041538A1 (en) Method for correcting image from wide-angle lens and device therefor
CN111882605A (en) Monitoring equipment image coordinate conversion method and device and computer equipment
CN111046121A (en) Environment monitoring method, device and system
CN111862620A (en) Image fusion processing method and device
CN114785961B (en) Patrol route generation method, device and medium based on holder camera
CN111325201A (en) Image processing method and device, movable equipment, unmanned aerial vehicle remote controller and system
CN105354813B (en) Holder is driven to generate the method and device of stitching image
US9800773B2 (en) Digital camera apparatus with dynamically customized focus reticle and automatic focus reticle positioning
WO2021022989A1 (en) Calibration parameter obtaining method and apparatus, processor, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant