CN117221483A - Target video generation method, device, system and medium for monocular monitoring - Google Patents

Info

Publication number
CN117221483A
Authority
CN
China
Prior art keywords
target object
monitoring
target
monitoring camera
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310966123.XA
Other languages
Chinese (zh)
Other versions
CN117221483B (en)
Inventor
蔡慧贤
周志贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Deshen Technology Co ltd
Original Assignee
Guangdong Deshen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Deshen Technology Co ltd filed Critical Guangdong Deshen Technology Co ltd
Publication of CN117221483A publication Critical patent/CN117221483A/en
Application granted granted Critical
Publication of CN117221483B publication Critical patent/CN117221483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/91 Television signal processing therefor
    • H04N 5/915 Television signal processing therefor for field- or frame-skip recording or reproducing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a target video generation method, device, system and medium for monocular monitoring. The method comprises the following steps: acquiring the positioning position of a target object and determining, from that position, whether the target object has entered a monitoring area; adjusting the angle and lens structure of the monitoring camera according to the positioning information of the target object so that the camera stays aimed at the target object in real time; marking the starting time point at which the target object enters the monitoring area; marking the ending time point at which the target object leaves the monitoring area; and intercepting the monitoring video from the starting time point to the ending time point to generate the target video. The method and device are mainly applied in the technical field of video generation.

Description

Target video generation method, device, system and medium for monocular monitoring
Technical Field
The invention relates to the technical field of video generation, in particular to a target video generation method, device, system and medium for monocular monitoring.
Background
Existing buildings typically have monitoring systems installed. To capture the movement of a target object through a building, one common approach is to have the monitoring system recognise the target object and record a unit video whenever it is detected, then splice the unit videos into a target video containing the object. Alternatively, a fixed position is filmed continuously and the recording is analysed afterwards, with the segments containing the specific object cut out to form the target video. In the target videos obtained by these methods, however, the target object is often filmed unclearly and poorly presented, and the approaches are costly because they require many cameras (binocular or multi-view). How to better highlight the target object in the target video is therefore a technical problem that the industry needs to solve.
Disclosure of Invention
The invention provides a target video generation method, device, system and medium for monocular monitoring, which are used to solve one or more technical problems in the prior art and at least provide a beneficial alternative or enabling condition.
The invention provides a target video generation method for monocular monitoring, which comprises the following steps:
determining the positioning position of a target object, and determining whether the target object enters a monitoring area according to the positioning position;
the determining the positioning position of the target object specifically includes: determining the positioning position of a target object by a Bluetooth three-point positioning method; the target object wears the Bluetooth transmitting node, the position of the Bluetooth transmitting node worn by the target object is determined through three preset anchor nodes,
the three anchor nodes are a first anchor node, a second anchor node and a third anchor node respectively, and a space rectangular coordinate system is established by taking a rotating base plate of the monitoring camera as an origin;
in the space rectangular coordinate system, the coordinates of the first anchor node areThe coordinates of the second anchor node are +.>The coordinates of the third anchor node are +.>The method comprises the steps of carrying out a first treatment on the surface of the The coordinates of the Bluetooth transmitting node are defined as +.>
The first anchor node calculates and obtains the real-time signal strength of signals sent by each anchor node receiving Bluetooth transmitting node by an RSSI ranging method, and obtains the signals according to the signal receiving strength L per meter by combining the RSSI ranging method: the distance between the Bluetooth transmitting node and the first anchor node is S1, the distance between the Bluetooth transmitting node and the second anchor node is S2, and the distance between the Bluetooth transmitting node and the third anchor node is S3;
Establishing an equation set:
solving the equation set to obtain the coordinates (a, b, c) of the Bluetooth transmitting node, and obtaining the coordinates (a, b, c) of the positioning position of the target object;
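As an illustrative sketch, the three sphere equations above can be solved by subtracting the first from the other two, which removes the quadratic terms and leaves a linear system. With only three anchors a mirror ambiguity remains in 3-D, so this sketch additionally assumes the wearing height of the Bluetooth tag is known; the helper name `locate_tag` and the known-height assumption are mine, not the patent's.

```python
def locate_tag(anchors, dists, tag_height):
    """Trilaterate the Bluetooth tag position (a, b, c).

    anchors: three anchor coordinates (x_i, y_i, z_i)
    dists:   RSSI-derived ranges S1, S2, S3
    tag_height: assumed known wearing height c, which resolves the
        two-fold ambiguity of intersecting only three spheres.
    """
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = anchors
    # Project each range onto the horizontal plane at the tag height.
    r = [d * d - (z - tag_height) ** 2 for d, (_, _, z) in zip(dists, anchors)]
    # Subtracting sphere 1 from spheres 2 and 3 removes the quadratic terms.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r[0] - r[1] + x2 ** 2 + y2 ** 2 - x1 ** 2 - y1 ** 2
    b2 = r[0] - r[2] + x3 ** 2 + y3 ** 2 - x1 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21  # anchors must not be collinear
    a = (b1 * a22 - b2 * a12) / det
    b = (a11 * b2 - a21 * b1) / det
    return a, b, tag_height
```

For anchors at (0, 0, 3), (10, 0, 3) and (0, 10, 3) and a tag at height 1, ranges of √29, √69 and 7 place the tag at (3, 4, 1).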
when the target object is determined to enter the monitoring area for the first time, outputting a first response signal;
after confirming that the first response signal is output, according to the positioning position and the position of the monitoring camera, enabling the monitoring camera to focus on the target object so as to acquire a fine video picture of the target object;
according to the positioning position and the position of the monitoring camera, the method for focusing the target object by the monitoring camera specifically comprises the following steps: based on the space rectangular coordinate system, acquiring the transverse rotation angle of the current monitoring cameraAnd a longitudinal rotation angle->
According to the transverse rotation angleAnd a longitudinal rotation angle->Determining coordinates of a lens center point of the monitoring camera, coordinates of a CCD center point of the monitoring camera and coordinates of a physical center point of the monitoring camera;
the method comprises the steps of recording a lens center point of a monitoring camera as E, recording a CCD center point of the monitoring camera as F, recording a positioning position of a target object as M, and recording a physical center point of the monitoring camera as N; wherein, the physical center point of the monitoring camera means that the position of the point is not changed when the monitoring camera transversely rotates or longitudinally rotates;
By vectorSum vector->Calculating to obtain a space included angle->The method comprises the steps of carrying out a first treatment on the surface of the The space angle->Decomposing to obtain transverse included angle->And longitudinal included angle->Calculating the transverse included angle->And transverse included angle->Obtaining a target transverse included angle by the difference value; calculating longitudinal included angle->And longitudinal included angle->Obtaining a target longitudinal included angle by the difference value;
controlling the angle value of the transverse included angle of the transverse rotation target of the monitoring camera and the angle value of the longitudinal included angle of the longitudinal rotation target of the monitoring camera, so as to aim the monitoring camera at the target object;
after the rotation of the monitoring camera is determined, determining the coordinate of the lens center point of the monitoring camera according to the target transverse included angle and the target longitudinal included angle, wherein the sitting mark is the target coordinate, calculating according to the coordinates of the target coordinate and the positioning position of the target object to obtain the distance between the lens center point of the monitoring camera and the target object, and recording the distance as the target distance;
adjusting the lens structure of the monitoring camera according to the target distance so that the target object falls on the focusing surface of the monitoring camera to realize focusing on the target object;
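A minimal sketch of the aiming computation. It decomposes the vector from the camera's physical center point N to the located target M into a transverse (pan) angle and a longitudinal (tilt) angle and returns the differences from the current rotation angles; the axis conventions (pan measured from +x, tilt from the horizontal plane) are assumptions, since the patent does not fix them.

```python
import math

def aim_angles(cam_center, target, cur_pan_deg, cur_tilt_deg):
    """Target transverse/longitudinal included angles as pan/tilt increments.

    cam_center: physical center point N (unchanged by camera rotation)
    target:     positioning position M of the target object
    """
    dx, dy, dz = (t - c for t, c in zip(target, cam_center))
    pan = math.degrees(math.atan2(dy, dx))                   # transverse angle
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # longitudinal angle
    return pan - cur_pan_deg, tilt - cur_tilt_deg
```

A target one metre east, one metre north and √2 metres up from a camera at rest yields 45° of pan and 45° of tilt.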
judging whether the target object leaves the monitoring area, and outputting a second response signal when the target object is determined to leave the monitoring area;
according to the first response signal, marking on the monitoring video the time point at which the first response signal was generated; this mark is the starting time point;
according to the second response signal, marking on the monitoring video the time point at which the second response signal was generated; this mark is the ending time point;
and intercepting the monitoring video from the starting time point to the ending time point to generate a target video.
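A minimal sketch of the interception step, modelling the monitoring video as timestamped frames; a real recorder would cut through its video API, which the patent does not specify.

```python
def clip_target_video(frames, t_start, t_end):
    """Intercept the monitoring video between the start and end marks.

    frames: iterable of (timestamp, frame) pairs
    t_start, t_end: the time points marked by the first and second
        response signals
    Keeps only the frames whose timestamps fall inside the interval.
    """
    return [frame for t, frame in frames if t_start <= t <= t_end]
```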
Further, after generating the target video, the method further comprises the steps of: and restoring the angle of the monitoring camera and the lens structure to the initial setting.
Further, determining whether the target object enters the monitoring area according to the positioning position specifically includes: comparing the positioning position with the monitoring area, and considering that the target object enters the monitoring area when the positioning position first belongs to the monitoring area.
Further, judging whether the target object leaves the monitoring area, and outputting the second response signal when the target object is determined to have left, specifically includes: when the positioning position, having first belonged to the monitoring area, first belongs to the non-monitored area, the target object is considered to have left the monitoring area, and the second response signal is generated.
Further, the capturing the monitoring video from the starting time point to the ending time point, and generating the target video specifically includes: copying the monitoring video to obtain a monitoring video copy, and intercepting the monitoring video copy from a starting time point to an ending time point to obtain a target video.
Further, after generating the target video, the method further comprises the steps of: and sending the target video to a remote backup storage server.
Further, the target video generating method for monocular monitoring further comprises the step of clearing the starting time point and the ending time point on the monitoring video after the target video is confirmed to be stored.
In a second aspect, there is provided a target video generating apparatus for monocular monitoring, including: a processor and a memory for storing a computer readable program;
the computer readable program, when executed by the processor, causes the processor to implement the target video generation method for monocular monitoring as set forth in any one of the above technical solutions.
In a third aspect, a target video generating system for monocular monitoring is provided, including: the device comprises a first judging module, a determining module, an adjusting module, a second judging module, a first marking module, a second marking module and a generating module;
the first judging module is used for determining the positioning position of the target object and determining whether the target object enters a monitoring area according to the positioning position;
the determining of the positioning position of the target object specifically comprises: determining the positioning position of the target object by a Bluetooth three-point positioning method; the target object wears a Bluetooth transmitting node, and the position of that node is determined through three preset anchor nodes;
the three anchor nodes are a first anchor node, a second anchor node and a third anchor node respectively, and a space rectangular coordinate system is established with the rotating base plate of the monitoring camera as the origin;
in the space rectangular coordinate system, the coordinates of the first anchor node are (x1, y1, z1), the coordinates of the second anchor node are (x2, y2, z2), and the coordinates of the third anchor node are (x3, y3, z3); the coordinates of the Bluetooth transmitting node are defined as (a, b, c);
each anchor node receives the signal sent by the Bluetooth transmitting node and measures its real-time signal strength; combining this with the reference signal receiving strength L per metre, the RSSI ranging method yields: the distance between the Bluetooth transmitting node and the first anchor node is S1, the distance between the Bluetooth transmitting node and the second anchor node is S2, and the distance between the Bluetooth transmitting node and the third anchor node is S3;
a system of equations is established:
(x1 - a)² + (y1 - b)² + (z1 - c)² = S1²
(x2 - a)² + (y2 - b)² + (z2 - c)² = S2²
(x3 - a)² + (y3 - b)² + (z3 - c)² = S3²
solving this system gives the coordinates (a, b, c) of the Bluetooth transmitting node, which are the coordinates of the positioning position of the target object;
the determining module is used for outputting a first response signal when determining that the target object enters the monitoring area for the first time;
the adjusting module is used for enabling the monitoring camera to focus on the target object according to the positioning position and the position of the monitoring camera after confirming that the first response signal is output, so as to obtain a fine video picture of the target object;
the making of the monitoring camera focus on the target object according to the positioning position and the position of the monitoring camera specifically comprises: based on the space rectangular coordinate system, acquiring the current transverse rotation angle α and longitudinal rotation angle β of the monitoring camera;
determining, according to the transverse rotation angle α and the longitudinal rotation angle β, the coordinates of the lens center point of the monitoring camera, the coordinates of the CCD center point of the monitoring camera and the coordinates of the physical center point of the monitoring camera;
denoting the lens center point of the monitoring camera as E, the CCD center point of the monitoring camera as F, the positioning position of the target object as M, and the physical center point of the monitoring camera as N; the physical center point of the monitoring camera is the point whose position does not change when the camera rotates transversely or longitudinally;
calculating the spatial included angle θ from the optical-axis vector FE and the vector NM; decomposing the spatial included angle θ into a transverse included angle θ1 and a longitudinal included angle θ2; taking the difference between the transverse included angle θ1 and the transverse rotation angle α to obtain the target transverse included angle, and the difference between the longitudinal included angle θ2 and the longitudinal rotation angle β to obtain the target longitudinal included angle;
controlling the monitoring camera to rotate transversely by the angle value of the target transverse included angle and longitudinally by the angle value of the target longitudinal included angle, so that the monitoring camera is aimed at the target object;
after the rotation of the monitoring camera is completed, determining the coordinates of the lens center point of the monitoring camera from the target transverse included angle and the target longitudinal included angle; these coordinates are the target coordinates; calculating the distance between the lens center point of the monitoring camera and the target object from the target coordinates and the coordinates of the positioning position of the target object, and recording this distance as the target distance;
adjusting the lens structure of the monitoring camera according to the target distance so that the target object falls on the focusing surface of the monitoring camera, thereby achieving focus on the target object;
the second judging module is used for: judging whether the target object leaves the monitoring area, and outputting a second response signal when the target object is determined to leave the monitoring area;
the first marking module is used for: marking a time point generated by the monitoring video with the first response signal according to the first response signal, wherein the time point is a starting time point;
the second marking module is used for: marking the time point of the monitoring video generated by the second response signal according to the second response signal, wherein the mark is an ending time point;
the generating module is used for: and intercepting the monitoring video from the starting time point to the ending time point to generate a target video.
In a fourth aspect, a computer readable storage medium is provided, in which a processor executable program is stored, the processor executable program being for implementing the target video generation method for monocular monitoring according to any one of the above technical solutions when executed by a processor.
The application has at least the following beneficial effects: the angle and lens structure of the monitoring camera are adjusted according to the positioning position of the target object, so that the camera remains focused on the target object between the starting time point and the ending time point of the recorded monitoring video, and a clear target video of the target object can be obtained. Moreover, focusing on the target object is accomplished with only a single camera (monocular), which reduces the number of cameras required and lowers cost, making the scheme well suited to monocular shooting scenarios. The application also discloses a corresponding device, system and medium, whose advantages are the same as those of the method and are not repeated here. The method and device are mainly applied in the technical field of video generation.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate the invention and do not limit it.
FIG. 1 is a flow chart of steps of a target video generation method for monocular monitoring;
FIG. 2 is a schematic diagram of a target video generating apparatus for monocular monitoring;
fig. 3 is a schematic diagram of a connection structure of a target video generating system for monocular monitoring.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that although functional modules are depicted in block diagrams and a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that of the diagrams. The terms "first", "second" and the like in the description, the claims and the above drawings are used to distinguish similar elements and do not necessarily describe a particular sequence or chronological order.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a target video generating method for monocular monitoring.
In a first aspect, a method for generating a target video for monocular monitoring is provided, including: step 1, determining the positioning position of a target object, and determining whether the target object enters a monitoring area according to the positioning position.
The function of the step 1 is to detect whether the target object enters the monitoring area, so as to judge the subsequent event according to the detection result.
The detection of the target object specifically includes: and determining the position of the target object to obtain the positioning position of the target object.
In this specific embodiment, determining the positioning position of the target object specifically comprises: determining the positioning position of the target object by a Bluetooth three-point positioning method; the target object wears a Bluetooth transmitting node, and the position of that node is determined through three preset anchor nodes.
The three anchor nodes are a first anchor node, a second anchor node and a third anchor node respectively. A space rectangular coordinate system is established with the rotating base plate of the monitoring camera as the origin; in this coordinate system, the coordinates of the first anchor node are (x1, y1, z1), the coordinates of the second anchor node are (x2, y2, z2), and the coordinates of the third anchor node are (x3, y3, z3). The coordinates of the Bluetooth transmitting node are defined as (a, b, c).
Each anchor node receives the signal sent by the Bluetooth transmitting node and measures its real-time signal strength; combining this with the reference signal receiving strength L per metre, the RSSI ranging method yields: the distance between the Bluetooth transmitting node and the first anchor node is S1, the distance between the Bluetooth transmitting node and the second anchor node is S2, and the distance between the Bluetooth transmitting node and the third anchor node is S3.
A system of equations is established:
(x1 - a)² + (y1 - b)² + (z1 - c)² = S1²
(x2 - a)² + (y2 - b)² + (z2 - c)² = S2²
(x3 - a)² + (y3 - b)² + (z3 - c)² = S3²
The coordinates (a, b, c) of the Bluetooth transmitting node can be obtained by solving the above system of equations; these are the coordinates of the positioning position of the target object.
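The RSSI ranging step can be sketched with the standard log-distance path-loss model; the parameter names and the default exponent are assumptions, since the patent only names a per-metre reference strength L.

```python
def rssi_to_distance(rssi_dbm, ref_rssi_1m_dbm, path_loss_exponent=2.0):
    """Estimate an anchor-to-tag distance from a received signal strength.

    Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d),
    hence d = 10 ** ((RSSI(1 m) - RSSI(d)) / (10 * n)).
    ref_rssi_1m_dbm plays the role of the per-metre reference strength L;
    the exponent n (about 2 in free space) must be calibrated indoors.
    """
    return 10 ** ((ref_rssi_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

With a -40 dBm reference at one metre, a reading of -60 dBm corresponds to roughly ten metres under free-space assumptions.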
After the coordinates of the positioning position of the target object are obtained, the coordinates can be compared with a preset coordinate set of the monitoring area. When the coordinates of the positioning position of the target object belong to the coordinate set of the monitoring area, the target object is considered to enter the monitoring area.
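A minimal sketch of the entry/exit detection of steps 2 and 4, modelling the preset coordinate set of the monitoring area as an axis-aligned box; the box model and the signal names are illustrative assumptions.

```python
class AreaMonitor:
    """Emits the first/second response signals as the target enters/leaves."""

    def __init__(self, area_min, area_max):
        self.area_min, self.area_max = area_min, area_max
        self.inside = False  # whether the target was inside at the last update

    def update(self, pos):
        """Feed the latest positioning position; return a signal or None."""
        now_inside = all(lo <= p <= hi for p, lo, hi in
                         zip(pos, self.area_min, self.area_max))
        signal = None
        if now_inside and not self.inside:
            signal = "first_response"   # target entered: mark start time
        elif self.inside and not now_inside:
            signal = "second_response"  # target left: mark end time
        self.inside = now_inside
        return signal
```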
And step 2, outputting a first response signal when the target object is determined to enter the monitoring area for the first time.
And step 3, after confirming that the first response signal is output, enabling the monitoring camera to focus on the target object according to the positioning position and the position of the monitoring camera so as to acquire a fine video picture of the target object.
The function of step 3 is to obtain a fine video picture of the target object. To this end, after confirming that the target object has entered the monitoring area, the monitoring camera needs to be adjusted so that it is aimed at the target object, and its optical lens structure is adjusted in real time so that the camera remains focused on the target object.
The making of the monitoring camera focus on the target object according to the positioning position and the position of the monitoring camera specifically comprises: based on the space rectangular coordinate system, acquiring the current transverse rotation angle α and longitudinal rotation angle β of the monitoring camera.
According to the transverse rotation angle α and the longitudinal rotation angle β, the coordinates of the lens center point of the monitoring camera, the coordinates of the CCD center point of the monitoring camera and the coordinates of the physical center point of the monitoring camera are determined.
The lens center point of the monitoring camera is denoted E, the CCD center point of the monitoring camera is denoted F, the positioning position of the target object is denoted M, and the physical center point of the monitoring camera is denoted N; the physical center point of the monitoring camera is the point whose position does not change when the camera rotates transversely or longitudinally.
The spatial included angle θ is calculated from the optical-axis vector FE and the vector NM. The spatial included angle θ is decomposed into a transverse included angle θ1 and a longitudinal included angle θ2; the difference between the transverse included angle θ1 and the transverse rotation angle α gives the target transverse included angle, and the difference between the longitudinal included angle θ2 and the longitudinal rotation angle β gives the target longitudinal included angle.
The monitoring camera is controlled to rotate transversely by the angle value of the target transverse included angle and longitudinally by the angle value of the target longitudinal included angle, so that the monitoring camera is aimed at the target object.
After the rotation of the monitoring camera is completed, the coordinates of the lens center point of the monitoring camera are determined from the target transverse included angle and the target longitudinal included angle; these coordinates are the target coordinates. The distance between the lens center point of the monitoring camera and the target object is calculated from the target coordinates and the coordinates of the positioning position of the target object, and is recorded as the target distance.
The lens structure of the monitoring camera is adjusted according to the target distance so that the target object falls on the focusing surface of the monitoring camera, and focusing on the target object is achieved.
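One way to sketch the lens-structure adjustment is the thin-lens relation 1/f = 1/u + 1/v: given the target distance u, the lens-to-CCD image distance v that places the target on the focusing surface follows directly. Modelling the camera as a single thin lens is an idealisation; the real lens assembly and its actuator are not specified in the patent.

```python
def focus_image_distance(focal_length_mm, target_distance_mm):
    """Lens-to-CCD distance that puts the target on the focusing surface.

    Applies 1/f = 1/u + 1/v with u the target distance (from the lens
    center point to the positioning position) and v the image distance
    to which the lens structure would be adjusted.
    """
    if target_distance_mm <= focal_length_mm:
        raise ValueError("target is closer than the focal length")
    return 1.0 / (1.0 / focal_length_mm - 1.0 / target_distance_mm)
```

For a 50 mm lens and a target 5 m away, the CCD sits about 50.5 mm behind the lens.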
Step 3 determines the positional relationship between the monitoring camera and the target object through the established space rectangular coordinate system. The azimuth relationship between the target object and the monitoring camera is determined from the camera's physical center point and the positioning position, while the distance relationship is determined from the camera's lens center point and the positioning position. The target transverse included angle and target longitudinal included angle of the monitoring camera are obtained from the azimuth relationship, and the camera is aimed at the target object by rotating it through those angles. The lens structure of the monitoring camera is then adjusted according to the distance relationship so that the target object falls on the focusing surface, achieving focus on the target object. Through these operations the monitoring camera can focus on the target object and acquire a fine video picture of it.
Meanwhile, since the scheme adopted in step 3 is based on the positioning position of the target object, focusing on the target object can be accomplished with only a single camera. This reduces the number of cameras required and lowers cost, making the scheme suitable for monocular shooting scenarios.
And step 4, judging whether the target object leaves the monitoring area, and outputting a second response signal when the target object is determined to leave the monitoring area.
In this step, a major consideration is how to determine that the target object leaves the monitoring area.
After the coordinates of the positioning position of the target object are obtained, they can be compared with the preset coordinate set of the monitoring area. When the coordinates first fall outside the monitoring area after having been inside it, the target object is considered to have left the monitoring area, and the second response signal is generated. After the second response signal is generated, the process proceeds to step 5.
And step 5, according to the first response signal, marking on the monitoring video the time point at which the first response signal was generated; this mark is the starting time point.
And step 6, according to the second response signal, marking on the monitoring video the time point at which the second response signal was generated; this mark is the ending time point.
And 7, intercepting the monitoring video from the starting time point to the ending time point to generate a target video.
The function of steps 5 to 7 is to cut the monitoring video. Since the monitoring camera was already adjusted in step 3, the monitoring video shot after the first response signal is output captures the target object clearly. To intercept this clear video, the time point of the first response signal is marked on the monitoring video as the starting time point, and the time point of the second response signal is marked as the ending time point for interception. Once the starting and ending time points have been marked, the monitoring video between them can be considered a clear video containing the target object, so that segment is cut from the monitoring video to obtain the target video.
According to the invention, the angle and the focal length of the monitoring camera are adjusted according to the positioning position of the target object, so that the target object remains in focus between the starting time point and the ending time point of the monitoring video recorded by the monitoring camera. A clear target video of the target object can thus be obtained.
After the target video has been generated, in some further embodiments, the angle and lens structure of the monitoring camera are restored to their initial settings. The monitoring camera then shoots at the preset angle and with the preset lens structure, so that routine shooting work is not affected.
In order to keep the monitoring video intact, the specific method for generating the target video comprises the following steps: copy the monitoring video to obtain a monitoring video copy; then intercept the monitoring video copy between the starting time point and the ending time point, and take the intercepted part as the target video.
In order to back up the target video, the target video is also sent promptly to a remote backup storage server after it is generated, so that the backup storage server can store it. In some further embodiments, the starting time point and the ending time point are cleared from the monitoring video once it is confirmed that the target video has been stored, thereby erasing the marks generated on the monitoring video for the target video.
In some application scenarios, the target video needs to be compressed to obtain a material video that is easy to edit. In general, only the parts in which the target object is moving are needed as material, while the parts in which the target object does not move are not. Therefore, in order to preserve the image quality of the target video, compression can be achieved by removing the parts in which the target object does not move.
To this end, in some further embodiments, the frequency of movement changes of the target object in the target video is obtained, where this frequency is the degree of change of the positioning position of the target object per unit time. On this basis, the frequency for each second on the time axis of the target video is computed, forming a frequency curve. According to the frequency curve, video frames whose frequency is lower than a set frequency threshold are removed, and the remaining frames are integrated to obtain the material video. The frequency threshold reflects the movement of the target object: with a reasonable threshold, the parts in which the target object does not move can be removed quickly, yielding a material video that contains only the movement of the target object. At the same time, the capacity of the target video is reduced; when only the movement of the target object needs to be analysed, the material video represents the target video well and is convenient for later editing by the user.
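A minimal sketch of this material-video step might look as follows (hypothetical names; positions are assumed to be sampled per frame, and the Euclidean distance between consecutive positions is used as the measure of positional change):

```python
import math

# Hypothetical sketch of the material-video compression step: compute the
# movement-change frequency (total positional change per second) of the
# target, then keep only the seconds in which that frequency reaches the
# threshold, i.e. the seconds in which the target was actually moving.

def movement_frequency(positions):
    """positions: list of (t_seconds, (x, y, z)) samples in time order.
    Returns {second: total distance moved during that second}."""
    freq = {}
    for (t0, p0), (t1, p1) in zip(positions, positions[1:]):
        freq[int(t0)] = freq.get(int(t0), 0.0) + math.dist(p0, p1)
    return freq

def keep_moving_seconds(frames, positions, threshold):
    """frames: list of (t_seconds, frame) pairs; keep only frames whose
    second has movement frequency >= threshold."""
    freq = movement_frequency(positions)
    return [(t, f) for t, f in frames if freq.get(int(t), 0.0) >= threshold]
```

The dictionary returned by `movement_frequency` is exactly the per-second frequency curve described above; plotting it over the time axis gives the frequency curve graph.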
Referring to fig. 2, fig. 2 is a schematic structural diagram of a target video generating apparatus for monocular monitoring.
In a second aspect, there is provided a target video generating apparatus for monocular monitoring, including: a processor and a memory, wherein the memory is for storing a computer readable program; the computer readable program, when executed by the processor, causes the processor to implement the target video generation method of monocular surveillance as described in any of the above specific embodiments.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as is known to one of ordinary skill in the art.
Referring to fig. 3, fig. 3 is a schematic diagram of a connection structure of a target video generating system for monocular monitoring.
In a third aspect, a target video generating system for monocular monitoring is provided, including: the device comprises a first judging module, a determining module, an adjusting module, a first marking module, a second judging module, a second marking module and a generating module.
The first judging module is used for detecting whether the target object enters the monitoring area, so that subsequent actions can be decided according to the detection result. The detection of the target object specifically includes: determining the position of the target object to obtain the positioning position of the target object.
In this specific embodiment, determining the positioning position of the target object specifically includes: determining the positioning position of a target object by a Bluetooth three-point positioning method; the target object wears the Bluetooth transmitting node, and the position of the Bluetooth transmitting node worn by the target object is determined through three preset anchor nodes.
The three anchor nodes are a first anchor node, a second anchor node and a third anchor node respectively. A space rectangular coordinate system is established with the rotating base plate of the monitoring camera as the origin. In this coordinate system, the coordinates of the first anchor node are (x1, y1, z1), the coordinates of the second anchor node are (x2, y2, z2), and the coordinates of the third anchor node are (x3, y3, z3); the coordinates of the Bluetooth transmitting node are defined as (a, b, c).

Each anchor node receives the signal sent by the Bluetooth transmitting node and measures its real-time signal strength. By combining the measured strengths with the RSSI ranging method and the signal reception strength L per meter, the following distances are obtained: the distance between the Bluetooth transmitting node and the first anchor node is S1, the distance between the Bluetooth transmitting node and the second anchor node is S2, and the distance between the Bluetooth transmitting node and the third anchor node is S3.

Establishing the equation set:

(x1 - a)² + (y1 - b)² + (z1 - c)² = S1²
(x2 - a)² + (y2 - b)² + (z2 - c)² = S2²
(x3 - a)² + (y3 - b)² + (z3 - c)² = S3²

Solving this equation set yields the coordinates (a, b, c) of the Bluetooth transmitting node, which are the coordinates of the positioning position of the target object.
After the coordinates of the positioning position of the target object are obtained, the coordinates can be compared with a preset coordinate set of the monitoring area. When the coordinates of the positioning position of the target object belong to the coordinate set of the monitoring area, the target object is considered to enter the monitoring area.
The determining module is used for outputting a first response signal when determining that the target object enters the monitoring area for the first time.
And the adjusting module is used for enabling the monitoring camera to focus on the target object according to the positioning position and the position of the monitoring camera after confirming that the first response signal is output, so as to acquire a fine video picture of the target object.
The function of the adjusting module is as follows: after it is confirmed that the target object has entered the monitoring area, the monitoring camera is adjusted so that it is aimed at the target object, and the optical lens structure of the monitoring camera is adjusted in real time so that the camera is always focused on the target object.
Wherein, according to the positioning position and the position of the monitoring camera, making the monitoring camera focus on the target object specifically comprises: based on the space rectangular coordinate system, acquiring the current transverse rotation angle α and longitudinal rotation angle β of the monitoring camera;

according to the transverse rotation angle α and the longitudinal rotation angle β, determining the coordinates of the lens center point of the monitoring camera, the coordinates of the CCD center point of the monitoring camera and the coordinates of the physical center point of the monitoring camera;
the lens center point of the monitoring camera is recorded as E, the CCD center point of the monitoring camera as F, the positioning position of the target object as M, and the physical center point of the monitoring camera as N; the physical center point of the monitoring camera is the point whose position does not change when the monitoring camera rotates transversely or longitudinally.

A space included angle θ is calculated from the vector FE and the vector NM. The space included angle θ is decomposed into a transverse included angle θ1 and a longitudinal included angle θ2; the difference between the transverse included angle θ1 and the current transverse rotation angle α is calculated to obtain the target transverse included angle, and the difference between the longitudinal included angle θ2 and the current longitudinal rotation angle β is calculated to obtain the target longitudinal included angle.

The monitoring camera is controlled to rotate transversely by the angle value of the target transverse included angle and to rotate longitudinally by the angle value of the target longitudinal included angle, so that the monitoring camera is aimed at the target object.

After the monitoring camera has rotated, the coordinates of the lens center point of the monitoring camera are determined according to the target transverse included angle and the target longitudinal included angle; these coordinates are recorded as the target coordinates. The distance between the lens center point of the monitoring camera and the target object is calculated from the target coordinates and the coordinates of the positioning position of the target object, and is recorded as the target distance.
and adjusting the lens structure of the monitoring camera according to the target distance, so that the target object falls on the focusing surface of the monitoring camera, and focusing on the target object is realized.
In the adjusting module, the positional relationship between the monitoring camera and the target object is determined through the established space rectangular coordinate system: the azimuth relation between the target object and the monitoring camera is determined through the physical center point of the monitoring camera and the positioning position, and the distance relation is determined through the lens center point of the monitoring camera and the positioning position. From the azimuth relation, the target transverse included angle and the target longitudinal included angle of the monitoring camera are obtained, and the camera is aimed at the target object by controlling it to rotate through these two angles. From the distance relation, the lens structure of the monitoring camera is adjusted so that the target object falls on the focusing surface of the camera, realizing focusing on the target object. Through these operations, the monitoring camera can focus on the target object and acquire a fine video picture of it.
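One plausible concrete realization of the angle and distance computation is sketched below (hedged: the patent's exact vector construction is not fully specified, so this illustration simply derives the pan/tilt angles of the direction from the physical center point N to the target M, takes their differences from the current rotation angles, and computes the focusing distance from the lens center point E to M; all names are assumptions):

```python
import math

# Hypothetical sketch of the adjusting module's geometry: from the
# physical center point N of the camera and the target position M,
# compute the absolute transverse (pan) and longitudinal (tilt) angles
# of the direction N->M, then the rotation deltas from the current
# angles; the focusing distance is taken from the lens center E to M.

def aiming_angles(N, M, current_pan, current_tilt):
    """Return (target transverse included angle, target longitudinal
    included angle) in radians, i.e. how far the camera must rotate."""
    dx, dy, dz = (m - n for m, n in zip(M, N))
    pan = math.atan2(dy, dx)                   # transverse angle of N->M
    tilt = math.atan2(dz, math.hypot(dx, dy))  # longitudinal angle of N->M
    return pan - current_pan, tilt - current_tilt

def focus_distance(E, M):
    """Target distance from the lens center point to the target."""
    return math.dist(E, M)
```

The returned target distance would then drive the lens-structure adjustment so that the target object falls on the focusing surface.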
In the adjusting module, the adopted scheme is based on the positioning position of the target object, so focusing on the target object can be completed with only a single camera. This reduces the number of cameras used and lowers the cost, and the scheme is therefore suitable for application scenarios of monocular shooting.
The second judging module is used for judging whether the target object leaves the monitoring area, and outputting a second response signal when the target object is determined to leave the monitoring area.
In the second judging module, the main consideration is how to determine that the target object has left the monitoring area. After the coordinates of the positioning position of the target object are obtained, they can be compared with the preset coordinate set of the monitoring area. When these coordinates first fall outside the coordinate set of the monitoring area, the target object is considered to have left the monitoring area, and a second response signal is generated accordingly.
The first marking module is used for: and marking the moment point of the monitoring video generated by the first response signal according to the first response signal, wherein the mark is a starting moment point.
The second marking module is used for: and marking the time point of the monitoring video generated by the second response signal according to the second response signal, wherein the mark is an ending time point.
The generating module is used for: and intercepting the monitoring video from the starting time point to the ending time point to generate a target video.
The main function of the first marking module, the second marking module and the generating module is to cut the monitoring video. Because the monitoring camera has already been adjusted in the adjusting module, the monitoring video shot after the first response signal is output is clear with respect to the target object. To intercept this clear video, the time point at which the first response signal was generated is marked on the monitoring video as the starting time point, and the time point at which the second response signal was generated is marked as the ending time point for interception. Once both marks have been made, the monitoring video between the starting time point and the ending time point can be regarded as clear video containing the target object, so this segment is intercepted from the monitoring video to obtain the target video.
In a fourth aspect, a computer readable storage medium is provided, in which a processor executable program is stored, which when executed by a processor is configured to implement a target video generation method for monocular monitoring as in any of the above specific embodiments.
The embodiment of the application also discloses a computer program product, which comprises a computer program or computer instructions, wherein the computer program or the computer instructions are stored in a computer readable storage medium, the computer program or the computer instructions are read from the computer readable storage medium by a processor of a computer device, and the computer program or the computer instructions are executed by the processor, so that the computer device executes the target video generation method for monocular monitoring as described in any embodiment.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or units, which may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
While the present application has been described in considerable detail with respect to several embodiments, it is not intended to be limited to any such detail or embodiment. The scope of the application is instead to be interpreted broadly in light of the appended claims and the prior art, so as to effectively cover its intended scope. Furthermore, modifications of the application that are not presently foreseen may nonetheless represent equivalents thereof.

Claims (10)

1. The target video generation method for monocular monitoring is characterized by comprising the following steps of:
determining the positioning position of a target object, and determining whether the target object enters a monitoring area according to the positioning position;
the determining the positioning position of the target object specifically includes: determining the positioning position of a target object by a Bluetooth three-point positioning method; the target object wears the Bluetooth transmitting node, the position of the Bluetooth transmitting node worn by the target object is determined through three preset anchor nodes,
the three anchor nodes are a first anchor node, a second anchor node and a third anchor node respectively, and a space rectangular coordinate system is established by taking a rotating base plate of the monitoring camera as an origin;
In the space rectangular coordinate system, the coordinates of the first anchor node are (x1, y1, z1), the coordinates of the second anchor node are (x2, y2, z2), and the coordinates of the third anchor node are (x3, y3, z3); the coordinates of the Bluetooth transmitting node are defined as (a, b, c);

each anchor node receives the signal sent by the Bluetooth transmitting node and measures its real-time signal strength, and by combining the measured strengths with the RSSI ranging method and the signal reception strength L per meter, the following distances are obtained: the distance between the Bluetooth transmitting node and the first anchor node is S1, the distance between the Bluetooth transmitting node and the second anchor node is S2, and the distance between the Bluetooth transmitting node and the third anchor node is S3;

establishing the equation set:

(x1 - a)² + (y1 - b)² + (z1 - c)² = S1²
(x2 - a)² + (y2 - b)² + (z2 - c)² = S2²
(x3 - a)² + (y3 - b)² + (z3 - c)² = S3²
solving the equation set to obtain the coordinates (a, b, c) of the Bluetooth transmitting node, and obtaining the coordinates (a, b, c) of the positioning position of the target object;
when the target object is determined to enter the monitoring area for the first time, outputting a first response signal;
after confirming that the first response signal is output, according to the positioning position and the position of the monitoring camera, enabling the monitoring camera to focus on the target object so as to acquire a fine video picture of the target object;
according to the positioning position and the position of the monitoring camera, making the monitoring camera focus on the target object specifically comprises: based on the space rectangular coordinate system, acquiring the current transverse rotation angle α and longitudinal rotation angle β of the monitoring camera;

according to the transverse rotation angle α and the longitudinal rotation angle β, determining coordinates of a lens center point of the monitoring camera, coordinates of a CCD center point of the monitoring camera and coordinates of a physical center point of the monitoring camera;
the method comprises the steps of recording a lens center point of a monitoring camera as E, recording a CCD center point of the monitoring camera as F, recording a positioning position of a target object as M, and recording a physical center point of the monitoring camera as N; wherein, the physical center point of the monitoring camera means that the position of the point is not changed when the monitoring camera transversely rotates or longitudinally rotates;
calculating a space included angle θ from the vector FE and the vector NM; decomposing the space included angle θ into a transverse included angle θ1 and a longitudinal included angle θ2; calculating the difference between the transverse included angle θ1 and the current transverse rotation angle α to obtain a target transverse included angle; calculating the difference between the longitudinal included angle θ2 and the current longitudinal rotation angle β to obtain a target longitudinal included angle;
controlling the monitoring camera to rotate transversely by the angle value of the target transverse included angle and to rotate longitudinally by the angle value of the target longitudinal included angle, so as to aim the monitoring camera at the target object;
after the monitoring camera has rotated, determining the coordinates of the lens center point of the monitoring camera according to the target transverse included angle and the target longitudinal included angle, these coordinates being the target coordinates; calculating the distance between the lens center point of the monitoring camera and the target object from the target coordinates and the coordinates of the positioning position of the target object, and recording this distance as the target distance;
Adjusting the lens structure of the monitoring camera according to the target distance so that the target object falls on the focusing surface of the monitoring camera to realize focusing on the target object;
judging whether the target object leaves the monitoring area, and outputting a second response signal when the target object is determined to leave the monitoring area;
marking a time point generated by the monitoring video with the first response signal according to the first response signal, wherein the time point is a starting time point;
marking the time point of the monitoring video generated by the second response signal according to the second response signal, wherein the mark is an ending time point;
and intercepting the monitoring video from the starting time point to the ending time point to generate a target video.
2. The method for generating a target video for monocular monitoring according to claim 1, further comprising, after generating the target video, the steps of: and restoring the angle of the monitoring camera and the lens structure to the initial setting.
3. The method for generating a target video for monocular surveillance according to claim 1, wherein determining whether the target object enters the surveillance area according to the positioning position specifically comprises: comparing the positioning position with the monitoring area, and considering that the target object enters the monitoring area when the positioning position first belongs to the monitoring area.
4. The method for generating a target video for monocular monitoring according to claim 2, wherein determining whether the target object leaves the monitoring area, and outputting the second response signal when it is determined that the target object leaves the monitoring area, comprises: when the positioning position first belongs to the monitoring area and then first belongs to the non-monitoring area, the target object is considered to leave the monitoring area, and a second response signal is generated.
5. The method for generating a target video for monocular monitoring according to claim 1, wherein the capturing the monitoring video from a start time point to an end time point, and generating the target video specifically comprises: copying the monitoring video to obtain a monitoring video copy, and intercepting the monitoring video copy from a starting time point to an ending time point to obtain a target video.
6. The method for generating a target video for monocular monitoring according to claim 1, further comprising, after generating the target video, the steps of: and sending the target video to a remote backup storage server.
7. The method of claim 6, further comprising clearing the start time point and the end time point of the monitored video after confirming that the target video has been stored.
8. A target video generating apparatus for monocular monitoring, comprising:
a processor;
a memory for storing a computer readable program;
the computer readable program, when executed by the processor, causes the processor to implement the target video generation method of monocular monitoring as claimed in any one of claims 1 to 7.
9. A target video generation system for monocular surveillance, comprising: the device comprises a first judging module, a determining module, an adjusting module, a second judging module, a first marking module, a second marking module and a generating module;
the first judging module is used for determining the positioning position of the target object and determining whether the target object enters a monitoring area according to the positioning position;
the determining the positioning position of the target object specifically includes: determining the positioning position of a target object by a Bluetooth three-point positioning method; the target object wears the Bluetooth transmitting node, the position of the Bluetooth transmitting node worn by the target object is determined through three preset anchor nodes,
the three anchor nodes are a first anchor node, a second anchor node and a third anchor node respectively, and a space rectangular coordinate system is established by taking a rotating base plate of the monitoring camera as an origin;
In the space rectangular coordinate system, the coordinates of the first anchor node are (x1, y1, z1), the coordinates of the second anchor node are (x2, y2, z2), and the coordinates of the third anchor node are (x3, y3, z3); the coordinates of the Bluetooth transmitting node are defined as (a, b, c);

each anchor node receives the signal sent by the Bluetooth transmitting node and measures its real-time signal strength, and by combining the measured strengths with the RSSI ranging method and the signal reception strength L per meter, the following distances are obtained: the distance between the Bluetooth transmitting node and the first anchor node is S1, the distance between the Bluetooth transmitting node and the second anchor node is S2, and the distance between the Bluetooth transmitting node and the third anchor node is S3;

establishing the equation set:

(x1 - a)² + (y1 - b)² + (z1 - c)² = S1²
(x2 - a)² + (y2 - b)² + (z2 - c)² = S2²
(x3 - a)² + (y3 - b)² + (z3 - c)² = S3²
solving the equation set to obtain the coordinates (a, b, c) of the Bluetooth transmitting node, and obtaining the coordinates (a, b, c) of the positioning position of the target object;
the determining module is used for outputting a first response signal when determining that the target object enters the monitoring area for the first time;
the adjusting module is used for, after confirming that the first response signal has been output, making the monitoring camera focus on the target object according to the positioning position and the position of the monitoring camera, so as to obtain a clear video picture of the target object;
making the monitoring camera focus on the target object according to the positioning position and the position of the monitoring camera specifically comprises: based on the spatial rectangular coordinate system, acquiring the current transverse rotation angle α and longitudinal rotation angle β of the monitoring camera;
according to the transverse rotation angle α and the longitudinal rotation angle β, determining the coordinates of the lens centre point of the monitoring camera, the coordinates of the CCD centre point of the monitoring camera and the coordinates of the physical centre point of the monitoring camera;
wherein the lens centre point of the monitoring camera is denoted E, the CCD centre point of the monitoring camera is denoted F, the positioning position of the target object is denoted M, and the physical centre point of the monitoring camera is denoted N; the physical centre point of the monitoring camera is the point whose position does not change when the monitoring camera rotates transversely or longitudinally;
calculating a spatial included angle θ from the vectors determined by the points E, F, M and N defined above;
decomposing the spatial included angle θ into a transverse included angle θ1 and a longitudinal included angle θ2; calculating the difference between the transverse included angle θ1 and the current transverse rotation angle of the monitoring camera to obtain the target transverse included angle;
calculating the difference between the longitudinal included angle θ2 and the current longitudinal rotation angle of the monitoring camera to obtain the target longitudinal included angle; controlling the monitoring camera to rotate transversely by the angle value of the target transverse included angle and to rotate longitudinally by the angle value of the target longitudinal included angle, thereby aiming the monitoring camera at the target object;
after the monitoring camera has rotated, determining the coordinates of the lens centre point of the monitoring camera according to the target transverse included angle and the target longitudinal included angle, and recording these coordinates as the target coordinates; calculating, from the target coordinates and the coordinates of the positioning position of the target object, the distance between the lens centre point of the monitoring camera and the target object, and recording this distance as the target distance;
adjusting the lens structure of the monitoring camera according to the target distance so that the target object falls on the focusing plane of the monitoring camera, thereby achieving focusing on the target object;
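The focusing adjustment can be illustrated with the thin-lens relation 1/f = 1/u + 1/v, an assumption of this sketch rather than anything the claim specifies: given the target distance u, it yields the lens-to-sensor distance v at which the target falls on the focusing plane. A real camera would map v to a focus-motor position via calibration.

```python
def image_distance_mm(focal_length_mm, target_distance_mm):
    """Thin-lens equation: solve 1/f = 1/u + 1/v for the image distance
    v = f*u / (u - f), i.e. the lens-to-sensor spacing that places the
    target object on the focusing plane."""
    u, f = target_distance_mm, focal_length_mm
    return f * u / (u - f)

v_near = image_distance_mm(50.0, 100.0)    # target at u = 2f -> v = 2f
v_far = image_distance_mm(50.0, 5000.0)    # distant target -> v close to f
```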
the second judging module is used for: judging whether the target object leaves the monitoring area, and outputting a second response signal when the target object is determined to leave the monitoring area;
the first marking module is used for: according to the first response signal, marking in the monitoring video the time point at which the first response signal is generated as the starting time point;
the second marking module is used for: according to the second response signal, marking in the monitoring video the time point at which the second response signal is generated as the ending time point;
the generating module is used for: intercepting the monitoring video from the starting time point to the ending time point to generate the target video.
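The interception step can be sketched as mapping the two marked time points onto frame indices of the recorded stream. The timestamps and frame-rate handling below are illustrative assumptions, not the API of any particular video library.

```python
def clip_frame_range(start_ts, end_ts, video_start_ts, fps):
    """Map the marked starting/ending time points (seconds) onto frame
    indices of the monitoring video, so the target video can be cut out
    as frames[first : last + 1]."""
    first = round((start_ts - video_start_ts) * fps)
    last = round((end_ts - video_start_ts) * fps)
    return first, last

# Recording starts at t=100 s; signals mark t=102 s and t=110 s at 25 fps:
rng = clip_frame_range(102.0, 110.0, 100.0, 25)
```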
10. A computer-readable storage medium in which a processor-executable program is stored, wherein the program, when executed by a processor, implements the target video generation method for monocular monitoring according to any one of claims 1 to 7.
CN202310966123.XA 2023-03-09 2023-08-02 Target video generation method, device, system and medium for monocular monitoring Active CN117221483B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2023102185783 2023-03-09
CN202310218578.3A CN116074627A (en) 2023-03-09 2023-03-09 Target video generation method, device, system and medium based on monitoring

Publications (2)

Publication Number Publication Date
CN117221483A true CN117221483A (en) 2023-12-12
CN117221483B CN117221483B (en) 2024-03-19

Family

ID=86183818

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310218578.3A Pending CN116074627A (en) 2023-03-09 2023-03-09 Target video generation method, device, system and medium based on monitoring
CN202310966123.XA Active CN117221483B (en) 2023-03-09 2023-08-02 Target video generation method, device, system and medium for monocular monitoring

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310218578.3A Pending CN116074627A (en) 2023-03-09 2023-03-09 Target video generation method, device, system and medium based on monitoring

Country Status (1)

Country Link
CN (2) CN116074627A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109120904A (en) * 2018-10-19 2019-01-01 武汉星巡智能科技有限公司 Binocular camera monitoring method, device and computer readable storage medium
CN110243339A (en) * 2019-06-25 2019-09-17 重庆紫光华山智安科技有限公司 Monocular camera positioning method, device, readable storage medium and electronic terminal
CN114567728A (en) * 2022-03-10 2022-05-31 上海市政工程设计研究总院(集团)有限公司 Video tracking method, system, electronic device and storage medium
CN114900661A (en) * 2022-05-10 2022-08-12 上海浦东发展银行股份有限公司 Monitoring method, device, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510656B (en) * 2020-07-02 2020-10-27 北京梦天门科技股份有限公司 Law enforcement video intercepting method, device, electronic device and storage medium
CN111968315A (en) * 2020-08-31 2020-11-20 中国银行股份有限公司 ATM monitoring method and device, storage medium and electronic equipment
CN112489280A (en) * 2020-11-20 2021-03-12 国网山东省电力公司五莲县供电公司 Transformer substation personal safety monitoring method, system, terminal and storage medium
CN113593074B (en) * 2021-07-15 2023-09-22 盛景智能科技(嘉兴)有限公司 Method and device for generating monitoring video
CN114037934A (en) * 2021-11-01 2022-02-11 西安诚迈软件科技有限公司 Method for identifying wearing behavior of industrial garment, terminal device and storage medium
CN114245033A (en) * 2021-11-03 2022-03-25 浙江大华技术股份有限公司 Video synthesis method and device


Also Published As

Publication number Publication date
CN116074627A (en) 2023-05-05
CN117221483B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
KR101826897B1 (en) Method and camera for determining an image adjustment parameter
US9521311B2 (en) Quick automatic focusing method and image acquisition apparatus
CN103988227A (en) Method and apparatus for image capture targeting
US20070018977A1 (en) Method and apparatus for generating a depth map
KR101530255B1 (en) Cctv system having auto tracking function of moving target
EP2426909B1 (en) Determination of reliability of positional information associated with a photographed image
JP2020031421A (en) Method and camera system combining views from plurality of cameras
US20200267309A1 (en) Focusing method and device, and readable storage medium
JP2011166264A (en) Image processing apparatus, imaging device and image processing method, and program
US10277888B2 (en) Depth triggered event feature
CN113572958B (en) Method and equipment for automatically triggering camera to focus
CN104333694B (en) A method for preventing photo cheating during shop patrol visits
CN108063909B (en) Video conference system, image tracking and collecting method and device
CN110602376B (en) Snapshot method and device and camera
JP2009004873A (en) Camera control system and method, program and storage medium
CN117221483B (en) Target video generation method, device, system and medium for monocular monitoring
CN105467741A (en) Panoramic shooting method and terminal
CN113329171A (en) Video processing method, device, equipment and storage medium
WO2022147703A1 (en) Focus following method and apparatus, and photographic device and computer-readable storage medium
CN111917989A (en) Video shooting method and device
CN113840087B (en) Sound processing method, sound processing device, electronic equipment and computer readable storage medium
CN112711966B (en) Video file processing method and device and electronic equipment
WO2023189081A1 (en) Image processing device, image processing method, and program
CN110290315B (en) Tracking focusing method, apparatus, camera and medium for pan-tilt camera
KR102552071B1 (en) Image transmission apparatus and method for transmitting image shooted by cctv to manager terminal through network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant