CN113591651A - Image capturing method, image display method and device and storage medium - Google Patents

Image capturing method, image display method and device and storage medium

Info

Publication number
CN113591651A
Authority
CN
China
Prior art keywords
target object
video monitoring
frame
target
panoramic picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110831445.4A
Other languages
Chinese (zh)
Inventor
吴允
蔡合瑶
谢俞胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110831445.4A priority Critical patent/CN113591651A/en
Publication of CN113591651A publication Critical patent/CN113591651A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The embodiment of the application provides an image capturing method, an image display method and device, and a storage medium, belonging to the technical field of image processing. In the image capturing method, if a target object triggering an alarm rule exists in a video monitoring frame, the target object is identified in a plurality of subsequent continuous video monitoring frames; after a capture rule is satisfied, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule is captured, and the panoramic picture and the identification result of that video monitoring frame are sent to the back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the identification result. Therefore, only one panoramic picture needs to be captured, and the target object triggering the alarm rule is tracked in real time in the video monitoring frames, so that the back end can accurately extract the target frame including the target object.

Description

Image capturing method, image display method and device and storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a method for capturing an image, an image display method, an image display device and a storage medium.
Background
In video monitoring, a camera is used to detect target objects such as pedestrians, motor vehicles and non-motor vehicles. When a target object is detected to trigger a certain rule, such as a tripwire (line-crossing) rule, an area intrusion rule or an abandoned-object rule, a target frame including the target object and the corresponding panoramic picture need to be captured to preserve the corresponding evidence.
In the related art, two schemes are commonly adopted to capture the image. The first scheme captures one panoramic picture at the current moment when an event occurs, that is, when a target object is detected to trigger a rule; its disadvantage is that the specific target object triggering the rule cannot be marked out on the panoramic picture. The second scheme precisely captures, when an event occurs, one panoramic picture corresponding to the event occurrence time together with the corresponding target frame including the target object; its disadvantages are that, when many events occur, pictures may be lost due to the limitation of hardware resources and network bandwidth, and a large amount of storage space is consumed to store the pictures.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present application provide a method for capturing an image, an image display method and device, and a storage medium, which can ensure that a target frame including a target object is accurately extracted from the panoramic picture while reducing the storage space required for storing pictures.
In order to achieve the above purpose, the technical solution of the embodiment of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a method for capturing an image, where the method includes:
if a target object triggering an alarm rule exists in the video monitoring frames, identifying the target object in a plurality of subsequent continuous video monitoring frames;
after the capture rule is met, capturing the panoramic picture of the video monitoring frame at the corresponding moment meeting the capture rule; the video monitoring frame meeting the corresponding moment of the capturing rule is the last video monitoring frame in the continuous multiple video monitoring frames; the target object is included in each of the continuous video monitoring frames;
and sending the identification result of the video monitoring frame at the moment corresponding to the panoramic picture and the capturing rule to a back end so that the back end extracts a target frame comprising the target object from the panoramic picture according to the identification result.
According to the method for capturing an image provided by the embodiment of the application, if a target object triggering an alarm rule exists in a video monitoring frame, the target object is identified in a plurality of subsequent continuous video monitoring frames; after the capture rule is satisfied, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule is captured, and the panoramic picture and the identification result of that video monitoring frame are sent to the back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the identification result. In other words, the capture is not performed immediately when the alarm rule is triggered; instead, the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule is captured, and the target object is tracked in real time after it triggers the alarm rule. Therefore, only one panoramic picture needs to be captured, and the back end can still accurately extract the target frame including the target object.
In an optional embodiment, if there is a target object triggering an alarm rule in a video surveillance frame, identifying the target object in a subsequent consecutive plurality of video surveillance frames includes:
if a target object triggering an alarm rule exists in a video monitoring frame, respectively determining a target identifier and a position coordinate corresponding to the target object; the position coordinate corresponding to the target object is determined according to the first position of the target object in the video monitoring frame;
according to the target identification corresponding to the target object, identifying the target object in a plurality of subsequent continuous video monitoring frames, and respectively determining second positions of the target object in the plurality of continuous video monitoring frames;
each time one of the second locations is determined, the location coordinates are updated in accordance with the determined second location.
In this embodiment, if a target object triggering an alarm rule exists in a video surveillance frame, a target identifier and a position coordinate corresponding to the target object are respectively determined, where the position coordinate corresponding to the target object is determined according to a first position of the target object in the video surveillance frame, and the target object is identified in subsequent consecutive video surveillance frames according to the target identifier corresponding to the target object, second positions of the target object in the consecutive video surveillance frames are respectively determined, and the position coordinate is updated according to the determined second positions each time one second position is determined. The position of the target object in the video monitoring frame can be tracked in real time, so that the accuracy of updating the position coordinate corresponding to the target object can be ensured.
In an optional embodiment, the identifying, according to a target identifier corresponding to the target object, the target object in subsequent consecutive video surveillance frames, and respectively determining second positions of the target object in the consecutive video surveillance frames includes:
for each video surveillance frame in the continuous plurality of video surveillance frames, respectively performing the following operations:
identifying the video monitoring frame and determining each object in the video monitoring frame;
and matching the target object with each object according to the target identification corresponding to the target object, and determining a second position of the target object in the video monitoring frame.
In this embodiment, for each video surveillance frame in the consecutive plurality of video surveillance frames, the following operations may be performed, respectively: and identifying the video monitoring frame, determining each object in the video monitoring frame, matching the target object with each object according to the target identification corresponding to the target object, and determining the second position of the target object in the video monitoring frame. Therefore, the position of the target object can be ensured to be tracked in real time in the video monitoring frame.
In an optional embodiment, the sending, to the back end, the recognition result of the video monitoring frame at the time corresponding to the capture rule includes:
sending the target identification and the position coordinate corresponding to the target object to a back end; and the position coordinate corresponding to the target object is determined according to the second position of the target object in the video monitoring frame at the moment corresponding to the capture rule.
In this embodiment, the target identifier and the position coordinate corresponding to the target object may be sent to the back end, where the position coordinate corresponding to the target object is determined according to the second position of the target object in the video surveillance frame at the time corresponding to the capture rule. The target identification and the position coordinate corresponding to the target object can be sent to the back end, so that the back end can accurately extract the target frame comprising the target object from the panoramic picture according to the target identification and the position coordinate corresponding to the target object.
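For illustration, a minimal sketch of what such a message to the back end might contain is shown below; the field names and the JSON encoding are assumptions made for this example and are not defined by the patent.

```python
import json

# Hypothetical payload sent to the back end together with the captured
# panoramic picture; all field names here are illustrative assumptions.
recognition_result = {
    "frame_timestamp_ms": 1626853200123,  # moment the capture rule was satisfied
    "targets": [
        {
            "target_id": "ID1",                       # target identifier
            "position": {"x": 412, "y": 230,          # second position of the target
                         "width": 96, "height": 180}, # in that video monitoring frame
            "alarm_rule": "area_intrusion",
        }
    ],
}

print(json.dumps(recognition_result, indent=2))
```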
In an optional embodiment, the capturing the panoramic picture of the video surveillance frame at the time corresponding to the capturing rule includes:
if the time for triggering the alarm rule reaches the preset time, capturing a panoramic picture of the video monitoring frame corresponding to the preset time; and/or
And if the target object cannot be identified in one video monitoring frame of the continuous video monitoring frames and the target object is identified in the previous video monitoring frame of the video monitoring frame, capturing a panoramic picture of the previous video monitoring frame.
In this embodiment, if the time since the alarm rule was triggered reaches the preset time, the panoramic picture of the video surveillance frame corresponding to the preset time is captured; and/or, if the target object cannot be identified in one video surveillance frame of the plurality of consecutive video surveillance frames while it was identified in the previous video surveillance frame, the panoramic picture of that previous video surveillance frame is captured. That is, the capture rule is that the time since the alarm rule was triggered reaches the preset time and/or the target object disappears from the video in advance, which ensures that the target object exists in the video monitoring frame whose panoramic picture is captured.
In a second aspect, an embodiment of the present application further provides an image displaying method, where the method includes:
receiving a panoramic picture sent by video monitoring equipment and an identification result of a video monitoring frame corresponding to the panoramic picture, wherein the panoramic picture is obtained by capturing the last video monitoring frame in a plurality of continuous video monitoring frames after the video monitoring frame of a target object with a triggering alarm rule exists, and the identification result is determined after the target object in the last video monitoring frame is identified;
and extracting a target frame comprising a target object from the panoramic picture according to the identification result, and displaying the panoramic picture and the target frame.
According to the image display method provided by the embodiment of the application, a panoramic picture sent by the video monitoring device and the identification result of the video monitoring frame corresponding to the panoramic picture can be received, where the panoramic picture is obtained by capturing the last video monitoring frame of a plurality of continuous video monitoring frames following the video monitoring frame in which a target object triggers the alarm rule, and the identification result is determined after identifying the target object in that last video monitoring frame; a target frame including the target object is then extracted from the panoramic picture according to the identification result, and the panoramic picture and the target frame are displayed. Since only one panoramic picture needs to be captured, and the target frame including the target object triggering the alarm rule is extracted from that panoramic picture, the storage space required for storing pictures can be reduced, relieving the storage pressure on the related devices.
In a third aspect, an embodiment of the present application further provides an apparatus for capturing an image, including:
the target object identification unit is used for identifying the target object in a plurality of subsequent continuous video monitoring frames if the target object triggering the alarm rule exists in the video monitoring frames;
the image capturing unit is used for capturing the panoramic picture of the video monitoring frame at the corresponding moment meeting the capturing rule after meeting the capturing rule; the video monitoring frame meeting the corresponding moment of the capturing rule is the last video monitoring frame in the continuous multiple video monitoring frames; the target object is included in each of the continuous video monitoring frames;
and the image sending unit is used for sending the identification result of the video monitoring frame at the moment corresponding to the panoramic picture and the capturing rule to a rear end so that the rear end extracts a target frame comprising the target object from the panoramic picture according to the identification result.
In an optional embodiment, the target object identification unit is specifically configured to:
if a target object triggering an alarm rule exists in a video monitoring frame, respectively determining a target identifier and a position coordinate corresponding to the target object; the position coordinate corresponding to the target object is determined according to the first position of the target object in the video monitoring frame;
according to the target identification corresponding to the target object, identifying the target object in a plurality of subsequent continuous video monitoring frames, and respectively determining second positions of the target object in the plurality of continuous video monitoring frames;
each time one of the second locations is determined, the location coordinates are updated in accordance with the determined second location.
In an optional embodiment, the target object identification unit is further configured to:
for each video surveillance frame in the continuous plurality of video surveillance frames, respectively performing the following operations:
identifying the video monitoring frame and determining each object in the video monitoring frame;
and matching the target object with each object according to the target identification corresponding to the target object, and determining a second position of the target object in the video monitoring frame.
In an optional embodiment, the image sending unit is specifically configured to:
sending the target identification and the position coordinate corresponding to the target object to a back end; and the position coordinate corresponding to the target object is determined according to the second position of the target object in the video monitoring frame at the moment corresponding to the capture rule.
In an optional embodiment, the image capture unit is specifically configured to:
if the time for triggering the alarm rule reaches the preset time, capturing a panoramic picture of the video monitoring frame corresponding to the preset time; and/or
And if the target object cannot be identified in one video monitoring frame of the continuous video monitoring frames and the target object is identified in the previous video monitoring frame of the video monitoring frame, capturing a panoramic picture of the previous video monitoring frame.
In a fourth aspect, an embodiment of the present application further provides an image display apparatus, including:
the system comprises an image receiving unit, a processing unit and a processing unit, wherein the image receiving unit is used for receiving a panoramic picture sent by video monitoring equipment and an identification result of a video monitoring frame corresponding to the panoramic picture, the panoramic picture is obtained by grabbing a last video monitoring frame in a plurality of continuous video monitoring frames after the video monitoring frame of a target object with a triggering alarm rule exists, and the identification result is determined after the target object in the last video monitoring frame is identified;
and the image display unit is used for extracting a target frame comprising a target object from the panoramic picture according to the identification result and displaying the panoramic picture and the target frame.
In a fifth aspect, this application embodiment further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method for capturing an image according to the first aspect is implemented.
In a sixth aspect, the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the image displaying method of the second aspect is implemented.
In a seventh aspect, an embodiment of the present application further provides a video monitoring apparatus, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, the processor is enabled to implement the method for capturing an image according to the first aspect.
In an eighth aspect, the present application further provides an image displaying apparatus, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, the processor is enabled to implement the image displaying method of the second aspect.
For technical effects brought by any one implementation manner of the third aspect, the fifth aspect, and the seventh aspect, reference may be made to technical effects brought by a corresponding implementation manner of the first aspect, and details are not repeated here.
For technical effects brought by any one implementation manner of the fourth aspect, the sixth aspect, and the eighth aspect, reference may be made to technical effects brought by a corresponding implementation manner of the second aspect, and details are not described here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic structural diagram of an image processing system according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a video monitoring apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of another image displaying apparatus provided in the embodiment of the present application;
fig. 4 is a flowchart of a method for capturing an image according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of another method for capturing images according to an embodiment of the present disclosure;
fig. 6 is a flowchart of an image displaying method according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of another image displaying method according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of a video surveillance frame according to an embodiment of the present application;
fig. 9 is a schematic diagram of an image capture result according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an image display apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that references in the specification of the present application to the terms "comprises" and "comprising," and variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solutions provided by the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The word "exemplary" is used hereinafter to mean "serving as an example, embodiment, or illustration. Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
Fig. 1 schematically shows a structure of an image processing system. As shown in fig. 1, the image processing system may include a video monitoring apparatus 100 and an image presentation apparatus 200.
The video monitoring apparatus 100 may be any apparatus capable of implementing the method for capturing images proposed in the present application, for example, the video monitoring apparatus 100 may be a camera. In this embodiment, the structure of video surveillance apparatus 100 may be as shown in FIG. 2, including memory 101, sending component 103, and one or more processors 102.
A memory 101 for storing a computer program for execution by the processor 102. The memory 101 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, a program required for running an instant messaging function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 101 may be a volatile memory, such as a random-access memory (RAM); the memory 101 may also be a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or the memory 101 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 101 may also be a combination of the above memories.
The processor 102 may include one or more Central Processing Units (CPUs), or be a digital processing unit, etc. And the processor 102 is configured to implement the method for capturing an image provided by the embodiment of the present application when calling the computer program stored in the memory 101.
The sending component 103 is configured to send the panoramic picture and the recognition result of the video surveillance frame at the moment corresponding to the capture rule to the image display apparatus 200.
The specific connection medium among the memory 101, the sending component 103 and the processor 102 is not limited in the embodiments of the present application. In the embodiment of the present application, the memory 101 and the processor 102 are connected by the bus 104 in fig. 2, the bus 104 is represented by a thick line in fig. 2, and the connection manner between other components is merely illustrative and is not limited thereto. The bus 104 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 2, but it is not intended that there be only one bus or one type of bus.
The image presentation apparatus 200 may be any apparatus capable of implementing the image presentation method proposed by the present application, and the image presentation apparatus 200 may be a backend. For example, the image presentation apparatus 200 may be an NVR (Network Video Recorder), a DVR (Digital Video Recorder), an IVSS (Intelligent Video Surveillance System), a platform, or the like. In this embodiment, the structure of the image presentation apparatus 200 may be as shown in fig. 3, including a memory 201, a receiving component 203, and one or more processors 202.
A memory 201 for storing a computer program executed by the processor 202. The memory 201 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, a program required for running an instant messaging function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 201 may be a volatile memory, such as a random-access memory (RAM); the memory 201 may also be a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or the memory 201 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 201 may also be a combination of the above memories.
The processor 202 may include one or more Central Processing Units (CPUs), or be a digital processing unit, etc. The processor 202 is configured to implement the image display method provided in the embodiment of the present application when the computer program stored in the memory 201 is called.
The receiving component 203 is configured to receive the identification result of the panoramic picture and the corresponding video surveillance frame sent by the video surveillance device 100.
The specific connection medium among the memory 201, the receiving component 203 and the processor 202 is not limited in the embodiments of the present application. In the embodiment of the present application, the memory 201 and the processor 202 are connected by the bus 204 in fig. 3, the bus 204 is represented by a thick line in fig. 3, and the connection manner between other components is merely illustrative and is not limited thereto. The bus 204 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.
The video monitoring apparatus 100 may be directly connected to the image display apparatus 200 or communicatively connected via a network, or may be communicatively connected via other methods, which are not limited herein.
In some embodiments, a flowchart of the method for capturing images performed by the video surveillance apparatus 100 may be seen in fig. 4, and includes the following steps:
step S401, if there is a target object triggering an alarm rule in the video surveillance frame, identifying the target object in a plurality of subsequent consecutive video surveillance frames.
If a target object triggering an alarm rule exists in the video monitoring frame, a target identifier and a position coordinate corresponding to the target object can be respectively determined. The position coordinate corresponding to the target object is determined according to the first position of the target object in the video monitoring frame. Alarm rules may include tripwire (line-crossing) rules, area intrusion rules, abandoned-object rules, and the like.
Then, for each video surveillance frame in the continuous plurality of video surveillance frames, the following operations may be performed respectively: and identifying the video monitoring frame, determining each object in the video monitoring frame, matching the target object with each object according to the target identification corresponding to the target object, and determining the second position of the target object in the video monitoring frame.
And, each time a second location is determined, the location coordinates corresponding to the target object may be updated according to the determined second location.
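As an illustration of this per-frame matching step, the sketch below assumes the device's detector returns a list of (object identifier, bounding box) pairs per frame; the detector interface and the (x, y, width, height) box format are assumptions, not details from the patent.

```python
from typing import Dict, List, Optional, Tuple

Box = Tuple[int, int, int, int]  # assumed (x, y, width, height) format

def update_target_position(
    target_id: str,
    detected_objects: List[Tuple[str, Box]],  # objects identified in the current frame
    position_coordinates: Dict[str, Box],     # target identifier -> latest coordinates
) -> Optional[Box]:
    """Match the tracked target against the objects in one video monitoring frame
    and, if it is found, update its position coordinate (the 'second position')."""
    for object_id, box in detected_objects:
        if object_id == target_id:
            position_coordinates[target_id] = box
            return box
    return None  # the target was not identified in this frame (it may have disappeared)
```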
Step S402, after the capture rule is satisfied, capturing the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule.
The video monitoring frame meeting the corresponding moment of the capturing rule is the last video monitoring frame in the plurality of continuous video monitoring frames, and the plurality of continuous video monitoring frames all comprise the target object.
In an embodiment, if the time for triggering the alarm rule reaches a preset time, capturing a panoramic picture of a video surveillance frame corresponding to the preset time, where the video surveillance frame corresponding to the preset time includes a target object.
In another embodiment, if the target object cannot be identified in one video surveillance frame of a plurality of continuous video surveillance frames and the target object is identified in the previous video surveillance frame of the one video surveillance frame, the panoramic picture of the previous video surveillance frame is captured.
In another embodiment, if the time for triggering the alarm rule reaches the preset time and the target object cannot be identified in one video surveillance frame of the continuous video surveillance frames and the target object is identified in the previous video surveillance frame of the video surveillance frame, capturing the panoramic picture of the video surveillance frame corresponding to the current moment.
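A minimal sketch of the two capture conditions just described; the argument names and the way elapsed time is supplied are assumptions for illustration.

```python
def capture_rule_satisfied(
    elapsed_ms: float,              # time since the alarm rule was triggered
    preset_ms: float,               # the preset time (the delay N), in milliseconds
    found_in_current_frame: bool,   # was the target identified in the current frame?
    found_in_previous_frame: bool,  # was it identified in the previous frame?
) -> bool:
    """Return True when the panoramic picture should be captured: either the
    preset delay has been reached, or the target object disappeared early."""
    delay_reached = elapsed_ms >= preset_ms
    disappeared_early = (not found_in_current_frame) and found_in_previous_frame
    return delay_reached or disappeared_early
```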
Step S403, sending the identification result of the video monitoring frame at the moment corresponding to the panoramic picture and the capture rule to the back end, so that the back end extracts a target frame including a target object from the panoramic picture according to the identification result.
The panoramic picture, and the target identification and the position coordinates corresponding to the target object can be sent to the back end. The panoramic picture is obtained by capturing the video monitoring frame meeting the capturing rule at the corresponding moment, and the position coordinate corresponding to the target object is determined according to the second position of the target object in the video monitoring frame at the corresponding moment of the capturing rule.
In the method for capturing an image provided by the embodiment of the application, when it is detected that a target object triggering the alarm rule exists in a video monitoring frame, that is, when an event occurs, the panoramic picture of that video monitoring frame is not captured immediately. Instead, capturing is delayed by N milliseconds, that is, until the capture rule is satisfied, and the target object is identified in each of the continuous video monitoring frames within those N milliseconds. In other words, within the N milliseconds, each video monitoring frame updates the position coordinates of the target object according to the target identifier corresponding to the target object, so that all target objects for which events occurred within the N milliseconds are mapped onto the Mth video monitoring frame corresponding to the Nth millisecond. When the Nth millisecond is reached, the Mth video monitoring frame is captured, and the target identifiers and position coordinates of all target objects in the Mth video monitoring frame are sent to the back end. This ensures that the positions of all target objects in the Mth video monitoring frame are relatively accurate, so that the back end can accurately extract the target frames that respectively include these target objects. Moreover, because only the Mth video monitoring frame is captured, the storage space required for storing pictures is reduced, which avoids the problem that not all target objects can be captured in real time when the performance of the related device is insufficient.
The delay N used by the capture rule can be calculated as follows:
[The formula for N is given only as an image (formula figure BDA0003175702150000121) in the published text; it is expressed in terms of the quantities Cs, Cp, Cf, Ct and Nt described below.]
The value range of N is N ≥ 0; when N = 0, the capture rule is satisfied immediately and the video monitoring frame is captured at once. Cs is the intelligent frame rate, that is, the actual frame rate configured for the device performing intelligent detection on the video monitoring frames; different values may be configured according to the device and the service requirement, for example 8, 12, 16 or 24. Cp is a comprehensive evaluation index of the device hardware, with a value range of {0, 1}, obtained from indexes of the device such as the CPU main frequency Qa (in MHz), the number of CPUs Qn, the bandwidth Qbw (in GB/s) and the computing power Qt (in TOPS); the calculation formula is Cp = Qa/900 × Qn + Qbw/3.6 + Qt/0.5. Cf is a comprehensive evaluation coefficient of the device performance indexes, determined from the maximum number of events Mf in a single video monitoring frame and the maximum number of alarms per second Mt: Cf = (Mf + Mt)/16. Ct is the time consumed by the intelligent processing of each frame in the current operating environment, that is, the time consumed to process one video monitoring frame of data; this parameter can be computed while the device is running.
Nt applies when the target object disappears before N milliseconds have elapsed after triggering the alarm rule; it is the time from the video monitoring frame in which the target object triggers the alarm rule to the video monitoring frame in which the target object disappears.
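The formula combining these quantities into N is published only as an image and is not reproduced here; the sketch below merely evaluates the two auxiliary indices Cp and Cf from the formulas given above, with invented example parameter values.

```python
def hardware_index_cp(qa_mhz: float, qn: int, qbw_gbps: float, qt_tops: float) -> float:
    """Comprehensive hardware evaluation index: Cp = Qa/900 * Qn + Qbw/3.6 + Qt/0.5."""
    return qa_mhz / 900.0 * qn + qbw_gbps / 3.6 + qt_tops / 0.5

def performance_coefficient_cf(mf: int, mt: int) -> float:
    """Comprehensive performance coefficient: Cf = (Mf + Mt) / 16, where Mf is the
    maximum number of events per frame and Mt the maximum number of alarms per second."""
    return (mf + mt) / 16.0

# Invented example: a 1.8 GHz quad-core device with 3.6 GB/s bandwidth and 1 TOPS.
print(hardware_index_cp(1800, 4, 3.6, 1.0))  # -> 11.0
print(performance_coefficient_cf(8, 8))       # -> 1.0
```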
In other embodiments, a flowchart of the method for capturing images performed by the video surveillance apparatus 100 is further shown in fig. 5, and includes the following steps:
step S501, detecting a video monitoring frame.
And each object in the video monitoring frame can be detected in real time according to the set alarm rule.
Step S502, determining whether a target object triggering alarm rule exists in the video monitoring frame; if yes, go to step S503; if not, go to step S504.
Step S503, adding the target identifier and the position coordinate corresponding to the target object to the linked list.
Whether a target object in the video monitoring frame triggers an alarm rule is judged; if so, the target identifier and the position coordinate corresponding to the target object are acquired, where the position coordinate corresponding to the target object is determined according to the position of the target object in the video monitoring frame, and the target identifier and the position coordinate corresponding to the target object are added to the linked list.
Step S504, determine whether there is a target object triggering the alarm rule in the video monitoring frame; if yes, go to step S505; if not, step S501 is executed.
Step S505, updating the position coordinate corresponding to the target object in the linked list.
After the target identifier and the position coordinates corresponding to the target object are added to the linked list, the position coordinates corresponding to the target object in the linked list can be updated according to the position of the target object in the next video monitoring frame.
If no target object in the video monitoring frame newly triggers the alarm rule, it can be judged whether a target object that has already triggered the alarm rule exists in the video monitoring frame. If so, the position coordinate corresponding to that target object in the linked list can be updated according to the position of the target object in the video monitoring frame; if not, the next video monitoring frame can continue to be detected to determine whether a target object in the next video monitoring frame triggers an alarm rule.
Specifically, the process of updating the position coordinates corresponding to the target object in the linked list may be: and acquiring each object in the video monitoring frame, respectively matching the target identifier corresponding to the target object in the linked list with each object, determining whether the target identifier corresponding to the target object is the same as each object, and if so, updating the position coordinate corresponding to the target object in the linked list according to the position of the target object in the video monitoring frame.
Step S506, determining whether the grabbing rule is met; if yes, go to step S507; if not, step S501 is executed.
Step S507, capturing the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule.
Whether the capture rule is met can be judged, that is, whether the time since the alarm rule was triggered reaches the preset time, and/or whether there is a video monitoring frame in which the target object cannot be detected while the target object could still be detected in the previous video monitoring frame, in other words, whether the target object disappears in advance.
And if the capture rule is met, capturing the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule.
If the capture rule is not satisfied, the video monitoring frame can be continuously detected, and whether a target object triggers an alarm rule exists in the video monitoring frame or not is determined.
Step S508, sending the panoramic picture, and the target identification and position coordinate corresponding to the target object, to the back end.
The panoramic picture of the video surveillance frame at the moment corresponding to the capturing rule, and the target identifier and the position coordinate corresponding to the target object in the linked list can be sent to the back end, wherein the position coordinate is the position of the target object in the video surveillance frame at the moment corresponding to the capturing rule.
After the panoramic picture, the target identification and the position coordinate corresponding to the target object are sent to the back end, the linked list can be emptied.
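A condensed sketch of the device-side flow of fig. 5 is given below, using an ordinary Python dict in place of the linked list; the detector input format and the timing mechanism are assumptions made for illustration.

```python
import time
from typing import Dict, Tuple

Box = Tuple[int, int, int, int]  # assumed (x, y, width, height) format

class CaptureTracker:
    """Keeps the (target identifier -> position coordinate) table from the moment
    an alarm rule is triggered until the capture rule is satisfied (fig. 5)."""

    def __init__(self, preset_delay_ms: float) -> None:
        self.preset_delay_ms = preset_delay_ms
        self.positions: Dict[str, Box] = {}  # stands in for the linked list
        self.start_ms = 0.0

    def on_alarm_triggered(self, target_id: str, box: Box) -> None:
        """Step S503: add the target identifier and position coordinate."""
        if not self.positions:
            self.start_ms = time.monotonic() * 1000.0  # start counting time
        self.positions[target_id] = box

    def on_frame(self, detected: Dict[str, Box]) -> bool:
        """Steps S505/S506: update tracked targets from one frame and return True
        when the capture rule is met (delay reached, or a tracked target disappeared)."""
        disappeared = any(tid not in detected for tid in self.positions)
        for tid in list(self.positions):
            if tid in detected:
                self.positions[tid] = detected[tid]
        elapsed = time.monotonic() * 1000.0 - self.start_ms
        return bool(self.positions) and (elapsed >= self.preset_delay_ms or disappeared)

    def flush(self) -> Dict[str, Box]:
        """Step S508: return the identifiers and coordinates to send with the
        panoramic picture, and empty the table."""
        result, self.positions = self.positions, {}
        return result
```

In a real device, the capture itself (grabbing the panoramic picture of the current video monitoring frame) would take place between `on_frame` returning True and `flush` being called.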
In some embodiments, a flowchart of an image displaying method performed by the image displaying apparatus 200 may be shown in fig. 6, and includes the following steps:
step S601, receiving the panoramic picture sent by the video monitoring device and the identification result of the video monitoring frame corresponding to the panoramic picture.
The panoramic picture is obtained by capturing the last video monitoring frame in a plurality of continuous video monitoring frames after the video monitoring frame of the target object with the triggering alarm rule exists, and the identification result is determined after the target object in the last video monitoring frame is identified. The identification result is the position coordinate corresponding to the target object and the target identification corresponding to the target object, which are determined according to the position of the target object in the last video monitoring frame.
Step S602, extracting a target frame including a target object from the panoramic picture according to the identification result, and displaying the panoramic picture and the target frame.
A target frame including the target object can be extracted from the panoramic picture according to the target identification and the position coordinate corresponding to the target object, and the panoramic picture and the target frame can be displayed in a related interface in response to a user instruction to display the picture.
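A minimal sketch of this extraction step, cropping the target frame out of the received panoramic picture by array slicing; the (x, y, width, height) box format is the same assumption used earlier and is not specified by the patent.

```python
import numpy as np

def extract_target_frame(panorama: np.ndarray, box) -> np.ndarray:
    """Crop the region of the panoramic picture given by (x, y, width, height)."""
    x, y, w, h = box
    return panorama[y:y + h, x:x + w].copy()

# Example with a synthetic 720p "panoramic picture".
panorama = np.zeros((720, 1280, 3), dtype=np.uint8)
target_frame = extract_target_frame(panorama, (412, 230, 96, 180))
print(target_frame.shape)  # (180, 96, 3)
```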
In other embodiments, the flowchart of the image displaying method performed by the image displaying apparatus 200 can be further shown in fig. 7, and includes the following steps:
step S701, receiving a panoramic picture sent by a video monitoring device and an identification result of a video monitoring frame corresponding to the panoramic picture.
The panoramic picture is a panoramic picture of the video monitoring frame at the moment corresponding to the capture rule, and the identification result comprises a position coordinate corresponding to the target object and a target identification corresponding to the target object, wherein the position coordinate is determined according to the position of the target object in the video monitoring frame at the moment corresponding to the capture rule.
Step S702, according to the identification result, the attribute and the characteristic information of the target object are extracted.
According to the position coordinates and the target identification corresponding to the target object, the attribute and the characteristic information of the target object can be extracted from the panoramic picture, and the extracted attribute and the extracted characteristic information can be analyzed.
Step S703 of determining whether a target frame including a target object is displayed in the panoramic picture; if yes, go to step S704; if not, go to step S705.
Step S704, according to the recognition result, draws a target frame including the target object in the panoramic picture.
In response to an instruction of a user to display a target frame including a target object in the panoramic picture, the target frame including the target object may be drawn in the panoramic picture according to the position coordinates and the target identifier corresponding to the target object.
Step S705, determining whether to independently display the target frame; if yes, go to step S706; if not, step S707 is executed.
Step S706, cropping the target frame.
In response to an instruction of a user to individually present a target frame, after the target frame including a target object is drawn in the panoramic picture, the target frame may be cropped.
And step S707, displaying the panoramic picture, the target frame and the attribute and characteristic information of the target object in the interface.
After a target frame including a target object is drawn in the panoramic picture, the target frame, and the attribute and feature information of the target object may be displayed in a related interface.
Step S708, a target frame is displayed in the interface.
After the target frame is obtained by cropping, it can be displayed in a related interface.
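A sketch of the display branch of fig. 7, which either draws the target frame on the panoramic picture (step S704) or crops it for separate display (steps S705 to S706); OpenCV is used here purely as an example rendering library, which the patent does not prescribe.

```python
import cv2
import numpy as np

def render_result(panorama: np.ndarray, target_id: str, box, show_separately: bool) -> np.ndarray:
    """Draw the target frame and its identifier on a copy of the panoramic picture;
    if a separate view is requested, return the cropped target frame instead."""
    x, y, w, h = box
    annotated = panorama.copy()
    cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(annotated, target_id, (x, max(y - 5, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 1)
    if show_separately:
        return annotated[y:y + h, x:x + w]  # cropped target frame (steps S705-S706)
    return annotated                         # annotated panoramic picture (step S707)

# Example with a synthetic image.
image = np.zeros((720, 1280, 3), dtype=np.uint8)
crop = render_result(image, "ID1", (412, 230, 96, 180), show_separately=True)
print(crop.shape)  # (180, 96, 3)
```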
For example, as shown in FIG. 8 for a plurality of video surveillance frames, assume that target object ID1 triggers an alarm rule in frame 1 and target object ID2 triggers an alarm rule in frame 2.
At frame 1, it is detected that target object ID1 triggers an alarm rule. Frame 1 is not captured at this time; instead, the position coordinate determined according to the position of target object ID1 in frame 1 is saved into the linked list, and time counting starts.
At frame 2, target object ID2 triggers an alarm rule, so the position coordinate determined according to the position of target object ID2 in frame 2 is saved into the linked list, the position coordinate of target object ID1 saved in the linked list is updated to the position coordinate determined according to the position of target object ID1 in frame 2, and the time count continues to accumulate.
The same process as for frame 2 is performed for each of the continuous frames after frame 2: it is detected whether any target object triggers an alarm rule in the frame; if so, the position coordinate corresponding to that target object is saved into the linked list; and the position coordinates of target object ID1 and target object ID2 saved in the linked list are updated according to their positions in the frame.
When the time reaches the N milliseconds of the capture rule, or one of target object ID1 and target object ID2 disappears in advance, the current frame is captured, that is, the panoramic picture of the Mth frame corresponding to the Nth millisecond is captured; the captured panoramic picture is the Mth frame in fig. 8. The panoramic picture, together with the target identifiers and position coordinates corresponding to target object ID1 and target object ID2, is then sent to the back end.
After receiving the target identifier and the position coordinates corresponding to the panoramic image, the target object ID1 and the target object ID2, respectively, the backend may extract a target frame including the target object ID1 and a target frame including the target object ID2 from the panoramic image according to the position coordinates of the target object ID1 and the position coordinates of the target object ID2, and the obtained images may be as shown in fig. 9(a) and 9(b), respectively.
Based on the same inventive concept as the method for capturing an image shown in fig. 4, an embodiment of the present application further provides an apparatus for capturing an image. Because the apparatus corresponds to the method for capturing an image of the present application and the principle by which it solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method, and repeated parts are not described again.
Fig. 10 is a schematic structural diagram illustrating an apparatus for capturing an image according to an embodiment of the present application, and as shown in fig. 10, the apparatus for capturing an image includes a target object identifying unit 1001, an image capturing unit 1002, and an image transmitting unit 1003.
The target object identification unit 1001 is configured to identify a target object in subsequent consecutive video surveillance frames if the target object triggering an alarm rule exists in the video surveillance frames;
the image capturing unit 1002 is configured to capture a panoramic image of the video surveillance frame at a time corresponding to a capture rule after the capture rule is satisfied; the video monitoring frame meeting the corresponding moment of the capturing rule is the last video monitoring frame in the continuous multiple video monitoring frames; the continuous video monitoring frames comprise target objects;
an image sending unit 1003, configured to send the recognition result of the video surveillance frame at the time corresponding to the panoramic picture and the capture rule to the back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the recognition result.
In an alternative embodiment, the target object identifying unit 1001 is specifically configured to:
if a target object triggering an alarm rule exists in the video monitoring frame, respectively determining a target identifier and a position coordinate corresponding to the target object; the position coordinate corresponding to the target object is determined according to the first position of the target object in the video monitoring frame;
according to the target identification corresponding to the target object, identifying the target object in the subsequent continuous video monitoring frames, and respectively determining the second positions of the target object in the continuous video monitoring frames;
each time a second location is determined, the location coordinates are updated based on the determined second location.
In an alternative embodiment, the target object identification unit 1001 is further configured to:
for each video monitoring frame in a plurality of continuous video monitoring frames, respectively executing the following operations:
identifying the video monitoring frame and determining each object in the video monitoring frame;
and matching the target object with each object according to the target identification corresponding to the target object, and determining the second position of the target object in the video monitoring frame.
In an alternative embodiment, the image sending unit 1003 is specifically configured to:
sending the target identification and the position coordinate corresponding to the target object to the rear end; and the position coordinate corresponding to the target object is determined according to the second position of the target object in the video monitoring frame at the moment corresponding to the capturing rule.
In an alternative embodiment, the image capture unit 1002 is specifically configured to:
if the time for triggering the alarm rule reaches the preset time, capturing a panoramic picture of the video monitoring frame corresponding to the preset time; and/or
And if the target object cannot be identified in one video monitoring frame of the continuous video monitoring frames and the target object is identified in the previous video monitoring frame of the video monitoring frame, capturing the panoramic picture of the previous video monitoring frame.
Based on the same inventive concept as the image display method shown in fig. 6, an embodiment of the present application further provides an image display apparatus. Because the apparatus corresponds to the image display method of the present application and the principle by which it solves the problem is similar to that of the method, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not described again.
Fig. 11 shows a schematic structural diagram of an image displaying apparatus provided in an embodiment of the present application, and as shown in fig. 11, the image displaying apparatus includes an image receiving unit 1101 and an image displaying unit 1102.
The image receiving unit 1101 is configured to receive a panoramic picture sent by a video monitoring device and an identification result of a video monitoring frame corresponding to the panoramic picture, where the panoramic picture is obtained by capturing a last video monitoring frame of a plurality of consecutive video monitoring frames after the video monitoring frame of a target object having a trigger alarm rule exists, and the identification result is determined after identifying the target object in the last video monitoring frame;
the image display unit 1102 is configured to extract a target frame including a target object from the panoramic picture according to the recognition result, and display the panoramic picture and the target frame.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to make the computer device execute the image capturing method or the image displaying method in the above embodiments. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (11)

1. A method of capturing an image, the method comprising:
if a target object triggering an alarm rule exists in the video monitoring frames, identifying the target object in a plurality of subsequent continuous video monitoring frames;
after the capture rule is met, capturing the panoramic picture corresponding to the video monitoring frame at the moment corresponding to the capture rule; the video monitoring frame meeting the corresponding moment of the capturing rule is the last video monitoring frame in the continuous multiple video monitoring frames; the target object is included in each of the continuous video monitoring frames;
and sending the panoramic picture and the recognition result of the video monitoring frame at the moment corresponding to the capture rule to a back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the recognition result.
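Purely as an illustration of the device-side flow in claim 1 (not part of the claims), the steps can be organized as a single loop over incoming frames; `detect`, `triggers_alarm`, `rule_met`, `send_to_backend`, and the `"id"` key are placeholders, not names from the patent.

```python
# Illustrative device-side sketch: once an object triggers the alarm rule,
# keep identifying it frame by frame; when the capture rule is met, the last
# frame of that run is the panoramic picture sent with the recognition result.
def run_capture_loop(frames, detect, triggers_alarm, rule_met, send_to_backend):
    tracked = None            # recognition result for the alarmed target object
    last_frame = None         # last frame in which the target was identified
    for frame in frames:
        objects = detect(frame)                     # per-frame detections
        if tracked is None:
            tracked = next((o for o in objects if triggers_alarm(o)), None)
            if tracked is not None:
                last_frame = frame                  # the alarm frame starts the run
            continue
        match = next((o for o in objects if o["id"] == tracked["id"]), None)
        if match is not None:                       # keep identifying the target
            tracked, last_frame = match, frame
        if rule_met(tracked, match):                # capture rule satisfied
            send_to_backend(panoramic_picture=last_frame,
                            recognition_result=tracked)
            tracked, last_frame = None, None        # wait for the next alarm
```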
2. The method of claim 1, wherein the identifying the target object in a plurality of subsequent consecutive video monitoring frames if a target object triggering an alarm rule exists in a video monitoring frame comprises:
if a target object triggering an alarm rule exists in a video monitoring frame, respectively determining a target identifier and a position coordinate corresponding to the target object, wherein the position coordinate corresponding to the target object is determined according to a first position of the target object in the video monitoring frame;
identifying the target object in a plurality of subsequent consecutive video monitoring frames according to the target identifier corresponding to the target object, and respectively determining second positions of the target object in the plurality of consecutive video monitoring frames;
and each time one of the second positions is determined, updating the position coordinate according to the determined second position.
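A minimal sketch of the bookkeeping described in claim 2, assuming the position coordinate is stored as a bounding box; the class and attribute names are illustrative only and do not appear in the patent.

```python
# Illustrative only: a target identifier fixed when the alarm fires, plus a
# position coordinate that is overwritten whenever a new second position is
# determined in a later frame.
class TrackedTarget:
    def __init__(self, target_id, first_position):
        self.target_id = target_id        # target identifier assigned at alarm time
        self.position = first_position    # position coordinate from the first position

    def update(self, second_position):
        # Called once for every subsequent frame in which the target is identified.
        self.position = second_position
```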
3. The method according to claim 2, wherein the identifying the target object in a plurality of subsequent consecutive video monitoring frames according to the target identifier corresponding to the target object and respectively determining the second positions of the target object in the plurality of consecutive video monitoring frames comprises:
for each video monitoring frame of the plurality of consecutive video monitoring frames, respectively performing the following operations:
identifying the video monitoring frame and determining each object in the video monitoring frame;
and matching the target object with each object according to the target identifier corresponding to the target object, and determining a second position of the target object in the video monitoring frame.
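As a hedged illustration of the per-frame matching in claim 3, assuming each detected object already carries the identifier maintained by the device's tracker (the dictionary keys below are assumptions):

```python
# Illustrative matching step: the target's second position in this frame is
# the bounding box of the detection whose identifier equals the target identifier.
def locate_target(detections, target_id):
    """Return the bounding box of the matching detection, or None if the target is absent."""
    for det in detections:                 # each det is assumed to be {"id": ..., "bbox": ...}
        if det["id"] == target_id:
            return det["bbox"]
    return None
```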
4. The method according to claim 2, wherein the sending the recognition result of the video monitoring frame at the moment corresponding to the capture rule to a back end comprises:
sending the target identifier and the position coordinate corresponding to the target object to the back end, wherein the position coordinate corresponding to the target object is determined according to the second position of the target object in the video monitoring frame at the moment corresponding to the capture rule.
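The publication does not fix a wire format for this recognition result; the snippet below is one hypothetical serialization of the target identifier and position coordinate, shown only to make the data being sent concrete.

```python
import json


def build_recognition_payload(target_id, position):
    # Hypothetical payload shape; field names are not taken from the patent.
    return json.dumps({
        "target_id": target_id,    # target identifier of the alarmed object
        "position": position,      # coordinate from the frame at the capture moment
    })


print(build_recognition_payload(7, [320, 180, 520, 460]))
```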
5. The method according to any one of claims 1 to 4, wherein the capturing the panoramic picture of the video monitoring frame at the moment corresponding to the capture rule comprises:
if the time for which the alarm rule has been triggered reaches a preset time, capturing a panoramic picture of the video monitoring frame corresponding to the preset time; and/or
if the target object cannot be identified in one of the consecutive video monitoring frames but is identified in the video monitoring frame immediately preceding it, capturing a panoramic picture of that preceding video monitoring frame.
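The two alternatives of claim 5 can be read as a simple predicate; the threshold value and argument names below are assumptions for illustration, not values given in the patent.

```python
# Illustrative capture-rule check covering both branches of claim 5.
def capture_rule_met(seconds_since_alarm, found_now, found_previously,
                     preset_seconds=5.0):
    # Branch 1: the alarm rule has been triggered for the preset time.
    if seconds_since_alarm >= preset_seconds:
        return True
    # Branch 2: the target disappears in the current frame after being
    # identified in the preceding frame, so that preceding frame is captured.
    return (not found_now) and found_previously
```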
6. An image display method, characterized in that the method comprises:
receiving a panoramic picture sent by a video monitoring device and a recognition result of the video monitoring frame corresponding to the panoramic picture, wherein the panoramic picture is obtained by capturing the last video monitoring frame of a plurality of consecutive video monitoring frames that follow the video monitoring frame in which a target object triggering an alarm rule exists, and the recognition result is determined after identifying the target object in that last video monitoring frame;
and extracting a target frame including the target object from the panoramic picture according to the recognition result, and displaying the panoramic picture and the target frame.
7. An apparatus for capturing an image, comprising:
a target object identification unit, configured to identify the target object in a plurality of subsequent consecutive video monitoring frames if a target object triggering an alarm rule exists in a video monitoring frame;
an image capturing unit, configured to capture, after a capture rule is met, a panoramic picture of the video monitoring frame at the moment corresponding to the capture rule, wherein the video monitoring frame at the moment corresponding to the capture rule is the last of the plurality of consecutive video monitoring frames, and each of the consecutive video monitoring frames includes the target object;
and an image sending unit, configured to send the panoramic picture and the recognition result of the video monitoring frame at the moment corresponding to the capture rule to a back end, so that the back end extracts a target frame including the target object from the panoramic picture according to the recognition result.
8. An image display apparatus, comprising:
an image receiving unit, configured to receive a panoramic picture sent by a video monitoring device and a recognition result of the video monitoring frame corresponding to the panoramic picture, wherein the panoramic picture is obtained by capturing the last video monitoring frame of a plurality of consecutive video monitoring frames that follow the video monitoring frame in which a target object triggering an alarm rule exists, and the recognition result is determined after identifying the target object in that last video monitoring frame;
and an image display unit, configured to extract a target frame including the target object from the panoramic picture according to the recognition result and to display the panoramic picture and the target frame.
9. A video monitoring device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, wherein the computer program, when executed by the processor, implements the method of any one of claims 1 to 5.
10. An image display device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, wherein the computer program, when executed by the processor, implements the method of claim 6.
11. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 5 or claim 6.
CN202110831445.4A 2021-07-22 2021-07-22 Image capturing method, image display method and device and storage medium Pending CN113591651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110831445.4A CN113591651A (en) 2021-07-22 2021-07-22 Image capturing method, image display method and device and storage medium

Publications (1)

Publication Number Publication Date
CN113591651A true CN113591651A (en) 2021-11-02

Family

ID=78249000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110831445.4A Pending CN113591651A (en) 2021-07-22 2021-07-22 Image capturing method, image display method and device and storage medium

Country Status (1)

Country Link
CN (1) CN113591651A (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231786A (en) * 2007-12-28 2008-07-30 北京航空航天大学 Vehicle checking method based on video image characteristic
CN102946528A (en) * 2012-12-14 2013-02-27 安徽水天信息科技有限公司 Airport runway monitoring system based on intelligent video monitoring for whole scenic spot
CN104125433A (en) * 2014-07-30 2014-10-29 西安冉科信息技术有限公司 Moving object video surveillance method based on multi-PTZ (pan-tilt-zoom)-camera linkage structure
CN104135645A (en) * 2014-07-31 2014-11-05 天津市亚安科技股份有限公司 Video surveillance system and method for face tracking and capturing
CN104469317A (en) * 2014-12-18 2015-03-25 天津市亚安科技股份有限公司 Video monitoring system for pipeline safety early warning
CN106791715A (en) * 2017-02-24 2017-05-31 深圳英飞拓科技股份有限公司 Classification joint control intelligent control method and system
CN108091142A (en) * 2017-12-12 2018-05-29 公安部交通管理科学研究所 For vehicle illegal activities Tracking Recognition under highway large scene and the method captured automatically
CN108108698A (en) * 2017-12-25 2018-06-01 哈尔滨市舍科技有限公司 Method for tracking target and system based on recognition of face and panoramic video
CN109922250A (en) * 2017-12-12 2019-06-21 杭州海康威视数字技术股份有限公司 A kind of target object grasp shoot method, device and video monitoring equipment
CN110867083A (en) * 2019-11-20 2020-03-06 浙江宇视科技有限公司 Vehicle monitoring method, device, server and machine-readable storage medium
CN111372037A (en) * 2018-12-25 2020-07-03 杭州海康威视数字技术股份有限公司 Target snapshot system and method
CN111405238A (en) * 2019-12-16 2020-07-10 杭州海康威视系统技术有限公司 Transmission method, device and system for snap pictures, camera and storage equipment
US20200228720A1 (en) * 2017-06-16 2020-07-16 Hangzhou Hikvision Digital Technology Co., Ltd. Target Object Capturing Method and Device, and Video Monitoring Device
CN111753609A (en) * 2019-08-02 2020-10-09 杭州海康威视数字技术股份有限公司 Target identification method and device and camera
CN111815570A (en) * 2020-06-16 2020-10-23 浙江大华技术股份有限公司 Regional intrusion detection method and related device thereof
CN111914592A (en) * 2019-05-08 2020-11-10 杭州海康威视数字技术股份有限公司 Multi-camera combined evidence obtaining method, device and system
CN112055158A (en) * 2020-10-16 2020-12-08 苏州科达科技股份有限公司 Target tracking method, monitoring device, storage medium and system
CN112767711A (en) * 2021-01-27 2021-05-07 湖南优美科技发展有限公司 Multi-class multi-scale multi-target snapshot method and system
CN112948627A (en) * 2019-12-11 2021-06-11 杭州海康威视数字技术股份有限公司 Alarm video generation method, display method and device

Similar Documents

Publication Publication Date Title
US11875467B2 (en) Processing method for combining a real-world environment with virtual information according to a video frame difference value to provide an augmented reality scene, terminal device, system, and computer storage medium
KR101687530B1 (en) Control method in image capture system, control apparatus and a computer-readable storage medium
US9477891B2 (en) Surveillance system and method based on accumulated feature of object
JP5058279B2 (en) Image search device
US20110052069A1 (en) Image search apparatus
KR20080058171A (en) Camera tampering detection
JP6568476B2 (en) Information processing apparatus, information processing method, and program
US11468683B2 (en) Population density determination from multi-camera sourced imagery
CN111583118B (en) Image stitching method and device, storage medium and electronic equipment
JP2022019753A (en) Analyzer, analysis method, and program
CN110826496A (en) Crowd density estimation method, device, equipment and storage medium
WO2018149322A1 (en) Image identification method, device, apparatus, and data storage medium
CN110505438B (en) Queuing data acquisition method and camera
JP5758165B2 (en) Article detection device and stationary person detection device
CN113470013A (en) Method and device for detecting moved article
US10783365B2 (en) Image processing device and image processing system
CN112188108A (en) Photographing method, terminal, and computer-readable storage medium
CN113591651A (en) Image capturing method, image display method and device and storage medium
CN113038261A (en) Video generation method, device, equipment, system and storage medium
JP2017168885A (en) Imaging control device and camera
CN112232113B (en) Person identification method, person identification device, storage medium, and electronic apparatus
CN111008611B (en) Queuing time length determining method and device, storage medium and electronic device
KR20220002626A (en) Picture-based multidimensional information integration method and related devices
JP2009239804A (en) Surveillance system and image retrieval server
JP6734820B2 (en) Video search device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination