CN110659391A - Video detection method and device - Google Patents

Video detection method and device

Info

Publication number
CN110659391A
CN110659391A
Authority
CN
China
Prior art keywords
target
image
suspected target
suspected
feature information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910850468.2A
Other languages
Chinese (zh)
Inventor
罗茜
张斯尧
谢喜林
王思远
黄晋
文戎
张�诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Vision Polytron Technologies Inc
Original Assignee
Suzhou Vision Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Vision Polytron Technologies Inc
Publication of CN110659391A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/738 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)

Abstract

The invention provides a video detection method and a video detection device. The method comprises the following steps: intercepting a target image from a surveillance video and selecting a suspected target from the target image; extracting, within the case time range, the feature information of moving targets in the videos of the monitoring facilities surrounding the monitoring facility corresponding to the target image, and establishing a feature coding library; screening out, according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, images containing moving targets similar to the suspected target, and determining the images containing the suspected target from the screened images; obtaining the activity track of the suspected target according to the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, and displaying the activity track in a PGIS map; and inferring the activity range and activity time period of the suspected target from the activity track and deploying control in advance, so that the video detection efficiency can be improved.

Description

Video detection method and device
Technical Field
The invention belongs to the technical field of computer vision and intelligent transportation, and particularly relates to a video detection method, a video detection apparatus, a terminal device and a computer-readable medium.
Background
With the construction of Skynet projects, surveillance cameras have been deployed in large, medium and small cities across China. Video surveillance systems are widely applied in public security investigation, and video investigation has become an important technical safeguard for a safe society.
However, in the prior art the volume of video data to be examined is large and means for extracting information from the data and tracking a suspected target are lacking: the suspected target in a video is mainly identified manually, and after its activity range is determined the target is tracked by officers watching the footage. The detection workload is therefore heavy, and detection efficiency is seriously affected.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video detection method, a video detection apparatus, a terminal device and a computer-readable medium, which can improve video detection efficiency.
A first aspect of an embodiment of the present invention provides a video detection method, including:
intercepting a target image from a surveillance video, and selecting a suspected target from the target image;
acquiring the monitoring facilities around the monitoring facility corresponding to the target image, extracting feature information of moving targets in the videos of the surrounding monitoring facilities within the case time range, establishing a feature coding library, screening out, by an image-to-image search and according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, images containing moving targets similar to the suspected target, and determining the images containing the suspected target from the screened images containing moving targets similar to the suspected target;
inferring the places and times where the suspected target appears according to the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, obtaining the activity track of the suspected target, and displaying the activity track in a police geographic information system (PGIS) map;
and inferring the activity range and activity time period of the suspected target according to the activity track of the suspected target, deploying control in advance, and raising an alarm when the suspected target is identified.
A second aspect of an embodiment of the present invention provides a video detection apparatus, including:
the selection module is used for intercepting a target image from the monitoring video and selecting a suspected target from the target image;
the searching module is used for acquiring the monitoring facilities around the monitoring facility corresponding to the target image, extracting feature information of moving targets in the videos of the surrounding monitoring facilities within the case time range, establishing a feature coding library, screening out, by an image-to-image search and according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, images containing moving targets similar to the suspected target, and determining the images containing the suspected target from the screened images containing moving targets similar to the suspected target;
the track acquisition module is used for inferring the places and times where the suspected target appears according to the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, obtaining the activity track of the suspected target and displaying the activity track in a PGIS map;
and the deployment alarm module is used for inferring the activity range and activity time period of the suspected target according to the activity track of the suspected target, deploying control in advance and raising an alarm when the suspected target is identified.
A third aspect of the embodiments of the present invention provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the video detection method when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable medium storing a computer program which, when executed by a processor, implements the steps of the video detection method.
In the video detection method provided by the embodiment of the invention, a target image can be captured from a surveillance video and a suspected target selected from the target image; the monitoring facilities around the monitoring facility corresponding to the target image are obtained, feature information of moving targets in the videos of the surrounding monitoring facilities within the case time range is extracted, and a feature coding library is established; by an image-to-image search, images containing moving targets similar to the suspected target are screened out of the feature coding library according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, and the images containing the suspected target are determined from the screened images; the places and times where the suspected target appears are inferred from the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, the activity track of the suspected target is obtained, and the activity track is displayed in a PGIS map; the activity range and activity time period of the suspected target are inferred from the activity track, control is deployed in advance, and an alarm is raised when the suspected target is identified, so that the video detection efficiency can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a flowchart of a video surveillance method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a video surveillance apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a detailed structure of the search module in FIG. 2;
FIG. 4 is a schematic diagram of a detailed structure of the deploy control alarm module of FIG. 2;
fig. 5 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 is a flowchart of a video detection method according to an embodiment of the present invention. As shown in fig. 1, the video detection method of this embodiment includes the following steps:
S101: intercepting a target image from the surveillance video and selecting a suspected target in the target image.
In the embodiment of the invention, a target image can be intercepted from the surveillance video and a suspected target selected from the target image; the selection may be performed by point selection, box selection or user-defined selection according to the user's needs, and the suspected target may be a pedestrian, a vehicle, a rider and the like. The surveillance video sources include surveillance cameras, face checkpoint cameras, vehicle checkpoint cameras and videos shot by passers-by. The surveillance videos of the surveillance cameras, face checkpoints and vehicle checkpoints can be acquired directly, or videos shot by passers-by can be uploaded locally. Before a video is displayed, transcoding preprocessing is required to convert videos of different formats from different sources into an encoding format that the video detection device or system provided by the embodiment of the invention can recognize.
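Purely for illustration, the following is a minimal Python sketch of how a target frame could be intercepted from a surveillance video and a suspected target box-selected by an operator; it assumes OpenCV is available, and the file names, frame index and window title are hypothetical placeholders rather than anything specified in this embodiment.
```python
# Illustrative sketch only: paths and the frame index are placeholders.
import cv2

def grab_frame(video_path: str, frame_index: int):
    """Intercept one frame (the 'target image') from a surveillance video."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError(f"could not read frame {frame_index} from {video_path}")
    return frame

def select_suspect(frame):
    """Box selection: the operator drags a rectangle around the suspected target."""
    x, y, w, h = cv2.selectROI("select suspected target", frame, showCrosshair=True)
    cv2.destroyAllWindows()
    return frame[y:y + h, x:x + w]

if __name__ == "__main__":
    target_image = grab_frame("surveillance.mp4", 1200)
    suspect_crop = select_suspect(target_image)
    cv2.imwrite("suspect.jpg", suspect_crop)
```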
S102: acquiring the monitoring facilities around the monitoring facility corresponding to the target image, extracting feature information of moving targets in the videos of the surrounding monitoring facilities within the case time range, establishing a feature coding library, screening out, by an image-to-image search and according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, images containing moving targets similar to the suspected target, and determining the images containing the suspected target from the screened images containing moving targets similar to the suspected target.
In an embodiment of the present invention, a convolutional neural network comprising convolutional layers and fully-connected layers may be constructed first. The fully-connected layer is used for extracting high-level image feature information, and the convolutional layers are used for extracting low-level image feature information. In the police geographic information system (PGIS) map, with the monitoring facility corresponding to the target image as the center and spreading outward, the monitoring facilities around it are framed; the framing range can be customized as required, and the frame used may be a rectangular frame, a circular frame, an irregular graphic frame and the like. Then, the videos, within the case time range, of the framed monitoring facilities around the monitoring facility corresponding to the target image are acquired, images of moving targets in the videos are extracted, the images of the moving targets are input into the convolutional neural network, the feature information of the moving-target images in the videos is output, and the feature codes of the moving-target images in the videos are constructed into a feature code library. The feature information may include low-level feature information and high-level feature information; the low-level feature information may include the target type, color, posture and the like, and the high-level feature information may include a high-dimensional vector and the like. The case time range can be customized, or determined according to the time the case occurred and the activity time of the target. Then, the image of the suspected target can be input into the convolutional neural network, the feature information of the image of the suspected target is output, and the distance between the feature information of the image of the suspected target and each piece of feature information in the feature code library is calculated. A smaller distance means the two feature vectors are more similar; a larger distance means they are less similar. The calculation of the distance between two pieces of feature information (or feature vectors) is the same as in the prior art and is therefore not described again here. After the distance between the feature information of the image of the suspected target and each piece of feature information in the feature code library has been calculated, scoring and sorting may be performed according to these distances, the images containing moving targets similar to the suspected target are screened out of the feature code library (for example, the top 10% of the sorted images may be selected), and the images containing the suspected target are determined from the screened images containing moving targets similar to the suspected target. The specific way of determining the images containing the suspected target from the screened images may be manual screening, or the video detection device provided by the embodiment of the invention may determine them automatically according to the distances between the feature information.
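As an illustration of this step only, the following Python sketch shows how a feature code library could be built with a convolutional network and how candidates could be ranked by feature distance. It substitutes a pretrained torchvision ResNet-18 backbone for the unspecified network of this embodiment, and the preprocessing, the Euclidean distance and the 10% cut-off are assumptions for demonstration.
```python
# Illustrative sketch only: a generic CNN feature extractor stands in for the
# patent's network; paths, input size and the top-10% cut-off are assumptions.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Convolutional layers capture low-level features; the pooled backbone output
# plays the role of the high-level feature vector used for matching.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def encode(image_path: str) -> np.ndarray:
    """Return an L2-normalized feature vector for one moving-target image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        feat = backbone(preprocess(img).unsqueeze(0)).squeeze(0).numpy()
    return feat / (np.linalg.norm(feat) + 1e-12)

def build_feature_library(image_paths):
    """Feature code library: one row per moving-target image from nearby cameras."""
    return np.stack([encode(p) for p in image_paths])

def rank_candidates(suspect_image_path, library, keep_ratio=0.10):
    """Score by Euclidean distance (smaller = more similar) and keep the top 10%."""
    query = encode(suspect_image_path)
    dists = np.linalg.norm(library - query, axis=1)
    order = np.argsort(dists)                    # closest candidates first
    keep = max(1, int(len(order) * keep_ratio))
    return order[:keep], dists[order[:keep]]
```
The candidate images returned by rank_candidates would then be confirmed either manually or automatically by a further distance criterion, as described above.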
S103: inferring the places and times where the suspected target appears according to the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, obtaining the activity track of the suspected target, and displaying the activity track in a PGIS map.
In the embodiment of the invention, the places and times where the suspected target appears can be inferred from the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, so that the activity track of the suspected target is obtained, and the activity track can be displayed in the PGIS map.
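As a minimal sketch of this step (the camera metadata fields and the plotting format are assumptions, not the data model of this embodiment), each confirmed sighting can be mapped to its camera's location and monitoring time and sorted chronologically to form the activity track:
```python
# Illustrative only: Detection fields and the PGIS point format are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Detection:
    camera_id: str
    latitude: float       # camera location, as stored in the PGIS map
    longitude: float
    seen_at: datetime     # monitoring time of the frame containing the suspect

def build_activity_track(detections):
    """Order confirmed sightings by time to obtain the suspect's activity track."""
    return sorted(detections, key=lambda d: d.seen_at)

def track_as_pgis_points(track):
    """Points a PGIS map layer could plot, in visiting order."""
    return [(d.longitude, d.latitude, d.seen_at.isoformat(), d.camera_id)
            for d in track]
```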
S104: inferring the activity range and activity time period of the suspected target according to the activity track of the suspected target, deploying control in advance, and raising an alarm when the suspected target is identified.
Specifically, the activity range and activity time period of the suspected target can be inferred from the activity track of the suspected target. Then, according to the inferred activity time period, monitoring equipment is arranged within the inferred activity range of the suspected target in advance, the videos within that range are acquired in real time, and the feature information of the moving targets in these videos is extracted. When the similarity between the extracted feature information of a moving target in a video within the activity range of the suspected target and the feature information of the suspected target is greater than a threshold (or, equivalently, the distance between the two pieces of feature information is smaller than a threshold), the relevant information of that moving target (such as name, physical characteristics, number of accompanying persons, position and time) can be sent to the monitoring platform, and its position can be sent to a terminal held by a nearby patrol officer, so that the monitoring platform and the patrol officers near the suspected target can track or apprehend the suspect in time.
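A minimal sketch of the alarm decision follows, assuming the encode() extractor above, cosine similarity as the similarity measure, and placeholder notification callbacks for the monitoring platform and patrol terminal; none of these choices are dictated by this embodiment.
```python
# Illustrative only: the 0.8 threshold and the notification callbacks are placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def check_and_alarm(candidate_feat, suspect_feat, candidate_info,
                    notify_platform, notify_patrol, threshold=0.8):
    """Raise an alarm when a moving target in the deployed area matches the suspect."""
    if cosine_similarity(candidate_feat, suspect_feat) > threshold:
        notify_platform(candidate_info)            # name, features, position, time, ...
        notify_patrol(candidate_info["position"])  # nearest patrol officer's terminal
        return True
    return False
```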
In the video detection method shown in fig. 1, images containing the suspected target are found in the surrounding surveillance videos by an image-to-image search, and a trajectory map of the suspected target is formed from these images, so that the activity range and locations of the suspected target can be predicted and control deployed in advance, which can improve case-handling efficiency.
Referring to fig. 2, fig. 2 is a block diagram of a video detection apparatus according to an embodiment of the present invention. As shown in fig. 2, the video detection apparatus 20 of this embodiment includes a selection module 201, a search module 202, a trajectory acquisition module 203, and a deployment alarm module 204, which are respectively configured to execute the methods of S101, S102, S103 and S104 in fig. 1; details can be found in the description of fig. 1 and are only briefly summarized here:
the selecting module 201 is configured to intercept a target image from the monitoring video, and select a suspected target in the target image.
The searching module 202 is configured to acquire the monitoring facilities around the monitoring facility corresponding to the target image, extract feature information of moving targets in the videos of the surrounding monitoring facilities within the case time range, establish a feature coding library, screen out, by an image-to-image search and according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, images containing moving targets similar to the suspected target, and determine the images containing the suspected target from the screened images containing moving targets similar to the suspected target.
The track obtaining module 203 is configured to infer a place and time where the suspected target appears according to the place and the monitoring time period of the monitoring facility corresponding to the selected image including the suspected target, obtain an activity track of the suspected target, and display the activity track in the PGIS map.
The deployment alarm module 204 is configured to infer the activity range and activity time period of the suspected target according to the activity track of the suspected target, deploy control in advance, and raise an alarm when the suspected target is identified.
Further, as can be seen in fig. 3, the search module 202 may specifically include a network building unit 2021, a feature library building unit 2022, a similarity calculating unit 2023, and a filtering unit 2024:
a network construction unit 2021, configured to construct a convolutional neural network including convolutional layers and fully-connected layers; the full-connection layer is used for extracting image high-level feature information, and the convolution layer is used for extracting image bottom-level feature information.
The feature library construction unit 2022 is configured to, in the police geographic information system (PGIS) map, take the monitoring facility corresponding to the target image as the center, frame the monitoring facilities around it, acquire the videos of the surrounding monitoring facilities within the case time range, extract images of moving targets in the videos, input the images of the moving targets into the convolutional neural network, output the feature information of the moving-target images in the videos, and construct the feature codes of the moving-target images in the videos into a feature code library.
A similarity calculation unit 2023, configured to input the image of the suspect target to a convolutional neural network, output feature information of the image of the suspect target, and calculate a distance between the feature information of the image of the suspect target and each feature information in the feature code library.
The screening unit 2024 is configured to score and sort according to distances between the feature information of the image of the suspect target and the feature information of the feature code library, screen out an image including a moving target similar to the suspect target from the feature code library, and determine the image including the suspect target from the screened image including the moving target similar to the suspect target.
Further, as can be seen in fig. 4, the deployment alarm module 204 may specifically include an inference unit 2041, a deployment unit 2042, and an alarm unit 2043:
the inference unit 2041 is configured to infer an activity range and an activity time period of the suspected target according to the activity track of the suspected target.
The deployment and control unit 2042 is configured to arrange monitoring equipment in the inferred moving range of the suspected target in advance according to the inferred moving time period of the suspected target, obtain a video in the inferred moving range of the suspected target in real time, and extract feature information of a moving target in the video in the moving range of the suspected target.
An alarm unit 2043, configured to, when the similarity between the extracted feature information of the moving target in the video in the moving range of the suspect target and the feature information of the suspect target is greater than the threshold, send the relevant information of the moving target having the feature information whose similarity with the feature information of the suspect target is greater than the threshold to the monitoring platform, and send the position of the moving target having the feature information whose similarity with the feature information of the suspect target is greater than the threshold to a terminal held by a nearby patrol police.
The video detection apparatus of fig. 2 finds images containing the suspected target in the surrounding surveillance videos by an image-to-image search and forms a trajectory map of the suspected target from these images, so that the activity range and locations of the suspected target can be predicted and control deployed in advance, thereby improving case-handling efficiency.
Fig. 5 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 5, the terminal device 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52, such as a program for performing video detection, stored in the memory 51 and executable on the processor 50. The processor 50, when executing the computer program 52, implements the steps in the above-described method embodiments, e.g., S101 to S104 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of each module/unit in the above-described apparatus embodiments, for example, the functions of the modules 201 to 204 shown in fig. 2.
Illustratively, the computer program 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 52 in the terminal device 5. For example, the computer program 52 may be partitioned into the selection module 201, the search module 202, the trajectory acquisition module 203, and the deployment alarm module 204 (modules in a virtual device), whose specific functions are as follows:
the selecting module 201 is configured to intercept a target image from the monitoring video, and select a suspected target in the target image.
The searching module 202 is configured to acquire the monitoring facilities around the monitoring facility corresponding to the target image, extract feature information of moving targets in the videos of the surrounding monitoring facilities within the case time range, establish a feature coding library, screen out, by an image-to-image search and according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, images containing moving targets similar to the suspected target, and determine the images containing the suspected target from the screened images containing moving targets similar to the suspected target.
The track obtaining module 203 is configured to infer a place and time where the suspected target appears according to the place and the monitoring time period of the monitoring facility corresponding to the selected image including the suspected target, obtain an activity track of the suspected target, and display the activity track in the PGIS map.
The deployment alarm module 204 is configured to infer the activity range and activity time period of the suspected target according to the activity track of the suspected target, deploy control in advance, and raise an alarm when the suspected target is identified.
The terminal device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device 5 may include, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of a terminal device 5 and does not constitute a limitation of terminal device 5 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 50 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) and the like provided on the terminal device 5. Further, the memory 51 may also include both an internal storage unit of the terminal device 5 and an external storage device. The memory 51 is used for storing the computer programs and other programs and data required by the terminal device 5. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A video detection method, comprising:
intercepting a target image from a surveillance video, and selecting a suspected target from the target image;
acquiring the monitoring facilities around the monitoring facility corresponding to the target image, extracting feature information of moving targets in the videos of the surrounding monitoring facilities within a case time range, establishing a feature coding library, screening out, by an image-to-image search and according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, images containing moving targets similar to the suspected target, and determining the images containing the suspected target from the screened images containing moving targets similar to the suspected target;
inferring the places and times where the suspected target appears according to the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, obtaining the activity track of the suspected target, and displaying the activity track in a police geographic information system (PGIS) map;
and inferring the activity range and activity time period of the suspected target according to the activity track of the suspected target, deploying control in advance, and raising an alarm when the suspected target is identified.
2. The video detection method of claim 1, wherein the acquiring of the monitoring facilities around the monitoring facility corresponding to the target image, extracting feature information of moving targets in the videos of the surrounding monitoring facilities within a case time range, establishing a feature coding library, screening out, by an image-to-image search and according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, images containing moving targets similar to the suspected target, and determining the images containing the suspected target from the screened images containing moving targets similar to the suspected target comprises:
constructing a convolutional neural network comprising convolutional layers and fully-connected layers; the fully-connected layer is used for extracting high-level image feature information, and the convolutional layers are used for extracting low-level image feature information;
in a PGIS map, with a monitoring facility corresponding to the target image as a center, framing the monitoring facilities around the monitoring facility corresponding to the target image, acquiring videos of the surrounding monitoring facilities within a case time range, extracting images of moving targets in the videos, inputting the images of the moving targets into the convolutional neural network, outputting feature information of the images of the moving targets in the videos, and constructing feature codes of the images of the moving targets in the videos into a feature code library;
inputting the image of the suspect target into a convolutional neural network, outputting the characteristic information of the image of the suspect target, and calculating the distance between the characteristic information of the image of the suspect target and each characteristic information in the characteristic coding library;
and scoring and sorting according to the distance between the characteristic information of the image of the suspect target and the characteristic information of the characteristic coding library, screening an image containing a moving target similar to the suspect target from the characteristic coding library, and determining the image containing the suspect target from the screened image containing the moving target similar to the suspect target.
3. The video detection method according to claim 1, wherein the extracted feature information comprises low-level feature information and high-level feature information; the low-level feature information comprises a target type, a color and a posture, and the high-level feature information comprises a high-dimensional vector.
4. The video detection method according to claim 1, wherein the inferring of the places and times where the suspected target appears according to the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, obtaining the activity track of the suspected target, and displaying the activity track in a PGIS map comprises:
inferring the places and times where the suspected target appears according to the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, and obtaining the activity track of the suspected target;
and displaying the activity track in a PGIS map.
5. The video detection method according to claim 1, wherein the inferring of the activity range and activity time period of the suspected target according to the activity track of the suspected target, deploying control in advance, and raising an alarm when the suspected target is identified comprises:
inferring the activity range and the activity time period of the suspected target according to the activity track of the suspected target;
according to the inferred activity time period of the suspect target, arranging monitoring equipment in the inferred activity range of the suspect target in advance, acquiring a video in the inferred activity range of the suspect target in real time, and extracting characteristic information of the activity target in the video in the activity range of the suspect target;
when the similarity between the extracted feature information of the moving target in the video within the moving range of the suspected target and the feature information of the suspected target is larger than a threshold value, sending the related information of the moving target with the feature information of which the similarity with the feature information of the suspected target is larger than the threshold value to a monitoring platform, and sending the position of the moving target with the feature information of which the similarity with the feature information of the suspected target is larger than the threshold value to a terminal held by a nearby patrol police.
6. A video detection apparatus, comprising:
the selection module is used for intercepting a target image from the monitoring video and selecting a suspected target from the target image;
the searching module is used for acquiring the monitoring facilities around the monitoring facility corresponding to the target image, extracting feature information of moving targets in the videos of the surrounding monitoring facilities within a case time range, establishing a feature coding library, screening out, by an image-to-image search and according to the similarity between the feature information of the suspected target and the feature information in the feature coding library, images containing moving targets similar to the suspected target, and determining the images containing the suspected target from the screened images containing moving targets similar to the suspected target;
the track acquisition module is used for inferring the places and times where the suspected target appears according to the locations and monitoring time periods of the monitoring facilities corresponding to the selected images containing the suspected target, acquiring the activity track of the suspected target and displaying the activity track in a PGIS map;
and the deployment alarm module is used for inferring the activity range and activity time period of the suspected target according to the activity track of the suspected target, deploying control in advance and raising an alarm when the suspected target is identified.
7. The video detection apparatus of claim 6, wherein the search module comprises:
the network construction unit is used for constructing a convolutional neural network comprising convolutional layers and fully-connected layers; the fully-connected layer is used for extracting high-level image feature information, and the convolutional layers are used for extracting low-level image feature information;
a feature library construction unit, configured to select, in a PGIS map, monitoring facilities around the monitoring facility corresponding to the target image by using the monitoring facility corresponding to the target image as a center, obtain a video of the surrounding monitoring facilities within a case time range, extract an image of a moving target in the video, input the image of the moving target into the convolutional neural network, output feature information of the moving target image in the video, and construct feature codes of the moving target image in the video as a feature code library;
the similarity calculation unit is used for inputting the image of the suspect target to a convolutional neural network, outputting the characteristic information of the image of the suspect target and calculating the distance between the characteristic information of the image of the suspect target and each characteristic information in the characteristic code library;
and the screening unit is used for grading and sequencing according to the distance between the characteristic information of the image of the suspect target and the characteristic information of the characteristic coding library, screening an image containing a moving target approximate to the suspect target from the characteristic coding library, and determining the image containing the suspect target from the screened image containing the moving target approximate to the suspect target.
8. The video detection apparatus of claim 6, wherein the deployment alarm module comprises:
the inference unit is used for inferring the activity range and the activity time period of the suspected target according to the activity track of the suspected target;
the control unit is used for arranging monitoring equipment in the inferred moving range of the suspected target in advance according to the inferred moving time period of the suspected target, acquiring videos in the inferred moving range of the suspected target in real time, and extracting feature information of moving targets in the videos in the moving range of the suspected target;
and the alarm unit is used for sending the relevant information of the moving target with the characteristic information of which the similarity with the characteristic information of the suspect target is greater than the threshold value to the monitoring platform and sending the position of the moving target with the characteristic information of which the similarity with the characteristic information of the suspect target is greater than the threshold value to a nearby terminal held by a patrol police when the similarity between the extracted characteristic information of the moving target in the video in the moving range of the suspect target and the characteristic information of the suspect target is greater than the threshold value.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1-5 when executing the computer program.
10. A computer-readable medium storing a computer program which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201910850468.2A 2019-08-29 2019-09-10 Video detection method and device Pending CN110659391A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910805702X 2019-08-29
CN201910805702 2019-08-29

Publications (1)

Publication Number Publication Date
CN110659391A true CN110659391A (en) 2020-01-07

Family

ID=69038037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910850468.2A Pending CN110659391A (en) 2019-08-29 2019-09-10 Video detection method and device

Country Status (1)

Country Link
CN (1) CN110659391A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102843547A (en) * 2012-08-01 2012-12-26 安科智慧城市技术(中国)有限公司 Intelligent tracking method and system for suspected target
CN106096577A (en) * 2016-06-24 2016-11-09 安徽工业大学 Target tracking system in a kind of photographic head distribution map and method for tracing
CN109344267A (en) * 2018-09-06 2019-02-15 苏州千视通视觉科技股份有限公司 Relay method for tracing and system based on PGIS map

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111651690A (en) * 2020-05-29 2020-09-11 深圳市天一智联科技有限公司 Case-related information searching method and device and computer equipment
CN111814690A (en) * 2020-07-09 2020-10-23 浙江大华技术股份有限公司 Target re-identification method and device and computer readable storage medium
CN111814690B (en) * 2020-07-09 2023-09-01 浙江大华技术股份有限公司 Target re-identification method, device and computer readable storage medium
CN111935450A (en) * 2020-07-15 2020-11-13 长江大学 Intelligent suspect tracking method and system and computer readable storage medium
CN112270205A (en) * 2020-09-22 2021-01-26 苏州千视通视觉科技股份有限公司 Case investigation method and device
CN112364683A (en) * 2020-09-22 2021-02-12 苏州千视通视觉科技股份有限公司 Case evidence fixing method and device
CN112364682A (en) * 2020-09-22 2021-02-12 苏州千视通视觉科技股份有限公司 Case searching method and device
CN112434557A (en) * 2020-10-20 2021-03-02 深圳市华橙数字科技有限公司 Three-dimensional display method and device of motion trail, terminal and storage medium
CN113225457A (en) * 2020-12-29 2021-08-06 视联动力信息技术股份有限公司 Data processing method and device, electronic equipment and storage medium
CN114267003A (en) * 2022-03-02 2022-04-01 城云科技(中国)有限公司 Road damage detection method, device and application

Similar Documents

Publication Publication Date Title
CN110659391A (en) Video detection method and device
CN110390262B (en) Video analysis method, device, server and storage medium
US9002060B2 (en) Object retrieval in video data using complementary detectors
CN106878670B (en) A kind of method for processing video frequency and device
CN108665476B (en) Pedestrian tracking method and electronic equipment
Zabłocki et al. Intelligent video surveillance systems for public spaces–a survey
CN104106260A (en) Geographic map based control
CN109766779A (en) It hovers personal identification method and Related product
KR102511287B1 (en) Image-based pose estimation and action detection method and appratus
CN111666821A (en) Personnel gathering detection method, device and equipment
CN112434566A (en) Passenger flow statistical method and device, electronic equipment and storage medium
CN110276321A (en) Remote sensing video target tracking method and system
CN111445442A (en) Crowd counting method and device based on neural network, server and storage medium
CN114169425A (en) Training target tracking model and target tracking method and device
CN107301373B (en) Data processing method, device and storage medium
CN114913470B (en) Event detection method and device
CN114360064B (en) Office place personnel behavior lightweight target detection method based on deep learning
CN115019242A (en) Abnormal event detection method and device for traffic scene and processing equipment
CN114677627A (en) Target clue finding method, device, equipment and medium
CN114743262A (en) Behavior detection method and device, electronic equipment and storage medium
Teja et al. Man-on-man brutality identification on video data using Haar cascade algorithm
CN114782883A (en) Abnormal behavior detection method, device and equipment based on group intelligence
CN112861711A (en) Regional intrusion detection method and device, electronic equipment and storage medium
CN113626726A (en) Space-time trajectory determination method and related product
CN113435352B (en) Civilized city scoring method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination