CN114842424A - Intelligent security image identification method and device based on motion compensation

Intelligent security image identification method and device based on motion compensation

Info

Publication number
CN114842424A
Authority
CN
China
Prior art keywords
image data
security
original image
compensation
motion compensation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210639487.2A
Other languages
Chinese (zh)
Other versions
CN114842424B (en)
Inventor
温建伟
李营
Other inventors have requested not to disclose their names
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202210639487.2A priority Critical patent/CN114842424B/en
Publication of CN114842424A publication Critical patent/CN114842424A/en
Application granted granted Critical
Publication of CN114842424B publication Critical patent/CN114842424B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311 Scheduling, planning or task assignment for a person or group
    • G06Q10/063114 Status monitoring or status determination for a person or group
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR IMPLEMENTATION OF BUSINESS PROCESSES OF SPECIFIC BUSINESS SECTORS
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • G06Q50/265 Personal security, identity or safety

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent security image identification method and device based on motion compensation. The method comprises the following steps: acquiring first original image data and second original image data, wherein the acquisition time of the first original image data is later than that of the second original image data; performing image motion compensation according to the second original image data to obtain first security image data; inputting the first security image data into a security countermeasure model to obtain second security image data; and generating a security judgment result according to the second security data. The invention solves the technical problems that security monitoring and image identification processing methods in the prior art judge security risks and security emergencies only from real-time dynamic image monitoring results, consume a large amount of manual work and computing resources, and thereby reduce the efficiency and accuracy of security identification.

Description

Intelligent security image identification method and device based on motion compensation
Technical Field
The invention relates to the field of security image identification and processing, in particular to an intelligent security image identification method and device based on motion compensation.
Background
With the continuous development of intelligent science and technology, people increasingly use intelligent devices in daily life, work, and study. Intelligent technology has improved the quality of people's lives and increased the efficiency of their study and work.
At present, security video monitoring and image processing rely on dynamic real-time monitoring. Image processing is performed on the monitored images, which may include operations such as distortion recovery or binarization simplification, and the processed images are then passed through an intelligent model to obtain a security judgment result. However, the security monitoring and image identification processing methods in the prior art judge security risks and security emergencies only from the real-time dynamic image monitoring results, which consumes a large amount of manual work and computing resources and reduces the efficiency and accuracy of security identification.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an intelligent security image identification method and device based on motion compensation, which aim to at least solve the technical problems that security monitoring and image identification processing methods in the prior art judge security risks and security emergencies only from real-time dynamic image monitoring results, consume a large amount of manual work and computing resources, and thereby reduce the efficiency and accuracy of security identification.
According to one aspect of the embodiment of the invention, an intelligent security image identification method based on motion compensation is provided, comprising the following steps: acquiring first original image data and second original image data, wherein the acquisition time of the first original image data is later than that of the second original image data; performing image motion compensation according to the second original image data to obtain first security image data; inputting the first security image data into a security countermeasure model to obtain second security image data; and generating a security judgment result according to the second security data.
Optionally, the acquiring the first original image data and the second original image data includes: acquiring first timestamp information and second timestamp information according to the compensation displacement in the motion compensation algorithm; acquiring the first original image data according to the first timestamp information; and acquiring the second original image data according to the second timestamp information.
Optionally, the performing image motion compensation according to the second original image data to obtain first security image data includes: calculating the first security image data D according to the motion compensation formula (given as an image in the original filing), wherein i is a motion coordination factor, j is a motion generation factor, f_Pij is the motion capture parameter in the first raw image data, and the corresponding parameter (also given as an image) is the motion capture parameter in the second raw image data.
Optionally, before the first security image data is input into the security countermeasure model to obtain the second security image data, the method further includes: obtaining a delay progression algorithm factor alpha related to the motion compensation; and training the security countermeasure model according to the delay progression algorithm factor alpha and a security historical database.
According to another aspect of the embodiments of the present invention, there is also provided an intelligent security image recognition apparatus based on motion compensation, including: an acquisition module, configured to acquire first original image data and second original image data, wherein the acquisition time of the first original image data is later than that of the second original image data; a compensation module, configured to perform image motion compensation according to the second original image data to obtain first security image data; an input module, configured to input the first security image data into a security countermeasure model to obtain second security image data; and a generating module, configured to generate a security judgment result according to the second security data.
Optionally, the acquisition module includes: an acquiring unit, configured to acquire first timestamp information and second timestamp information according to the compensation displacement in the motion compensation algorithm; a first acquisition unit, configured to acquire the first original image data according to the first timestamp information; and a second acquisition unit, configured to acquire the second original image data according to the second timestamp information.
Optionally, the compensation module includes: a compensation calculation unit, configured to calculate the first security image data D according to the motion compensation formula (given as an image in the original filing), wherein i is a motion coordination factor, j is a motion generation factor, f_Pij is the motion capture parameter in the first raw image data, and the corresponding parameter (also given as an image) is the motion capture parameter in the second raw image data.
Optionally, the apparatus is further configured to: obtain a delay progression algorithm factor alpha related to the motion compensation; and train the security countermeasure model according to the delay progression algorithm factor alpha and a security historical database.
According to another aspect of the embodiment of the invention, a nonvolatile storage medium is further provided, and the nonvolatile storage medium includes a stored program, wherein the program controls a device where the nonvolatile storage medium is located to execute an intelligent security image recognition method based on motion compensation when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a processor and a memory; the memory is stored with computer readable instructions, and the processor is used for executing the computer readable instructions, wherein the computer readable instructions execute an intelligent security image identification method based on motion compensation when running.
In the embodiment of the invention, first original image data and second original image data are acquired, wherein the acquisition time of the first original image data is later than that of the second original image data; image motion compensation is performed according to the second original image data to obtain first security image data; the first security image data is input into a security countermeasure model to obtain second security image data; and a security judgment result is generated according to the second security data. This solves the technical problems that security monitoring and image identification processing methods in the prior art judge security risks and security emergencies only from real-time dynamic image monitoring results, consume a large amount of manual work and computing resources, and thereby reduce the efficiency and accuracy of security identification.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flowchart of an intelligent security image recognition method based on motion compensation according to an embodiment of the present invention;
FIG. 2 is a block diagram of an intelligent security image recognition device based on motion compensation according to an embodiment of the present invention;
FIG. 3 is a block diagram of a terminal device for performing a method according to the present invention, according to an embodiment of the present invention;
FIG. 4 is a memory unit for holding or carrying program code implementing a method according to the present invention, according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of an intelligent security image recognition method based on motion compensation, it is noted that the steps illustrated in the flowchart of the drawings may be executed in a computer system such as a set of computer executable instructions, and although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be executed in an order different from that herein.
Example one
Fig. 1 is a flowchart of an intelligent security image recognition method based on motion compensation according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
step S102, acquiring first original image data and second original image data, wherein the acquisition time of the first original image data is after the second original image data.
Specifically, in order to solve the technical problems that security monitoring and image recognition processing methods in the prior art judge security risks and security emergencies only from real-time dynamic image monitoring results, consume a large amount of manual work and computing resources, and thereby reduce the efficiency and accuracy of security recognition, a camera device first collects real-time dynamic images. The collected images can be stored at a preset period T, and the image data at times T+1 and T are stored as the first original image data and the second original image data respectively. In this way, this first step of the embodiment of the present invention collects and stores two pieces of image data from the same camera device with a fixed time difference, which can be used in the subsequent image compensation operations; a minimal sketch of this frame-pair acquisition is given below.
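For illustration only, the following Python sketch shows one possible way to capture such a frame pair at a fixed period. The use of OpenCV (cv2), the function name capture_frame_pair, and the default one-second period are assumptions of this sketch and are not specified by the embodiment.

    import time
    import cv2  # assumed capture backend; the patent does not name a specific library

    def capture_frame_pair(source=0, period_s=1.0):
        """Capture two frames from the same camera, period_s seconds apart.

        Returns (first_raw, second_raw): the frame captured later is treated as
        the "first original image data", the earlier one as the "second".
        """
        cam = cv2.VideoCapture(source)
        try:
            ok, second_raw = cam.read()          # frame at time T
            if not ok:
                raise RuntimeError("failed to read frame at time T")
            time.sleep(period_s)                 # wait one preset period
            ok, first_raw = cam.read()           # frame at time T + 1
            if not ok:
                raise RuntimeError("failed to read frame at time T + 1")
            return first_raw, second_raw
        finally:
            cam.release()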
Optionally, the acquiring the first original image data and the second original image data includes: acquiring first timestamp information and second timestamp information according to the compensation displacement in the motion compensation algorithm; acquiring the first original image data according to the first timestamp information; and acquiring the second original image data according to the second timestamp information.
Specifically, in order to satisfy the compensation parameter in the motion compensation algorithm, the embodiment of the present invention acquires the first timestamp information and the second timestamp information according to the compensation parameter, i.e., the displacement time compensation amount, in the motion compensation algorithm. The two timestamps may be expressed at second-level granularity, and the capture accuracy conforms to a Gaussian (normal) distribution. The first original image data is then acquired according to the first timestamp information, and the second original image data is acquired according to the second timestamp information; a small sketch of this timestamp derivation follows.
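For illustration only, the following Python sketch shows one possible way to derive the two timestamps from a displacement time compensation amount expressed in seconds. The function name and the choice of the current time as the reference are assumptions of this sketch.

    from datetime import datetime, timedelta

    def timestamps_from_compensation(displacement_s, reference=None):
        """Derive first/second timestamps from a displacement time compensation amount.

        displacement_s: compensation displacement in seconds (second-level granularity,
        as described above). reference defaults to the current time.
        """
        reference = reference or datetime.now()
        second_ts = reference - timedelta(seconds=displacement_s)  # earlier frame
        first_ts = reference                                       # later frame
        return first_ts, second_ts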
And step S104, performing image motion compensation according to the second original image data to obtain first security image data.
Optionally, the performing image motion compensation according to the second original image data to obtain first security image data includes: calculating the first security image data D according to the motion compensation formula (given as an image in the original filing), wherein i is a motion coordination factor, j is a motion generation factor, f_Pij is the motion capture parameter in the first raw image data, and the corresponding parameter (also given as an image) is the motion capture parameter in the second raw image data.
Specifically, after the first original image data and the second original image data with different timestamps are acquired, the motion capture parameters in the first original image are used as the displacement source parameters of the motion compensation formula, and motion compensation is performed with the second original image data as the compensation basis to obtain the first security image data. The first security image data is a set of images carrying compensation marks rather than a single unprocessed image, which differs from the prior-art process of judging only the existing dynamic image; this can greatly increase the accuracy of image judgment and improve the probability of detecting a security event or a hidden security risk early. For example, the motion compensation algorithm described above may use block-wise motion compensation, in which each frame is divided into blocks of pixels (16 × 16 pixel blocks in most video coding standards, such as MPEG). The current block is predicted from an equal-sized block at some position in a reference frame; the prediction involves only translation, and the size of the translation is called a motion vector. For block motion compensation, the motion vectors are necessary parameters of the model and must be encoded into the bitstream. Since motion vectors are not independent (for example, two neighbouring motion vectors belonging to the same moving object are usually strongly correlated), differential coding is usually used to reduce the bit rate: adjacent motion vectors are differenced before encoding, and only the difference is encoded. Entropy coding of the motion vector components can further eliminate their statistical redundancy (the motion vector differences are usually concentrated around the zero vector). The value of a motion vector may be non-integer, in which case the motion compensation is called sub-pixel precision motion compensation: the reference frame pixel values are first interpolated at sub-pixel level, and motion compensation is then performed. The simplest sub-pixel precision motion compensation uses half-pixel precision, and there are also motion compensation algorithms using 1/4-pixel and 1/8-pixel precision. Higher sub-pixel accuracy may improve the accuracy of motion compensation, but the large number of interpolation operations greatly increases the computational complexity. A minimal sketch of the integer-pixel block matching described here is given below.
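For illustration only, the following Python sketch shows integer-pixel block-matching motion compensation of the kind outlined above, using a 16 × 16 block size and an exhaustive search window. The use of numpy, the function name block_motion_compensate, and the sum-of-absolute-differences matching criterion are assumptions of this sketch rather than parameters fixed by the embodiment.

    import numpy as np

    def block_motion_compensate(ref, cur, block=16, search=8):
        """Predict `cur` from `ref` by block-wise motion compensation.

        For each block x block tile of `cur`, exhaustively search a +/- search
        window in `ref` for the best-matching block (minimum sum of absolute
        differences) and copy it into the predicted frame. Returns the predicted
        frame and the per-block motion vectors.
        """
        h, w = cur.shape
        pred = np.zeros_like(cur)
        vectors = {}
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                target = cur[by:by + block, bx:bx + block].astype(np.int32)
                best_sad, best_dy, best_dx = None, 0, 0
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue  # candidate block falls outside the reference frame
                        cand = ref[y:y + block, x:x + block].astype(np.int32)
                        sad = np.abs(cand - target).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best_dy, best_dx = sad, dy, dx
                vectors[(by, bx)] = (best_dy, best_dx)
                pred[by:by + block, bx:bx + block] = ref[by + best_dy:by + best_dy + block,
                                                         bx + best_dx:bx + best_dx + block]
        return pred, vectors

Differential and entropy coding of the resulting motion vectors, and sub-pixel interpolation of the reference frame, could then be layered on top, as the paragraph above notes.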
And S106, inputting the first security image data into a security countermeasure model to obtain second security image data.
Specifically, since the security image data has been acquired, in order to reduce the amount of computing resources occupied, the security countermeasure model can be trained as a generative adversarial network model. The data used to train the model can be input feature vectors and output simulated feature vectors built from the specific parameters of related security events in a big data platform, so that the trained model can directly judge from the first security image data whether a security problem exists.
Optionally, when training the GAN-based countermeasure model, before the first security image data is input into the security countermeasure model to obtain the second security image data, the method further includes: obtaining a delay progression algorithm factor alpha related to the motion compensation; and training the security countermeasure model according to the delay progression algorithm factor alpha and a security historical database. A toy sketch of such adversarial training is given below.
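For illustration only, the following Python sketch shows a toy generative adversarial training loop for the security countermeasure model. The use of PyTorch, the network sizes, the function name train_security_gan, and the use of the delay progression algorithm factor alpha as a generator-loss weight are assumptions of this sketch, since the embodiment does not specify how alpha enters the training.

    import torch
    import torch.nn as nn

    def train_security_gan(history_vectors, alpha=0.1, epochs=100, latent=64):
        """Train a toy GAN on feature vectors drawn from a security history database.

        history_vectors: (N, D) float tensor of historical security feature vectors.
        alpha: delay progression factor, used here (by assumption) to weight the
        generator loss; the patent only states that alpha participates in training.
        """
        dim = history_vectors.shape[1]
        gen = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))
        disc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))
        bce = nn.BCEWithLogitsLoss()
        opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

        for _ in range(epochs):
            real = history_vectors
            fake = gen(torch.randn(real.shape[0], latent))

            # Discriminator step: real vectors -> 1, generated vectors -> 0.
            d_loss = bce(disc(real), torch.ones(real.shape[0], 1)) + \
                     bce(disc(fake.detach()), torch.zeros(real.shape[0], 1))
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()

            # Generator step, scaled by the (assumed) delay progression factor alpha.
            g_loss = alpha * bce(disc(fake), torch.ones(real.shape[0], 1))
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()

        return gen, disc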
And S108, generating a security judgment result according to the second security data.
Specifically, after the second security data generated by the security countermeasure model is acquired, the embodiment of the invention can perform early warning, alarming or prompting according to the content in the second security data.
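For illustration only, the following Python sketch maps the second security data to an early-warning, alarm, or normal result. The assumption that the second security data carries a risk score in [0, 1], the threshold values, and the result labels are illustrative and not specified by the embodiment.

    def security_judgment(second_security_data, warn_threshold=0.5, alarm_threshold=0.8):
        """Map the countermeasure model output to a security judgment result.

        second_security_data: assumed here to carry a risk score in [0, 1];
        the thresholds and labels are illustrative, not specified by the patent.
        """
        score = float(second_security_data["risk_score"])
        if score >= alarm_threshold:
            return {"result": "alarm", "score": score}
        if score >= warn_threshold:
            return {"result": "early_warning", "score": score}
        return {"result": "normal", "score": score}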
Through this embodiment, the technical problems that security monitoring and image identification processing methods in the prior art judge security risks and security emergencies only from real-time dynamic image monitoring results, consume a large amount of manual work and computing resources, and thereby reduce the efficiency and accuracy of security identification are solved.
Example two
Fig. 2 is a block diagram of an intelligent security image recognition device based on motion compensation according to an embodiment of the present invention, and as shown in fig. 2, the device includes:
an acquiring module 20, configured to acquire first raw image data and second raw image data, where an acquisition time of the first raw image data is later than that of the second raw image data.
Specifically, in order to solve the technical problems that security monitoring and image recognition processing methods in the prior art judge security risks and security emergencies only from real-time dynamic image monitoring results, consume a large amount of manual work and computing resources, and thereby reduce the efficiency and accuracy of security recognition, a camera device first collects real-time dynamic images. The collected images can be stored at a preset period T, and the image data at times T+1 and T are stored as the first original image data and the second original image data respectively. In this way, this first step of the embodiment of the present invention collects and stores two pieces of image data from the same camera device with a fixed time difference, which can be used in the subsequent image compensation operations.
Optionally, the acquisition module includes: an acquiring unit, configured to acquire first timestamp information and second timestamp information according to the compensation displacement in the motion compensation algorithm; a first acquisition unit, configured to acquire the first original image data according to the first timestamp information; and a second acquisition unit, configured to acquire the second original image data according to the second timestamp information.
Specifically, in order to satisfy the compensation parameter in the motion compensation algorithm, the embodiment of the present invention acquires the first timestamp information and the second timestamp information according to the compensation parameter, i.e., the displacement time compensation amount, in the motion compensation algorithm. The two timestamps may be expressed at second-level granularity, and the capture accuracy conforms to a Gaussian (normal) distribution. The first original image data is then acquired according to the first timestamp information, and the second original image data is acquired according to the second timestamp information.
And the compensation module 22 is configured to perform image motion compensation according to the second original image data to obtain first security image data.
Optionally, the compensation module includes: a compensation calculation unit, configured to calculate the first security image data D according to the motion compensation formula (given as an image in the original filing), wherein i is a motion coordination factor, j is a motion generation factor, f_Pij is the motion capture parameter in the first raw image data, and the corresponding parameter (also given as an image) is the motion capture parameter in the second raw image data.
Specifically, after the first original image data and the second original image data with different timestamps are acquired, the motion capture parameters in the first original image are used as the displacement source parameters of the motion compensation formula, and motion compensation is performed with the second original image data as the compensation basis to obtain the first security image data. The first security image data is a set of images carrying compensation marks rather than a single unprocessed image, which differs from the prior-art process of judging only the existing dynamic image; this can greatly increase the accuracy of image judgment and improve the probability of detecting a security event or a hidden security risk early. For example, the motion compensation algorithm described above may use block-wise motion compensation, in which each frame is divided into blocks of pixels (16 × 16 pixel blocks in most video coding standards, such as MPEG). The current block is predicted from an equal-sized block at some position in a reference frame; the prediction involves only translation, and the size of the translation is called a motion vector. For block motion compensation, the motion vectors are necessary parameters of the model and must be encoded into the bitstream. Since motion vectors are not independent (for example, two neighbouring motion vectors belonging to the same moving object are usually strongly correlated), differential coding is usually used to reduce the bit rate: adjacent motion vectors are differenced before encoding, and only the difference is encoded. Entropy coding of the motion vector components can further eliminate their statistical redundancy (the motion vector differences are usually concentrated around the zero vector). The value of a motion vector may be non-integer, in which case the motion compensation is called sub-pixel precision motion compensation: the reference frame pixel values are first interpolated at sub-pixel level, and motion compensation is then performed. The simplest sub-pixel precision motion compensation uses half-pixel precision, and there are also motion compensation algorithms using 1/4-pixel and 1/8-pixel precision. Higher sub-pixel accuracy may improve the accuracy of motion compensation, but the large number of interpolation operations greatly increases the computational complexity.
And the input module 24 is configured to input the first security image data into the security countermeasure model to obtain second security image data.
Specifically, since the security image data has been acquired, in order to reduce the amount of computing resources occupied, the security countermeasure model can be trained as a generative adversarial network model. The data used to train the model can be input feature vectors and output simulated feature vectors built from the specific parameters of related security events in a big data platform, so that the trained model can directly judge from the first security image data whether a security problem exists.
Optionally, when training the GAN-based countermeasure model, the apparatus in the embodiment of the present invention is further configured to: obtain a delay progression algorithm factor alpha related to the motion compensation; and train the security countermeasure model according to the delay progression algorithm factor alpha and a security historical database.
And the generating module 26 is configured to generate a security judgment result according to the second security data.
Specifically, after the second security data generated by the security countermeasure model is acquired, the embodiment of the invention can perform early warning, alarming or prompting according to the content in the second security data.
According to another aspect of the embodiment of the invention, a nonvolatile storage medium is further provided, and the nonvolatile storage medium includes a stored program, wherein the program controls a device where the nonvolatile storage medium is located to execute an intelligent security image recognition method based on motion compensation when running.
Specifically, the method comprises the following steps: acquiring first original image data and second original image data, wherein the acquisition time of the first original image data is later than that of the second original image data; performing image motion compensation according to the second original image data to obtain first security image data; inputting the first security image data into a security countermeasure model to obtain second security image data; and generating a security judgment result according to the second security data. Optionally, the acquiring the first original image data and the second original image data includes: acquiring first timestamp information and second timestamp information according to the compensation displacement in the motion compensation algorithm; acquiring the first original image data according to the first timestamp information; and acquiring the second original image data according to the second timestamp information. Optionally, the performing image motion compensation according to the second original image data to obtain first security image data includes: calculating the first security image data D according to the motion compensation formula (given as an image in the original filing), wherein i is a motion coordination factor, j is a motion generation factor, f_Pij is the motion capture parameter in the first raw image data, and the corresponding parameter (also given as an image) is the motion capture parameter in the second raw image data. Optionally, before the first security image data is input into the security countermeasure model to obtain the second security image data, the method further includes: obtaining a delay progression algorithm factor alpha related to the motion compensation; and training the security countermeasure model according to the delay progression algorithm factor alpha and a security historical database.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a processor and a memory; the memory is stored with computer readable instructions, and the processor is used for executing the computer readable instructions, wherein the computer readable instructions execute an intelligent security image identification method based on motion compensation when running.
Specifically, the method includes: acquiring first original image data and second original image data, wherein the acquisition time of the first original image data is later than that of the second original image data; performing image motion compensation according to the second original image data to obtain first security image data; inputting the first security image data into a security countermeasure model to obtain second security image data; and generating a security judgment result according to the second security data. Optionally, the acquiring the first original image data and the second original image data includes: acquiring first timestamp information and second timestamp information according to the compensation displacement in the motion compensation algorithm; acquiring the first original image data according to the first timestamp information; and acquiring the second original image data according to the second timestamp information. Optionally, the performing image motion compensation according to the second original image data to obtain first security image data includes: calculating the first security image data D according to the motion compensation formula (given as an image in the original filing), wherein i is a motion coordination factor, j is a motion generation factor, f_Pij is the motion capture parameter in the first raw image data, and the corresponding parameter (also given as an image) is the motion capture parameter in the second raw image data. Optionally, before the first security image data is input into the security countermeasure model to obtain the second security image data, the method further includes: obtaining a delay progression algorithm factor alpha related to the motion compensation; and training the security countermeasure model according to the delay progression algorithm factor alpha and a security historical database.
Through this embodiment, the technical problems that security monitoring and image identification processing methods in the prior art judge security risks and security emergencies only from real-time dynamic image monitoring results, consume a large amount of manual work and computing resources, and thereby reduce the efficiency and accuracy of security identification are solved.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, fig. 3 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to realize communication connections between the elements. The memory 33 may comprise a high speed RAM memory, and may also include a non-volatile memory NVM, such as at least one disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through a wired or wireless connection.
Optionally, the input device 30 may include a variety of input devices, for example, at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; optionally, the transceiver may be a radio frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, and the like. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, a sound, or other output device.
In this embodiment, the processor of the terminal device includes a module for executing the functions of the modules of the data processing apparatus in each device, and specific functions and technical effects may refer to the foregoing embodiments, which are not described herein again.
Fig. 4 is a schematic diagram of a hardware structure of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of fig. 3 in an implementation process. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the method in the above-described embodiment.
The memory 42 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The memory 42 may include a Random Access Memory (RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. Processing component 40 may include one or more processors 41 to execute instructions to perform all or a portion of the steps of the above-described method. Further, processing component 40 may include one or more modules that facilitate interaction between processing component 40 and other components. For example, the processing component 40 may include a multimedia module to facilitate interaction between the multimedia component 45 and the processing component 40.
The power supply component 44 provides power to the various components of the terminal device. The power components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia component 45 includes a display screen providing an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a voice recognition mode. The received audio signal may further be stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 also includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing component 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor assembly 48 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor assembly 48 may detect the open/closed status of the terminal device, the relative positioning of the components, the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate communication between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card therein, so that the terminal device can log on to a GPRS network and establish communication with the server via the internet.
From the above, the communication component 43, the audio component 46, the input/output interface 47 and the sensor component 48 referred to in the embodiment of fig. 4 can be implemented as the input device in the embodiment of fig. 3.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (10)

1. An intelligent security image identification method based on motion compensation is characterized by comprising the following steps:
acquiring first original image data and second original image data, wherein the acquisition time of the first original image data is later than that of the second original image data;
performing image motion compensation according to the second original image data to obtain first security image data;
inputting the first security image data into a security countermeasure model to obtain second security image data;
and generating a security judgment result according to the second security data.
2. The method of claim 1, wherein the acquiring first raw image data and second raw image data comprises:
acquiring first timestamp information and second timestamp information according to the compensation displacement in the motion compensation algorithm;
acquiring the first original image data according to the first timestamp information;
and acquiring the second original image data according to the second timestamp information.
3. The method according to claim 1, wherein the performing image motion compensation according to the second original image data to obtain first security image data comprises:
calculating the first security image data D according to the motion compensation formula (given as an image in the original filing), wherein i is a motion coordination factor, j is a motion generation factor, f_Pij is the motion capture parameter in the first raw image data, and the corresponding parameter (also given as an image) is the motion capture parameter in the second raw image data.
4. The method of claim 1, wherein prior to the inputting the first security image data into the security countermeasure model resulting in second security image data, the method further comprises:
obtaining a delay progression algorithm factor alpha related to the motion compensation;
and training the security countermeasure model according to the delay progression algorithm factor alpha and a security historical database.
5. An intelligent security image recognition device based on motion compensation, characterized by comprising:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring first original image data and second original image data, and the acquisition time of the first original image data is later than that of the second original image data;
the compensation module is used for performing image motion compensation according to the second original image data to obtain first security image data;
the input module is used for inputting the first security image data into a security countermeasure model to obtain second security image data;
and the generating module is used for generating a security judgment result according to the second security data.
6. The apparatus of claim 5, wherein the obtaining module comprises:
the acquiring unit is used for acquiring first timestamp information and second timestamp information according to the compensation displacement in the motion compensation algorithm;
the first acquisition unit is used for acquiring the first original image data according to the first timestamp information;
and the second acquisition unit is used for acquiring the second original image data according to the second timestamp information.
7. The apparatus of claim 5, wherein the compensation module comprises:
a compensation calculation unit, configured to calculate the first security image data D according to the motion compensation formula (given as an image in the original filing), wherein i is a motion coordination factor, j is a motion generation factor, f_Pij is the motion capture parameter in the first raw image data, and the corresponding parameter (also given as an image) is the motion capture parameter in the second raw image data.
8. The apparatus of claim 5, further comprising:
obtaining a delay progression algorithm factor alpha related to the motion compensation;
and training the security countermeasure model according to the delay progression algorithm factor alpha and a security historical database.
9. A non-volatile storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform the method of any one of claims 1 to 4.
CN202210639487.2A 2022-06-07 2022-06-07 Intelligent security image identification method and device based on motion compensation Active CN114842424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210639487.2A CN114842424B (en) 2022-06-07 2022-06-07 Intelligent security image identification method and device based on motion compensation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210639487.2A CN114842424B (en) 2022-06-07 2022-06-07 Intelligent security image identification method and device based on motion compensation

Publications (2)

Publication Number Publication Date
CN114842424A (en) 2022-08-02
CN114842424B CN114842424B (en) 2023-01-24

Family

ID=82574160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210639487.2A Active CN114842424B (en) 2022-06-07 2022-06-07 Intelligent security image identification method and device based on motion compensation

Country Status (1)

Country Link
CN (1) CN114842424B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115293985A (en) * 2022-08-11 2022-11-04 北京拙河科技有限公司 Super-resolution noise reduction method and device for image optimization
CN115426525A (en) * 2022-09-05 2022-12-02 北京拙河科技有限公司 High-speed moving frame based linkage image splitting method and device
CN116030501A (en) * 2023-02-15 2023-04-28 北京拙河科技有限公司 Method and device for extracting bird detection data

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101416523A (en) * 2006-04-03 2009-04-22 英特尔公司 Motion compensated frame rate conversion with protection against compensation artifacts
CN101762810A (en) * 2009-12-15 2010-06-30 中国科学院声学研究所 Synthetic-aperture sonar motion compensation method under wide swath
CN101895762A (en) * 2010-07-30 2010-11-24 天津大学 Frame frequency lifting algorithm based on zero detection and vector filtering
CN102355556A (en) * 2011-11-02 2012-02-15 无锡博视芯半导体科技有限公司 Three-dimensional noise reduction method for video and image based on motion estimation
CN107274347A (en) * 2017-07-11 2017-10-20 福建帝视信息科技有限公司 A kind of video super-resolution method for reconstructing based on depth residual error network
CN109087243A (en) * 2018-06-29 2018-12-25 中山大学 A kind of video super-resolution generation method generating confrontation network based on depth convolution
CN113033575A (en) * 2021-03-03 2021-06-25 杭州天时亿科技有限公司 Early warning method and device based on image recognition
CN114255187A (en) * 2021-12-22 2022-03-29 中国电信集团系统集成有限责任公司 Multi-level and multi-level image optimization method and system based on big data platform
WO2022063265A1 (en) * 2020-09-28 2022-03-31 华为技术有限公司 Inter-frame prediction method and apparatus
CN114422804A (en) * 2021-12-21 2022-04-29 浙江智慧视频安防创新中心有限公司 Method, device and system for jointly encoding and decoding digital retina video stream and feature stream

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101416523A (en) * 2006-04-03 2009-04-22 英特尔公司 Motion compensated frame rate conversion with protection against compensation artifacts
CN101762810A (en) * 2009-12-15 2010-06-30 中国科学院声学研究所 Synthetic-aperture sonar motion compensation method under wide swath
CN101895762A (en) * 2010-07-30 2010-11-24 天津大学 Frame frequency lifting algorithm based on zero detection and vector filtering
CN102355556A (en) * 2011-11-02 2012-02-15 无锡博视芯半导体科技有限公司 Three-dimensional noise reduction method for video and image based on motion estimation
CN107274347A (en) * 2017-07-11 2017-10-20 福建帝视信息科技有限公司 A kind of video super-resolution method for reconstructing based on depth residual error network
CN109087243A (en) * 2018-06-29 2018-12-25 中山大学 A kind of video super-resolution generation method generating confrontation network based on depth convolution
WO2022063265A1 (en) * 2020-09-28 2022-03-31 华为技术有限公司 Inter-frame prediction method and apparatus
CN113033575A (en) * 2021-03-03 2021-06-25 杭州天时亿科技有限公司 Early warning method and device based on image recognition
CN114422804A (en) * 2021-12-21 2022-04-29 浙江智慧视频安防创新中心有限公司 Method, device and system for jointly encoding and decoding digital retina video stream and feature stream
CN114255187A (en) * 2021-12-22 2022-03-29 中国电信集团系统集成有限责任公司 Multi-level and multi-level image optimization method and system based on big data platform

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JIANPING LIN et al.: "Generative Adversarial Network-Based Frame Extrapolation for Video Coding", 2018 IEEE Visual Communications and Image Processing (VCIP)
HE Wei et al.: "Research on de-interlacing algorithm for the video compression front end and system design", 《电子技术应用》 (Application of Electronic Technique)
SUN Hui et al.: "Application of electronic image stabilization technology in shipborne TV surveillance systems", 《光电工程》 (Opto-Electronic Engineering)
SUN Hui et al.: "Electronic image stabilization technology for aerial opto-electronic imaging", 《光学精密工程》 (Optics and Precision Engineering)
ZHANG Zhaoyang et al.: "A moving object detection method under non-stationary background", 《电子设计工程》 (Electronic Design Engineering)
YANG Lu et al.: "Frame rate up-conversion algorithm based on multiple adjacent overlapped block motion compensation", 《电子测量技术》 (Electronic Measurement Technology)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115293985A (en) * 2022-08-11 2022-11-04 北京拙河科技有限公司 Super-resolution noise reduction method and device for image optimization
CN115426525A (en) * 2022-09-05 2022-12-02 北京拙河科技有限公司 High-speed moving frame based linkage image splitting method and device
CN115426525B (en) * 2022-09-05 2023-05-26 北京拙河科技有限公司 High-speed dynamic frame linkage image splitting method and device
CN116030501A (en) * 2023-02-15 2023-04-28 北京拙河科技有限公司 Method and device for extracting bird detection data
CN116030501B (en) * 2023-02-15 2023-10-10 北京拙河科技有限公司 Method and device for extracting bird detection data

Also Published As

Publication number Publication date
CN114842424B (en) 2023-01-24

Similar Documents

Publication Publication Date Title
CN114842424B (en) Intelligent security image identification method and device based on motion compensation
CN115426525B (en) High-speed dynamic frame linkage image splitting method and device
CN115631122A (en) Image optimization method and device for edge image algorithm
CN107578024A (en) A kind of face identification system
CN115170818A (en) Dynamic frame image feature extraction method and device
CN114999092A (en) Disaster early warning method and device based on multiple forest fire model
CN115623336A (en) Image tracking method and device for hundred million-level camera equipment
CN115984126A (en) Optical image correction method and device based on input instruction
US11393091B2 (en) Video image processing and motion detection
CN112991274A (en) Crowd counting method and device, computer equipment and storage medium
CN115474091A (en) Motion capture method and device based on decomposition metagraph
CN115334291A (en) Tunnel monitoring method and device based on hundred million-level pixel panoramic compensation
CN115527045A (en) Image identification method and device for snow field danger identification
CN115035685A (en) Square security monitoring method and device based on dispersive motor neural network
CN112580543B (en) Behavior recognition method, system and device
CN115914819B (en) Picture capturing method and device based on orthogonal decomposition algorithm
CN116402935B (en) Image synthesis method and device based on ray tracing algorithm
CN115205313B (en) Picture optimization method and device based on least square algorithm
CN116664413B (en) Image volume fog eliminating method and device based on Abbe convergence operator
CN115187570B (en) Singular traversal retrieval method and device based on DNN deep neural network
CN115511735A (en) Snow field gray level picture optimization method and device
CN116744102B (en) Ball machine tracking method and device based on feedback adjustment
CN116723419B (en) Acquisition speed optimization method and device for billion-level high-precision camera
CN116088580B (en) Flying object tracking method and device
CN116309523A (en) Dynamic frame image dynamic fuzzy recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant