CN114866702A - Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device - Google Patents

Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device

Info

Publication number
CN114866702A
Authority
CN
China
Prior art keywords
image data
pixel
camera
auxiliary
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210659665.8A
Other languages
Chinese (zh)
Inventor
袁潮
温建伟
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202210659665.8A
Publication of CN114866702A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a border monitoring and collecting method and device based on a multi-auxiliary linkage camera shooting technology. The method comprises the following steps: acquiring activation parameters of a camera array; acquiring images according to the activation parameters to obtain main image data and auxiliary image data; decomposing the auxiliary image data according to camera array function information to obtain pixel function image data and extended function image data; and fusing the main image data and the pixel function image data by using the extended function image data to generate an intrusion judgment result. The invention solves the prior-art technical problem that border security management and analysis rely only on a single camera or on the region-by-region stitching of several cameras and that the same monitored region cannot be judged by a set of several or dozens of camera systems, a limitation that reduces the efficiency and quality of border security event judgment.

Description

Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device
Technical Field
The invention relates to the field of image processing for linked camera systems, and in particular to a border monitoring and collecting method and device based on a multi-auxiliary linkage camera shooting technology.
Background
With the continuous development of intelligent science and technology, people increasingly use intelligent devices in daily life, work, and study; intelligent technological means have improved people's quality of life and increased the efficiency of their study and work.
At present, in the process of monitoring and collecting border security incidents, an ordinary surveillance camera is generally used to shoot and scan the secured areas, and the resulting image data are analyzed and processed, or a deep network model is used to judge from the image data whether a security risk exists in the monitored area. In the prior art, however, border security management and analysis rely only on a single camera or on the region-by-region stitching of several cameras; there is no method for judging the same monitored region with a set of several or dozens of camera systems, which reduces the efficiency and quality of border security event judgment.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a border monitoring and collecting method and device based on a multi-auxiliary linkage camera shooting technology, which at least solve the prior-art technical problem that border security management and analysis rely only on a single camera or on the region-by-region stitching of several cameras and that the same monitored region cannot be judged by a set of several or dozens of camera systems, a limitation that reduces the efficiency and quality of border security event judgment.
According to an aspect of the embodiments of the present invention, a border monitoring and collecting method based on a multi-auxiliary linkage camera shooting technology is provided, including: acquiring activation parameters of a camera array; acquiring images according to the activation parameters to obtain main image data and auxiliary image data; decomposing the auxiliary image data according to camera array function information to obtain pixel function image data and extended function image data; and fusing the main image data and the pixel function image data by using the extended function image data to generate an intrusion judgment result.
Optionally, the activation parameters include: the starting state of the main camera equipment, the starting state of the auxiliary camera equipment and the running state of the linkage channel.
Optionally, decomposing the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data includes: acquiring array equipment function information in the camera array, wherein the equipment function information comprises equipment use and equipment use parameters; and analyzing the auxiliary image data according to the equipment function information, and separating pixel function image data and extended function image data under each function.
Optionally, fusing the main image data and the pixel function image data by using the extended function image data and generating an intrusion judgment result includes: setting a pixel point position X in the main image data as a variable point position X; setting a pixel point position Y in the pixel function image data as a variable point position Y, wherein the pixel point positions in the pixel function image data are the set of dot-matrix pixel images of all auxiliary camera devices of the camera array other than the main camera device; calculating a fusion pixel solution through a multi-level lattice pixel algorithm (given in the original only as an image, Figure BDA0003689988740000021, and not reproduced here); and generating an intrusion judgment result according to the fusion pixel solution and an intrusion judgment model, wherein the intrusion judgment result comprises an intrusion state and a non-intrusion state.
According to another aspect of the embodiments of the present invention, there is also provided a border monitoring and collecting device based on a multi-auxiliary linkage camera shooting technology, including: an obtaining module, used for obtaining the activation parameters of the camera array; an acquisition module, used for acquiring images according to the activation parameters to obtain main image data and auxiliary image data; a decomposition module, used for decomposing the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data; and a fusion module, used for fusing the main image data and the pixel function image data by using the extended function image data to generate an intrusion judgment result.
Optionally, the activation parameters include: the starting state of the main camera equipment, the starting state of the auxiliary camera equipment and the running state of the linkage channel.
Optionally, the decomposition module includes: an acquisition unit, used for acquiring the array device function information in the camera array, where the device function information includes device usage and device usage parameters; and an analysis unit, used for analyzing the auxiliary image data according to the device function information and separating the pixel function image data and extended function image data under each function.
Optionally, the fusion module includes: a setting unit, configured to set a pixel point position X in the main image data as a variable point position X; a conversion unit, configured to set a pixel point position Y in the pixel function image data as a variable point position Y, wherein the pixel point positions in the pixel function image data are the set of dot-matrix pixel images of all auxiliary camera devices of the camera array other than the main camera device; a calculation unit, configured to calculate a fusion pixel solution through a multi-level lattice pixel algorithm (given in the original only as an image, Figure BDA0003689988740000031, and not reproduced here); and a generating unit, configured to generate an intrusion judgment result according to the fusion pixel solution and an intrusion judgment model, wherein the intrusion judgment result comprises an intrusion state and a non-intrusion state.
According to another aspect of the embodiments of the present invention, a nonvolatile storage medium is further provided. The nonvolatile storage medium includes a stored program, and when the program runs, it controls a device in which the nonvolatile storage medium is located to execute the border monitoring and collecting method based on the multi-auxiliary linkage camera shooting technology.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a processor and a memory. The memory stores computer-readable instructions, and the processor is configured to run the computer-readable instructions, which, when run, execute the border monitoring and collecting method based on the multi-auxiliary linkage camera shooting technology.
In the embodiment of the invention, the method acquires the activation parameters of the camera array; acquires images according to the activation parameters to obtain main image data and auxiliary image data; decomposes the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data; and fuses the main image data and the pixel function image data by using the extended function image data to generate an intrusion judgment result. This solves the prior-art technical problem that border security management and analysis rely only on a single camera or on the region-by-region stitching of several cameras and that the same monitored region cannot be judged by a set of several or dozens of camera systems, a limitation that reduces the efficiency and quality of border security event judgment.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flowchart of a border monitoring and collecting method based on a multi-auxiliary linkage camera shooting technology according to an embodiment of the present invention;
FIG. 2 is a block diagram of a border monitoring and collecting device based on a multi-auxiliary linkage camera shooting technology according to an embodiment of the present invention;
FIG. 3 is a block diagram of a terminal device for performing a method according to an embodiment of the present invention; and
FIG. 4 is a block diagram of a memory unit for holding or carrying program code implementing a method according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present invention, there is provided a method embodiment of a border monitoring acquisition method based on a multi-auxiliary linkage camera shooting technique, where the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer executable instructions, and although a logical order is shown in the flowchart, in some cases, the steps shown or described may be executed in an order different from that shown or described herein.
Example one
Fig. 1 is a flowchart of a border monitoring and collecting method based on a multi-auxiliary linkage camera shooting technology according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
and step S102, acquiring the activation parameters of the camera array.
Specifically, to solve the prior-art technical problem that border security management and analysis rely only on a single camera or on the region-by-region stitching of several cameras, and that several or dozens of camera systems cannot be integrated to judge the same monitored region, which reduces the efficiency and quality of border security event judgment, the camera array must first be activated and the activation parameters of a camera array composed of one main camera and multiple auxiliary camera devices must be acquired, so that when the multiple images of the camera array are processed subsequently, image acquisition can be performed directly through the relationship parameters among the cameras.
Optionally, the activation parameters include: the starting state of the main camera equipment, the starting state of the auxiliary camera equipment and the running state of the linkage channel.
Specifically, the activation parameters in the embodiment of the present invention include: the starting state of the main camera device, the starting state of the auxiliary camera devices, and the running state of the linkage channel. The linkage channel running state may include whether the linkage channel is already physically activated and whether it has successfully connected each main camera and each auxiliary camera; the connection relationship and connection parameters of each camera device are stored in the memory.
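As a rough illustration of how these activation parameters might be represented and checked before acquisition begins, the following Python sketch models the three items named above. All field and function names are assumptions, since the patent describes the parameters only in prose.

```python
from dataclasses import dataclass, field

@dataclass
class ActivationParameters:
    """Illustrative container for the camera-array activation parameters:
    main camera start state, auxiliary camera start states, and the
    linkage channel running state (names are assumptions)."""
    main_camera_on: bool
    aux_cameras_on: dict[str, bool]                 # aux device id -> started?
    linkage_channel_active: bool                    # channel physically activated?
    linkage_connections: dict[str, bool] = field(default_factory=dict)  # device id -> connected?

    def array_ready(self) -> bool:
        # Usable only if the main camera is up, the linkage channel is
        # active, and every started auxiliary camera is connected to it.
        return (self.main_camera_on
                and self.linkage_channel_active
                and all(self.linkage_connections.get(dev, False)
                        for dev, on in self.aux_cameras_on.items() if on))
```

The readiness check mirrors the text's requirement that the linkage channel be activated and have successfully connected each main and auxiliary camera.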
And step S104, acquiring images according to the activation parameters to obtain main image data and auxiliary image data.
Specifically, after the activation parameters are obtained, the image acquisition of the camera array can be divided into main image data and auxiliary image data, where the main image data are acquired in real time by one or more main camera devices and the auxiliary image data are acquired in real time by multiple auxiliary camera devices. After the main image data and the auxiliary image data are acquired, the two sets of data need to be optimized and summarized for the subsequent fusion judgment.
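The split into main and auxiliary image data described in step S104 can be sketched as a simple grouping over the started devices. The `capture` callable and the `main`/`aux` device-naming convention are illustrative assumptions, not part of the patent.

```python
import numpy as np

def acquire_images(activation, capture):
    """Collect one acquisition cycle from the camera array.

    `activation` maps device ids to started/not-started; `capture(dev)`
    stands in for the real per-device frame grab and returns an HxW array.
    Started devices are grouped into main vs. auxiliary image data.
    """
    main_data = {dev: capture(dev) for dev, on in activation.items()
                 if on and dev.startswith("main")}
    aux_data = {dev: capture(dev) for dev, on in activation.items()
                if on and dev.startswith("aux")}
    return main_data, aux_data
```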
And step S106, decomposing the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data.
Optionally, decomposing the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data includes: acquiring array equipment function information in the camera array, wherein the equipment function information comprises equipment use and equipment use parameters; and analyzing the auxiliary image data according to the equipment function information, and separating pixel function image data and extended function image data under each function.
Specifically, after the various parameters of the camera array are acquired, the data and information collected by the auxiliary cameras need to be decomposed and separated, so that the image data acquired by each auxiliary camera device and the image functions associated with each device (for example, a wide-angle camera device, an anti-distortion camera device, or a night-vision auxiliary camera device) are separated and summarized. To this end, the array device function information in the camera array is acquired, where the device function information includes device usage and device usage parameters; the auxiliary image data are then analyzed according to the device function information, and the pixel function image data and extended function image data under each function are separated.
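The decomposition in step S106 can be illustrated as routing each auxiliary frame by its declared device usage. The patent does not specify which usages count as pixel-contributing versus extended, so the `PIXEL_USES` set below is a placeholder assumption; the example usages follow the devices named in the text.

```python
def decompose_auxiliary_data(aux_data, device_functions):
    """Separate auxiliary image data into pixel function data and
    extended function data by declared device usage.

    `device_functions` maps a device id to its function information
    (usage plus usage parameters). The pixel/extended split rule is an
    assumption for illustration only.
    """
    PIXEL_USES = {"wide_angle", "anti_distortion"}   # assumed pixel-contributing uses
    pixel_function, extended_function = {}, {}
    for dev, frame in aux_data.items():
        use = device_functions.get(dev, {}).get("use")
        target = pixel_function if use in PIXEL_USES else extended_function
        target[dev] = frame
    return pixel_function, extended_function
```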
And step S108, fusing the main image data and the pixel function image data by using the extended function image data, and generating an intrusion judgment result.
Specifically, the various functional parameters and image data of the auxiliary camera devices are merged and fused by using the extended function image data, and the merged and fused image data are integrated onto the main image data, finally forming an intrusion judgment image P = a + b∑δ, where P is the composite of the main and auxiliary images, a is the main image data subset, and δ is the comprehensive convolution of the functional factors of the multiple auxiliary camera devices.
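A minimal numeric reading of the fusion P = a + b∑δ is sketched below, treating a as the main image array, each δ as a per-pixel functional-factor map from one auxiliary device, and b as a scalar mixing weight. The patent gives the formula only symbolically, so this interpretation is an assumption.

```python
import numpy as np

def fuse(main_image, aux_factors, b=0.5):
    """Sketch of P = a + b * sum(delta): add the weighted sum of the
    auxiliary functional-factor maps onto the main image.

    `aux_factors` is an iterable of arrays, one per auxiliary device,
    all broadcastable to the main image's shape.
    """
    a = np.asarray(main_image, dtype=float)
    delta_sum = np.sum([np.asarray(d, dtype=float) for d in aux_factors], axis=0)
    return a + b * delta_sum
```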
Optionally, fusing the main image data and the pixel function image data by using the extended function image data and generating an intrusion judgment result includes: setting a pixel point position X in the main image data as a variable point position X; setting a pixel point position Y in the pixel function image data as a variable point position Y, wherein the pixel point positions in the pixel function image data are the set of dot-matrix pixel images of all auxiliary camera devices of the camera array other than the main camera device; calculating a fusion pixel solution through a multi-level lattice pixel algorithm (given in the original only as an image, Figure BDA0003689988740000061, and not reproduced here); and generating an intrusion judgment result according to the fusion pixel solution and an intrusion judgment model, wherein the intrusion judgment result comprises an intrusion state and a non-intrusion state.
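The final step maps the fusion pixel solution to an intrusion or non-intrusion state. The patent does not disclose the judgment model itself, so the sketch below substitutes a trivial placeholder (mean absolute pixel difference against a baseline frame with a fixed threshold) purely to show the interface, not the claimed method.

```python
import numpy as np

def judge_intrusion(fused_solution, baseline, threshold=25.0):
    """Placeholder intrusion judgment model over the fused pixel data.

    Compares the fused frame to a baseline frame; a mean absolute
    difference above `threshold` is treated as an intrusion. Both the
    statistic and the threshold are illustrative assumptions.
    """
    diff = np.abs(np.asarray(fused_solution, dtype=float)
                  - np.asarray(baseline, dtype=float))
    return "intrusion state" if diff.mean() > threshold else "non-intrusion state"
```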
This embodiment solves the prior-art technical problem that border security management and analysis rely only on a single camera or on the region-by-region stitching of several cameras and that the same monitored region cannot be judged by a set of several or dozens of camera systems, a limitation that reduces the efficiency and quality of border security event judgment.
Example two
Fig. 2 is a block diagram of a border monitoring and collecting device based on a multi-auxiliary linkage camera shooting technology according to an embodiment of the present invention, and as shown in fig. 2, the border monitoring and collecting device includes:
and the acquisition module 20 is used for acquiring the activation parameters of the camera array.
Specifically, to solve the prior-art technical problem that border security management and analysis rely only on a single camera or on the region-by-region stitching of several cameras, and that several or dozens of camera systems cannot be integrated to judge the same monitored region, which reduces the efficiency and quality of border security event judgment, the camera array must first be activated and the activation parameters of a camera array composed of one main camera and multiple auxiliary camera devices must be acquired, so that when the multiple images of the camera array are processed subsequently, image acquisition can be performed directly through the relationship parameters among the cameras.
Optionally, the activation parameters include: the starting state of the main camera equipment, the starting state of the auxiliary camera equipment and the running state of the linkage channel.
Specifically, the activation parameters in the embodiment of the present invention include: the starting state of the main camera device, the starting state of the auxiliary camera devices, and the running state of the linkage channel. The linkage channel running state may include whether the linkage channel is already physically activated and whether it has successfully connected each main camera and each auxiliary camera; the connection relationship and connection parameters of each camera device are stored in the memory.
And the acquisition module 22 is configured to perform image acquisition according to the activation parameter to obtain main image data and auxiliary image data.
Specifically, after the activation parameters are obtained, the image acquisition of the camera array can be divided into main image data and auxiliary image data, where the main image data are acquired in real time by one or more main camera devices and the auxiliary image data are acquired in real time by multiple auxiliary camera devices. After the main image data and the auxiliary image data are acquired, the two sets of data need to be optimized and summarized for the subsequent fusion judgment.
And the decomposition module 24 is configured to decompose the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data.
Optionally, the decomposition module includes: an acquisition unit, used for acquiring the array device function information in the camera array, where the device function information includes device usage and device usage parameters; and an analysis unit, used for analyzing the auxiliary image data according to the device function information and separating the pixel function image data and extended function image data under each function.
Specifically, after the various parameters of the camera array are acquired, the data and information collected by the auxiliary cameras need to be decomposed and separated, so that the image data acquired by each auxiliary camera device and the image functions associated with each device (for example, a wide-angle camera device, an anti-distortion camera device, or a night-vision auxiliary camera device) are separated and summarized. To this end, the array device function information in the camera array is acquired, where the device function information includes device usage and device usage parameters; the auxiliary image data are then analyzed according to the device function information, and the pixel function image data and extended function image data under each function are separated.
And a fusion module 26, configured to fuse the main image data and the pixel function image data by using the extended function image data, and generate an intrusion judgment result.
Specifically, the various functional parameters and image data of the auxiliary camera devices are merged and fused by using the extended function image data, and the merged and fused image data are integrated onto the main image data, finally forming an intrusion judgment image P = a + b∑δ, where P is the composite of the main and auxiliary images, a is the main image data subset, and δ is the comprehensive convolution of the functional factors of the multiple auxiliary camera devices.
Optionally, the fusion module includes: a setting unit, configured to set a pixel point position X in the main image data as a variable point position X; a conversion unit, configured to set a pixel point position Y in the pixel function image data as a variable point position Y, wherein the pixel point positions in the pixel function image data are the set of dot-matrix pixel images of all auxiliary camera devices of the camera array other than the main camera device; a calculation unit, configured to calculate a fusion pixel solution through a multi-level lattice pixel algorithm (given in the original only as an image, Figure BDA0003689988740000071, and not reproduced here); and a generating unit, configured to generate an intrusion judgment result according to the fusion pixel solution and an intrusion judgment model, wherein the intrusion judgment result comprises an intrusion state and a non-intrusion state.
According to another aspect of the embodiments of the present invention, a nonvolatile storage medium is further provided. The nonvolatile storage medium includes a stored program, and when the program runs, it controls a device in which the nonvolatile storage medium is located to execute the border monitoring and collecting method based on the multi-auxiliary linkage camera shooting technology.
Specifically, the method comprises the following steps: acquiring activation parameters of a camera array; acquiring images according to the activation parameters to obtain main image data and auxiliary image data; decomposing the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data; and fusing the main image data and the pixel function image data by using the extended function image data to generate an intrusion judgment result. Optionally, the activation parameters include: the starting state of the main camera device, the starting state of the auxiliary camera devices, and the running state of the linkage channel. Optionally, decomposing the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data includes: acquiring the array device function information in the camera array, wherein the device function information includes device usage and device usage parameters; and analyzing the auxiliary image data according to the device function information, and separating the pixel function image data and extended function image data under each function. Optionally, fusing the main image data and the pixel function image data by using the extended function image data and generating an intrusion judgment result includes: setting a pixel point position X in the main image data as a variable point position X; setting a pixel point position Y in the pixel function image data as a variable point position Y, wherein the pixel point positions in the pixel function image data are the set of dot-matrix pixel images of all auxiliary camera devices of the camera array other than the main camera device; calculating a fusion pixel solution through a multi-level lattice pixel algorithm (given in the original only as an image, Figure BDA0003689988740000081, and not reproduced here); and generating an intrusion judgment result according to the fusion pixel solution and an intrusion judgment model, wherein the intrusion judgment result comprises an intrusion state and a non-intrusion state.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a processor and a memory. The memory stores computer-readable instructions, and the processor is configured to run the computer-readable instructions, which, when run, execute the border monitoring and collecting method based on the multi-auxiliary linkage camera shooting technology.
Specifically, the method comprises the following steps: acquiring activation parameters of a camera array; acquiring images according to the activation parameters to obtain main image data and auxiliary image data; decomposing the auxiliary image data according to camera array function information to obtain pixel function image data and extended function image data; and fusing the main image data and the pixel function image data by using the extended function image data to generate an intrusion judgment result.

Optionally, the activation parameters include: the start-up state of the main camera device, the start-up state of the auxiliary camera devices, and the running state of the linkage channel.

Optionally, decomposing the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data includes: acquiring device function information for the devices in the camera array, wherein the device function information includes device usage and device usage parameters; and analyzing the auxiliary image data according to the device function information to separate out the pixel function image data and the extended function image data under each function.

Optionally, fusing the main image data and the pixel function image data by using the extended function image data and generating an intrusion judgment result includes: setting a pixel point position X in the main image data as a variable point position X; setting a pixel point position Y in the pixel function image data as a variable point position Y, wherein the pixel point position Y in the pixel function image data is the set of dot-matrix pixel images from all auxiliary camera devices in the camera array other than the main camera device; calculating a fused pixel solution by means of a multilevel lattice pixel algorithm (the formula is reproduced only as image Figure BDA0003689988740000091 in the original publication); and generating an intrusion judgment result from the fused pixel solution and an intrusion judgment model, wherein the intrusion judgment result is either an intrusion state or a non-intrusion state.
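The decompose-then-fuse flow described above can be sketched in Python. The patent's actual multilevel lattice pixel formula is published only as an image, so the fusion step below substitutes a simple confidence-weighted per-pixel average as a stand-in; every function name, the `confidence` parameter, and the thresholded intrusion test are illustrative assumptions, not the claimed algorithm.

```python
import numpy as np

def decompose_auxiliary(aux_frames, device_info):
    """Split each auxiliary frame into pixel-function data (raw pixels)
    and extended-function data (per-device usage and usage parameters),
    keyed by device. Names and structure are hypothetical."""
    pixel_data, extended_data = {}, {}
    for dev_id, frame in aux_frames.items():
        info = device_info[dev_id]
        pixel_data[dev_id] = frame
        extended_data[dev_id] = {"usage": info["usage"],
                                 "params": info["params"]}
    return pixel_data, extended_data

def fuse(main_frame, pixel_data, extended_data):
    """Fuse the main frame with the auxiliary pixel-function frames,
    weighting each auxiliary device by a confidence drawn from its
    extended-function data (stand-in for the lattice formula)."""
    total = main_frame.astype(float)
    norm = 1.0
    for dev_id, frame in pixel_data.items():
        w = extended_data[dev_id]["params"].get("confidence", 1.0)
        total += w * frame
        norm += w
    return total / norm

def judge_intrusion(fused, background, threshold=25.0):
    """Toy intrusion judgment model: mean absolute deviation of the
    fused image from a reference background."""
    deviation = np.abs(fused - background).mean()
    return "intrusion" if deviation > threshold else "no-intrusion"
```

For example, fusing an all-zero main frame with one auxiliary frame of constant value 100 at confidence 1.0 yields a fused frame of 50s, which a zero background then flags as an intrusion.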
This embodiment addresses the technical problem in the prior art that border-security management and analysis are obtained only from a single camera, or by stitching footage from several cameras in different regions, so that a cluster of several or even dozens of camera systems cannot jointly judge the same deployment region, which reduces the efficiency and quality of border-security event judgment.
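Before acquisition, the embodiment checks the activation parameters: the start-up state of the main camera device, the start-up state of the auxiliary camera devices, and the running state of the linkage channel. A minimal readiness gate for those three states, with all names hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ActivationParameters:
    """The three activation states named in the embodiment; the class
    and field names are illustrative, not from the patent."""
    main_camera_on: bool          # start-up state of the main camera device
    auxiliary_cameras_on: bool    # start-up state of the auxiliary devices
    linkage_channel_running: bool # running state of the linkage channel

    def ready(self) -> bool:
        """Image acquisition proceeds only when all three states hold."""
        return (self.main_camera_on
                and self.auxiliary_cameras_on
                and self.linkage_channel_running)
```

Acquisition would then be gated on `ActivationParameters(...).ready()` before any frames are read from the array.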
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units may be a logical functional division, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, Fig. 3 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application. As shown in Fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 implements the communication connections between these elements. The memory 33 may comprise high-speed RAM and may also include non-volatile memory (NVM), such as at least one disk memory; the memory 33 may store various programs used to perform the processing functions and the method steps of this embodiment.
Alternatively, the processor 31 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through a wired or wireless connection.
Optionally, the input device 30 may include a variety of input devices, for example, at least one of a user-facing user interface, a device-facing device interface, a software-programmable interface, a camera, and a sensor. Optionally, the device-facing device interface may be a wired interface for data transmission between devices, or a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices. Optionally, the user-facing user interface may be, for example, control keys, a voice input device for receiving voice input, or a touch sensing device (e.g., a touch screen or touch pad with a touch sensing function) for receiving user touch input. Optionally, the software-programmable interface may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip. Optionally, a transceiver may also be included, such as a radio-frequency transceiver chip with a communication function, a baseband processing chip, and a transceiver antenna. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, a speaker, or other output devices.
In this embodiment, the processor of the terminal device includes modules for executing the functions of the modules of the data processing apparatus in each device; for specific functions and technical effects, reference may be made to the foregoing embodiments, which are not repeated here.
Fig. 4 is a schematic diagram of a hardware structure of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of fig. 3 in an implementation process. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the method in the above-described embodiment.
The memory 42 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The memory 42 may include a Random Access Memory (RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. Processing component 40 may include one or more processors 41 to execute instructions to perform all or a portion of the steps of the above-described method. Further, processing component 40 may include one or more modules that facilitate interaction between processing component 40 and other components. For example, the processing component 40 may include a multimedia module to facilitate interaction between the multimedia component 45 and the processing component 40.
The power supply component 44 provides power to the various components of the terminal device. The power components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia component 45 includes a display screen providing an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a voice recognition mode. The received audio signal may further be stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 also includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing component 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor assembly 48 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor assembly 48 may detect the open/closed status of the terminal device, the relative positioning of the components, the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate communication between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card therein, so that the terminal device can log on to a GPRS network and establish communication with the server via the internet.
From the above, the communication component 43, the audio component 46, the input/output interface 47 and the sensor component 48 referred to in the embodiment of fig. 4 can be implemented as the input device in the embodiment of fig. 3.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A border monitoring and collecting method based on a multi-auxiliary linkage camera technology, characterized by comprising:
acquiring an activation parameter of a camera array;
acquiring images according to the activation parameters to obtain main image data and auxiliary image data;
decomposing the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data;
and fusing the main image data and the pixel function image data by using the extended function image data, and generating an intrusion judgment result.
2. The method of claim 1, wherein the activation parameters comprise: the starting state of the main camera equipment, the starting state of the auxiliary camera equipment and the running state of the linkage channel.
3. The method of claim 1, wherein decomposing the secondary image data to obtain pixel function image data and extended function image data based on camera array function information comprises:
acquiring array equipment function information in the camera array, wherein the equipment function information comprises equipment use and equipment use parameters;
and analyzing the auxiliary image data according to the equipment function information, and separating pixel function image data and extended function image data under each function.
4. The method according to claim 1, wherein said fusing the main image data and the pixel function image data using the extended function image data and generating an intrusion determination result comprises:
setting a pixel point position X in the main image data as a variable point position X;
setting a pixel point position Y in the pixel function image data as a variable point position Y, wherein the pixel point position Y in the pixel function image data is the set of dot-matrix pixel images from all auxiliary camera devices in the camera array other than the main camera device;
calculating a fused pixel solution by means of a multilevel lattice pixel algorithm (the formula is reproduced only as image Figure FDA0003689988730000011 in the original publication);
generating an intrusion judgment result according to the fused pixel solution and an intrusion judgment model, wherein the intrusion judgment result comprises: an intrusion state or a non-intrusion state.
5. A border monitoring and collecting apparatus based on a multi-auxiliary linkage camera technology, characterized by comprising:
the acquisition module is used for acquiring the activation parameters of the camera array;
the acquisition module is used for acquiring images according to the activation parameters to obtain main image data and auxiliary image data;
the decomposition module is used for decomposing the auxiliary image data according to the camera array function information to obtain pixel function image data and extended function image data;
and the fusion module is used for fusing the main image data and the pixel function image data by using the extended function image data and generating an intrusion judgment result.
6. The apparatus of claim 5, wherein the activation parameters comprise: the starting state of the main camera equipment, the starting state of the auxiliary camera equipment and the running state of the linkage channel.
7. The apparatus of claim 5, wherein the decomposition module comprises:
an acquisition unit, configured to acquire array device function information in the imaging array, where the device function information includes a device usage and a device usage parameter;
and the analysis unit is used for analyzing the auxiliary image data according to the equipment function information and separating pixel function image data and extended function image data under each function.
8. The apparatus of claim 5, wherein the fusion module comprises:
a setting unit configured to set a pixel point location X in the main image data to a variable point location X;
a conversion unit, configured to set a pixel point position Y in the pixel function image data as a variable point position Y, wherein the pixel point position Y in the pixel function image data is the set of dot-matrix pixel images from all auxiliary camera devices in the camera array other than the main camera device;
a calculation unit, configured to calculate a fused pixel solution by means of a multilevel lattice pixel algorithm (the formula is reproduced only as image Figure FDA0003689988730000021 in the original publication);
a generating unit, configured to generate an intrusion judgment result according to the fused pixel solution and an intrusion judgment model, wherein the intrusion judgment result comprises: an intrusion state or a non-intrusion state.
9. A non-volatile storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform the method of any one of claims 1 to 4.
CN202210659665.8A 2022-06-13 2022-06-13 Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device Pending CN114866702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210659665.8A CN114866702A (en) 2022-06-13 2022-06-13 Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210659665.8A CN114866702A (en) 2022-06-13 2022-06-13 Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device

Publications (1)

Publication Number Publication Date
CN114866702A true CN114866702A (en) 2022-08-05

Family

ID=82625493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210659665.8A Pending CN114866702A (en) 2022-06-13 2022-06-13 Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device

Country Status (1)

Country Link
CN (1) CN114866702A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115345808A (en) * 2022-08-18 2022-11-15 北京拙河科技有限公司 Picture generation method and device based on multivariate information acquisition
CN115345808B (en) * 2022-08-18 2023-07-21 北京拙河科技有限公司 Picture generation method and device based on multi-element information acquisition

Similar Documents

Publication Publication Date Title
WO2024098906A1 (en) Image tracking method and apparatus for gigapixel photographic device
CN115426525A (en) High-speed moving frame based linkage image splitting method and device
CN114866702A (en) Multi-auxiliary linkage camera shooting technology-based border monitoring and collecting method and device
CN115334291A (en) Tunnel monitoring method and device based on hundred million-level pixel panoramic compensation
CN115527045A (en) Image identification method and device for snow field danger identification
CN115293985A (en) Super-resolution noise reduction method and device for image optimization
CN116579964B (en) Dynamic frame gradual-in gradual-out dynamic fusion method and device
CN115511735B (en) Snow field gray scale picture optimization method and device
CN116579965B (en) Multi-image fusion method and device
CN115345808B (en) Picture generation method and device based on multi-element information acquisition
CN116402935B (en) Image synthesis method and device based on ray tracing algorithm
CN116468883B (en) High-precision image data volume fog recognition method and device
CN116757983B (en) Main and auxiliary image fusion method and device
CN116228593B (en) Image perfecting method and device based on hierarchical antialiasing
CN116758165B (en) Image calibration method and device based on array camera
CN116664413B (en) Image volume fog eliminating method and device based on Abbe convergence operator
CN116389915B (en) Method and device for reducing flicker of light field camera
CN115858240B (en) Optical camera data backup method and device
CN116723298B (en) Method and device for improving transmission efficiency of camera end
CN115145950A (en) Method for docking big data application interface involved in complaint
CN115696022A (en) Image acquisition method and device based on human-computer interaction
CN115914819A (en) Image capturing method and device based on orthogonal decomposition algorithm
CN117351341A (en) Unmanned aerial vehicle fish school identification method and device based on decomposition optimization
CN116320202A (en) Camera image encryption method and device
CN116452481A (en) Multi-angle combined shooting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220805