CN114998794A - High-altitude parabolic recognition method, system, device and storage medium - Google Patents
High-altitude parabolic recognition method, system, device and storage medium
- Publication number
- CN114998794A (application CN202210605368.5A)
- Authority
- CN
- China
- Prior art keywords
- target
- target image
- information
- determining
- floor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 39
- 238000001514 detection method Methods 0.000 claims abstract description 53
- 230000015654 memory Effects 0.000 claims description 13
- 238000012544 monitoring process Methods 0.000 claims description 9
- 238000001914 filtration Methods 0.000 claims description 6
- 238000013528 artificial neural network Methods 0.000 claims description 3
- 230000009286 beneficial effect Effects 0.000 abstract description 4
- 238000010586 diagram Methods 0.000 description 4
- 238000000605 extraction Methods 0.000 description 4
- 238000012549 training Methods 0.000 description 4
- 238000009529 body temperature measurement Methods 0.000 description 3
- 238000013527 convolutional neural network Methods 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000006467 substitution reaction Methods 0.000 description 2
- 238000004378 air conditioning Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000007723 transport mechanism Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/457—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02E—REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
- Y02E60/00—Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
- Y02E60/14—Thermal energy storage
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a high-altitude parabolic recognition method, system, device and storage medium. The method comprises the following steps: acquiring a target image set; determining floor information in the target images according to the target image set and a trained floor detection network; determining moving target information in the target images according to the target image set and a trained motion detection network; and finally determining target floor information according to the floor information and the moving target information. The high-altitude parabolic recognition method in the embodiments of the application combines floor recognition with moving target recognition, can accurately and automatically determine the target floor of a high-altitude parabolic event, and is beneficial to maintaining community safety.
Description
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a high altitude parabola identification method, system, device, and storage medium.
Background
With rising living standards, people demand ever greater comfort in their living and working environments. Because an air conditioner can regulate the ambient temperature, market demand for it keeps growing. During use, cold air sinks and hot air rises, which causes the temperature sensor of the air conditioner to respond too early or too late, so the perceived heating or cooling effect is poor. To solve this problem, the output of the air conditioner needs to be adjusted according to the real-time temperature at the target location. Ultrasonic temperature measurement is a spatial temperature measurement method, but it requires initialization in advance, for example inputting the distance between the air conditioner and the target location into the air conditioner. At present this initialization is mainly completed by a technician who debugs the unit manually after installation, or is performed manually by the user, which is cumbersome and degrades the user experience.
Disclosure of Invention
The present application is directed to solving, at least in part, one of the technical problems in the related art. Therefore, the application provides a high altitude parabola identification method, a system, a device and a storage medium.
In a first aspect, an embodiment of the present application provides a high altitude parabola identification method, including: acquiring a target image set; wherein the target image set comprises a plurality of target images; determining floor information in the target images according to the target image set and the trained floor detection network; determining moving target information in the target image according to the target image set and the trained motion detection network; and determining target floor information according to the floor information and the moving target information.
Optionally, the method further comprises: acquiring a monitoring video of a target building; and determining the target image set according to the monitoring video.
Optionally, the method further includes a step of compressing the original image to obtain the target image, where the step specifically includes: determining a size ratio according to a first size of the original image and a second size of the target image; determining an original pixel point set in the original image according to the size ratio and the coordinates of target pixel points in the target image; and determining the target pixel point according to the original pixel point set and a preset weight value.
Optionally, the determining, according to the target image set and the trained motion detection network, the moving target information in the target image includes: performing coarse-grained detection on the target image, and determining a connected region and first frame information of a moving target in the target image; performing fine-grained detection on the connected region according to the first frame information, and determining second frame information and a moving object type; and determining the moving target information according to the second frame information and the moving target category.
Optionally, the performing coarse-grained detection on the target image to determine a connected region and first frame information of a moving target in the target image includes: filtering the target image; after filtering is finished, detecting a connected region of the target image through a motion detection algorithm; and after the motion detection is finished, extracting the connected region to obtain first frame information of the motion target.
Optionally, the performing fine-grained detection on the connected region and determining second frame information and a moving object category includes: expanding the connected region according to the first frame information; and inputting the expanded connected region into a motion convolution neural network, and determining second frame information and a motion target category.
In a second aspect, an embodiment of the present application provides a high altitude parabolic recognition system, including: a first module for obtaining a set of target images; wherein the target image set comprises a plurality of target images; the second module is used for determining floor information in the target images according to the target image set and the trained floor detection network; a third module, configured to determine moving target information in the target image according to the target image set and a trained motion detection network; and the fourth module is used for determining target floor information according to the floor information and the moving target information.
In a third aspect, an embodiment of the present application provides a high altitude parabola identification apparatus, including: at least one processor; and at least one memory for storing at least one program; wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the high altitude parabola identification method described above.
In a fourth aspect, the present application provides a computer storage medium in which a processor-executable program is stored, wherein the processor-executable program, when executed by the processor, implements the high altitude parabola identification method described above.
The beneficial effects of the embodiments of the application are as follows: a target image set is first acquired; floor information in the target images is then determined according to the target image set and a trained floor detection network; moving target information in the target images is determined according to the target image set and a trained motion detection network; and finally target floor information is determined according to the floor information and the moving target information. The high-altitude parabolic recognition method in the embodiments of the application combines floor recognition with moving target recognition, can accurately and automatically determine the target floor of a high-altitude parabolic event, and is beneficial to maintaining community safety.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating steps of a high altitude parabola identification method according to an embodiment of the present application;
fig. 2 is a flowchart illustrating steps of a detection process of a motion detection network according to an embodiment of the present application;
fig. 3 is a schematic diagram of a high altitude parabolic identification system provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a high altitude parabolic recognition device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that although functional block divisions are provided in the system drawings and logical orders are shown in the flowcharts, in some cases, the steps shown and described may be performed in different orders than the block divisions in the systems or in the flowcharts. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The embodiments of the present application will be further explained with reference to the drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a high altitude parabolic identification method according to an embodiment of the present application, including, but not limited to, steps S100 to S140:
S100, determining a target image set according to a monitoring video of a target building;
Specifically, in the embodiment of the present application, the target building refers to the building in which a high-altitude parabolic event occurs. A monitoring system is arranged facing the windows or balconies of the building; the monitoring system comprises a plurality of cameras whose fields of view cover all floors of the current target building, and for ease of management, the building and floor information photographed by each camera can be associated with that camera. When a high-altitude parabolic event occurs, the monitoring video shot by the camera corresponding to the target building is obtained and split into a series of single-frame pictures, thereby obtaining the target images in the embodiment of the present application.
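As a concrete illustration of this frame-splitting step, the following Python sketch (using OpenCV) reads a monitoring clip and collects single-frame images; the file name and the decision to keep every frame are illustrative assumptions, since the embodiment does not specify a sampling policy.

import cv2

def split_video_to_frames(video_path):
    # Read a surveillance clip frame by frame and return the single-frame images.
    capture = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frames.append(frame)
    capture.release()
    return frames

# Hypothetical file name for the clip taken by the camera facing the target building.
target_images = split_video_to_frames("target_building_clip.mp4")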
In some embodiments, in order to improve the accuracy of target image recognition, the single-frame original images split from the surveillance video may be preprocessed. For example, an original image is compressed, thereby removing irrelevant information, retaining the useful feature information, enhancing the detectability of that feature information and simplifying the image data as much as possible, which improves the reliability of feature extraction.
The method for compressing the original image is as follows. First, a size ratio is determined from the first size of the original image and the second size of the target image: assuming the first size of the original image is m × n and the second size of the target image is h × l, the size ratios are m/h and n/l. Then, an original pixel point set in the original image is determined according to the size ratio and the coordinates of the target pixel point in the target image. For example, if the coordinates of a target pixel point in the target image are (i, j), the corresponding coordinates in the original image are (i × m/h, j × n/l) according to the size ratio. Since the coordinates obtained by this calculation are generally not integers, the 4 pixels closest to the non-integer coordinates are taken as the original pixel point set, and a weight value can be preset to decide how the target pixel is obtained from them. For example, if a higher weight is preset for the pixel that is closer to the center, the target pixel can be determined according to the preset weights and the original pixel point set.
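The scheme above can be sketched in Python as follows. This is a minimal illustration only; in particular, the equal preset weights over the four nearest original pixels and the restriction to a grayscale image are assumptions made for simplicity.

import numpy as np

def compress_image(original, target_h, target_w, weights=(0.25, 0.25, 0.25, 0.25)):
    # Downscale a grayscale image of first size m x n to a target image of second size h x l.
    m, n = original.shape
    ratio_y, ratio_x = m / target_h, n / target_w  # size ratios m/h and n/l
    target = np.zeros((target_h, target_w), dtype=np.float64)
    for i in range(target_h):
        for j in range(target_w):
            y, x = i * ratio_y, j * ratio_x  # mapped coordinates, generally non-integer
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, m - 1), min(x0 + 1, n - 1)
            # original pixel point set: the four pixels closest to the mapped coordinate
            candidates = (original[y0, x0], original[y0, x1],
                          original[y1, x0], original[y1, x1])
            target[i, j] = sum(w * float(p) for w, p in zip(weights, candidates))
    return target.astype(original.dtype)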
S110, acquiring a target image set;
specifically, according to the step S100, a target image set that needs high-altitude parabolic recognition is determined, where the target image set includes a plurality of target images.
S120, determining floor information in the target images according to the target image set and the trained floor detection network;
Specifically, the target image set is input into the trained floor detection network, and the floor detection network outputs the floor information in the target images, where the floor information may include the position and corresponding coordinates of each floor in the target image.
In the embodiment of the application, the floor detection network is constructed based on the YOLO network. Before training, floor images captured at a fixed time interval (for example, every 30 min) under different times of day, lighting and weather conditions are labeled manually, and the YOLO network is trained with these floor images. The floor detection network obtained after training can detect and frame the floors in an image, and obtain the number of floors and the coordinates corresponding to the different floors.
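A hedged sketch of running such a trained floor detector is given below; the ultralytics YOLO package and the weights file name are illustrative stand-ins, since the embodiment only states that the network is YOLO-based.

from ultralytics import YOLO  # illustrative framework choice, not mandated by the embodiment

def detect_floors(image_path, weights_path="floor_detector.pt"):
    # Run the trained floor detection network and return one bounding box per framed floor.
    model = YOLO(weights_path)  # the weights file name is a hypothetical placeholder
    result = model(image_path)[0]
    floor_boxes = result.boxes.xyxy.tolist()  # [x1, y1, x2, y2] for each detected floor
    return len(floor_boxes), floor_boxes  # number of floors and their coordinates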
S130, determining moving target information in the target image according to the target image set and the trained motion detection network;
specifically, the target image set is input into the trained motion detection network, and then the motion detection network may output specific information of the moving target, where the moving target information includes a moving target category and position information of the moving target.
In the embodiment of the present application, the detection process of the motion detection network is shown in fig. 2, which is a flowchart illustrating the steps of that detection process; the process includes, but is not limited to, steps S200 to S220:
S200, performing coarse-grained detection on the target image, and determining a connected region and first frame information of a moving target in the target image;
specifically, in the embodiment of the present application, it is necessary to obtain the connected region of the moving object by coarse-grained detection. Firstly, filtering a target image to reduce noise in the target image; after filtering is completed, detecting a connected region of the target image through a motion detection algorithm, and suppressing noise of the connected region based on mathematical morphology. And after finishing the motion detection, extracting the connected region to obtain the frame information of the moving target.
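The coarse-grained stage described above can be sketched in Python as follows; the Gaussian filter, the MOG2 background subtractor and the 3 × 3 opening kernel are illustrative stand-ins, since the embodiment does not name a specific filter or motion detection algorithm.

import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

def coarse_grained_mask(frame):
    # Filter the target image, detect the moving foreground and suppress noise morphologically.
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    foreground = subtractor.apply(blurred)
    cleaned = cv2.morphologyEx(foreground, cv2.MORPH_OPEN, kernel)  # mathematical-morphology denoising
    return cleaned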
In the embodiment of the application, a low-complexity connected region detection algorithm is provided to extract the connected regions. The algorithm takes as input the foreground detected by the motion detection algorithm. The pixel values of each column are first summed, and the region is split vertically into non-zero columns and all-zero columns; a preset threshold determines how many columns may be merged: if a run of all-zero columns is shorter than the threshold, those columns are considered connected to the adjacent non-zero columns. Within each resulting run of non-zero columns, the pixel values of each row are then summed and the region is split horizontally in the same way. Finally, the first frame information of the moving target is determined. The algorithm only performs additions and comparisons, so its complexity is low and the detection speed is effectively improved.
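The column-sum/row-sum procedure can be sketched as follows; the gap threshold value is an assumed parameter, and the code is an illustration of the idea rather than the embodiment's exact implementation.

import numpy as np

def split_runs(sums, gap_threshold):
    # Split a 1-D profile of summed pixel values into runs of non-zero entries,
    # merging runs separated by fewer than gap_threshold all-zero entries.
    runs, start, gap = [], None, 0
    for idx, positive in enumerate(sums > 0):
        if positive:
            if start is None:
                start = idx
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= gap_threshold:
                runs.append((start, idx - gap))
                start, gap = None, 0
    if start is not None:
        runs.append((start, len(sums) - 1))
    return runs

def extract_first_boxes(mask, gap_threshold=5):
    # Return coarse bounding boxes (first frame information) from a binary foreground mask.
    boxes = []
    for x0, x1 in split_runs(mask.sum(axis=0), gap_threshold):  # vertical split by column sums
        for y0, y1 in split_runs(mask[:, x0:x1 + 1].sum(axis=1), gap_threshold):  # horizontal split by row sums
            boxes.append((x0, y0, x1, y1))
    return boxes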
S210, performing fine-grained detection on the connected region, and determining second frame information and a moving object type;
Specifically, after the first frame information of the moving target is determined, fine-grained detection is performed on the cropped connected region. Because the first frame information obtained by coarse-grained detection is very likely to be incomplete under the influence of noise, the cropped connected region needs to be expanded. First, the first frame information with the maximum intersection-over-union (IoU) in the current connected region is selected, and the current connected region is expanded and completed based on the aspect ratio of that first frame information:
Δw / Δh = w / h. This relation ties the width-to-height ratio of the expansion rate to the width-to-height ratio of the first frame information, where w denotes the width in the first frame information, h denotes the height in the first frame information, Δw denotes the number of pixels by which the width is expanded at each step, and Δh denotes the number of pixels by which the height is expanded at each step.
In the related art, a seed algorithm commonly used for filling a connected region needs to recursively search from one point in the region to the periphery, and each direction needs to be recursively searched and judged continuously. The method provided by the embodiment of the application can endow the expansion rate with different widths and heights according to the aspect ratio in the first frame information, so that each pixel point does not need to be subjected to recursive judgment, and the filling efficiency of the connected region can be improved.
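A sketch of this expansion rule follows; the per-step height increment and the number of expansion steps are illustrative assumptions.

def expand_box(box, image_w, image_h, delta_h=2, steps=3):
    # Expand a coarse box so that delta_w / delta_h equals the box's width-to-height ratio w / h.
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    delta_w = delta_h * (w / h) if h > 0 else delta_h  # expansion rate follows the aspect ratio
    for _ in range(steps):
        x0, x1 = max(0, x0 - delta_w), min(image_w - 1, x1 + delta_w)
        y0, y1 = max(0, y0 - delta_h), min(image_h - 1, y1 + delta_h)
    return int(x0), int(y0), int(x1), int(y1)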
After the expansion is completed, the expanded connected region is input into the motion convolutional neural network, and the second frame information and the moving target category are determined. The motion convolutional neural network further corrects the position of the moving target, determines the second frame information, and identifies the specific category of the moving target. It can be understood that the motion convolutional neural network can be trained in advance on a large number of images (big data) containing common high-altitude parabolic objects, so that the detected moving targets can be classified into specific categories.
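A hedged sketch of this fine-grained stage is given below; the PyTorch framework, the 128 × 128 input size and the two-headed output (box offsets plus class logits) are assumptions made for illustration, not the embodiment's actual network definition.

import torch
from torchvision import transforms

preprocess = transforms.Compose([transforms.ToPILImage(),
                                 transforms.Resize((128, 128)),
                                 transforms.ToTensor()])

def fine_grained_detect(frame, expanded_box, motion_cnn):
    # Crop the expanded connected region, run the motion CNN and read out the
    # refined (second) frame information and the moving target category.
    x0, y0, x1, y1 = expanded_box
    crop = preprocess(frame[y0:y1, x0:x1]).unsqueeze(0)
    with torch.no_grad():
        box_offsets, class_logits = motion_cnn(crop)  # assumed two-headed output
    category = int(class_logits.argmax(dim=1))
    return box_offsets.squeeze(0).tolist(), category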
S220, determining moving target information according to the second frame information and the moving target category;
Specifically, after the position of the moving target is further corrected by the fine-grained detection in step S210, the moving target information is determined. The moving target information includes the specific position coordinates of the moving target in the image (i.e., the second frame information) and the category of the moving target.
The content of the above step S130 has already been explained through steps S200-S220, and the explanation of step S140 is continued.
S140, determining target floor information according to the floor information and the moving target information;
Specifically, since the specific position coordinates of the moving target in the target image have been obtained, and the floors and their corresponding coordinates were determined in step S120, the coordinates of the moving target can be mapped onto the floor coordinates to determine which floors the moving target passes through. It can be understood that the floor containing the starting point of the moving target's motion is the target floor where the high-altitude parabolic event occurred, and the target floor information is recorded for subsequent handling of the high-altitude parabolic event.
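A minimal sketch of this mapping step is given below; it assumes the floor boxes and the moving target's trajectory boxes are both given as [x1, y1, x2, y2] coordinates in the same image, and takes the first trajectory box as the starting point of the throw.

def locate_target_floor(trajectory_boxes, floor_boxes):
    # Map the center of the moving target's starting box onto the detected floor boxes
    # and return the index of the target floor where the high-altitude parabolic event occurred.
    x1, y1, x2, y2 = trajectory_boxes[0]
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    for floor_index, (fx1, fy1, fx2, fy2) in enumerate(floor_boxes):
        if fx1 <= cx <= fx2 and fy1 <= cy <= fy2:
            return floor_index
    return None  # starting point not covered by any detected floor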
Through steps S100-S140, the embodiment of the application provides a high-altitude parabolic recognition method: a target image set is acquired; floor information in the target images is determined according to the target image set and a trained floor detection network; moving target information in the target images is determined according to the target image set and a trained motion detection network; and finally target floor information is determined according to the floor information and the moving target information. The high-altitude parabolic recognition method in the embodiment of the application combines floor recognition with moving target recognition, can accurately and automatically determine the target floor of a high-altitude parabolic event, and is beneficial to maintaining community safety. In addition, the embodiment of the application also provides a low-complexity connected region extraction algorithm; in tests, the extraction algorithm provided by the embodiment of the application runs about 21 times faster than the connected region extraction algorithm in the current mainstream skimage library, so it is fast and efficient. Furthermore, a method for expanding the connected region according to prior knowledge of the bounding box is provided, which effectively improves the expansion efficiency.
Referring to fig. 3, fig. 3 is a schematic diagram of a high altitude parabolic recognition system according to an embodiment of the present application. The system 300 includes a first module 310, a second module 320, a third module 330 and a fourth module 340. The first module is configured to obtain a target image set, where the target image set comprises a plurality of target images; the second module is configured to determine floor information in the target images according to the target image set and the trained floor detection network; the third module is configured to determine moving target information in the target images according to the target image set and the trained motion detection network; and the fourth module is configured to determine target floor information according to the floor information and the moving target information.
Referring to fig. 4, fig. 4 is a schematic diagram of a high altitude parabolic identification apparatus provided in an embodiment of the present application, where the apparatus 400 includes at least one processor 410 and at least one memory 420 for storing at least one program; one processor and one memory are exemplified in fig. 4.
The processor and memory may be connected by a bus or other means, such as by a bus in FIG. 4.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The above-described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may also be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The embodiment of the application also discloses a computer storage medium, wherein a program executable by a processor is stored, and the program executable by the processor is used for realizing the method provided by the application when being executed by the processor.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
While the preferred embodiments of the present invention have been described, the present invention is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and such equivalent modifications or substitutions are included in the scope of the present invention defined by the claims.
Claims (9)
1. A high altitude parabola identification method is characterized by comprising the following steps:
acquiring a target image set;
wherein the target image set comprises a plurality of target images;
determining floor information in the target images according to the target image set and the trained floor detection network;
determining moving target information in the target image according to the target image set and the trained motion detection network;
and determining target floor information according to the floor information and the moving target information.
2. The high altitude parabolic recognition method according to claim 1, further comprising:
acquiring a monitoring video of a target building;
and determining the target image set according to the monitoring video.
3. The high-altitude parabolic recognition method according to claim 1, further comprising a step of compressing an original image to obtain the target image, wherein the step specifically comprises:
determining a size ratio according to a first size of the original image and a second size of the target image;
determining an original pixel point set in the original image according to the size ratio and the coordinates of target pixel points in the target image;
and determining the target pixel point according to the original pixel point set and a preset weight value.
4. The high-altitude parabolic recognition method according to claim 1, wherein the determining moving object information in the target image according to the target image set and a trained motion detection network comprises:
performing coarse-grained detection on the target image, and determining a connected region and first frame information of a moving target in the target image;
performing fine-grained detection on the connected region according to the first frame information, and determining second frame information and a moving object type;
and determining the moving target information according to the second frame information and the moving target category.
5. The high-altitude parabolic recognition method according to claim 4, wherein the performing coarse-grained detection on the target image to determine a connected region and first frame information of a moving target in the target image comprises:
filtering the target image;
after filtering is finished, detecting a connected region of the target image through a motion detection algorithm;
and after the motion detection is finished, extracting the connected region to obtain first frame information of the motion target.
6. The high-altitude parabolic recognition method according to claim 5, wherein the fine-grained detection of the connected region and determination of the second frame information and the moving object category comprise:
expanding the connected region according to the first frame information;
and inputting the expanded connected region into a motion convolution neural network, and determining second frame information and a motion target category.
7. A high altitude parabola identification system, comprising:
a first module for obtaining a set of target images;
wherein the target image set comprises a plurality of target images;
the second module is used for determining floor information in the target images according to the target image set and the trained floor detection network;
a third module, configured to determine moving target information in the target image according to the target image set and a trained motion detection network;
and the fourth module is used for determining target floor information according to the floor information and the moving target information.
8. A high altitude parabolic recognition device, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the high altitude parabolic identification method as defined in any one of claims 1-6.
9. A computer storage medium having stored therein a processor-executable program, wherein the processor-executable program, when executed by the processor, implements the high altitude parabolic recognition method as defined in any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210605368.5A CN114998794A (en) | 2022-05-31 | 2022-05-31 | High-altitude parabolic recognition method, system, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210605368.5A CN114998794A (en) | 2022-05-31 | 2022-05-31 | High-altitude parabolic recognition method, system, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114998794A true CN114998794A (en) | 2022-09-02 |
Family
ID=83030363
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210605368.5A Pending CN114998794A (en) | 2022-05-31 | 2022-05-31 | High-altitude parabolic recognition method, system, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114998794A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116994201A (en) * | 2023-07-20 | 2023-11-03 | 山东产研鲲云人工智能研究院有限公司 | Method for tracing and monitoring high-altitude parabolic objects and computing equipment |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101436297A (en) * | 2007-11-14 | 2009-05-20 | 比亚迪股份有限公司 | Image scaling method |
CN111899175A (en) * | 2020-07-30 | 2020-11-06 | 海信视像科技股份有限公司 | Image conversion method and display device |
CN111931719A (en) * | 2020-09-22 | 2020-11-13 | 苏州科达科技股份有限公司 | High-altitude parabolic detection method and device |
CN111950484A (en) * | 2020-08-18 | 2020-11-17 | 青岛聚好联科技有限公司 | High-altitude parabolic information analysis method and electronic equipment |
CN112016414A (en) * | 2020-08-14 | 2020-12-01 | 熵康(深圳)科技有限公司 | Method and device for detecting high-altitude parabolic event and intelligent floor monitoring system |
CN112257557A (en) * | 2020-10-20 | 2021-01-22 | 中国电子科技集团公司第五十八研究所 | High-altitude parabolic detection and identification method and system based on machine vision |
CN113034397A (en) * | 2021-03-30 | 2021-06-25 | 北京睿芯高通量科技有限公司 | Real-time multi-environment self-adaptive track automatic tracing high-altitude parabolic detection method |
CN113065454A (en) * | 2021-03-30 | 2021-07-02 | 青岛海信智慧生活科技股份有限公司 | High-altitude parabolic target identification and comparison method and device |
CN113255697A (en) * | 2021-06-01 | 2021-08-13 | 南京图菱视频科技有限公司 | High-precision high-altitude parabolic detection system and method under complex scene |
CN113705525A (en) * | 2021-09-07 | 2021-11-26 | 深圳天海宸光科技有限公司 | High-altitude parabolic accurate positioning and tracing method |
CN114170091A (en) * | 2021-12-29 | 2022-03-11 | 杭州海康机器人技术有限公司 | Image scaling method and device, electronic equipment and storage medium |
- 2022-05-31: CN application CN202210605368.5A, publication CN114998794A, status Pending
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101436297A (en) * | 2007-11-14 | 2009-05-20 | 比亚迪股份有限公司 | Image scaling method |
CN111899175A (en) * | 2020-07-30 | 2020-11-06 | 海信视像科技股份有限公司 | Image conversion method and display device |
CN112016414A (en) * | 2020-08-14 | 2020-12-01 | 熵康(深圳)科技有限公司 | Method and device for detecting high-altitude parabolic event and intelligent floor monitoring system |
CN111950484A (en) * | 2020-08-18 | 2020-11-17 | 青岛聚好联科技有限公司 | High-altitude parabolic information analysis method and electronic equipment |
CN111931719A (en) * | 2020-09-22 | 2020-11-13 | 苏州科达科技股份有限公司 | High-altitude parabolic detection method and device |
CN112257557A (en) * | 2020-10-20 | 2021-01-22 | 中国电子科技集团公司第五十八研究所 | High-altitude parabolic detection and identification method and system based on machine vision |
CN113034397A (en) * | 2021-03-30 | 2021-06-25 | 北京睿芯高通量科技有限公司 | Real-time multi-environment self-adaptive track automatic tracing high-altitude parabolic detection method |
CN113065454A (en) * | 2021-03-30 | 2021-07-02 | 青岛海信智慧生活科技股份有限公司 | High-altitude parabolic target identification and comparison method and device |
CN113255697A (en) * | 2021-06-01 | 2021-08-13 | 南京图菱视频科技有限公司 | High-precision high-altitude parabolic detection system and method under complex scene |
CN113705525A (en) * | 2021-09-07 | 2021-11-26 | 深圳天海宸光科技有限公司 | High-altitude parabolic accurate positioning and tracing method |
CN114170091A (en) * | 2021-12-29 | 2022-03-11 | 杭州海康机器人技术有限公司 | Image scaling method and device, electronic equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
- HAIDI ZHU et al.: "Moving Object Detection With Deep CNNs", pages 29729-29741 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116994201A (en) * | 2023-07-20 | 2023-11-03 | 山东产研鲲云人工智能研究院有限公司 | Method for tracing and monitoring high-altitude parabolic objects and computing equipment |
CN116994201B (en) * | 2023-07-20 | 2024-03-29 | 山东产研鲲云人工智能研究院有限公司 | Method for tracing and monitoring high-altitude parabolic objects and computing equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11430103B2 (en) | Method for image processing, non-transitory computer readable storage medium, and electronic device | |
US9213896B2 (en) | Method for detecting and tracking objects in image sequences of scenes acquired by a stationary camera | |
US10692225B2 (en) | System and method for detecting moving object in an image | |
US8995714B2 (en) | Information creation device for estimating object position and information creation method and program for estimating object position | |
CN112733690B (en) | High-altitude parabolic detection method and device and electronic equipment | |
US20200250803A1 (en) | Method for detecting and tracking target object, target object tracking apparatus, and computer-program product | |
WO2021139049A1 (en) | Detection method, detection apparatus, monitoring device, and computer readable storage medium | |
CN109711256B (en) | Low-altitude complex background unmanned aerial vehicle target detection method | |
CN113505643B (en) | Method and related device for detecting violation target | |
CN114120171A (en) | Fire smoke detection method, device and equipment based on video frame and storage medium | |
CN114998794A (en) | High-altitude parabolic recognition method, system, device and storage medium | |
CN115937746A (en) | Smoke and fire event monitoring method and device and storage medium | |
CN113065454B (en) | High-altitude parabolic target identification and comparison method and device | |
JP2003187248A (en) | System and apparatus of image processing | |
CN112668389A (en) | High-altitude parabolic target detection method, device, system and storage medium | |
CN112184814A (en) | Positioning method and positioning system | |
CN114943954B (en) | Parking space detection method, device and system | |
CN112906594B (en) | Defense deployment area generation method, device, equipment and storage medium | |
CN111932629A (en) | Target positioning method and system based on deep neural network | |
CN112465850A (en) | Peripheral boundary modeling method, intelligent monitoring method and device | |
CN110826455A (en) | Target identification method and image processing equipment | |
CN111931682B (en) | Abnormal behavior detection method and device | |
CN113949830B (en) | Image processing method | |
CN115690631A (en) | Unmanned aerial vehicle target identification method based on deep learning | |
KR102653755B1 (en) | System and method for collecting field image data sets for learning artificial intelligence image deep learning models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20220902 |
RJ01 | Rejection of invention patent application after publication |