CN116883897A - Low-resolution target identification method - Google Patents


Info

Publication number
CN116883897A
CN116883897A (application CN202310842176.0A)
Authority
CN
China
Prior art keywords
video data
human body
target
video
optical flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310842176.0A
Other languages
Chinese (zh)
Inventor
钱国良
陈超
龚利武
张嘉辉
邓子龙
黄勤斌
胡雷剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pinghu General Electrical Installation Co ltd
State Grid Zhejiang Electric Power Co Ltd Pinghu Power Supply Co
State Grid Zhejiang Electric Power Co Ltd
Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Pinghu General Electrical Installation Co ltd
State Grid Zhejiang Electric Power Co Ltd Pinghu Power Supply Co
State Grid Zhejiang Electric Power Co Ltd
Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pinghu General Electrical Installation Co ltd, State Grid Zhejiang Electric Power Co Ltd Pinghu Power Supply Co, State Grid Zhejiang Electric Power Co Ltd, Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd filed Critical Pinghu General Electrical Installation Co ltd
Priority to CN202310842176.0A priority Critical patent/CN116883897A/en
Publication of CN116883897A publication Critical patent/CN116883897A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The application provides a low-resolution target identification method, which comprises the following steps: acquiring first video data containing a low-resolution human body target, while simultaneously acquiring a thermal imaging video of the same imaging area as the first video data; performing gray-level processing on the thermal imaging video according to a preset temperature gradient to obtain second video data; performing human body target identification on the second video data with a region-matching optical flow algorithm, and acquiring the position of the human body target within the second video data; and marking the corresponding region in the first video data by binding it to the region of the human body target in the second video data. The method targets low-resolution video frames in which a fast-moving target leaves a blurred smear: the thermal image combined with an optical flow algorithm effectively recognizes the thermal trend of the blurred frames, and that trend, combined with the conventional image frames, determines the target position corresponding to the smear, so that low-resolution targets can be identified effectively and accurately.

Description

Low-resolution target identification method
Technical Field
The application relates to the technical field of image recognition, in particular to a low-resolution target recognition method.
Background
Conventional target recognition methods track a target in a relatively static, clear, and stable video picture. Commonly used target tracking and detection algorithms include the YOLO algorithm, the R-CNN algorithm, and the Fast R-CNN algorithm. The YOLO algorithm is grid-based: the image is divided into grids, and a class probability and bounding box are predicted for each grid cell; the R-CNN and Fast R-CNN algorithms are optimizations built on the convolutional neural network (CNN). Although these detection algorithms respond quickly, their accuracy on low-resolution targets remains poor, especially for fast-moving human targets, where multiple image frames may show blurred smear. YOLO-based detection then generally cannot generate accurate grid segmentation and cannot accurately predict the class and bounding box for each grid cell; problems such as loss of target tracking easily occur when a person quickly crosses a barrier. The R-CNN and Fast R-CNN algorithms likewise suffer reduced tracking and recognition accuracy due to blurred smear.
Disclosure of Invention
One purpose of the application is to provide a low-resolution target recognition method and system for video frames in which the target leaves a blurred smear: a thermal image combined with an optical flow algorithm effectively recognizes the thermal trend of the blurred video frames, and that trend, combined with the conventional image frames, determines the target position corresponding to the smear, so that low-resolution targets are identified effectively and accurately.
Another object of the present application is to provide a low-resolution target recognition method and system that use the thermal image to determine accurately the position and movement trend of rapidly moving targets, including but not limited to human body targets, thereby overcoming the poor target-detection accuracy of low-resolution frames in highly dynamic video.
Another object of the present application is to provide a low-resolution target recognition method and system that use an improved optical flow algorithm: target-area tracking is performed on the thermal image using a region-matching optical flow algorithm, and the target area is located, tracked, and displayed on the actually photographed RGB image frames; the region-matching optical flow algorithm is more robust for detecting highly dynamic human body targets.
A further object of the application is to provide a low-resolution target recognition method and system in which a variational optical flow algorithm performs the dense-optical-flow search for the region matching; combining the variational optical flow algorithm with the thermal image of the target allows the matching region of the dense optical flow to be determined rapidly, achieving accurate tracking of human body targets under high dynamics.
In order to achieve at least one of the above objects, the present application further provides a low resolution object recognition method comprising:
acquiring first video data containing a human body target, and simultaneously acquiring a thermal imaging video of an imaging area identical to the first video data;
carrying out gray level processing on the thermal imaging video according to a preset temperature gradient to obtain second video data;
performing human body target identification on the second video data with a region-matching optical flow algorithm, and acquiring the position of the human body target within the second video data;
and marking the corresponding region in the first video data by binding it to the region of the human body target in the second video data.
According to one preferred embodiment of the present application, the method for processing the thermal imaging video includes: setting a temperature gradient from high to low, identifying the temperature value of each pixel point in the thermal imaging video, and assigning each pixel point a corresponding gray value according to the temperature gradient, with temperatures from high to low corresponding to gray values from high to low.
According to another preferred embodiment of the present application, pixels exceeding a preset image temperature gradient region in the thermal imaging video are identified, and the gray scale of the pixels exceeding the preset image temperature gradient region is set to 0.
According to another preferred embodiment of the present application, the method for binding the human body target in the second video data to the first video data includes: and identifying coordinate data of a human body target area according to an optical flow algorithm, acquiring the center point coordinate of the human body target area, and synchronously identifying the center point coordinate into the first video data.
According to another preferred embodiment of the present application, the area matching optical flow algorithm for identifying a human body target comprises: and acquiring a temperature parameter of a human body target from the first video data, converting the temperature parameter into gray level data in the second video data, and initializing a human body target area of a frame in the second video frame according to the gray level data.
According to another preferred embodiment of the present application, the region-matching optical flow algorithm for identifying a human body target comprises: determining a first pixel point of a first frame image in the second video data; generating a first search area centered on the first pixel point of the first frame image; determining a second pixel point at the same position in a second frame image adjacent to the first frame image; forming, centered on the second pixel point, a second search area with the same shape and size as the first search area; and further calculating the optical flow region matching term of the first search area and the second search area:

E_M(u, v) = Σ_{(i,j)∈D} Ψ( ( I(X + (u_{i,j}, v_{i,j})^T) - I(X) )^2 )

where E_M is the matching term, Ψ(·) is a non-squared penalty function of the same form as the data term, (u, v)^T is the optical flow vector of the adjacent first frame image and second frame image, (u_{i,j}, v_{i,j})^T represents the optical flow vector in the neighborhood of image pixel (i, j)^T, D is the relevant search area, and X is the corresponding pixel point coordinate.
According to another preferred embodiment of the present application, the region-matching optical flow algorithm further includes: determining the gray value I(X) of the first pixel point X = (x, y)^T, and calculating the variational optical flow energy function ε for image local-structure region matching:

ε(W) = ∫_Ω [ Ψ( W^T T W ) + α Ψ( |∇u|^2 + |∇v|^2 ) ] dX

where W represents the tracked displacement of the first pixel point or the second pixel point, Ω represents the image domain, T represents the structure tensor of the image, α represents the smooth-term coefficient, and Ψ( |∇u|^2 + |∇v|^2 ) represents the smooth term.
In order to achieve at least one of the above objects, the present application further provides a low resolution object recognition system that performs one of the above low resolution object recognition methods by a computer program.
The present application further provides a computer readable storage medium storing a computer program for execution by a processor to implement a low resolution target recognition method as described above.
Drawings
FIG. 1 is a flow chart of a method for identifying a low resolution object according to the present application.
FIG. 2 shows a schematic representation of optical flow tracking recognition after gray scale grading of a thermographic image.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the application. The preferred embodiments in the following description are by way of example only and other obvious variations will occur to those skilled in the art. The basic principles of the application defined in the following description may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the application.
It will be understood that the terms "a" and "an" should be interpreted as referring to "at least one" or "one or more," i.e., in one embodiment, the number of elements may be one, while in another embodiment, the number of elements may be plural, and the term "a" should not be interpreted as limiting the number.
Referring to fig. 1 and 2, the present application provides a method and a system for identifying a low-resolution target. The method mainly comprises the following: on the basis of a conventional RGB camera, an infrared camera is arranged at the same or an adjacent position. The RGB camera shoots color video or images of the real scene, and the color video data shot by the RGB camera constitute the first video data; the images acquired by the infrared camera are thermal images, and the second video data are obtained after gray-scale grading of the thermal images.
Because a large amount of target image smear appears when a human body target passes the lens rapidly, smear that almost completely exceeds the resolving capability of a conventional RGB image, a low-resolution image is formed, and this image defect greatly reduces the recognition performance of conventional target recognition algorithms. A human body target in this application means a target whose temperature remains relatively stable over a short period, for example 1 minute, such as varying within a range of 2-3 degrees.
Specifically, the application calculates the position information of each pixel point of the first video data. The infrared camera configured in the application has the same field of view as the RGB camera, i.e. the same pixel resolution and the same viewing angle, so the image areas captured by the RGB camera and the infrared camera may be considered identical.
A specific exemplary operation of the application is as follows. The RGB camera and the infrared camera are opened simultaneously to shoot the same imaging area, within which there is a rapidly moving warm target such as a bird or a running athlete. The thermal imaging region of such a target is usually clear, and its overall form matches the target's form in the real scene, so infrared thermal imaging directly and accurately reflects the position and state of the target and is little affected by movement speed, posture, angle, or obstacles. An obstacle is placed in the imaging area, and the target moves rapidly out from behind the obstacle facing the cameras, appearing intermittently in the imaging areas of both cameras during its movement; because of this rapid movement, the smear in the RGB camera seriously degrades recognition of the target. The application therefore performs the following operation on the second video data shot by the infrared camera: a gray range is preconfigured, for example 55-255, and divided into (255 - 55)/10 = 20 gray levels of 10 gray values each: [55, 65), [65, 75), and so on. The intermediate value of each gray level is taken as that level's gray value; for example, the intermediate value of [55, 65) is (55 + 65)/2 = 60, so the gray value for the [55, 65) range is configured as 60.
The gray value of each pixel is then recomputed: if a pixel's gray value falls within a gray level, it is replaced by that level's gray value. For example, a pixel with gray value 57 lies in the [55, 65) range, so its gray value is set to 60; pixels of the thermal imaging video data whose gray values fall outside the configured gray range are set to 0. It should be noted that this gray-scale processing is merely illustrative; the gray range may be set according to the actual image and human body target, and its setting is not described in detail here. This processing of the gray data of each pixel point of the second video data narrows the gray range of the thermal imaging data collected by the infrared camera, so the second video data image better satisfies the constant-brightness requirement of the region-matching optical flow algorithm, and search tracking in the thermal imaging area based on the matching region is more accurate.
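The gray-level grading just described can be sketched in NumPy (a minimal sketch; the bounds 55/255, the level width of 10, and the function name are illustrative assumptions, not fixed by the patent):

```python
import numpy as np

def quantize_thermal(gray, lo=55, hi=255, step=10):
    # Map each pixel to the midpoint of its 10-wide gray level
    # ([55,65) -> 60, [65,75) -> 70, ...); pixels outside [lo, hi)
    # are set to 0, as described in the text.
    g = np.asarray(gray, dtype=np.int64)
    in_range = (g >= lo) & (g < hi)
    idx = (g - lo) // step                # gray level index 0..19
    mid = lo + idx * step + step // 2     # midpoint of that level
    return np.where(in_range, mid, 0).astype(np.uint8)
```

With these bounds, a pixel of value 57 falls in [55, 65) and is mapped to 60, matching the worked example in the text.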
It should be noted that, after the gray data processing of the second video data is completed, the application identifies and tracks the human body target in the second video data. The method comprises: performing a primary identification of human body targets, including but not limited to humans and animals, in the first video data using existing methods such as the YOLO or CNN algorithms; these primary-identification video frames contain relatively clear human body targets, so conventional YOLO and CNN algorithms suffice. The center-point pixel coordinates of the human body target in the first video can then be calculated as the mean of the coordinates of the human body target region. Of course, in other preferred embodiments of the present application, the human body target in the first video data may be marked directly to obtain the corresponding center-point pixel coordinates. The center-point pixel coordinate here is simply a pixel point used to mark the human body target.
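Computing the center-point pixel coordinates as the mean of the target region's coordinates might look like this (a sketch; representing the target region as a boolean mask is an assumption for illustration):

```python
import numpy as np

def center_point(mask):
    # Center-point pixel coordinate (x, y) of a target region,
    # taken as the mean of the region's pixel coordinates.
    # `mask` is a boolean image where True marks target pixels.
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # no target region detected
    return (int(round(xs.mean())), int(round(ys.mean())))
```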
After the human body target center-point pixel coordinates are determined as above, the pixel coordinates of the corresponding position in the second video data are found from the center-point pixel coordinates in the first video data, and the pixel coordinates of that position in the first frame image of the second video data are taken as the first pixel point coordinates. The second pixel point coordinates of the adjacent second frame image in the second video data are then located, where the second pixel point coordinate values of the second frame image equal the first pixel point coordinate values of the first frame image. Define the first pixel point coordinate of the first frame image as X = (x, y)^T. Centered on (x, y)^T, take a first search area S_1 of size (2n+1) × (2n+1) on the first frame image; define the area of the first frame image and the second frame image as S; and centered on the second pixel point coordinates (x_2, y_2)^T, construct a second search area S_2 of size (2n+1) × (2n+1), where x_2 = x and y_2 = y. Both the first search area S_1 and the second search area S_2 are smaller than the frame area S. Here (2n+1) × (2n+1) is a pixel area, i.e. (2n+1) is the number of pixels in the horizontal and vertical directions respectively. n is a variable parameter: a larger n produces larger search areas S_1 and S_2, and a smaller n produces smaller ones.
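Extracting the (2n+1) × (2n+1) search areas S_1 and S_2 can be sketched as follows (assumes the window lies fully inside the frame; boundary handling is omitted):

```python
import numpy as np

def search_window(img, x, y, n):
    # Return the (2n+1) x (2n+1) area of `img` centered on pixel (x, y).
    return img[y - n:y + n + 1, x - n:x + n + 1]

# S1 around X = (x, y) in the first frame, and S2 around the same
# coordinates (x2, y2) = (x, y) in the adjacent second frame:
#   S1 = search_window(frame1, x, y, n)
#   S2 = search_window(frame2, x, y, n)
```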
The application further defines the gray value I of a pixel: the gray value of the first pixel point X = (x, y)^T is written I(X). Let the first pixel point move a distance (u, v) between adjacent frames, with W = (u, v)^T the optical flow vector of the two adjacent frames; the gray value of the moved first pixel point is then I(X + W), and the optical flow constraint equation is I(X + W) - I(X) = 0. The superscript T in the above formulas denotes the transpose.
The application further calculates the local image structure tensor T in the first pixel point area:

T = G_σ * ( ∇I ∇I^T ) = G_σ * [ I_x^2, I_x I_y ; I_x I_y, I_y^2 ]

where I_x and I_y are the gray gradients of the local image along the x-axis and y-axis, and G_σ is a Gaussian filter function that effectively reduces the influence of noise. This structure tensor calculation takes into account the local structure information in the neighborhood of the pixel point, is therefore insensitive to gray-level and brightness changes, and satisfies the gray-brightness stability requirement of the optical flow algorithm.
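A discrete sketch of this structure tensor computation (gradients via np.gradient; a 3x3 box filter stands in for the Gaussian G_σ, an assumption made to keep the sketch self-contained):

```python
import numpy as np

def structure_tensor(img):
    # Per-pixel entries of T = G_sigma * (grad(I) grad(I)^T),
    # returned as (Jxx, Jxy, Jyy) so that T = [[Jxx, Jxy], [Jxy, Jyy]].
    I = np.asarray(img, dtype=np.float64)
    Iy, Ix = np.gradient(I)  # gray gradients along y (rows) and x (cols)

    def smooth(a):
        # 3x3 box filter with edge padding, standing in for G_sigma.
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    return smooth(Ix * Ix), smooth(Ix * Iy), smooth(Iy * Iy)
```

On a constant image every entry is zero; on a pure horizontal ramp only Jxx is nonzero, reflecting structure along the x-axis.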
The application further calculates the optical-flow-based region matching term from the first search region and the second search region, specifically:

E_M(u, v) = Σ_{(i,j)∈D} Ψ( ( I(X + (u_{i,j}, v_{i,j})^T) - I(X) )^2 )

where E_M is the matching term, Ψ(·) is a non-squared penalty function of the same form as the data term, (u, v)^T is the optical flow vector of the adjacent first frame image and second frame image, (u_{i,j}, v_{i,j})^T represents the optical flow vector in the neighborhood of image pixel (i, j)^T, D is the relevant search area, and X is the corresponding pixel point coordinate.
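The region matching term can be evaluated numerically as below (a sketch: Ψ is taken as the common sub-quadratic penalty sqrt(s^2 + eps^2), and one constant flow vector (u, v) is applied across the whole window; both are assumptions, not details fixed by the text):

```python
import numpy as np

def psi(s2, eps=1e-3):
    # Non-squared (sub-quadratic) robust penalty, a common choice.
    return np.sqrt(s2 + eps ** 2)

def match_term(frame1, frame2, X, flow, n=1):
    # Matching term E_M: penalized gray-value differences between the
    # (2n+1)x(2n+1) search area around X in frame1 and the area displaced
    # by `flow` = (u, v) in the adjacent frame2, summed over the window D.
    x, y = X
    u, v = flow
    e = 0.0
    for j in range(-n, n + 1):
        for i in range(-n, n + 1):
            d = float(frame2[y + j + v, x + i + u]) - float(frame1[y + j, x + i])
            e += psi(d * d)
    return e
```

For a frame pair related by a pure one-pixel shift, the correct flow yields a near-zero matching term, while the zero flow does not.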
The variational optical flow energy function ε for image local-structure region matching is then calculated:

ε(W) = ∫_Ω [ Ψ( W^T T W ) + α Ψ( |∇u|^2 + |∇v|^2 ) ] dX

where W represents the tracked displacement of the first pixel point or the second pixel point, Ω represents the image domain, T represents the structure tensor of the image, α represents the smooth-term coefficient, and Ψ( |∇u|^2 + |∇v|^2 ) represents the smooth term.
That is, by combining the optical flow algorithm with infrared thermal imaging, the application can rapidly identify and track the corresponding human body target in the second video data, and then bind the coordinate position of the human body target in the second video data to the corresponding position in the first video data for marked display. The technical scheme can therefore effectively identify and track highly dynamic human body targets.
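Binding the thermal-video coordinates back into the first video for display can be as simple as the following (a sketch; it assumes the one-to-one coordinate mapping that follows from the shared field of view, and uses a filled square as a minimal stand-in for a tracking marker):

```python
import numpy as np

def mark_target(rgb_frame, center, size=2):
    # Overlay a marker at `center` = (x, y) on an RGB frame. Because the
    # RGB and infrared cameras share resolution and field of view,
    # coordinates found in the second (thermal) video map one-to-one
    # onto the first video. Draws a filled red square as the marker.
    out = rgb_frame.copy()
    x, y = center
    out[max(0, y - size):y + size + 1, max(0, x - size):x + size + 1] = (255, 0, 0)
    return out
```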
The processes described above with reference to flowcharts may be implemented as computer software programs in accordance with the disclosed embodiments of the application. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via a communication portion, and/or installed from a removable medium. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU). The computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wire segments, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be understood by those skilled in the art that the embodiments of the present application described above and shown in the drawings are merely illustrative and not restrictive of the current application, and that this application has been shown and described with respect to the functional and structural principles thereof, without departing from such principles, and that any modifications or adaptations of the embodiments of the application may be possible and practical.

Claims (9)

1. A method of low resolution object recognition, the method comprising:
acquiring first video data comprising a low-resolution human body target, and simultaneously acquiring a thermal imaging video of the same imaging area as the first video data;
carrying out gray level processing on the thermal imaging video according to a preset temperature gradient to obtain second video data;
performing human body target identification on the second video data with a region-matching optical flow algorithm, and acquiring the position of the human body target within the second video data;
and marking the corresponding region in the first video data by binding it to the region of the human body target in the second video data.
2. The method of claim 1, wherein the method of processing the thermal imaging video comprises: setting a temperature gradient from high to low, identifying the temperature value of each pixel point in the thermal imaging video, and assigning each pixel point a corresponding gray value according to the temperature gradient, with temperatures from high to low corresponding to gray values from high to low.
3. The method for identifying a low resolution target according to claim 1, wherein pixels exceeding a preset image temperature gradient region in the thermal imaging video are identified, and gray scales of the pixels exceeding the preset image temperature gradient region are set to 0.
4. The method of claim 1, wherein the method of binding the first video data to the human body object in the second video data comprises: and identifying coordinate data of a human body target area according to an optical flow algorithm, acquiring the center point coordinate of the human body target area, and synchronously identifying the center point coordinate into the first video data.
5. The method of claim 1, wherein the area-matched optical flow algorithm for identifying a human target comprises: and acquiring a temperature parameter of a human body target from the first video data, converting the temperature parameter into gray level data in the second video data, and initializing a human body target area of a frame in the second video frame according to the gray level data.
6. The method of claim 1, wherein the region-matching optical flow algorithm for identifying a human body target comprises: determining a first pixel point of a first frame image in the second video data; generating a first search area centered on the first pixel point of the first frame image; determining a second pixel point at the same position in a second frame image adjacent to the first frame image; forming, centered on the second pixel point, a second search area with the same shape and size as the first search area; and further calculating the optical flow region matching term of the first search area and the second search area:

E_M(u, v) = Σ_{(i,j)∈D} Ψ( ( I(X + (u_{i,j}, v_{i,j})^T) - I(X) )^2 )

where E_M is the matching term, Ψ(·) is a non-squared penalty function of the same form as the data term, (u, v)^T is the optical flow vector of the adjacent first frame image and second frame image, (u_{i,j}, v_{i,j})^T represents the optical flow vector in the neighborhood of image pixel (i, j)^T, D is the relevant search area, and X is the corresponding pixel point coordinate.
7. The method of claim 6, wherein the area-matched optical flow algorithm further comprises: determining a gray value I(X) of the first pixel point X = (x, y), and calculating the variational optical flow energy function ε for matching the local structure area of the image:
ε(W) = ∫_Ω ψ( W^T T W ) dX + α ∫_Ω ψ( |∇u|² + |∇v|² ) dX

wherein W represents the displacement of the first pixel point or the second pixel point during tracking, Ω represents the image domain, T represents the structure tensor of the image, α represents the smoothness term coefficient, and ∫_Ω ψ( |∇u|² + |∇v|² ) dX represents the smoothness term.
8. A low-resolution target identification system, characterized in that the system performs, by means of a computer program, the low-resolution target identification method according to any one of claims 1-7.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the low-resolution target identification method according to any one of claims 1-8.
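As a sketch of the pre-processing steps in claims 3-5 — suppressing thermal pixels outside the preset temperature band, converting target temperatures to gray-scale data, and extracting the area center point used for binding — the following assumes a temperature frame in degrees Celsius and an illustrative 30-40 °C band; the band limits and all names are assumptions, not values given in the patent:

```python
import numpy as np

def preprocess_thermal(temp_frame, t_min=30.0, t_max=40.0):
    """Illustrative pre-processing for the thermal (first) video data.
    t_min/t_max delimit the preset temperature gradient band; the
    patent does not specify concrete values."""
    # Claim 3: pixels outside the preset temperature band -> gray 0
    mask = (temp_frame >= t_min) & (temp_frame <= t_max)
    gray = np.zeros(temp_frame.shape, dtype=np.uint8)
    # Claim 5: linearly convert in-band temperatures to gray-scale data
    gray[mask] = (255 * (temp_frame[mask] - t_min) / (t_max - t_min)).astype(np.uint8)
    # Claim 4: center point of the human body target area (mask centroid)
    ys, xs = np.nonzero(mask)
    center = (float(xs.mean()), float(ys.mean())) if xs.size else None
    return gray, center
```

The centroid stands in for the "center point coordinates" of claim 4; a bounding-box center would serve equally well.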
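The area-matching term of claim 6 can be evaluated numerically as below. The Charbonnier-type penalty ψ(s²) = √(s² + ε²) and every function and parameter name here are illustrative assumptions: the patent states only that ψ is non-quadratic and of the same form as the data term.

```python
import numpy as np

def matching_term(u, v, center, radius, eps=1e-3):
    """Robust area-matching cost: sum over the search area D of
    psi(|(u, v)^T - (u_ij, v_ij)^T|^2), where (u0, v0) is the optical
    flow at the center pixel and (u_ij, v_ij) the flows in its
    neighborhood. psi(s^2) = sqrt(s^2 + eps^2) is one common
    non-quadratic choice, assumed here."""
    y, x = center
    u0, v0 = u[y, x], v[y, x]
    # Differences between the center flow and every flow in the area D
    du = u[y - radius:y + radius + 1, x - radius:x + radius + 1] - u0
    dv = v[y - radius:y + radius + 1, x - radius:x + radius + 1] - v0
    return float(np.sum(np.sqrt(du**2 + dv**2 + eps**2)))
```

For a perfectly uniform flow field every neighborhood difference vanishes, so the cost reduces to |D| · ε, the floor set by the penalty's regularization constant.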
CN202310842176.0A 2023-07-10 2023-07-10 Low-resolution target identification method Pending CN116883897A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310842176.0A CN116883897A (en) 2023-07-10 2023-07-10 Low-resolution target identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310842176.0A CN116883897A (en) 2023-07-10 2023-07-10 Low-resolution target identification method

Publications (1)

Publication Number Publication Date
CN116883897A true CN116883897A (en) 2023-10-13

Family

ID=88263782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310842176.0A Pending CN116883897A (en) 2023-07-10 2023-07-10 Low-resolution target identification method

Country Status (1)

Country Link
CN (1) CN116883897A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117319662A (en) * 2023-11-28 2023-12-29 杭州杰竞科技有限公司 Image compression and decompression method and system for human body target recognition
CN117319662B (en) * 2023-11-28 2024-02-27 杭州杰竞科技有限公司 Image compression and decompression method and system for human body target recognition

Similar Documents

Publication Publication Date Title
JP6095018B2 (en) Detection and tracking of moving objects
WO2021000664A1 (en) Method, system, and device for automatic calibration of differences in cross-modal target detection
CN107452015B (en) Target tracking system with re-detection mechanism
CN105488811B (en) A kind of method for tracking target and system based on concentration gradient
CN103778645B (en) Circular target real-time tracking method based on images
CN108198201A (en) A kind of multi-object tracking method, terminal device and storage medium
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
CN111383252B (en) Multi-camera target tracking method, system, device and storage medium
CN111382613A (en) Image processing method, apparatus, device and medium
CN112364865B (en) Method for detecting small moving target in complex scene
CN110647836B (en) Robust single-target tracking method based on deep learning
CN109166137A (en) For shake Moving Object in Video Sequences detection algorithm
CN110738667A (en) RGB-D SLAM method and system based on dynamic scene
JP2017522647A (en) Method and apparatus for object tracking and segmentation via background tracking
Nallasivam et al. Moving human target detection and tracking in video frames
CN116883897A (en) Low-resolution target identification method
CN113379789B (en) Moving target tracking method in complex environment
Funde et al. Object detection and tracking approaches for video surveillance over camera network
Roy et al. A comprehensive survey on computer vision based approaches for moving object detection
Li et al. Fast visual tracking using motion saliency in video
CN116862832A (en) Three-dimensional live-action model-based operator positioning method
CN110047103A (en) Mixed and disorderly background is removed from image to carry out object detection
CN110322474B (en) Image moving target real-time detection method based on unmanned aerial vehicle platform
Erokhin et al. Detection and tracking of moving objects with real-time onboard vision system
Depraz et al. Real-time object detection and tracking in omni-directional surveillance using GPU

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination