CN114463440A - Single-camera target positioning method, system, equipment and storage medium - Google Patents

Single-camera target positioning method, system, equipment and storage medium

Info

Publication number
CN114463440A
Authority
CN
China
Prior art keywords
target
foreground
image
camera
dynamic image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210100200.9A
Other languages
Chinese (zh)
Inventor
郑祖丽
曹金霞
唐亚
李福萌
赵永富
吴刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Xingyi Technology Co ltd
Original Assignee
Hangzhou Xingyi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xingyi Technology Co ltd
Priority to CN202210100200.9A
Publication of CN114463440A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 - Distances to closest patterns, e.g. nearest neighbour classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20036 - Morphological image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a single-camera target positioning method, system, device and storage medium, wherein the method comprises the following steps: acquiring an image to be processed with a single camera, and performing foreground detection on the image to be processed to obtain a foreground dynamic image; performing algorithm processing on the foreground dynamic image to highlight a target object; performing contour recognition on the target object to obtain a target point; and obtaining the position coordinates of the target point in the image. Through acquisition of the foreground dynamic image, algorithm processing, contour recognition and position coordinate conversion, the invention quickly achieves target positioning of the image captured by a single camera, reduces the influence of changes in external conditions on the tracking and positioning effect, and positions the target promptly and accurately, with good stability and little susceptibility to interference.

Description

Single-camera target positioning method, system, equipment and storage medium
Technical Field
The invention relates to the technical field of image positioning, and in particular to a single-camera target positioning method, system, device and storage medium.
Background
Digital Image Processing refers to the methods and techniques by which a computer performs operations on an image such as denoising, enhancement, restoration, segmentation and feature extraction. Generally, the purpose of processing (or analysing) an image is threefold: (1) improving the visual quality of the image, for example through brightness and colour transformations, enhancement or suppression of certain components, or geometric transformations; (2) extracting certain features or specific information contained in the image, which often facilitates subsequent analysis by a computer; this extraction step is the preprocessing stage of pattern recognition or computer vision, and the extracted features may cover many aspects, such as frequency-domain features, grayscale or colour features, boundary features, region features, texture features, shape features, topological features and relational structures; (3) transforming, encoding and compressing image data to facilitate storage and transmission.
Whatever the purpose of the processing, an image processing system composed of a computer and dedicated image hardware is required to input, process and output the image data. Computer vision is, in effect, image processing plus image recognition; it requires very complex processing techniques and often the design of high-speed dedicated hardware. Target positioning and tracking is an important branch of computer vision that spans image processing, artificial intelligence, pattern recognition, automatic control and other fields, and its main purpose is to detect and track a specified target so as to obtain information such as its position and trajectory. In practice, changes in external conditions often degrade the tracking result, so tracking stability is poor and tracking tends to be untimely and inaccurate.
Disclosure of Invention
To address these defects in the prior art, the invention provides a single-camera target positioning method, system, device and storage medium.
In a first aspect, a method for positioning a single-camera target includes:
acquiring an image to be processed by using a single camera, and performing foreground detection on the image to be processed to acquire a foreground dynamic image;
performing algorithm processing on the foreground dynamic image to highlight a target object;
carrying out contour recognition on the target object to obtain a target point;
and acquiring the position coordinates of the target point in the image.
Further, the obtaining of the foreground dynamic image specifically includes:
carrying out foreground detection on the image to be processed by adopting a KNN-based background/foreground segmentation algorithm to obtain a foreground dynamic image;
the KNN-based background/foreground segmentation algorithm includes the BackgroundSubtractorKNN algorithm.
Further, the performing algorithm processing on the foreground dynamic image to highlight the target object specifically includes:
converting the color space of the foreground dynamic image from an RGB space to an HSV space;
acquiring an HSV space target range according to a target color, and performing color image segmentation on the foreground dynamic image by adopting an inRange function to obtain a target color area;
and performing morphological processing on the target color area to reduce noise and highlight a target object.
Further, the contour recognition of the target object to obtain a target point is specifically:
detecting and identifying the contour of the target object by using a findContours function;
calculating the outline area of the target object by adopting a contourArea function;
and setting a contour threshold according to the size of the contour area of the target object, and rejecting non-target points according to the contour threshold to obtain target points.
In a second aspect, a single-camera object positioning system includes:
a foreground detection module: used to acquire an image to be processed with a single camera and perform foreground detection on the image to be processed to obtain a foreground dynamic image;
an algorithm processing module: used to perform algorithm processing on the foreground dynamic image to highlight a target object;
a contour identification module: used to perform contour recognition on the target object to obtain a target point;
a position coordinate acquisition module: used to obtain the position coordinates of the target point in the image.
Further, the foreground detection module is specifically configured to:
carrying out foreground detection on the image to be processed by adopting a KNN-based background/foreground segmentation algorithm to obtain a foreground dynamic image;
the KNN-based background/foreground segmentation algorithm includes the BackgroundSubtractorKNN algorithm.
Further, the algorithm processing module is specifically configured to:
converting the color space of the foreground dynamic image from an RGB space to an HSV space;
acquiring an HSV space target range according to a target color, and performing color image segmentation on the foreground dynamic image by adopting an inRange function to obtain a target color area;
and performing morphological processing on the target color area to reduce noise and highlight a target object.
Further, the contour identification module is specifically configured to:
detecting and identifying the outline of the target object by using a findContours function;
calculating the outline area of the target object by adopting a contourArea function;
and setting a contour threshold according to the size of the contour area of the target object, and eliminating non-target points according to the contour threshold to obtain target points.
In a third aspect, a single-camera object localization apparatus includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method of the first aspect.
In a fourth aspect, a computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect described above.
The invention has the beneficial effects that: by acquiring the foreground dynamic image, applying algorithm processing, recognising the contour and converting the position coordinates, target positioning of the image captured by a single camera is achieved quickly, the influence of changes in external conditions on the tracking and positioning effect is reduced, and positioning is prompt and accurate, with good stability and little susceptibility to interference.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed in their description are briefly introduced below. Throughout the drawings, like elements or portions are generally identified by like reference numerals, and elements or portions are not necessarily drawn to scale.
Fig. 1 is a flowchart of a single-camera target positioning method according to an embodiment of the present invention;
fig. 2 is a block diagram of a single-camera target positioning system according to an embodiment of the present invention;
fig. 3 is a structural diagram of a single-camera object location device according to a second embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
As shown in fig. 1, a single-camera target positioning method includes the steps of:
s1: acquiring an image to be processed by using a single camera, and performing foreground detection on the image to be processed to acquire a foreground dynamic image;
specifically, hardware is identified through a single camera, images to be processed of the camera are collected, foreground detection is carried out on the collected images to be processed through a KNN-based background/foreground segmentation algorithm, and a foreground dynamic image is obtained. The background/foreground segmentation algorithm based on KNN comprises a BACKGROUNDSUBTRACTORKNN algorithm, historical frame information of each pixel point of the collected image to be processed is stored in real time, then new image pixel points are compared with the historical frame pixel points, and foreground/background judgment is carried out, so that a foreground dynamic image is obtained, and interference caused by background static objects is reduced.
S2: performing algorithm processing on the foreground dynamic image to highlight a target object;
specifically, RGB is the most common color space, red (R), green (G) and blue (B), and all other colors can be formed by different combinations of the three colors, and since any color is related to the three color components and the three color components are highly correlated with each other, it is not intuitive to continuously change the colors, and the RGB color space is a color space with poor uniformity. Compared with the RGB space, the HSV space can more visually express the hue, the vividness and the brightness of the color, and is easier to track and position an object with a certain color. Therefore, after the foreground dynamic image is obtained, firstly, the color space of the foreground dynamic image is converted into HSV space from RGB space, then the target range of the HSV space is obtained according to the target color, and the inRange function is adopted to carry out color image segmentation on the foreground dynamic image to obtain the target color area. The HSV space consists of color information (H), saturation (S) and brightness (V), the foreground dynamic image is segmented by adjusting the color information (H), the saturation (S) and the brightness (V) interval of the foreground dynamic image by utilizing an inRange function according to the color to be identified, and the corresponding HSV space range is input to select the required image area, so that the target color area is obtained.
Further, because of illumination, noise and similar influences, holes often appear in the separated target color region, or small scattered regions are wrongly classified as target. To reduce the influence of this noise in the foreground dynamic image on the subsequent extraction of the target point, the obtained target color region can be processed morphologically to suppress noise and highlight the target object. The morphological operations applied in this embodiment include, but are not limited to, dilation, erosion, opening and closing.
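As a sketch, the opening and closing operations could be applied to the segmentation mask as follows; the elliptical kernel and its size are illustrative assumptions.

```python
import cv2

def denoise_color_region(mask, kernel_size=5):
    """Suppress noise in the target color mask with morphological operations."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Opening (erosion then dilation) removes small falsely segmented specks.
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Closing (dilation then erosion) fills holes inside the target region.
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    return closed
```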
S3: carrying out contour recognition on the target object to obtain a target point;
specifically, a findContours function is adopted to extract the contour of the target object, a contourArea function is adopted to calculate the contour area of the target object, a contour area threshold value is set according to the calculated contour area size of the target object, and non-target points outside the area threshold value are removed, so that the target point is obtained.
After the contour of the target object has been identified, the image often still contains noise from several sources, for example residual noise from the image acquisition process, so noise elimination needs to be performed again. Many methods can remove this noise, including but not limited to mean filtering, adaptive Wiener filtering, median filtering and morphological noise filtering.
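Median filtering, one of the listed options, could for instance be applied to the mask before contour extraction; the kernel size here is an illustrative assumption.

```python
import cv2

def suppress_residual_noise(image, kernel_size=5):
    """Remove isolated salt-and-pepper noise with a median filter."""
    return cv2.medianBlur(image, kernel_size)
```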
S4: acquiring the position coordinates of the target point in the image;
specifically, after the target point is obtained, a pixel corresponding to the center of the target point in the foreground dynamic image is obtained, and the pixel is converted into a position coordinate of the target point in the image in an equal proportion mode. For example: on a 100 × 100 pixel picture, a target point is on (10, 20) and converted to 1920 × 1080 on the screen, and the coordinates of the target point are (1920 × 100/10, 1080 × 100/20).
Preferably, after the target point is obtained and before its position coordinates in the image are acquired, the method further includes shielding the target point from interference by other points in the foreground dynamic image. Other object points may also meet the condition used above to determine the target point; therefore, once a target point has been obtained, a search function still needs to run continuously to look for other object points that suddenly appear besides the target point. For example, when a moving green object in the image is being identified and a target point has already been found, a person wearing a green hat may happen to walk into the camera view; that person is a suddenly appearing object point and must be shielded temporarily, so the suddenly appearing object point is added to a queue, and after the original target point leaves the camera view, an object point is taken out of the queue and used as the new target point.
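A sketch of this queue-based shielding behaviour is shown below; the class name, the distance threshold and the nearest-point matching are illustrative assumptions, not details specified by the patent.

```python
from collections import deque
import math

class TargetTracker:
    """Lock onto one target point and shield points that appear later."""

    def __init__(self, match_threshold=50.0):
        self.current = None      # the currently locked target point
        self.pending = deque()   # suddenly appearing points, shielded for now
        self.threshold = match_threshold

    def update(self, detected_points):
        """detected_points: list of (x, y) centres from contour filtering."""
        if self.current is None:
            # No locked target: take a queued point first, else a new detection.
            if self.pending:
                self.current = self.pending.popleft()
            elif detected_points:
                self.current = detected_points[0]
            return self.current
        # Keep following the detection closest to the locked target point.
        near = [p for p in detected_points
                if math.dist(p, self.current) <= self.threshold]
        if near:
            self.current = min(near, key=lambda p: math.dist(p, self.current))
        else:
            # The original target has left the frame: promote a queued point.
            self.current = self.pending.popleft() if self.pending else None
        # Every other detection is an interfering point: shield it in the queue.
        for p in detected_points:
            if self.current is not None \
               and math.dist(p, self.current) > self.threshold \
               and all(math.dist(p, q) > self.threshold for q in self.pending):
                self.pending.append(p)
        return self.current
```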
By implementing the method, target positioning of the image captured by a single camera is achieved quickly through acquisition of the foreground dynamic image, algorithm processing, contour recognition and position coordinate conversion; the influence of changes in external conditions on the tracking and positioning effect is reduced, and the method positions targets promptly and accurately, with good stability and little susceptibility to interference.
Based on the same inventive concept, an embodiment of the present invention provides a single-camera object positioning system, as shown in fig. 2, including:
a foreground detection module: used to acquire an image to be processed with a single camera and perform foreground detection on the image to be processed to obtain a foreground dynamic image;
an algorithm processing module: used to perform algorithm processing on the foreground dynamic image to highlight a target object;
a contour identification module: used to perform contour recognition on the target object to obtain a target point;
a position coordinate acquisition module: used to obtain the position coordinates of the target point in the image.
Further, the foreground detection module is specifically configured to:
carrying out foreground detection on the image to be processed by adopting a KNN-based background/foreground segmentation algorithm to obtain a foreground dynamic image;
the KNN-based background/foreground segmentation algorithm includes the BackgroundSubtractorKNN algorithm.
Further, the algorithm processing module is specifically configured to:
converting the color space of the foreground dynamic image from an RGB space to an HSV space;
acquiring an HSV space target range according to a target color, and performing color image segmentation on the foreground dynamic image by adopting an inRange function to obtain a target color area;
and performing morphological processing on the target color area to reduce noise and highlight a target object.
Further, the contour identification module is specifically configured to:
detecting and identifying the outline of the target object by using a findContours function;
calculating the outline area of the target object by adopting a contourArea function;
and setting a contour threshold according to the size of the contour area of the target object, and rejecting non-target points according to the contour threshold to obtain target points.
Alternatively, in another preferred embodiment of the present invention, as shown in fig. 3, the single-camera object locating device may include: one or more processors 101, one or more input devices 102, one or more output devices 103, and memory 104, the processors 101, input devices 102, output devices 103, and memory 104 being interconnected via a bus 105. The memory 104 is used for storing a computer program comprising program instructions, the processor 101 being configured for invoking the program instructions for performing the methods of the above-described method embodiment parts.
It should be understood that, in the embodiment of the present invention, the processor 101 may be a Central Processing Unit (CPU), a deep learning accelerator (e.g., an NPU, an NVIDIA GPU or a Google TPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The input device 102 may include a keyboard or the like, and the output device 103 may include a display (LCD or the like), a speaker, or the like.
The memory 104 may include read-only memory and random access memory, and provides instructions and data to the processor 101. A portion of the memory 104 may also include non-volatile random access memory. For example, the memory 104 may also store device type information.
In specific implementation, the processor 101, the input device 102, and the output device 103 described in the embodiment of the present invention may execute the implementation manner described in the embodiment of the single-camera target positioning method provided in the embodiment of the present invention, and details are not described herein again.
It should be noted that, in the embodiment of the present invention, a more specific work flow and related details of the single-camera object positioning device are referred to in the foregoing method embodiment section, and are not described herein again.
Further, corresponding to the foregoing method and apparatus, an embodiment of the present invention also provides a readable storage medium storing a computer program, the computer program comprising program instructions which, when executed by a processor, implement the single-camera target positioning method described above.
The computer readable storage medium may be an internal storage unit of the system according to any of the foregoing embodiments, for example, a hard disk or a memory of the system. The computer readable storage medium may also be an external storage device of the system, such as a plug-in hard drive, Smart Media Card (SMC), Secure Digital (SD) Card, Flash memory Card (Flash Card), etc. provided on the system. Further, the computer readable storage medium may also include both an internal storage unit and an external storage device of the system. The computer-readable storage medium is used for storing the computer program and other programs and data required by the system. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two, and that the components and steps of the examples have been described above in general functional terms in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A single-camera target positioning method is characterized by comprising the following steps:
acquiring an image to be processed by using a single camera, and performing foreground detection on the image to be processed to acquire a foreground dynamic image;
performing algorithm processing on the foreground dynamic image to highlight a target object;
carrying out contour recognition on the target object to obtain a target point;
and acquiring the position coordinates of the target point in the image.
2. The single-camera target positioning method according to claim 1, wherein the obtaining of the foreground dynamic image specifically includes:
carrying out foreground detection on the image to be processed by adopting a KNN-based background/foreground segmentation algorithm to obtain a foreground dynamic image;
the KNN-based background/foreground segmentation algorithm includes the BackgroundSubtractorKNN algorithm.
3. The single-camera target positioning method according to claim 2, wherein the foreground dynamic image is subjected to algorithm processing to highlight a target object, specifically:
converting the color space of the foreground dynamic image from an RGB space to an HSV space;
acquiring an HSV space target range according to a target color, and performing color image segmentation on the foreground dynamic image by adopting an inRange function to obtain a target color area;
and performing morphological processing on the target color area to reduce noise and highlight a target object.
4. The single-camera target positioning method according to claim 3, wherein the contour recognition is performed on the target object to obtain a target point, specifically:
detecting and identifying the contour of the target object by using a findContours function;
calculating the outline area of the target object by adopting a contourArea function;
and setting a contour threshold according to the size of the contour area of the target object, and rejecting non-target points according to the contour threshold to obtain target points.
5. A single-camera object location system, comprising:
a foreground detection module: used to acquire an image to be processed with a single camera and perform foreground detection on the image to be processed to obtain a foreground dynamic image;
an algorithm processing module: used to perform algorithm processing on the foreground dynamic image to highlight a target object;
a contour identification module: used to perform contour recognition on the target object to obtain a target point;
a position coordinate acquisition module: used to obtain the position coordinates of the target point in the image.
6. The single-camera object localization system of claim 5, wherein the foreground detection module is specifically configured to:
carrying out foreground detection on the image to be processed by adopting a KNN-based background/foreground segmentation algorithm to obtain a foreground dynamic image;
the KNN-based background/foreground segmentation algorithm includes the BackgroundSubtractorKNN algorithm.
7. The single-camera object positioning system of claim 6, wherein the algorithm processing module is specifically configured to:
converting the color space of the foreground dynamic image from an RGB space to an HSV space;
acquiring an HSV space target range according to a target color, and performing color image segmentation on the foreground dynamic image by adopting an inRange function to obtain a target color area;
and performing morphological processing on the target color region to reduce noise and highlight the target object.
8. The single-camera object locating system of claim 7, wherein the contour identification module is specifically configured to:
detecting and identifying the outline of the target object by using a findContours function;
calculating the outline area of the target object by adopting a contourArea function;
and setting a contour threshold according to the size of the contour area of the target object, and rejecting non-target points according to the contour threshold to obtain target points.
9. A single-camera object localization device comprising a processor, an input device, an output device, and a memory, the processor, the input device, the output device, and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-4.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-4.
CN202210100200.9A 2022-01-27 2022-01-27 Single-camera target positioning method, system, equipment and storage medium Pending CN114463440A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210100200.9A CN114463440A (en) 2022-01-27 2022-01-27 Single-camera target positioning method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210100200.9A CN114463440A (en) 2022-01-27 2022-01-27 Single-camera target positioning method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114463440A true CN114463440A (en) 2022-05-10

Family

ID=81411491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210100200.9A Pending CN114463440A (en) 2022-01-27 2022-01-27 Single-camera target positioning method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114463440A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523379A (en) * 2023-11-20 2024-02-06 广东海洋大学 Underwater photographic target positioning method and system based on AI
CN117523379B (en) * 2023-11-20 2024-04-30 广东海洋大学 Underwater photographic target positioning method and system based on AI

Similar Documents

Publication Publication Date Title
CN107833220B (en) Fabric defect detection method based on deep convolutional neural network and visual saliency
JP4373840B2 (en) Moving object tracking method, moving object tracking program and recording medium thereof, and moving object tracking apparatus
US20190197313A1 (en) Monitoring device
CN108898132B (en) Terahertz image dangerous article identification method based on shape context description
CN110969046B (en) Face recognition method, face recognition device and computer-readable storage medium
CN109344864B (en) Image processing method and device for dense object
US10229498B2 (en) Image processing device, image processing method, and computer-readable recording medium
CN109544583A (en) A kind of method, device and equipment for extracting Leather Image area-of-interest
CN113744316A (en) Multi-target tracking method based on deep neural network
CN113487473B (en) Method and device for adding image watermark, electronic equipment and storage medium
CN114463440A (en) Single-camera target positioning method, system, equipment and storage medium
CN116385567A (en) Method, device and medium for obtaining color card ROI coordinate information
Avazov et al. Automatic moving shadow detection and removal method for smart city environments
CN114820718A (en) Visual dynamic positioning and tracking algorithm
CN112085683B (en) Depth map credibility detection method in saliency detection
Yang et al. Cherry recognition based on color channel transform
CN109934215B (en) Identification card identification method
CN113128372A (en) Blackhead identification method and device based on image processing and terminal equipment
CN111105394B (en) Method and device for detecting characteristic information of luminous pellets
CN112348112A (en) Training method and device for image recognition model and terminal equipment
RU2440609C1 (en) Method for segmentation of bitmap images based on growing and merging regions
KR100927642B1 (en) A face edge extraction method using visual division histogram analysis
CN110188601A (en) A kind of airport remote sensing images detection method based on study
CN111325209A (en) License plate recognition method and system
WO2023007920A1 (en) Image processing method and image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination