WO2024098906A1 - Image tracking method and device for billion-level camera equipment - Google Patents

Image tracking method and device for billion-level camera equipment

Info

Publication number
WO2024098906A1
WO2024098906A1 (PCT/CN2023/116101)
Authority
WO
WIPO (PCT)
Prior art keywords
data
tracking
image data
target
original image
Prior art date
Application number
PCT/CN2023/116101
Other languages
English (en)
French (fr)
Inventor
袁潮
邓迪旻
温建伟
Original Assignee
北京拙河科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京拙河科技有限公司
Publication of WO2024098906A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • the present invention relates to the field of image processing, and in particular to an image tracking method and device for billion-level camera equipment.
  • array camera systems built around billion-level camera equipment are used for extreme sports or high-risk environment monitoring.
  • billion-level camera equipment is used for image capture and image tracking.
  • the data to be tracked that is collected by the camera is transmitted and analyzed according to a preset tracking and recognition algorithm, and the content of the image or dynamic video is used to determine whether it is a tracking item.
  • the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera; it cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, which reduces the efficiency of image recognition and image tracking judgment.
  • the embodiments of the present invention provide an image tracking method and device for billion-level camera equipment, so as to at least solve the technical problem that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment.
  • an image tracking method for billion-level camera equipment is provided, comprising: acquiring original image data and environmental sensing information; extracting dynamic tracking activation data from the environmental sensing information; identifying a tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data; and generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data.
  • the environmental sensing information includes: environmental movement information and environmental dynamic tracking activation parameters.
  • identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain the target trajectory data includes: extracting target feature data and target direction data from the dynamic tracking activation data; and computing the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula combines the original image data P, the target feature data h, the target direction data x and the event timestamp t to yield the target trajectory data.
  • generating a marking strategy based on the target trajectory data, marking the original image data, and generating tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; marking the tracking position in the original image data according to the marking strategy to obtain the tracking image data.
  • the environmental sensing information includes: environmental movement information and environmental dynamic tracking activation parameters.
  • the recognition module includes: an extraction unit, used to extract target feature data and target direction data from the dynamic tracking activation data; and a calculation unit, used to compute the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula combines the original image data P, the target feature data h, the target direction data x and the event timestamp t to yield the target trajectory data.
  • the generation module includes: a matching unit, used to match the target trajectory data and the strategy analysis matrix to obtain the marking strategy; and a labeling unit, used to mark the tracking position in the original image data according to the marking strategy to obtain the tracking image data.
  • a non-volatile storage medium is further provided, wherein the non-volatile storage medium includes a stored program, and wherein, when the program is executed, the device where the non-volatile storage medium is located is controlled to execute an image tracking method for billion-level camera equipment.
  • an electronic device is further provided, comprising a processor and a memory; the memory stores computer-readable instructions, and the processor is used to run the computer-readable instructions, wherein, when run, the computer-readable instructions execute an image tracking method for billion-level camera equipment.
  • the approach of acquiring original image data and environmental sensing information, extracting dynamic tracking activation data from the environmental sensing information, identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data, and generating a marking strategy according to the target trajectory data, marking the original image data and generating tracking image data solves the technical problem that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment.
  • FIG. 1 is a flow chart of an image tracking method for a billion-level camera device according to an embodiment of the present invention;
  • FIG. 2 is a structural block diagram of an image tracking device for billion-level camera equipment according to an embodiment of the present invention;
  • FIG. 3 is a block diagram of a terminal device for executing a method according to an embodiment of the present invention.
  • FIG. 4 is a block diagram of a storage unit for holding or carrying program code that implements a method according to an embodiment of the present invention.
  • a method embodiment of an image tracking method for billion-level camera equipment is provided. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system, for example as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one shown here.
  • FIG. 1 is a flow chart of an image tracking method for a billion-level camera device according to an embodiment of the present invention. As shown in FIG. 1 , the method includes the following steps:
  • Step S102 obtaining original image data and environmental sensing information.
  • the embodiment of the present invention first collects and acquires the original image data and the environmental sensing information of the high-precision array camera equipment, and stores the original image data as the source data for subsequent image tracking, wherein the environmental sensing information is produced by a number of sensors arranged outside the high-precision array camera system, which can collect preset environmental parameters and cooperate with the high-precision array camera system for image tracking processing.
  • the environmental sensing information includes: environmental movement information and environmental dynamic tracking activation parameters.
  • the environmental sensor may adopt a TF8 infrared motion monitoring sensor to record and collect information when an object moves beyond a threshold.
  • Step S104 extracting dynamic tracking activation data from the environmental sensing information.
  • the embodiment of the present invention extracts the dynamic tracking activation data from the environmental sensing information in order to further determine whether motion tracking judgment is needed and whether the parameters of the motion sensor need to be transmitted to the memory and CPU of the high-precision array camera system.
  • Step S106 identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data.
  • the embodiment of the present invention needs to decompose the dynamic tracking activation data and use relevant tracking parameters to judge and identify the original image data to obtain the target trajectory.
  • identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain the target trajectory data includes: extracting target feature data and target direction data from the dynamic tracking activation data; and computing the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula combines the original image data P, the target feature data h, the target direction data x and the event timestamp t to yield the target trajectory data.
  • after the dynamic tracking activation data has been obtained, it needs to be split to obtain target feature data, which is used to determine the characteristics and morphological coordinates of the target, and target direction data, which is used to determine the direction parameters of the target within the range of the original image data.
  • Step S108 generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data.
  • generating a marking strategy based on the target trajectory data, marking the original image data, and generating tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; marking the tracking position in the original image data according to the marking strategy to obtain the tracking image data.
  • a marking strategy can be generated based on the target trajectory data and the original image data can be marked; generating the tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy, and marking the tracking position in the original image data according to the marking strategy to obtain the tracking image data.
  • the above-mentioned embodiments solve the technical problem that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment.
  • FIG. 2 is a structural block diagram of an image tracking device for billion-level camera equipment according to an embodiment of the present invention. As shown in FIG. 2 , the device includes:
  • the acquisition module 20 is used to acquire original image data and environmental sensing information.
  • the embodiment of the present invention first collects and acquires the original image data and the environmental sensing information of the high-precision array camera equipment, and stores the original image data as the source data for subsequent image tracking, wherein the environmental sensing information is produced by a number of sensors arranged outside the high-precision array camera system, which can collect preset environmental parameters and cooperate with the high-precision array camera system for image tracking processing.
  • the environmental sensing information includes: environmental movement information and environmental dynamic tracking activation parameters.
  • the environmental sensor may adopt a TF8 infrared motion monitoring sensor to record and collect information when an object moves beyond a threshold.
  • the extraction module 22 is used to extract the dynamic tracking activation data in the environmental sensing information.
  • the embodiment of the present invention extracts the dynamic tracking activation data from the environmental sensing information in order to further determine whether motion tracking is needed and whether the parameters of the motion sensor need to be transmitted to the memory and CPU of the high-precision array camera system.
  • the identification module 24 is used to identify the tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data.
  • the embodiment of the present invention needs to decompose the dynamic tracking activation data and use relevant tracking parameters to judge and identify the original image data to obtain the target trajectory.
  • the recognition module includes: an extraction unit, used to extract target feature data and target direction data from the dynamic tracking activation data; and a calculation unit, used to compute the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula combines the original image data P, the target feature data h, the target direction data x and the event timestamp t to yield the target trajectory data.
  • after the dynamic tracking activation data has been obtained, it needs to be split to obtain target feature data, which is used to determine the characteristics and morphological coordinates of the target, and target direction data, which is used to determine the direction parameters of the target within the range of the original image data.
  • the generating module 26 is used to generate a marking strategy according to the target trajectory data, mark the original image data, and generate tracking image data.
  • the generation module includes: a matching unit, used to match the target trajectory data and the strategy analysis matrix to obtain the marking strategy; and a labeling unit, used to mark the tracking position in the original image data according to the marking strategy to obtain the tracking image data.
  • a marking strategy can be generated based on the target trajectory data and the original image data can be marked; the generation of the tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy, and marking the tracking position in the original image data according to the marking strategy to obtain the tracking image data.
  • the above-mentioned embodiments solve the technical problem that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment.
  • a non-volatile storage medium is further provided, wherein the non-volatile storage medium includes a stored program, wherein when the program is run, the device where the non-volatile storage medium is located is controlled to execute an image tracking method for a billion-level camera device.
  • the above method includes: acquiring raw image data and environmental sensor information; extracting dynamic tracking activation data from the environmental sensor information; identifying the tracking target in the raw image data according to the dynamic tracking activation data to obtain target trajectory data; generating a marking strategy according to the target trajectory data, marking the raw image data, and generating tracking image data.
  • the environmental sensor information includes: environmental movement information, environmental dynamic tracking activation parameters.
  • identifying the tracking target in the raw image data according to the dynamic tracking activation data to obtain target trajectory data includes: extracting target feature data and target direction data from the dynamic tracking activation data; and computing the target feature data and the target direction data together with the raw image data using a preset formula to obtain the target trajectory data, wherein the preset formula combines the raw image data P, the target feature data h, the target direction data x and the event timestamp t to yield the target trajectory data.
  • generating a marking strategy based on the target trajectory data, marking the original image data, and generating tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; marking the tracking position in the original image data according to the marking strategy to obtain the tracking image data.
  • an electronic device comprising a processor and a memory; the memory stores computer-readable instructions, and the processor is used to run the computer-readable instructions, wherein the computer-readable instructions execute an image tracking method for billion-level camera equipment when running.
  • the above method includes: acquiring raw image data and environmental sensor information; extracting dynamic tracking activation data from the environmental sensor information; identifying the tracking target in the raw image data according to the dynamic tracking activation data to obtain target trajectory data; generating a marking strategy according to the target trajectory data, marking the raw image data, and generating tracking image data.
  • the environmental sensor information includes: environmental movement information, environmental dynamic tracking activation parameters.
  • identifying the tracking target in the raw image data according to the dynamic tracking activation data to obtain target trajectory data includes: extracting target feature data and target direction data from the dynamic tracking activation data; and computing the target feature data and the target direction data together with the raw image data using a preset formula to obtain the target trajectory data, wherein the preset formula combines the raw image data P, the target feature data h, the target direction data x and the event timestamp t to yield the target trajectory data.
  • generating a marking strategy based on the target trajectory data, marking the original image data, and generating tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; marking the tracking position in the original image data according to the marking strategy to obtain the tracking image data.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are only exemplary; for example, the division of the units may be a division by logical function, and there may be other division methods in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiment.
  • FIG3 is a schematic diagram of the hardware structure of a terminal device provided in an embodiment of the present application.
  • the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33 and at least one communication bus 34.
  • the communication bus 34 is used to realize the communication connection between the components.
  • the memory 33 may include a high-speed RAM and may also include a non-volatile memory (NVM), such as at least one disk storage.
  • Various programs can be stored in the memory 33 to complete various processing functions and implement the method steps of this embodiment.
  • the processor 31 may be implemented as a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a controller, a microcontroller, a microprocessor or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 via a wired or wireless connection.
  • the input device 30 may include multiple input devices, for example, it may include at least one of a user interface for users, a device interface for devices, a programmable interface for software, a camera, and a sensor.
  • the device interface for devices may be a wired interface for data transmission between devices, or a hardware insertion interface for data transmission between devices (such as a USB interface, a serial port, etc.);
  • the user interface for users may be, for example, control buttons for the user, a voice input device for receiving voice input, and a touch-sensing device for receiving user touch input (such as a touch screen with a touch-sensing function, a touch pad, etc.);
  • the programmable interface for the software may be, for example, an entry for users to edit or modify programs, such as an input pin interface or an input interface of a chip;
  • the transceiver may be a radio frequency transceiver chip with communication function, a baseband processing chip, and a transceiver antenna, etc.
  • the processor of the terminal device includes functions for executing each module of the data processing device in each device.
  • the specific functions and technical effects can be referred to the above embodiments and will not be repeated here.
  • FIG4 is a schematic diagram of the hardware structure of a terminal device provided in another embodiment of the present application.
  • the terminal device of this embodiment includes a processor 41 and a memory 42 .
  • the processor 41 executes the computer program code stored in the memory 42 to implement the method in the above embodiment.
  • the memory 42 is configured to store various types of data to support operations on the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, etc.
  • the memory 42 may include random access memory (RAM) and may also include non-volatile memory, such as at least one disk storage.
  • the processor 41 is provided in the processing component 40.
  • the terminal device may further include: a communication component 43, a power component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48.
  • the specific components included in the terminal device are set according to actual needs, and this embodiment does not limit this.
  • the processing component 40 generally controls the overall operation of the terminal device.
  • the processing component 40 may include one or more processors 41 to execute instructions to complete all or part of the steps of the above method.
  • the processing component 40 may include one or more modules to facilitate the interaction between the processing component 40 and other components.
  • the processing component 40 may include a multimedia module to facilitate the interaction between the multimedia component 45 and the processing component 40.
  • the power supply component 44 provides power to various components of the terminal device.
  • the power supply component 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to the terminal device.
  • the multimedia component 45 includes a display screen that provides an output interface between the terminal device and the user.
  • the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
  • the audio component 46 is configured to output and/or input audio signals.
  • the audio component 46 includes a microphone (MIC), and when the terminal device is in an operating mode, such as a speech recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal can be further stored in the memory 42 or sent via the communication component 43.
  • the audio component 46 also includes a speaker for outputting audio signals.
  • the input/output interface 47 provides an interface between the processing component 40 and the peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to, volume buttons, start buttons, and lock buttons.
  • the sensor component 48 includes one or more sensors for providing status assessment of various aspects for the terminal device.
  • the sensor assembly 48 can detect the open/closed state of the terminal device, the relative positioning of the components, the presence or absence of contact between the user and the terminal device.
  • the sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device.
  • the sensor assembly 48 may also include a camera, etc.
  • the communication component 43 is configured to facilitate wired or wireless communication between the terminal device and other devices.
  • the terminal device can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log in to the GPRS network and establish communication with the service end through the Internet.
  • the communication component 43, the audio component 46, the input/output interface 47, and the sensor component 48 involved in the embodiment of FIG. 4 can all be used as implementations of the input device in the embodiment of FIG. 3.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units can be a logical function division. There may be other division methods in actual implementation.
  • multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
  • another point is that the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method described in each embodiment of the present invention.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present invention discloses an image tracking method and device for billion-level camera equipment. The method includes: acquiring original image data and environmental sensing information; extracting dynamic tracking activation data from the environmental sensing information; identifying a tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data; and generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data. The present invention solves the technical problem that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment.

Description

Image tracking method and device for billion-level camera equipment
Technical Field
The present invention relates to the field of image processing, and in particular to an image tracking method and device for billion-level camera equipment.
Background Art
With the continuous development of intelligent technology, intelligent devices are used more and more in people's daily life, work and study; the use of intelligent technology improves the quality of life and increases the efficiency of study and work.
At present, array camera systems built around billion-level camera equipment are applied to extreme sports or high-risk environment monitoring. When billion-level camera equipment is used for image capture and image tracking, the data to be tracked that is collected by the cameras is transmitted and analyzed according to a preset tracking and recognition algorithm, and the content of the image or dynamic video is used to determine whether it is a tracking item. However, the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera; it cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, which reduces the efficiency of image recognition and image tracking judgment.
No effective solution to the above problem has been proposed so far.
Summary of the Invention
The embodiments of the present invention provide an image tracking method and device for billion-level camera equipment, so as to at least solve the technical problem that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment.
According to one aspect of the embodiments of the present invention, an image tracking method for billion-level camera equipment is provided, comprising: acquiring original image data and environmental sensing information; extracting dynamic tracking activation data from the environmental sensing information; identifying a tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data; and generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data.
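The four operations of this aspect can be read as a simple per-frame pipeline. The sketch below is only a minimal illustration of that flow and is not part of the patented disclosure; the function names, the data layouts and the choice of Python with NumPy are assumptions made for the example.

```python
import numpy as np

def extract_activation_data(sensing_info: dict) -> dict | None:
    """Pull the dynamic tracking activation data out of the environmental
    sensing information; return None when nothing was activated."""
    return sensing_info.get("activation")

def identify_target(raw_image: np.ndarray, activation: dict) -> list[tuple[int, int]]:
    """Placeholder for the identification step: use the activation data to
    produce a trajectory (here simply the reported positions)."""
    return activation.get("positions", [])

def generate_marking_strategy(trajectory: list[tuple[int, int]]) -> dict:
    """Placeholder strategy lookup; the described method matches the trajectory
    against a strategy analysis matrix."""
    return {"color": (0, 255, 0), "points": trajectory}

def mark_image(raw_image: np.ndarray, strategy: dict) -> np.ndarray:
    """Mark the tracking positions on a copy of the original image."""
    tracked = raw_image.copy()
    for y, x in strategy["points"]:
        tracked[max(0, y - 1):y + 2, max(0, x - 1):x + 2] = strategy["color"]
    return tracked

def track_frame(raw_image: np.ndarray, sensing_info: dict) -> np.ndarray | None:
    """End-to-end flow of the four steps for a single frame."""
    activation = extract_activation_data(sensing_info)   # extraction step
    if activation is None:
        return None                                      # nothing to track
    trajectory = identify_target(raw_image, activation)  # identification step
    strategy = generate_marking_strategy(trajectory)     # strategy generation
    return mark_image(raw_image, strategy)               # tracking image data
```

For a frame stream, track_frame would be called once per captured frame, with frames passed through unchanged when no activation data is present.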
Optionally, the environmental sensing information includes environmental movement information and environmental dynamic tracking activation parameters.
Optionally, identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain the target trajectory data includes: extracting target feature data and target direction data from the dynamic tracking activation data; and computing the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula is:
where the result is the target trajectory data, P is the original image data, h is the target feature data, x is the target direction data, and t is the event timestamp.
Optionally, generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; and marking the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
According to another aspect of the embodiments of the present invention, an image tracking device for billion-level camera equipment is further provided, comprising: an acquisition module, configured to acquire original image data and environmental sensing information; an extraction module, configured to extract dynamic tracking activation data from the environmental sensing information; an identification module, configured to identify a tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data; and a generation module, configured to generate a marking strategy according to the target trajectory data, mark the original image data, and generate tracking image data.
Optionally, the environmental sensing information includes environmental movement information and environmental dynamic tracking activation parameters.
Optionally, the identification module includes: an extraction unit, configured to extract target feature data and target direction data from the dynamic tracking activation data; and a calculation unit, configured to compute the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula is:
where the result is the target trajectory data, P is the original image data, h is the target feature data, x is the target direction data, and t is the event timestamp.
Optionally, the generation module includes: a matching unit, configured to match the target trajectory data with a strategy analysis matrix to obtain the marking strategy; and a labeling unit, configured to mark the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
According to another aspect of the embodiments of the present invention, a non-volatile storage medium is further provided. The non-volatile storage medium includes a stored program, wherein, when the program runs, the device where the non-volatile storage medium is located is controlled to execute an image tracking method for billion-level camera equipment.
According to another aspect of the embodiments of the present invention, an electronic device is further provided, comprising a processor and a memory. The memory stores computer-readable instructions, and the processor is configured to run the computer-readable instructions, wherein, when run, the computer-readable instructions execute an image tracking method for billion-level camera equipment.
In the embodiments of the present invention, by acquiring original image data and environmental sensing information, extracting dynamic tracking activation data from the environmental sensing information, identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data, and generating a marking strategy according to the target trajectory data, marking the original image data and generating tracking image data, the following technical problem in the prior art is solved: the image tracking process only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present invention and constitute a part of the present application. The exemplary embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
FIG. 1 is a flowchart of an image tracking method for billion-level camera equipment according to an embodiment of the present invention;
FIG. 2 is a structural block diagram of an image tracking device for billion-level camera equipment according to an embodiment of the present invention;
FIG. 3 is a block diagram of a terminal device for executing the method according to an embodiment of the present invention;
FIG. 4 is a block diagram of a storage unit for holding or carrying program code that implements the method according to an embodiment of the present invention.
Detailed Description of the Embodiments
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second" and the like in the specification, claims and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented in an order other than the one illustrated or described here. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to such a process, method, product or device.
According to an embodiment of the present invention, a method embodiment of an image tracking method for billion-level camera equipment is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system, for example as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one shown here.
Embodiment 1
FIG. 1 is a flowchart of an image tracking method for billion-level camera equipment according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps.
Step S102: acquiring original image data and environmental sensing information.
Specifically, in order to solve the technical problem that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment, the embodiment of the present invention first collects and acquires the original image data and the environmental sensing information of the high-precision array camera equipment, and stores the original image data as the source data for subsequent image tracking. The environmental sensing information is produced by a number of sensors arranged outside the high-precision array camera system, which can collect preset environmental parameters and cooperate with the high-precision array camera system for image tracking processing.
Optionally, the environmental sensing information includes environmental movement information and environmental dynamic tracking activation parameters.
Specifically, the environmental sensor may be a TF8 infrared motion monitoring sensor, which records and collects information when an object moves beyond a threshold.
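As a concrete illustration of how such threshold-gated sensing information might be represented, the sketch below builds an environmental-sensing record only when the measured displacement exceeds a threshold. The TF8 sensor is named in the description, but its interface is not disclosed here, so the field names, the threshold value and the read_displacement callback are assumptions made for the example.

```python
from dataclasses import dataclass, field
import time

MOVEMENT_THRESHOLD = 0.5  # assumed threshold on displacement between readings

@dataclass
class EnvironmentalSensing:
    """Environmental sensing information: movement information plus the
    dynamic tracking activation parameters described in the embodiment."""
    movement: float
    activation_params: dict = field(default_factory=dict)

def read_environmental_sensing(read_displacement) -> EnvironmentalSensing | None:
    """Record and collect information only when an object moves beyond the
    threshold, mirroring the behaviour attributed to the motion sensor."""
    displacement = read_displacement()        # assumed driver callback
    if displacement <= MOVEMENT_THRESHOLD:
        return None                           # no activation, nothing recorded
    return EnvironmentalSensing(
        movement=displacement,
        activation_params={"timestamp": time.time(), "displacement": displacement},
    )
```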
Step S104: extracting the dynamic tracking activation data from the environmental sensing information.
Specifically, after the above original image data and environmental sensing information have been acquired, the embodiment of the present invention extracts the dynamic tracking activation data from the environmental sensing information in order to further determine whether motion tracking judgment is needed and whether the parameters of the motion sensor need to be transmitted to the memory and CPU of the high-precision array camera system.
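A minimal sketch of that gating decision follows: whether tracking is run and whether the sensor parameters are forwarded is decided purely by the presence of activation data. The forward_to_camera_system callback and the dictionary keys are assumptions, not an interface disclosed by the application.

```python
def should_run_tracking(sensing_info: dict) -> bool:
    """Motion tracking judgment is only needed when the environmental sensing
    information actually carries dynamic tracking activation data."""
    return bool(sensing_info.get("activation"))

def forward_activation(sensing_info: dict, forward_to_camera_system) -> bool:
    """Transmit the motion-sensor parameters towards the array camera system's
    memory/CPU only when tracking is required; report whether this happened."""
    if not should_run_tracking(sensing_info):
        return False
    forward_to_camera_system(sensing_info["activation"])  # assumed transport call
    return True
```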
Step S106: identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data.
Specifically, in order to process the original image data according to the dynamic tracking activation data from the sensors and to extract and identify the target motion trajectory in the original image data, the embodiment of the present invention decomposes the dynamic tracking activation data and uses the relevant tracking parameters to judge and identify the original image data, thereby obtaining the target trajectory.
Optionally, identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain the target trajectory data includes: extracting target feature data and target direction data from the dynamic tracking activation data; and computing the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula is:
where the result is the target trajectory data, P is the original image data, h is the target feature data, x is the target direction data, and t is the event timestamp.
Specifically, after the dynamic tracking activation data has been obtained, it needs to be split to obtain the target feature data, which is used to determine the characteristics and morphological coordinates of the target, and the target direction data, which is used to determine the direction parameters of the target within the range of the original image data.
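The splitting described above can be sketched as follows. The dictionary keys (features, coordinates, direction) are assumed for illustration, and the estimate_trajectory helper is only an illustrative stand-in: the application's preset formula is not reproduced in this text, so the combination below is not the patented formula.

```python
import numpy as np

def split_activation_data(activation: dict) -> tuple[dict, dict]:
    """Split dynamic tracking activation data into target feature data
    (characteristics and morphological coordinates) and target direction data
    (direction parameters within the image range)."""
    feature_data = {
        "features": activation.get("features", []),
        "coordinates": activation.get("coordinates", (0, 0)),
    }
    direction_data = {
        "direction": activation.get("direction", (0.0, 0.0)),  # (dy, dx) per step
    }
    return feature_data, direction_data

def estimate_trajectory(raw_image: np.ndarray, feature_data: dict,
                        direction_data: dict, steps: int = 10) -> list[tuple[int, int]]:
    """Illustrative stand-in for the preset formula: start at the morphological
    coordinates and advance along the direction vector, clipped to the image."""
    h, w = raw_image.shape[:2]
    y, x = feature_data["coordinates"]
    dy, dx = direction_data["direction"]
    trajectory = []
    for _ in range(steps):
        trajectory.append((int(np.clip(y, 0, h - 1)), int(np.clip(x, 0, w - 1))))
        y, x = y + dy, x + dx
    return trajectory
```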
Step S108: generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data.
Optionally, generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; and marking the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
Specifically, in order to mark the target trajectory data in the original image data so that it can later be analyzed and displayed for the user, a marking strategy may be generated according to the target trajectory data and the original image data may be marked. Generating the tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; and marking the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
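A sketch of the marking step is given below using OpenCV drawing primitives. Representing the strategy analysis matrix as a small lookup table keyed by trajectory length, and the specific colours and shapes, are assumptions made for illustration; the application does not specify the contents of the matrix.

```python
import cv2
import numpy as np

# Assumed stand-in for the strategy analysis matrix: trajectory length -> marking style.
STRATEGY_MATRIX = {
    "short": {"color": (0, 0, 255), "thickness": 2},   # brief movement: red marks
    "long":  {"color": (0, 255, 0), "thickness": 3},   # sustained movement: green marks
}

def match_marking_strategy(trajectory: list[tuple[int, int]]) -> dict:
    """Match the target trajectory data against the (assumed) strategy matrix."""
    key = "long" if len(trajectory) >= 10 else "short"
    return STRATEGY_MATRIX[key]

def mark_tracking_positions(raw_image: np.ndarray,
                            trajectory: list[tuple[int, int]],
                            strategy: dict) -> np.ndarray:
    """Mark the tracking positions in a copy of the original image to produce
    the tracking image data."""
    tracked = raw_image.copy()
    for (y0, x0), (y1, x1) in zip(trajectory, trajectory[1:]):
        cv2.line(tracked, (x0, y0), (x1, y1), strategy["color"], strategy["thickness"])
    if trajectory:
        y, x = trajectory[-1]
        cv2.circle(tracked, (x, y), 8, strategy["color"], strategy["thickness"])
    return tracked
```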
Through the above embodiment, the technical problem is solved that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment.
Embodiment 2
FIG. 2 is a structural block diagram of an image tracking device for billion-level camera equipment according to an embodiment of the present invention. As shown in FIG. 2, the device includes:
an acquisition module 20, configured to acquire original image data and environmental sensing information.
Specifically, in order to solve the technical problem that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment, the embodiment of the present invention first collects and acquires the original image data and the environmental sensing information of the high-precision array camera equipment, and stores the original image data as the source data for subsequent image tracking. The environmental sensing information is produced by a number of sensors arranged outside the high-precision array camera system, which can collect preset environmental parameters and cooperate with the high-precision array camera system for image tracking processing.
Optionally, the environmental sensing information includes environmental movement information and environmental dynamic tracking activation parameters.
Specifically, the environmental sensor may be a TF8 infrared motion monitoring sensor, which records and collects information when an object moves beyond a threshold.
an extraction module 22, configured to extract the dynamic tracking activation data from the environmental sensing information.
Specifically, after the above original image data and environmental sensing information have been acquired, the embodiment of the present invention extracts the dynamic tracking activation data from the environmental sensing information in order to further determine whether motion tracking judgment is needed and whether the parameters of the motion sensor need to be transmitted to the memory and CPU of the high-precision array camera system.
an identification module 24, configured to identify the tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data.
Specifically, in order to process the original image data according to the dynamic tracking activation data from the sensors and to extract and identify the target motion trajectory in the original image data, the embodiment of the present invention decomposes the dynamic tracking activation data and uses the relevant tracking parameters to judge and identify the original image data, thereby obtaining the target trajectory.
Optionally, the identification module includes: an extraction unit, configured to extract target feature data and target direction data from the dynamic tracking activation data; and a calculation unit, configured to compute the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula is:
where the result is the target trajectory data, P is the original image data, h is the target feature data, x is the target direction data, and t is the event timestamp.
Specifically, after the dynamic tracking activation data has been obtained, it needs to be split to obtain the target feature data, which is used to determine the characteristics and morphological coordinates of the target, and the target direction data, which is used to determine the direction parameters of the target within the range of the original image data.
a generation module 26, configured to generate a marking strategy according to the target trajectory data, mark the original image data, and generate tracking image data.
Optionally, the generation module includes: a matching unit, configured to match the target trajectory data with a strategy analysis matrix to obtain the marking strategy; and a labeling unit, configured to mark the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
Specifically, in order to mark the target trajectory data in the original image data so that it can later be analyzed and displayed for the user, a marking strategy may be generated according to the target trajectory data and the original image data may be marked. Generating the tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; and marking the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
Through the above embodiment, the technical problem is solved that the image tracking process in the prior art only performs image-angle recognition and image-content analysis directly on the original image data collected by the camera, cannot exploit the dynamic parameters of the surrounding environment through diversified sensor data, and therefore reduces the efficiency of image recognition and image tracking judgment.
According to another aspect of the embodiments of the present invention, a non-volatile storage medium is further provided. The non-volatile storage medium includes a stored program, wherein, when the program runs, the device where the non-volatile storage medium is located is controlled to execute an image tracking method for billion-level camera equipment.
Specifically, the above method includes: acquiring original image data and environmental sensing information; extracting dynamic tracking activation data from the environmental sensing information; identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data; and generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data. Optionally, the environmental sensing information includes environmental movement information and environmental dynamic tracking activation parameters. Optionally, identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain the target trajectory data includes: extracting target feature data and target direction data from the dynamic tracking activation data; and computing the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula is:
where the result is the target trajectory data, P is the original image data, h is the target feature data, x is the target direction data, and t is the event timestamp. Optionally, generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; and marking the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
According to another aspect of the embodiments of the present invention, an electronic device is further provided, comprising a processor and a memory. The memory stores computer-readable instructions, and the processor is configured to run the computer-readable instructions, wherein, when run, the computer-readable instructions execute an image tracking method for billion-level camera equipment.
Specifically, the above method includes: acquiring original image data and environmental sensing information; extracting dynamic tracking activation data from the environmental sensing information; identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data; and generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data. Optionally, the environmental sensing information includes environmental movement information and environmental dynamic tracking activation parameters. Optionally, identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain the target trajectory data includes: extracting target feature data and target direction data from the dynamic tracking activation data; and computing the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula is:
where the result is the target trajectory data, P is the original image data, h is the target feature data, x is the target direction data, and t is the event timestamp. Optionally, generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data includes: matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy; and marking the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For a part that is not described in detail in one embodiment, reference may be made to the relevant description of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units may be a division by logical function, and there may be other division methods in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, FIG. 3 is a schematic diagram of the hardware structure of a terminal device provided in an embodiment of the present application. As shown in FIG. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33 and at least one communication bus 34. The communication bus 34 is used to realize the communication connection between the components. The memory 33 may include a high-speed RAM and may also include a non-volatile memory (NVM), such as at least one disk storage; various programs may be stored in the memory 33 to complete various processing functions and implement the method steps of this embodiment.
Optionally, the processor 31 may be implemented, for example, as a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, a microprocessor or another electronic component, and the processor 31 is coupled to the above input device 30 and output device 32 through a wired or wireless connection.
Optionally, the input device 30 may include a variety of input devices, for example at least one of a user-facing user interface, a device-facing device interface, a programmable interface for software, a camera and a sensor. Optionally, the device-facing device interface may be a wired interface for data transmission between devices, or a hardware plug-in interface for data transmission between devices (for example a USB interface or a serial port); optionally, the user-facing user interface may be, for example, control buttons for the user, a voice input device for receiving voice input, and a touch-sensing device for receiving user touch input (for example a touch screen or touch pad with a touch-sensing function); optionally, the programmable interface for software may be, for example, an entry for the user to edit or modify a program, such as an input pin interface or an input interface of a chip; optionally, the above transceiver may be a radio-frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna and the like. An audio input device such as a microphone can receive voice data. The output device 32 may include output devices such as a display and a loudspeaker.
In this embodiment, the processor of the terminal device includes functions for executing each module of the data processing device in each device; the specific functions and technical effects can be found in the above embodiments and are not repeated here.
FIG. 4 is a schematic diagram of the hardware structure of a terminal device provided in another embodiment of the present application. FIG. 4 is a specific embodiment of FIG. 3 in an implementation process. As shown in FIG. 4, the terminal device of this embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the method in the above embodiments.
The memory 42 is configured to store various types of data to support the operation of the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures and videos. The memory 42 may include a random access memory (RAM) and may also include a non-volatile memory, such as at least one disk storage.
Optionally, the processor 41 is provided in a processing component 40. The terminal device may further include a communication component 43, a power component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The components actually included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. The processing component 40 may include one or more processors 41 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 40 may include one or more modules to facilitate interaction between the processing component 40 and other components; for example, the processing component 40 may include a multimedia module to facilitate interaction between the multimedia component 45 and the processing component 40.
The power component 44 supplies power to the various components of the terminal device. The power component 44 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the terminal device.
The multimedia component 45 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a microphone (MIC); when the terminal device is in an operating mode such as a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 42 or sent via the communication component 43. In some embodiments, the audio component 46 further includes a loudspeaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing component 40 and peripheral interface modules, which may be click wheels, buttons and the like. These buttons may include, but are not limited to, a volume button, a start button and a lock button.
The sensor component 48 includes one or more sensors for providing status assessments of various aspects of the terminal device. For example, the sensor component 48 can detect the on/off state of the terminal device, the relative positioning of components, and the presence or absence of contact between the user and the terminal device. The sensor component 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor component 48 may also include a camera and the like.
The communication component 43 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log in to a GPRS network and establish communication with a server through the Internet.
It can be seen from the above that the communication component 43, the audio component 46, the input/output interface 47 and the sensor component 48 involved in the embodiment of FIG. 4 can all serve as implementations of the input device in the embodiment of FIG. 3.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units may be a division by logical function, and there may be other division methods in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.
The above are only preferred embodiments of the present invention. It should be pointed out that those of ordinary skill in the art may also make several improvements and modifications without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (4)

  1. An image tracking method for billion-level camera equipment, characterized by comprising:
    acquiring original image data and environmental sensing information;
    extracting dynamic tracking activation data from the environmental sensing information;
    identifying a tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data;
    generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data;
    wherein the environmental sensing information includes environmental movement information and environmental dynamic tracking activation parameters;
    wherein identifying the tracking target in the original image data according to the dynamic tracking activation data to obtain the target trajectory data comprises:
    extracting target feature data and target direction data from the dynamic tracking activation data;
    computing the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula is:
    where the result is the target trajectory data, P is the original image data, h is the target feature data, x is the target direction data, and t is the event timestamp;
    wherein generating a marking strategy according to the target trajectory data, marking the original image data, and generating tracking image data comprises:
    matching the target trajectory data with a strategy analysis matrix to obtain the marking strategy;
    marking the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
  2. An image tracking device for billion-level camera equipment, characterized by comprising:
    an acquisition module, configured to acquire original image data and environmental sensing information;
    an extraction module, configured to extract dynamic tracking activation data from the environmental sensing information;
    an identification module, configured to identify a tracking target in the original image data according to the dynamic tracking activation data to obtain target trajectory data;
    a generation module, configured to generate a marking strategy according to the target trajectory data, mark the original image data, and generate tracking image data;
    wherein the environmental sensing information includes environmental movement information and environmental dynamic tracking activation parameters;
    wherein the identification module comprises:
    an extraction unit, configured to extract target feature data and target direction data from the dynamic tracking activation data;
    a calculation unit, configured to compute the target feature data and the target direction data together with the original image data using a preset formula to obtain the target trajectory data, wherein the preset formula is:
    where the result is the target trajectory data, P is the original image data, h is the target feature data, x is the target direction data, and t is the event timestamp;
    wherein the generation module comprises:
    a matching unit, configured to match the target trajectory data with a strategy analysis matrix to obtain the marking strategy;
    a labeling unit, configured to mark the tracking positions in the original image data according to the marking strategy to obtain the tracking image data.
  3. A non-volatile storage medium, characterized in that the non-volatile storage medium includes a stored program, wherein, when the program runs, the device where the non-volatile storage medium is located is controlled to execute the method according to claim 1.
  4. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores computer-readable instructions, the processor is configured to run the computer-readable instructions, and, when run, the computer-readable instructions execute the method according to claim 1.
PCT/CN2023/116101 2022-11-07 2023-08-31 Image tracking method and device for billion-level camera equipment WO2024098906A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211382755.3A CN115623336B (zh) 2022-11-07 2022-11-07 一种亿级摄像设备的图像跟踪方法及装置
CN202211382755.3 2022-11-07

Publications (1)

Publication Number Publication Date
WO2024098906A1 true WO2024098906A1 (zh) 2024-05-16

Family

ID=84878988

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/116101 WO2024098906A1 (zh) 2022-11-07 2023-08-31 一种亿级摄像设备的图像跟踪方法及装置

Country Status (2)

Country Link
CN (1) CN115623336B (zh)
WO (1) WO2024098906A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115623336B (zh) * 2022-11-07 2023-06-30 北京拙河科技有限公司 一种亿级摄像设备的图像跟踪方法及装置
CN116543013B (zh) * 2023-04-19 2024-06-14 北京拙河科技有限公司 一种球类运动轨迹分析方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170019574A1 (en) * 2015-07-17 2017-01-19 Amaryllo International B.V. Dynamic tracking device
CN111345029A (zh) * 2019-05-30 2020-06-26 深圳市大疆创新科技有限公司 一种目标追踪方法、装置、可移动平台及存储介质
CN113569645A (zh) * 2021-06-28 2021-10-29 广东技术师范大学 基于图像检测的轨迹生成方法、装置及系统
CN113841380A (zh) * 2020-10-20 2021-12-24 深圳市大疆创新科技有限公司 确定目标跟随策略的方法、装置、系统、设备及存储介质
CN114187327A (zh) * 2021-12-14 2022-03-15 西安领创电子科技有限公司 目标识别跟踪方法及装置、计算机可读介质、电子设备
CN115623336A (zh) * 2022-11-07 2023-01-17 北京拙河科技有限公司 一种亿级摄像设备的图像跟踪方法及装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107004271B (zh) * 2016-08-22 2021-01-15 深圳前海达闼云端智能科技有限公司 显示方法、装置、电子设备、计算机程序产品和存储介质
CN111739054A (zh) * 2019-03-25 2020-10-02 北京京东尚科信息技术有限公司 目标跟踪标记方法、系统、电子设备及可读存储介质
CN110399039A (zh) * 2019-07-03 2019-11-01 武汉子序科技股份有限公司 一种基于眼动跟踪的虚实场景融合方法
CN110910422A (zh) * 2019-11-13 2020-03-24 北京环境特性研究所 目标跟踪方法、装置、电子设备和可读存储介质
CN113869177A (zh) * 2021-09-18 2021-12-31 温州大学大数据与信息技术研究院 一种用于跟踪多目标的方法及装置
CN114049382B (zh) * 2022-01-12 2023-04-18 华砺智行(武汉)科技有限公司 一种智能网联环境下目标融合跟踪方法、系统和介质


Also Published As

Publication number Publication date
CN115623336B (zh) 2023-06-30
CN115623336A (zh) 2023-01-17

Similar Documents

Publication Publication Date Title
WO2024098906A1 (zh) 一种亿级摄像设备的图像跟踪方法及装置
US20150149925A1 (en) Emoticon generation using user images and gestures
TWI470549B (zh) A method of using an image recognition guide to install an application, and an electronic device
CN104731868A (zh) 拦截广告的方法及装置
CN109753202B (zh) 一种截屏方法和移动终端
CN104077563A (zh) 人脸识别方法和装置
CN105335714A (zh) 照片处理方法、装置和设备
US20160105620A1 (en) Methods, apparatus, and terminal devices of image processing
CN105426904A (zh) 照片处理方法、装置和设备
CN110007836B (zh) 一种账单生成方法及移动终端
CN115409869B (zh) 一种基于mac跟踪的雪场轨迹分析方法及装置
CN114866702A (zh) 一种基于多辅联动摄像技术的边境监控采集方法及装置
CN111353422B (zh) 信息提取方法、装置及电子设备
CN115914819B (zh) 一种基于正交分解算法的画面捕捉方法及装置
CN116723298B (zh) 一种提升相机端传输效率的方法及装置
CN116579964B (zh) 一种动帧渐入渐出动态融合方法及装置
CN116579965B (zh) 一种多图像融合方法及装置
CN115460210B (zh) 一种基于大数据智能平台分析方法及装置
CN116758165B (zh) 一种基于阵列相机的图像标定方法及装置
CN116468883B (zh) 一种高精度图像数据体积雾识别方法及装置
CN116088580B (zh) 一种飞行物体跟踪方法及装置
CN116744102B (zh) 一种基于反馈调节的球机跟踪方法及装置
CN116723419B (zh) 一种用于十亿级高精度相机的采集速度优化方法及装置
CN116485841A (zh) 一种基于多广角的运动规则识别方法及装置
CN115984333A (zh) 一种飞机目标平滑跟踪方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23887601

Country of ref document: EP

Kind code of ref document: A1