CN111460972B - Object tracking method, device and storage medium - Google Patents


Info

Publication number
CN111460972B
CN111460972B (application CN202010235806.4A)
Authority
CN
China
Prior art keywords
rotation angle
image
tracking
image acquisition
acquisition assembly
Prior art date
Legal status
Active
Application number
CN202010235806.4A
Other languages
Chinese (zh)
Other versions
CN111460972A (en)
Inventor
杨大鹏
罗灿锋
张祖良
王峰
刘国正
陆国煜
过全
周端继
Current Assignee
Suzhou Keda Technology Co Ltd
Original Assignee
Suzhou Keda Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Keda Technology Co Ltd
Priority to CN202010235806.4A
Publication of CN111460972A
Application granted
Publication of CN111460972B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The application relates to an object tracking method, device and storage medium, belonging to the field of computer technology. The method includes: acquiring the pixel coordinates of a target object in a panoramic image captured by a first image acquisition assembly; determining the rotation angle of a second image acquisition assembly based on those pixel coordinates; and controlling the second image acquisition assembly to rotate by that angle so that it tracks and captures a tracking image of the target object. This realizes target-object tracking and addresses the low tracking accuracy caused by random errors introduced when tracking devices are manufactured: because the target object is located by combining the positioning results of the panoramic camera and the tracking device, rather than by a single tracking device alone, tracking and positioning accuracy is improved.

Description

Object tracking method, device and storage medium
Technical Field
The application relates to an object tracking method, device and storage medium, and belongs to the field of computer technology.
Background
In a video conference, the speaker usually needs to be tracked and filmed as the main subject, so an intelligent tracking camera must be able to track and position the subject accurately.
In a typical object tracking method, a tracking camera captures an image, and if a target object appears in the image, the target object is tracked and positioned.
However, random errors introduced when the tracking camera is manufactured can make the tracking of the target object inaccurate.
Disclosure of Invention
The application provides an object tracking method, device and storage medium that track a specific object and address the low accuracy in tracking a target object caused by random errors introduced when tracking devices are manufactured. The application provides the following technical solutions:
in a first aspect, an object tracking method is provided, the method including:
acquiring pixel coordinates of a target object in a panoramic image acquired by a first image acquisition assembly;
determining a rotation angle of a second image acquisition assembly based on pixel coordinates in the panoramic image;
and controlling the second image acquisition assembly to rotate according to the rotation angle, so that the second image acquisition assembly tracks and captures a tracking image of the target object.
In a second aspect, an object tracking apparatus is provided, the apparatus comprising:
the coordinate acquisition module is used for acquiring pixel coordinates of the target object in the panoramic image acquired by the first image acquisition assembly;
the angle determining module is used for determining the rotation angle of the second image acquisition assembly based on the pixel coordinates in the panoramic image;
and the object tracking module is used for controlling the second image acquisition assembly to rotate according to the rotation angle so as to enable the second image acquisition assembly to track and acquire a tracking image of the target object.
In a third aspect, an object tracking apparatus is provided, the apparatus comprising a processor and a memory; the memory has stored therein a program that is loaded and executed by the processor to implement the object tracking method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which a program is stored, the program being loaded and executed by a processor to implement the object tracking method of the first aspect.
The beneficial effects of this application are as follows. The pixel coordinates of the target object in the panoramic image captured by the first image acquisition assembly are acquired; the rotation angle of the second image acquisition assembly is determined from those pixel coordinates; and the second image acquisition assembly is controlled to rotate by that angle so that it tracks and captures a tracking image of the target object. This realizes target-object tracking and addresses the low tracking accuracy caused by random errors introduced when tracking devices are manufactured: the target object is located by combining the positioning results of the panoramic camera and the tracking device, rather than by a single tracking device alone, which improves tracking and positioning accuracy.
In addition, a recognition model is trained using the sample pixel coordinates of a training object as seen by the first image acquisition assembly and the actual rotation angle corresponding to each sample pixel coordinate, yielding the model parameters of the recognition model for the second image acquisition assembly. When this recognition model is used to track the target object, the computed rotation angle of the second image acquisition assembly agrees more closely with the expected rotation angle, reducing the positioning error of the second image acquisition assembly and improving tracking and positioning accuracy.
The foregoing is only an overview of the technical solutions of the present application. To make these solutions clearer and implementable according to the content of the description, the preferred embodiments of the present application are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of an object tracking system according to an embodiment of the present application;
FIG. 2 is a flow diagram of an object tracking method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of determining an angle of a human face relative to a first image acquisition assembly according to one embodiment of the present application;
FIG. 4 is a schematic diagram of the relationship between the horizontal angle and the horizontal pixel (x-axis) coordinate of the training object in the panoramic image provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a relationship between a vertical angle and a vertical pixel (y-axis) coordinate of a training object in a panoramic image according to an embodiment of the present application;
FIG. 6 is a block diagram of an object tracking device provided by an embodiment of the present application;
fig. 7 is a block diagram of an object tracking apparatus according to an embodiment of the present application.
Detailed Description
The present application is described in detail below with reference to the accompanying drawings and embodiments. The following examples illustrate the application but do not limit its scope.
Fig. 1 is a schematic structural diagram of an object tracking system according to an embodiment of the present application, and as shown in fig. 1, the system at least includes: a first image acquisition assembly 110, a second image acquisition assembly 120, and a control assembly 130.
The first image capturing component 110 is used to capture panoramic images. The first image capturing component 110 may also be referred to as a panoramic camera, etc., and the name of the first image capturing component is not limited in this embodiment.
The first image acquisition assembly 110 is communicatively coupled to the control assembly 130. The first image acquisition component 110 sends the acquired panoramic image to the control component 130 through the communication connection with the control component 130; alternatively, the first image capturing component 110 may also recognize the target object and send the pixel coordinates of the recognized target object to the control component 130.
Optionally, the control component 130 may be a device such as a mobile phone, a tablet computer, a computer, or a pan-tilt unit; this embodiment does not limit how the control component 130 is implemented.
In this embodiment, the control component 130 is configured to: acquiring pixel coordinates of a target object in a panoramic image acquired by the first image acquisition component 110; determining a rotation angle of the second image capturing component 120 based on the pixel coordinates in the panoramic image; and controlling the second image acquisition assembly 120 to rotate according to the rotation angle, so that the second image acquisition assembly tracks and acquires the tracking image of the target object.
The pixel coordinates of the target object in the panoramic image may be defined in a coordinate system whose origin is the center point of the panoramic image, with the horizontal axis as the x-axis and the vertical axis as the y-axis; or in a coordinate system whose origin is the lower-left vertex of the panoramic image, with the horizontal bottom edge as the x-axis and the vertical left edge as the y-axis. Other coordinate systems may also be used; this embodiment does not limit the choice of coordinate system.
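For illustration, converting between the two coordinate systems just described is a one-line affair. The following sketch (the function name and the Python rendering are ours, not part of the patent) maps a lower-left-origin pixel coordinate to the center-origin system:

```python
def to_center_origin(x: float, y: float, width: int, height: int) -> tuple[float, float]:
    """Map a pixel coordinate from the lower-left-origin system
    (x along the bottom edge, y up the left edge) to the system whose
    origin is the panoramic image's center point."""
    return x - width / 2.0, y - height / 2.0

# e.g. in a 1920x1080 panorama, the image center maps to (0, 0):
# to_center_origin(960, 540, 1920, 1080) == (0.0, 0.0)
```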
The object (including a target object and a training object hereinafter) in the present application refers to an object tracked by the second image capturing component 120, and the object may be a human face, a vehicle, an animal, or the like, and the present embodiment does not limit the type of the object.
The control assembly 130 is also communicatively coupled to the second image capturing assembly 120. The second image capturing assembly 120 is configured to track the object and capture images of it, producing tracking images. Its shooting angle can be rotated, for example 360 degrees in the horizontal direction and 180 degrees in the vertical direction. Optionally, the focal length of the second image capturing assembly is variable. In practice, the second image capturing assembly 120 may be a camera mounted on a motorized pan-tilt head, i.e., the pan-tilt head rotates the camera according to control commands from the control assembly 130.
Optionally, the acquisition range of the second image acquisition assembly 120 is smaller than the acquisition range of the first image acquisition assembly 110. Optionally, after the second image capturing component 120 captures the tracking image, the captured tracking image may be sent to the control component 130.
In one example, the second image capturing assembly 120 is located on the same axis and adjacent to the first image capturing assembly 110. Such as: the second image acquisition assembly 120 is located directly below the first image acquisition assembly 110; or, directly above.
Of course, in other embodiments, the second image capturing assembly 120 may not be located on the same axis as the first image capturing assembly 110, and in this case, the relative position relationship between the second image capturing assembly 120 and the first image capturing assembly 110 is stored in the control assembly 130.
Optionally, the first image capturing assembly 110 and the second image capturing assembly 120 rotate synchronously; alternatively, the first image capturing assembly does not rotate in synchronization with the second image capturing assembly 120.
In this embodiment, only one first image acquisition assembly 110 and one second image acquisition assembly 120 are described for ease of explanation. In practice, the control assembly 130 may be communicatively connected to multiple first image acquisition assemblies 110 and multiple second image acquisition assemblies 120; this embodiment does not limit their number.
Fig. 2 is a flowchart of an object tracking method according to an embodiment of the present application. The embodiment describes the method as applied to the object tracking system of fig. 1, with the control component 130 of that system executing each step. The method comprises at least the following steps:
step 201, acquiring pixel coordinates of the target object in the panoramic image acquired by the first image acquisition component.
Optionally, the pixel coordinates of the target object in the panoramic image may be sent by the first image capturing component (that is, the first image capturing component recognizes the target object in the panoramic image), or the control component may identify the target object in the panoramic image sent by the first image capturing component.
The pixel coordinates of the target object in the panoramic image may be defined in a coordinate system whose origin is the center point of the panoramic image, with the horizontal axis as the x-axis and the vertical axis as the y-axis; or in a coordinate system whose origin is the lower-left vertex of the panoramic image, with the horizontal bottom edge as the x-axis and the vertical left edge as the y-axis. Other coordinate systems may also be used; this embodiment does not limit the choice of coordinate system.
Optionally, the pixel coordinates of the target object in the panoramic image are: the pixel coordinates of the target object's center point (such as the face center point); or the average of the pixel coordinates of each point of the target object; or the pixel coordinates of each point of the target object.
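As a sketch of the first convention above (the bounding-box format is an assumption for illustration):

```python
def face_center(bbox: tuple[float, float, float, float]) -> tuple[float, float]:
    """Pixel coordinate of a detected face's center point, given a
    bounding box (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = bbox
    return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
```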
It should be added that, if the target object is identified by the first image capturing component, the panoramic image may be the image in the viewfinder of the first image capturing component, or an image captured by the first image acquisition component and accessible to the control component.
Step 202, determining the rotation angle of the second image acquisition assembly based on the pixel coordinates in the panoramic image.
The second image acquisition assembly is used for tracking a target object to acquire a tracking image. The shooting angle of the second image acquisition assembly can be rotated.
Optionally, the rotation angle of the second image capturing assembly includes a rotation angle in a horizontal direction and a rotation angle in a vertical direction.
Optionally, determining the rotation angle of the second image capturing assembly based on the pixel coordinates in the panoramic image comprises: the control component inputs the pixel coordinates in the panoramic image into the recognition model to obtain the rotation angle.
The recognition model is determined using the sample pixel coordinates of a training object in the panoramic image and the actual rotation angle corresponding to each sample pixel coordinate. The actual rotation angle is the angle through which the second image capturing assembly rotates to reach the desired position.
Optionally, the desired position is one at which the target object lies within the middle range of the tracking image, the middle range being a region determined around the center point of the tracking image.
The training object and the target object may be objects of different types. For example, if the target object is a human face, the training object may be a human face or a vehicle; if the target object is a vehicle, the training object may be a vehicle or a human face. The target object may also span multiple types; for example, faces and vehicles may both be target objects. That is, the recognition model applies to many types of target object.
Before the control component can input the pixel coordinates in the panoramic image into the recognition model, the model must be trained. The control component trains the recognition model in at least the following steps:
step 1, obtaining sample pixel coordinates of a training object in a panoramic image.
The relevant description of this step is detailed in step 201, and the difference is that the target object is replaced by a training object, and the pixel coordinate is replaced by a sample pixel coordinate, which is not described again in this embodiment.
And 2, acquiring an actual rotation angle corresponding to each sample pixel coordinate.
Optionally, the manner of acquiring the actual rotation angle includes, but is not limited to, the following:
the first method comprises the following steps: and when the pixel coordinates of the sample are obtained, obtaining the horizontal angle and the vertical angle of the training object which is manually measured relative to the second image acquisition assembly, and obtaining the actual rotation angle.
The first mode will be described below as an example. In the training stage, a person stands at different directions of the panoramic camera, actual horizontal angles and actual vertical angles of the person relative to the follow-up camera at different directions are measured, and sample pixel coordinates of the face in the panoramic camera are acquired simultaneously. And performing model training according to the sample pixel coordinates and the measured actual horizontal angle and actual vertical angle to obtain a recognition model. In the actual use stage, the pixel coordinates of the human face in the panoramic camera are input into the recognition model to obtain the rotation angle of the tracking camera, so that the human face is always presented in the center of the screen.
The second way: input each sample pixel coordinate into a preset calculation formula to obtain the initial rotation angle of the training object relative to the second image acquisition assembly, the initial rotation angle comprising a horizontal angle and a vertical angle; control the second image acquisition assembly to rotate by the initial rotation angle; determine the offset angle of the second image acquisition assembly from the pixel coordinates of the training object in the tracking image captured after rotation; and determine the actual rotation angle corresponding to the sample pixel coordinate from the initial rotation angle and the offset angle.
In one example, the preset calculation formula is: tan α = fh / F + b.
Here α is the angle (horizontal or vertical) of the training object relative to the second image capturing component; fh is the distance (horizontal or vertical) between the pixel coordinate of the training object and the screen-center coordinate; F is the focal length of the first image capturing component; and b is an offset term, initially 0 (F and b may be initialized to other values; this embodiment does not limit them). In this embodiment the first image capturing assembly is adjacent to the second image capturing assembly, so the angle of the training object relative to the second image capturing assembly can be approximated by the formula above.
Fig. 3 shows a schematic cross-section of the training object and the first image capturing component, taking the training object to be a human face and its pixel coordinate to be that of the face center point. As the figure shows, the imaging position of the face center point P in the first image capturing assembly and the lens focal length F form a trigonometric relation, from which the preset calculation formula follows.
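A minimal sketch of the initial-angle computation under the preset formula, assuming F is expressed in pixel units and the image-center coordinates are known (all names here are ours, for illustration only):

```python
import math

def initial_rotation_angle(px: float, py: float,
                           cx: float, cy: float,
                           F: float, b: float = 0.0) -> tuple[float, float]:
    """Horizontal and vertical rotation angles, in degrees, from
    tan(alpha) = fh / F + b, applied per axis.

    (px, py): pixel coordinates of the object; (cx, cy): image center.
    """
    pan = math.degrees(math.atan((px - cx) / F + b))   # horizontal angle
    tilt = math.degrees(math.atan((py - cy) / F + b))  # vertical angle
    return pan, tilt
```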
Because manufactured devices inevitably carry random errors, computing the horizontal and vertical angles for a training object's pixel coordinates directly from the preset calculation formula is inaccurate. In this embodiment, therefore, the relation between tan α and the pixel coordinates (x, y) is fitted from the sample angle data and the sample pixel coordinates of each measurement group, which determines the coefficients of the preset calculation formula, namely 1/F and the offset b, and yields an improved preset calculation formula. The sample angle data are the horizontal and vertical angles of the training subject relative to the second image acquisition assembly. Fig. 4 shows the relation between the horizontal angle in the sample angle data and the horizontal coordinate x of the sample pixel coordinates; fig. 5 shows the relation between the vertical angle and the vertical coordinate y. In figs. 4 and 5, the dots represent sample data and the dotted lines the improved preset calculation formula.
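The fit itself is an ordinary least-squares line through the (fh, tan α) pairs, one axis at a time; a sketch with NumPy, assuming the sample angles are measured in degrees:

```python
import numpy as np

def fit_preset_formula(pixel_coords, angles_deg, center):
    """Fit tan(alpha) = (1/F) * fh + b for one axis by least squares.

    pixel_coords: sample pixel coordinates (x for horizontal, y for vertical);
    angles_deg:   measured angles of the training object relative to the
                  second image acquisition assembly, in degrees;
    center:       image-center coordinate on the same axis.
    Returns (F, b) of the improved preset formula.
    """
    fh = np.asarray(pixel_coords, dtype=float) - center
    tan_alpha = np.tan(np.radians(angles_deg))
    slope, b = np.polyfit(fh, tan_alpha, 1)  # slope = 1/F, intercept = b
    return 1.0 / slope, b
```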
Optionally, if the second image capturing assembly is a zoom camera (i.e., its focal length varies), the current focal length must be obtained. Specifically, determining the offset angle of the second image acquisition assembly based on the pixel coordinates of the training object in the tracking image captured after rotation comprises: acquiring the zoom parameter of the second image acquisition component; determining the focal length of the second image acquisition assembly from the zoom parameter; and inputting that focal length and the pixel coordinates of the training object in the tracking image into the preset calculation formula to obtain the offset angle.
The focal length of the second image capturing component is determined from the zoom parameter by the following formula (devices of other types may use different expressions; this embodiment does not limit the specific formula):
1/F’=[tan(a×zoom+b)]/c
where F’ is the focal length of the second image capturing component and zoom is the zoom parameter. The coefficients a, b and c are numerical values fitted from the parameters of the second image acquisition assembly and differ between image acquisition assemblies of different models. For example, the focal-length fit for a camera of one model is:
1/F’=[tan(-0.0001×zoom+0.785398)]/1350
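As a sketch, this example fit inverts directly to a focal length. The default coefficients below are the values quoted above for that one camera model; other models need their own a, b and c:

```python
import math

def focal_length_from_zoom(zoom: float,
                           a: float = -0.0001,
                           b: float = 0.785398,
                           c: float = 1350.0) -> float:
    """Focal length F' of the zoom camera from its zoom parameter,
    via 1/F' = tan(a * zoom + b) / c."""
    return c / math.tan(a * zoom + b)
```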
After the offset angle is obtained, the actual rotation angle corresponding to the sample pixel coordinate is determined from the initial rotation angle and the offset angle: their sum is taken as the actual rotation angle.
Step 3: determine the recognition model using the sample pixel coordinates and the actual rotation angles.
This comprises training the preset calculation formula on each sample pixel coordinate and its corresponding actual rotation angle, and taking the trained model as the recognition model. The F and b of the recognition model are the model parameters obtained by training; they may or may not equal the F and b of the initial preset formula.
An example of the second way: in the training stage, a person stands at different bearings around the panoramic camera, which automatically recognizes the face coordinates. The control assembly computes an initial rotation angle for the tracking camera from the face coordinates; the tracking camera rotates by that initial angle and returns the pixel coordinates of the face in its image together with the current Zoom value. After the measurement, the Zoom value is used to determine the tracking camera's focal length; the face pixel coordinates and that focal length give the offset angle; and the sum of the offset angle and the initial rotation angle is taken as the actual rotation angle. Curve fitting over the sample pixel coordinates in the panoramic image and their corresponding actual rotation angles yields the recognition model, completing training. In actual use, as a person stands at different bearings, the tracking camera rotates correspondingly so that the face is always displayed at the center of the frame.
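Putting the second way together, one calibration measurement per axis might look like the following sketch; it reuses the helpers sketched above, and the camera and pan-tilt APIs are deliberately omitted:

```python
import math

def calibration_sample(obj_px: float, pan_center: float, F_pan: float,
                       trk_px: float, trk_center: float, zoom: float):
    """One (sample pixel coordinate, actual rotation angle) pair for one axis.

    obj_px:  object pixel coordinate in the panoramic image;
    trk_px:  object pixel coordinate in the tracking image *after* the
             camera has rotated by the initial angle;
    zoom:    Zoom value returned by the tracking camera.
    """
    # Initial rotation angle from the panoramic image (preset formula, b = 0).
    init_deg = math.degrees(math.atan((obj_px - pan_center) / F_pan))

    # Focal length of the zoom camera at this Zoom value.
    F_trk = focal_length_from_zoom(zoom)

    # Offset angle: residual distance from the tracking-image center,
    # seen through the zoom camera's focal length.
    offset_deg = math.degrees(math.atan((trk_px - trk_center) / F_trk))

    # Actual rotation angle = initial rotation angle + offset angle.
    return obj_px, init_deg + offset_deg
```

The collected pairs then feed the least-squares fit sketched earlier to produce the recognition model.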
And step 203, controlling the second image acquisition assembly to rotate according to the rotation angle so that the second image acquisition assembly tracks and acquires the tracking image of the target object.
Optionally, the control component sends the second image acquisition component a control instruction carrying the rotation angle; on receiving it, the second image acquisition assembly rotates by the angle in the instruction. Alternatively, the control component rotates the device body (such as a pan-tilt head) by the rotation angle, and the device body carries the second image acquisition component with it.
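A sketch of one tracking step as just described; `gimbal` stands in for whatever pan-tilt control interface the device actually exposes, and `model_h`/`model_v` are the fitted (F, b) pairs of the recognition model:

```python
import math

def track_step(px: float, py: float, center: tuple[float, float],
               model_h: tuple[float, float], model_v: tuple[float, float],
               gimbal) -> None:
    """Map the target's panoramic pixel coordinates to pan/tilt angles
    with the fitted recognition model, then issue the rotation."""
    cx, cy = center
    F_h, b_h = model_h
    F_v, b_v = model_v
    pan = math.degrees(math.atan((px - cx) / F_h + b_h))
    tilt = math.degrees(math.atan((py - cy) / F_v + b_v))
    gimbal.rotate(pan=pan, tilt=tilt)  # hypothetical command carrying the angles
```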
In summary, the object tracking method of this embodiment acquires the pixel coordinates of the target object in the panoramic image captured by the first image acquisition assembly, determines the rotation angle of the second image acquisition assembly from those pixel coordinates, and controls the second image acquisition assembly to rotate by that angle so that it tracks and captures a tracking image of the target object. This realizes target-object tracking and addresses the low tracking accuracy caused by random errors introduced when tracking devices are manufactured: the target object is located by combining the positioning results of the panoramic camera and the tracking device, rather than by a single tracking device alone, which improves tracking and positioning accuracy.
In addition, the recognition model is trained using the sample pixel coordinates of the training object as seen by the first image acquisition assembly and the actual rotation angle corresponding to each sample pixel coordinate, yielding the model parameters of the recognition model for the second image acquisition assembly. When this recognition model is used to track the target object, the computed rotation angle of the second image acquisition assembly agrees more closely with the expected rotation angle, reducing its positioning error and improving tracking and positioning accuracy.
Fig. 6 is a block diagram of an object tracking apparatus according to an embodiment of the present application, and the embodiment takes the control component 130 of the object tracking system shown in fig. 1 as an example for explanation. The device at least comprises the following modules: a coordinate acquisition module 610, an angle determination module 620, and an object tracking module 630.
The coordinate acquisition module 610 is used for acquiring pixel coordinates of the target object in the panoramic image acquired by the first image acquisition component;
an angle determination module 620, configured to determine a rotation angle of the second image capturing component based on the pixel coordinates in the panoramic image;
an object tracking module 630, configured to control the second image capturing assembly to rotate according to the rotation angle, so that the second image capturing assembly tracks and captures a tracking image of the target object.
Optionally, the angle determining module 620 is configured to: inputting the pixel coordinates in the panoramic image into an identification model to obtain the rotation angle;
the identification model is determined by using sample pixel coordinates of the training object in the panoramic image and an actual rotation angle corresponding to each sample pixel coordinate, wherein the actual rotation angle refers to an angle through which the second image acquisition assembly rotates to a desired position.
Optionally, the apparatus further comprises: a sample coordinate acquisition module 640, an actual angle acquisition module 650, and a recognition model determination module 660.
The sample coordinate acquiring module 640 is configured to acquire sample pixel coordinates of the training object in the panoramic image;
the actual angle obtaining module 650 is configured to obtain an actual rotation angle corresponding to each sample pixel coordinate;
the identification model determination module 660 is configured to determine the identification model using the sample pixel coordinates and the actual rotation angle.
Optionally, the actual angle obtaining module 650 is configured to:
and when the sample pixel coordinates are obtained, obtaining the horizontal angle and the vertical angle of the training object relative to the second image acquisition assembly, which are manually measured, to obtain the actual rotation angle.
Optionally, the actual angle obtaining module 650 is configured to:
inputting each sample pixel coordinate into a preset calculation formula to obtain an initial rotation angle of the training object relative to the second image acquisition assembly, wherein the initial rotation angle comprises a horizontal angle and a vertical angle;
controlling the second image acquisition assembly to rotate according to the initial rotation angle;
determining the offset angle of the second image acquisition assembly based on the pixel coordinates of the training object in the tracking image acquired by the rotated second image acquisition assembly;
and determining the actual rotation angle corresponding to the sample pixel coordinate based on the initial rotation angle and the offset angle.
Optionally, the actual angle obtaining module 650 is specifically configured to:
acquiring a scaling parameter of the second image acquisition assembly;
determining the focal length of the second image acquisition component according to the scaling parameter;
and inputting the focal length of the second image acquisition assembly and the pixel coordinate of the training object in the tracking image into the preset calculation formula to obtain the offset angle.
Reference is made to the above-described method embodiments for relevant details.
It should be noted that the division into the functional modules above is only illustrative; in practice the functions may be distributed across different modules as needed, i.e., the internal structure of the object tracking device may be divided differently to perform all or part of the functions described. The object tracking apparatus of this embodiment and the object tracking method embodiments share the same concept; the implementation details are given in the method embodiments and are not repeated here.
FIG. 7 is a block diagram of an object tracking device, which may be a device including the control component 130 shown in FIG. 1, provided in one embodiment of the present application. The apparatus includes at least a processor 701 and a memory 702.
Processor 701 may include one or more processing cores, e.g., a 4-core or 8-core processor. The processor 701 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor: the main processor handles data in the awake state and is also called the CPU (Central Processing Unit); the coprocessor is a low-power processor that handles data in the standby state. In some embodiments, the processor 701 may integrate a GPU (Graphics Processing Unit), which renders and draws the content to be shown on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for machine-learning computations.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the object tracking methods provided by method embodiments herein.
In some embodiments, the object tracking device may further include: a peripheral device interface and at least one peripheral device. The processor 701, memory 702, and peripheral interface may be connected by buses or signal lines. Each peripheral may be connected to the peripheral interface via a bus, signal line, or circuit board. Illustratively, peripheral devices include, but are not limited to: radio frequency circuit, touch display screen, audio circuit, power supply, etc.
Of course, the object tracking device may include fewer or more components, which is not limited by the embodiment.
Optionally, the present application further provides a computer-readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the object tracking method of the above method embodiment.
Optionally, the present application further provides a computer product, which includes a computer-readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the object tracking method of the above-mentioned method embodiment.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described, but any combination that contains no contradiction should be considered within the scope of this specification.
The embodiments above express only several implementations of the present application; their description is specific and detailed but is not to be taken as limiting the scope of the invention. A person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within its scope of protection. The protection scope of this patent application is therefore subject to the appended claims.

Claims (8)

1. An object tracking method, the method comprising:
acquiring pixel coordinates of a target object in a panoramic image acquired by a first image acquisition assembly;
determining a rotation angle of a second image acquisition assembly based on pixel coordinates in the panoramic image;
controlling the second image acquisition assembly to rotate according to the rotation angle so that the second image acquisition assembly tracks and acquires a tracking image of the target object;
the determining a rotation angle of the second image capturing assembly based on the pixel coordinates comprises:
inputting the pixel coordinates in the panoramic image into an identification model to obtain the rotation angle;
the identification model is determined by using sample pixel coordinates of a training object in the panoramic image and an actual rotation angle corresponding to each sample pixel coordinate, wherein the actual rotation angle refers to the angle through which the second image acquisition assembly rotates to reach a desired position.
2. The method of claim 1, further comprising:
acquiring sample pixel coordinates of the training object in a panoramic image;
acquiring an actual rotation angle corresponding to each sample pixel coordinate;
determining the recognition model using the sample pixel coordinates and the actual rotation angle.
3. The method of claim 2, wherein said obtaining an actual rotation angle for each sample pixel coordinate comprises:
and when the sample pixel coordinates are obtained, obtaining the horizontal angle and the vertical angle of the training object relative to the second image acquisition assembly, which are measured manually, to obtain the actual rotation angle.
4. The method of claim 2, wherein obtaining the actual rotation angle for each sample pixel coordinate comprises:
inputting the pixel coordinates of each sample into a preset calculation formula to obtain an initial rotation angle of the training object relative to the second image acquisition assembly, wherein the initial rotation angle comprises a horizontal angle and a vertical angle;
controlling the second image acquisition assembly to rotate according to the initial rotation angle;
determining the offset angle of a second image acquisition component based on the pixel coordinates of the training object in the tracking image acquired by the rotated second image acquisition component;
and determining the actual rotation angle corresponding to the sample pixel coordinate based on the initial rotation angle and the offset angle.
5. The method of claim 4, wherein determining the offset angle of the second image acquisition assembly based on pixel coordinates of the training object in a tracking image acquired by the rotated second image acquisition assembly comprises:
acquiring a scaling parameter of the second image acquisition assembly;
determining the focal length of the second image acquisition component according to the scaling parameter;
and inputting the focal length of the second image acquisition assembly and the pixel coordinate of the training object in the tracking image into the preset calculation formula to obtain the offset angle.
6. An object tracking apparatus, characterized in that the apparatus comprises:
the coordinate acquisition module is used for acquiring pixel coordinates of the target object in the panoramic image acquired by the first image acquisition assembly;
the angle determining module is used for determining the rotation angle of the second image acquisition assembly based on the pixel coordinates in the panoramic image;
the object tracking module is used for controlling the second image acquisition assembly to rotate according to the rotation angle so as to enable the second image acquisition assembly to track and acquire a tracking image of the target object;
the determining a rotation angle of the second image capturing assembly based on the pixel coordinates comprises:
inputting the pixel coordinates in the panoramic image into an identification model to obtain the rotation angle;
the identification model is determined by using sample pixel coordinates of the training object in the panoramic image and an actual rotation angle corresponding to each sample pixel coordinate, wherein the actual rotation angle refers to an angle through which the second image acquisition assembly rotates to a desired position.
7. An object tracking apparatus, characterized in that the apparatus comprises a processor and a memory; the memory has stored therein a program that is loaded and executed by the processor to implement the object tracking method according to any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that a program is stored in the storage medium, which program, when being executed by a processor, is adapted to carry out the object tracking method according to any one of claims 1 to 5.
CN202010235806.4A 2020-03-30 2020-03-30 Object tracking method, device and storage medium Active CN111460972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010235806.4A CN111460972B (en) 2020-03-30 2020-03-30 Object tracking method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010235806.4A CN111460972B (en) 2020-03-30 2020-03-30 Object tracking method, device and storage medium

Publications (2)

Publication Number / Publication Date
CN111460972A: 2020-07-28
CN111460972B: 2023-04-07

Family

ID=71685042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010235806.4A Active CN111460972B (en) 2020-03-30 2020-03-30 Object tracking method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111460972B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105338248B (en) * 2015-11-20 2018-08-28 成都因纳伟盛科技股份有限公司 Intelligent multiple target active tracing monitoring method and system
CN105407283B (en) * 2015-11-20 2018-12-18 成都因纳伟盛科技股份有限公司 A kind of multiple target initiative recognition tracing and monitoring method
CN109492506A (en) * 2017-09-13 2019-03-19 华为技术有限公司 Image processing method, device and system
CN108012083B (en) * 2017-12-14 2020-02-04 深圳云天励飞技术有限公司 Face acquisition method and device and computer readable storage medium

Also Published As

Publication number Publication date
CN111460972A (en) 2020-07-28


Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant