CN112955844A - Target tracking method, device, system and storage medium - Google Patents

Target tracking method, device, system and storage medium

Info

Publication number: CN112955844A
Application number: CN202080005912.1A
Authority: CN (China)
Prior art keywords: projection, image, target object, image acquisition, acquisition equipment
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 林顺豪, 张鼎, 许美蓉
Current assignee: SZ DJI Technology Co Ltd
Original assignee: SZ DJI Technology Co Ltd
Application filed by SZ DJI Technology Co Ltd
Publication of CN112955844A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0285 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using signals transmitted via a public communication network, e.g. GSM network

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the present application provides a target tracking method, device, system, and storage medium. The tracking device includes a body, a multi-axis gimbal, and a control module. The body is used for mounting the multi-axis gimbal; the multi-axis gimbal can drive an image acquisition device to rotate. The image acquisition device is used for acquiring a first projection image corresponding to a reference image, the first projection image including a deformation pattern corresponding to a predetermined pattern; the deformation pattern is generated based on a target object. The control module is used for adjusting the working state of the multi-axis gimbal according to the first projection image acquired by the image acquisition device, so as to drive the image acquisition device to track and capture the deformation pattern. The target tracking method, device, system, and storage medium can realize tracking of the target object.

Description

Target tracking method, device, system and storage medium
Technical Field
The present application relates to the field of target detection and tracking technologies, and in particular, to a target tracking method, device, system, and storage medium.
Background
Target detection and tracking technology plays an increasingly important role in modern security, medical, civil, and other fields. In existing target detection and tracking approaches, a camera photographs an object, machine learning is used to recognize the target in the captured image, and the moving object is then tracked within the camera's field of view.
However, because the camera's orientation is fixed, the range over which the camera can track a moving object is limited.
Disclosure of Invention
Aspects of the present application provide a target tracking method, device, system, and storage medium for tracking a moving target.
An embodiment of the present application provides a tracking device, including:
a body, used for mounting a multi-axis gimbal;
the multi-axis gimbal, used for carrying an image acquisition device and driving the image acquisition device to rotate, the image acquisition device being configured to acquire a first projection image corresponding to a reference image, where the reference image is projected outward by a projection module and has a predetermined pattern, the first projection image includes a deformation pattern corresponding to the predetermined pattern, and the deformation pattern is generated based on a target object; and
a control module, electrically connected to the multi-axis gimbal and used for adjusting the working state of the multi-axis gimbal according to the first projection image acquired by the image acquisition device, so as to drive the image acquisition device to track and capture the deformation pattern.
An embodiment of the present application further provides a target tracking method, including:
controlling a projection module to project a reference image outward, the reference image having a predetermined pattern;
controlling an image acquisition device carried on a multi-axis gimbal to acquire a first projection image corresponding to the reference image, the first projection image including a deformation pattern of the predetermined pattern, the deformation pattern being generated based on a target object; and
adjusting the working state of the multi-axis gimbal according to the first projection image acquired by the image acquisition device, so as to drive the image acquisition device to track and capture the deformation pattern.
An embodiment of the present application further provides a target tracking system, including a projection module, a tracking device, and a projection surface arranged in the physical environment where the tracking device is located;
the projection module is used for projecting a reference image onto the projection surface, the reference image having a predetermined pattern;
the tracking device includes a body for mounting a multi-axis gimbal;
the multi-axis gimbal, used for carrying an image acquisition device and driving the image acquisition device to rotate, the image acquisition device being configured to acquire a first projection image corresponding to the reference image on the projection surface, the first projection image including a deformation pattern corresponding to the predetermined pattern, and the deformation pattern being generated based on a target object; and
a control module, electrically connected to the multi-axis gimbal and used for adjusting the working state of the multi-axis gimbal according to the first projection image acquired by the image acquisition device, so as to drive the image acquisition device to track and capture the deformation pattern.
Embodiments of the present application further provide a computer-readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the above target tracking method.
The target tracking method, device, system, and storage medium provided by the embodiments of the present application can realize tracking of a target object and help expand the tracking range of the target object.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a is a schematic structural diagram of a tracking device provided in an embodiment of the present application;
fig. 1b and fig. 1c are block diagrams of the tracking device provided in the embodiment of the present application;
FIG. 1d is a schematic diagram illustrating an operating principle of a digital micromirror device according to an embodiment of the present application;
fig. 1e is a schematic diagram of a training process of a neural network model provided in the embodiment of the present application;
fig. 2a is a schematic structural diagram of a target tracking system provided in an embodiment of the present application;
fig. 2b is a schematic diagram of a working process of the target tracking system according to the embodiment of the present application;
fig. 3 is a schematic flowchart of a target tracking method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The fixed orientation of existing cameras limits the range over which a camera can track a moving object. To address this technical problem, some embodiments of the present application provide a tracking device. The tracking device includes a body and a multi-axis gimbal mounted on the body. The body may also carry a projection module, and the multi-axis gimbal may carry an image acquisition device. The projection module may project outward a reference image having a predetermined pattern. When a target object appears in the projection light of the projection module, the projected image of the reference image is deformed, and the resulting deformation pattern moves as the target object moves. In this embodiment, the image acquisition device may capture the projection image containing the deformation pattern, and the control module may adjust the working state of the multi-axis gimbal according to that projection image so as to adjust the pose of the image acquisition device, enabling it to track and capture the deformation pattern. Because the deformation pattern is caused by the target object, tracking the deformation pattern amounts to tracking the target object, and because the pose of the image acquisition device can be adjusted, the tracking range of the target object is expanded.
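The patent specifies no algorithm for this closed loop, so the following is only a toy, self-contained sketch of the idea just described. The stripe pattern, the occlusion model for the target object, and the proportional yaw update are all illustrative assumptions, not patent text.

```python
# Toy sketch of the closed loop: project a predetermined pattern, capture the
# projection image, locate the deformation, and nudge the gimbal toward it.
# All names and models here are hypothetical.

def reference_pattern(height, width):
    """Predetermined pattern: vertical stripes (one pattern the text allows)."""
    return [[255 if col % 2 == 0 else 0 for col in range(width)]
            for _ in range(height)]

def captured_image(pattern, target_col):
    """Simulate the first projection image: the target object deforms the
    pattern in the column where it intersects the projection light."""
    return [[255 - px if col == target_col else px
             for col, px in enumerate(row)] for row in pattern]

def gimbal_yaw_update(pattern, captured, yaw, gain=1.0):
    """Adjust the gimbal working state so the camera follows the deformation:
    steer toward the mean column where captured and reference images differ."""
    deformed_cols = [col
                     for pat_row, cap_row in zip(pattern, captured)
                     for col, (a, b) in enumerate(zip(pat_row, cap_row)) if a != b]
    if not deformed_cols:
        return yaw  # no deformation pattern: no target in the projection light
    center = len(pattern[0]) / 2
    mean_col = sum(deformed_cols) / len(deformed_cols)
    return yaw + gain * (mean_col - center)

pattern = reference_pattern(2, 8)
frame = captured_image(pattern, target_col=6)  # target occludes column 6
new_yaw = gimbal_yaw_update(pattern, frame, yaw=0.0)
```

In this sketch the yaw is steered toward the occluded column; a real control module would convert such an offset into commands for the gimbal's drive motors.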
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1a is a schematic structural diagram of a tracking device according to an embodiment of the present application. As shown in fig. 1a, the tracking device includes a body 11, a multi-axis gimbal 13, and a control module 15. The body 11 is used for mounting the multi-axis gimbal; the multi-axis gimbal 13 is used for carrying the image acquisition device 14 and can drive it to rotate. The image acquisition device 14 is configured to acquire a first projection image corresponding to a reference image; the reference image is projected outward by the projection module and has a predetermined pattern. The first projection image includes a deformation pattern corresponding to the predetermined pattern, and the deformation pattern is generated based on the target object. The control module 15 is electrically connected to the multi-axis gimbal 13 and adjusts the working state of the multi-axis gimbal 13 according to the first projection image acquired by the image acquisition device 14, so as to drive the image acquisition device 14 to track and capture the deformation pattern.
In the present embodiment, the relationship between the projection module 12 and the tracking device is not limited. In some embodiments, the projection module 12 is a stand-alone projection device disposed in the physical environment in which the tracking device is located. The projection module 12 is communicatively coupled to the control module 15, and the control module 15 may instruct the projection module 12 to project outward a reference image having a predetermined pattern.
In another embodiment, the body 11 may carry both the projection module 12 and the multi-axis gimbal 13; fig. 1a shows only the case where the projection module 12 is mounted on the body 11. The projection module 12 may be fixed to the body 11, optionally detachably; in that case, the body 11 is provided with a fixing member for fixing the projection module 12. The fixing member on the body 11 may be a clamp, in which case the projection module 12 can be fixed to the body 11 without a matching fixing member of its own.
Alternatively, in some embodiments, the projection module 12 may be fixed to the body 11 by a fixing member on the body 11 cooperating with a matching fixing member on the projection module 12. For example, the two fixing members may together form a snap or a lock, or they may be implemented as matching recesses and flanges: a number of recesses may be provided on the body 11 with a corresponding number of flanges on the projection module 12, or a number of flanges on the body 11 with a corresponding number of recesses on the projection module 12, and so on.
In the present embodiment, the multi-axis gimbal 13 is rotatably connected to the body 11. A multi-axis gimbal is a gimbal having two or more rotation axes; for example, the multi-axis gimbal 13 may be a two-axis, three-axis, four-axis, or higher-order gimbal. The multi-axis gimbal 13 is used to mount the image acquisition device 14; correspondingly, a fixing member for fixing the image acquisition device 14 is provided on the multi-axis gimbal 13. The fixing member on the multi-axis gimbal 13 may be a clamp, in which case the image acquisition device 14 can be fixed without a matching fixing member of its own.
Alternatively, in some embodiments, the image acquisition device 14 may be fixed to the multi-axis gimbal 13 by a fixing member on the gimbal cooperating with a matching fixing member on the device. For example, the two fixing members may together form a snap or a latch, or they may be implemented as matching recesses and flanges: a number of recesses may be provided on the multi-axis gimbal 13 with a corresponding number of flanges on the image acquisition device 14, or a number of flanges on the gimbal with a corresponding number of recesses on the device, and so on.
In this embodiment, when the image acquisition device 14 is mounted on the multi-axis gimbal 13, it rotates along with the gimbal; that is, the multi-axis gimbal 13 can drive the image acquisition device 14 to rotate. The tracking device may ship from the factory with the image acquisition device 14 already mounted on the multi-axis gimbal 13; alternatively, the tracking device may ship without the image acquisition device 14, in which case the user fixes the image acquisition device 14 to the multi-axis gimbal 13 before use.
The multi-axis gimbal 13 can rotate about its rotation axes, the available rotations being determined by the axes it contains. For example, a three-axis gimbal includes a pitch axis, a roll axis, and a yaw axis, and can therefore perform pitch, roll, and yaw rotations. Because the image acquisition device 14 is carried on the three-axis gimbal, it too can pitch, roll, and yaw as the gimbal rotates.
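The pitch and yaw rotations above can be illustrated with standard rotation matrices; this is a minimal sketch under an assumed axis convention (x forward, y left, z up), not something taken from the patent.

```python
import math

# How yaw and pitch rotations of a three-axis gimbal reorient the optical axis
# of the carried camera. The axis convention here is an illustrative assumption.

def rot_yaw(angle):
    """Rotation about the vertical (z) axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_pitch(angle):
    """Rotation about the lateral (y) axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def apply(matrix, vec):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(matrix[i][j] * vec[j] for j in range(3)) for i in range(3)]

optical_axis = [1.0, 0.0, 0.0]                        # camera initially looks forward
after_yaw = apply(rot_yaw(math.pi / 2), optical_axis)  # yaw 90 degrees: looks along y
```

Composing such rotations is how the gimbal sweeps the camera's field of view beyond a fixed orientation.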
In the present embodiment, the implementation form of the image acquisition device 14 carried by the multi-axis gimbal 13 is not limited. The image acquisition device 14 may be any device capable of capturing images: for example, a terminal device with a photographing function such as a mobile phone, a tablet computer, or a wearable device, or a camera or video camera. The size of the fixing member on the multi-axis gimbal 13 can be adapted to the implementation form of the image acquisition device 14.
In this embodiment, the tracking device further includes a control module 15. As shown in fig. 1b, the control module 15 may include a processor 15a, a memory, peripheral circuits of the processor 15a, and the like. The processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Micro Controller Unit (MCU); it may also be a programmable device such as a Field-Programmable Gate Array (FPGA), Programmable Array Logic (PAL), Generic Array Logic (GAL), or Complex Programmable Logic Device (CPLD); or an Advanced RISC Machine (ARM) processor, a System on Chip (SoC), or the like, but is not limited thereto.
The memory may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as the memory banks 15b1 and 15b2, Static Random Access Memory (SRAM) 15b3, Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
In the present embodiment, as shown in fig. 1b and fig. 1c, the control module 15 may instruct the projection module 12 to project outward a reference image having a predetermined pattern. The predetermined pattern may be any pattern; for example, it may be a stripe pattern, a coded pattern, or a predetermined character pattern, but is not limited thereto.
In some embodiments, the tracking device is preconfigured with the predetermined pattern. In this case, the control module 15 may, in response to a power-on operation of the tracking device, retrieve the predetermined pattern from the memory and instruct the projection module 12 to project outward the reference image having the predetermined pattern.
In other embodiments, the tracking device is preconfigured with a pattern generation rule. In this case, the control module 15 may, in response to a power-on operation of the tracking device, generate the predetermined pattern according to the preset pattern generation rule, and then instruct the projection module 12 to project outward the reference image having the predetermined pattern.
Optionally, the control module 15 may be communicatively coupled to the projection module 12, for example through a mobile network whose standard may be any of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMax, and the like. Optionally, the two modules may instead be connected by Bluetooth, WiFi, infrared, or another communication method.
Accordingly, the control module 15 may instruct the projection module 12 through an instruction: the control module 15 sends a projection instruction to the projection module 12, and the projection module 12, upon receiving the projection instruction, projects outward the reference image having the predetermined pattern.
Alternatively, the control module 15 is electrically connected to the projection module 12 and may instruct it through an electrical signal, which may be a high-level or low-level signal: the control module 15 outputs the electrical signal to the projection module 12, and the projection module 12, upon receiving the electrical signal, projects outward the reference image having the predetermined pattern.
Whichever way the control module 15 is connected to the projection module 12, the projection module 12 can project a reference image having the predetermined pattern. The embodiment of the present application does not limit the specific implementation form of the projection module 12; for example, it may be a Digital Light Processing (DLP) projection device. The structure and operating principle of the projection module 12 are described below taking a DLP projection device as an example.
As shown in fig. 1b, the DLP projection device includes a light source 12a, a color wheel 12b, a Digital Micromirror Device (DMD) 12c, and a projection lens 12d. Optionally, the color wheel 12b may be a six-segment color wheel or the like. The color wheel 12b is optically connected between the light source 12a and the DMD 12c; the DMD 12c is optically connected to the projection lens 12d; and the color wheel 12b and the DMD 12c are each also electrically connected to the control module 15.
Light emitted by the light source 12a is incident on the color wheel 12b. Under the control of the control module 15, the color wheel 12b filters the received light into monochromatic light and projects it onto the DMD 12c. The DMD 12c, likewise under the control of the control module 15, modulates the predetermined pattern onto the monochromatic light and projects the reference image having the predetermined pattern outward through the projection lens 12d.
Optionally, as shown in fig. 1b, the color wheel 12b may include a condenser lens 12b1, a filter 12b2, and a shaping lens 12b3. The filter 12b2 is optically connected between the condenser lens 12b1 and the shaping lens 12b3; the condenser lens 12b1 is optically connected to the light source 12a, and the shaping lens 12b3 is optically connected to the DMD 12c. Further, the control module 15 is also electrically connected to the filter 12b2.
In detail, when the light emitted by the light source 12a is incident on the condenser lens 12b1 in the color wheel 12b, the control module 15 controls the filter 12b2 to split the received light into monochromatic components, which are transmitted to the DMD 12c through the shaping lens 12b3. The DMD 12c then modulates the predetermined pattern onto the monochromatic light under the control of the control module 15 and projects the reference image having the predetermined pattern outward through the projection lens 12d.
As shown in fig. 1d, the DMD 12c is a micro-electro-mechanical system (MEMS) light modulator; each micromirror in the DMD 12c, together with its associated structure, controls one pixel. As shown in fig. 1d, the DMD 12c has three operating states. (1) When a micromirror is rotated 0°, the state labeled (1) in fig. 1d, it is flat. (2) When a micromirror is rotated to a positive set angle (e.g. +12°), the state labeled (2) in fig. 1d, it is in the on state: light from the light source strikes the micromirror and is reflected toward the projection surface, so the corresponding pixel appears in a "bright state". (3) When a micromirror is rotated to a negative set angle (e.g. -12°), the state labeled (3) in fig. 1d, it is in the off state: light striking the micromirror is reflected away from the projection surface, so the corresponding pixel appears in a "dark state". Based on the operating principle shown in fig. 1d, the control module 15 can flip each micromirror through the SRAM in the DMD 12c, the addressing electrodes on either side of the micromirror, and its hinge: a bias voltage applied to the addressing electrodes is converted into a torque that rotates the hinge and thereby flips the micromirror. Accordingly, the flip angle of a micromirror can be adjusted through the magnitude of the bias voltage.
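The per-pixel on/off modulation described above can be reduced to a toy model. The +12°/-12° values follow the example angles in the text, but the purely binary treatment is an illustrative simplification: a real DMD also time-multiplexes mirror states to produce gray levels.

```python
# Toy model of DMD modulation: a mirror tilted to the positive set angle reflects
# source light onto the projection surface (bright pixel); a mirror at the
# negative set angle reflects it away (dark pixel).

ON_ANGLE, OFF_ANGLE = 12, -12  # example set angles from the text, in degrees

def mirror_angles(pattern):
    """Map a binary predetermined pattern (1 = bright) to per-mirror tilt angles."""
    return [[ON_ANGLE if px else OFF_ANGLE for px in row] for row in pattern]

def projected_image(angles, source_intensity=255):
    """Light reaches the projection surface only from mirrors in the on state."""
    return [[source_intensity if a == ON_ANGLE else 0 for a in row]
            for row in angles]

stripe = [[1, 0, 1, 0],
          [1, 0, 1, 0]]
image = projected_image(mirror_angles(stripe))
```

Modulating a stripe pattern this way yields exactly the kind of predetermined-pattern reference image the projection lens then casts onto the projection surface.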
In practical applications, when the projection module 12 projects the reference image outward and no object appears in its projection light, the projection image of the reference image on the projection surface also has the predetermined pattern. If an object appears in the projection light of the projection module 12, the projected counterpart of the predetermined pattern is deformed; for convenience of description, the embodiment of the present application defines the pattern formed by this deformation as the deformation pattern. The deformation pattern is generated based on the object appearing in the projection light of the projection module 12, and it moves as that object moves. On this basis, the embodiment of the present application can track an object appearing in the projection light of the projection module 12 by means of the deformation pattern. For convenience of description, an object appearing in the projection light of the projection module 12 is defined as the target object A; the target object A may be a moving object.
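The two cases above (undisturbed pattern versus deformation pattern) can be sketched in code. The pixel-difference test and the threshold value are illustrative assumptions; the patent does not specify how deformation is detected.

```python
# Sketch: with no object in the projection light the captured projection image
# matches the predetermined pattern; with an object, some pixels deviate and
# form the deformation pattern.

def deformation_mask(reference, captured, threshold=30):
    """1 where the projected pattern is deformed, 0 where it is unchanged."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(ref_row, cap_row)]
            for ref_row, cap_row in zip(reference, captured)]

def target_present(mask):
    """A target object A is taken to be present if any pixel is deformed."""
    return any(any(row) for row in mask)

reference   = [[0, 255, 0], [0, 255, 0]]
undisturbed = [[0, 255, 0], [0, 255, 0]]    # no object in the projection light
disturbed   = [[0, 255, 0], [200, 255, 0]]  # object deforms the lower-left pixel
```

Because the mask moves with the target object, its location over time is what the control module can feed back into the gimbal's working state.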
Based on the above analysis, in the present embodiment, the image pickup device 14 may pick up a projection image corresponding to the reference image. The projection image corresponding to the reference image may include: the reference image is directly projected to a projection image formed by a certain projection surface, that is, the reference image is directly projected to a projection image formed by a certain projection surface without the projection light of the object appearing on the projection module 12. In the case of no object appearing in the projection light of the projection module 12, the projection surface may be a projection screen, such as a projection screen; but may also be other object surfaces in the current environment, such as, but not limited to, walls, floors, or furniture surfaces, etc. Fig. 1a illustrates only the projection screen as a projection screen, but is not limited thereto.
Of course, the projection image corresponding to the reference image may also include: in the case where an object (target object A) appears on the projection light of the projection module 12, a projection image formed when the projection light of the projection module 12 passes the target object A and projects the reference image onto a projection surface. In this case, the projection surface 16 may be a surface of the target object A; or the projection surface 16 may be a projection screen, such as a projection curtain; or another object surface in the current environment, such as, but not limited to, a wall, a floor, or a furniture surface. Fig. 1a illustrates only the case where the projection surface is a projection curtain, but is not limited thereto.
In this embodiment, the projection surface may be the surface of the target object or the surface of another object in the environment where the tracking device is currently located. Therefore, compared with existing schemes that perform 3D visual detection and tracking based on LCD technology, the tracking device provided in the embodiment of the present application is smaller in volume and easier to maintain. This is because, in the prior art, the transistors on an LCD panel are not light-transmissive, so gaps exist between pixels and detail in dark portions is poor. Moreover, devices designed around LCD technology are large in volume and easily affected by environmental dust, so existing devices that perform 3D visual detection and tracking based on LCD technology are difficult to maintain. In the embodiment of the present application, the surface of the target object or the surface of another object in the current environment of the tracking device can be used as the projection surface, which reduces the cost of the device and eliminates the need to maintain a dedicated projection surface. In addition, there are no gaps between pixels of the projection image obtained in the embodiment of the present application, which helps improve the accuracy of target detection.
In the following embodiments, for convenience of description and distinction, in the case where an object (target object A) appears on the projection light of the projection module 12, a projection image formed when the projection light of the projection module 12 passes the target object A and projects the reference image onto a certain projection surface is defined as a first projection image; and a projection image formed by directly projecting the reference image onto a certain projection surface, without any object appearing on the projection light of the projection module 12, is defined as a second projection image. The first projection image contains the deformation pattern, and the second projection image does not. Therefore, the first projection image can reflect information of the target object A, while the second projection image cannot, and the target object A cannot be tracked based on the second projection image. Accordingly, the following embodiments focus on how the control module 15 uses the first projection image acquired by the image acquisition device 14 to track the target object A.
As shown in fig. 1b and 1c, the control module 15 is in communication connection with the image capturing device 14, and the control module 15 is electrically connected with the multi-axis pan/tilt head 13. The communication connection between the control module 15 and the image capturing device 14 may refer to the communication connection between the control module 15 and the projection module 12, which is not described herein again.
In this embodiment, the image acquisition device 14 may provide the acquired first projection image to the control module 15. Accordingly, the control module 15 may adjust the operating state of the multi-axis pan/tilt head 13 according to the first projection image acquired by the image acquisition device 14. When the working state of the multi-axis pan/tilt head 13 is changed, the multi-axis pan/tilt head 13 can rotate. The multi-axis pan-tilt 13 rotates to drive the image capturing device 14 to rotate, so that the pose of the image capturing device 14 can be adjusted, and the image capturing device 14 can track and capture the deformation pattern. Wherein the pose of the image-pickup device 14 includes the position and orientation of the image-pickup device 14. Because the deformation pattern is caused by the target object A, the deformation pattern is tracked and collected, and the target object A can be tracked. Moreover, the multi-axis pan-tilt 13 drives the image acquisition device 14 to rotate so as to adjust the pose of the image acquisition device 14, which is beneficial to expanding the tracking range of the target object a.
In the embodiment of the present application, adjusting the operating state of the multi-axis pan/tilt head 13 includes: the state of the multi-axis pan/tilt head 13 in at least one direction is adjusted. For example, for a three-axis head, the three-axis head may be adjusted to rotate in at least one of pitch, roll, and yaw. Accordingly, the image capturing device 14 can rotate in at least one of the pitch rotation, the roll rotation, and the yaw with the rotation of the three-axis pan/tilt head. For the multi-axis pan-tilt 13, the adjusted working state thereof can adjust the pose of the image capturing device 14 to capture the deformation pattern caused by the target object a at the subsequent time.
In the embodiment of the present application, in order to achieve tracking acquisition of the deformation pattern caused by the target object a, the image acquisition device 14 may adopt an image acquisition device with a high sampling rate. Preferably, the image capturing device 14 may capture a plurality of frames of the first projection image including the deformation pattern during the movement of the target object a within the projection range of the projection module 12. Accordingly, the sampling period of the image acquisition device 14 is less than the movement time of the target object a within the projection range of the projection module 12. That is, the moving time within the projection range of the projection module 12 may be Q times the sampling period of the image capturing device 14, where Q is greater than or equal to 2. In the present embodiment, a specific value of Q is not limited. For example, Q can be 3, 8, 10, 20, or 30.5, and so forth. In this way, the image-capturing device 14 may capture a plurality of frames of the first projection image.
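The sampling-rate condition above can be illustrated with a minimal sketch; the function name and the time values are assumptions for illustration, not part of the embodiment:

```python
def frames_capturable(movement_time_s: float, sampling_period_s: float) -> float:
    """Q: how many sampling periods of the image acquisition device fit into
    the time the target object moves within the projection range."""
    if sampling_period_s <= 0:
        raise ValueError("sampling period must be positive")
    return movement_time_s / sampling_period_s

# The image acquisition device can capture multiple frames of the first
# projection image only when Q >= 2.
q = frames_capturable(movement_time_s=0.5, sampling_period_s=0.1)
assert q >= 2
```

As the text notes, Q need not be an integer; any Q of at least 2 leaves room for more than one frame within the target's transit.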
In the embodiment of the present application, the implementation form of the target object A is not limited. In some embodiments, the target object A may be any moving object that appears on the projection light of the projection module 12. For example, the tracking device may be implemented as a handheld gimbal for carrying the image acquisition device 14, such as a handheld cell phone stand, a handheld camera stand, or a handheld video camera stand. The user may use the handheld gimbal to track any moving object that appears on the projection light of the projection module 12. Based on this, the image acquisition device 14 may, before acquiring the first projection image containing the deformation pattern caused by the target object A, acquire a second projection image formed by directly projecting the reference pattern onto a certain projection surface, and buffer the second projection image in the memory of the control module 15. Further, the control module 15 may determine, from a third projection image currently acquired by the image acquisition device 14 and the second projection image, whether the third projection image is deformed compared with the second projection image; if so, it determines that the target object A has entered the projection range of the projection module and takes the third projection image as the first projection image. Alternatively, the third projection image may be taken as the first frame of the first projection image. After that, the control module 15 starts to track the target object A, that is, starts to track and acquire the deformation pattern. The process by which the control module 15 tracks and acquires the deformation pattern is described in detail in later embodiments and is not detailed here.
In other embodiments, objects appearing on the projection rays of the projection module 12 are tracked when the objects are designated objects or types of objects. For example, in the security field, when an object appearing on the projection light of the projection module 12 is a person, the object is tracked as the target object a. In this application scenario, the tracking device may be implemented as a monitoring device, which may be deployed in a monitoring area. For another example, in the field of biological detection, when an object appearing on the projection light of the projection module 12 is a cell of a specified type, the object is tracked as the target object a; and so on. In this application scenario, the tracking device may be implemented as a medical detection device such as a microscope.
Based on the above analysis, the image acquisition device 14 may, before acquiring the first projection image containing the deformation pattern caused by the target object A, acquire a second projection image formed by directly projecting the reference pattern onto a certain projection surface, and buffer the second projection image in the memory of the control module 15. Further, the control module 15 may determine, from a third projection image currently acquired by the image acquisition device 14 and the second projection image, whether the third projection image is deformed compared with the second projection image; if so, it inputs the third projection image into the neural network model. Then, in the neural network model, the object type of the deformation pattern contained in the third projection image is calculated; if the object type of the deformation pattern contained in the third projection image is the specified type, it is determined that the target object A has entered the projection range of the projection module, and the third projection image is taken as the first projection image. Alternatively, the third projection image may be taken as the first frame of the first projection image. After that, the control module 15 starts to track the target object A, that is, starts to track and acquire the deformation pattern. The process by which the control module 15 tracks and acquires the deformation pattern is described in detail in later embodiments and is not detailed here.
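The gating logic described above — compare the current (third) projection image against the buffered second projection image, and only classify when a deformation is present — can be sketched as follows. The mean-difference test, the threshold value, and the classifier stub are illustrative assumptions, not the embodiment's actual detection or neural network:

```python
import numpy as np

def is_deformed(third_img: np.ndarray, second_img: np.ndarray,
                threshold: float = 10.0) -> bool:
    """Deformation test: mean absolute pixel difference against the
    buffered second (baseline) projection image."""
    diff = np.abs(third_img.astype(float) - second_img.astype(float))
    return diff.mean() > threshold

def detect_target(third_img, second_img, classify, specified_type):
    """Return third_img as the first projection image if a deformation is
    present and its object type matches the specified type; else None."""
    if not is_deformed(third_img, second_img):
        return None
    if classify(third_img) != specified_type:
        return None
    return third_img  # taken as the first frame of the first projection image

# usage with a stub classifier standing in for the neural network model
baseline = np.zeros((4, 4))
current = baseline.copy()
current[1:3, 1:3] = 255.0  # deformation caused by target object A
first = detect_target(current, baseline,
                      classify=lambda img: "person", specified_type="person")
```

In the unrestricted case of the earlier embodiment, the `classify` step would simply be omitted and any detected deformation would start tracking.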
Determining the object type of the target object by detecting the deformation pattern in the projection image can reduce the image-recognition workload compared with identifying the object directly from an image of the object captured by the image acquisition device. Moreover, the three-dimensional features of an object cannot be recognized directly from an image captured by an ordinary monocular image acquisition device, and acquiring the three-dimensional features by photographing the object with a depth camera or a binocular camera would undoubtedly increase the cost of the image acquisition device.
In the embodiment of the application, target detection is performed by using a deformation pattern in a projection image, and the deformation pattern can contain depth information and can be used for measuring three-dimensional characteristics of a target object. Therefore, in the embodiment of the application, by using the deformation pattern in the projection image, the three-dimensional feature of the target object can be measured, which helps to reduce the requirements of the image acquisition equipment, and thus helps to reduce the cost of the image acquisition equipment.
It should be noted that, in the embodiment of the present application, before analyzing the object type of the deformation pattern included in the third projection image by using the neural network model, the neural network model needs to be trained.
In the present embodiment, the model structure of the neural network model is not limited. Optionally, the neural network model may include: a convolutional layer, a pooling layer, and an activation function layer. A Sigmoid function, tanh function, or Relu function may be employed in the activation function layer. Optionally, the number of convolutional and pooling layers is equal. In the present embodiment, the specific number of convolutional layers and pooling layers is not limited. For example, the number of convolutional layers and pooling layers may be 2, 3, or 4 or even more.
In the embodiment of the present application, a network architecture of the initial neural network model may be preset. Optionally, the network architecture of the initial neural network model includes: the convolutional layers, the pooling layers, the number and arrangement order of these convolutional and pooling layers, and the hyper-parameters of each convolutional layer and pooling layer. The hyper-parameters of a convolutional layer include: the convolution kernel size k (kernel size), the feature-map edge padding size p (padding size), and the stride size s (stride size). The hyper-parameters of a pooling layer include the pooling kernel size K and the stride size S. The activation function layer may be a ReLU function:
f(x) = max(0, x)  (1)
for a neural network model, the output of each convolutional layer can be represented as:
x_{i+1} = f(w_i * x_i + b_i)  (2)
where w_i and b_i are the parameters of the neural network model to be trained, representing the weight and bias of each layer, respectively; x_i represents the input vector of the i-th layer (e.g., the input image of that layer). For a convolutional layer, the input image I may be convolved with a convolution kernel K, which may be expressed as:
S(i, j) = (I * K)(i, j) = Σ_m Σ_n I(m, n) K(i − m, j − n)  (3)
In formula (3), M represents the number of rows of pixels of the input image and N represents the number of columns of pixels of the input image; m is an integer with 0 < m < M, and n is an integer with 0 < n < N.
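Equations (1)–(3) can be checked with a direct NumPy implementation. Note that formula (3) as written is a true convolution (kernel flipped); the sketch below computes the cross-correlation commonly used in convolutional layers, which differs from (3) only by a flip of K. This is a minimal illustration, not the embodiment's actual network:

```python
import numpy as np

def relu(x):
    # Equation (1): f(x) = max(0, x)
    return np.maximum(0.0, x)

def conv2d(I, K):
    # Valid-mode sliding window over an M x N input image; equivalent to
    # formula (3) with the kernel K flipped (cross-correlation).
    M, N = I.shape
    k = K.shape[0]
    out = np.empty((M - k + 1, N - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(I[i:i + k, j:j + k] * K)
    return out

def conv_layer(x, w, b):
    # Equation (2): x_{i+1} = f(w_i * x_i + b_i)
    return relu(conv2d(x, w) + b)
```

For example, a 3×3 input convolved with a 2×2 all-ones kernel simply sums each 2×2 window, and a negative bias pushed through the ReLU zeroes out weak responses.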
In the present embodiment, the process of training the neural network model can be understood as a process of training the parameters w_i and b_i in the initial neural network model to obtain the weight w_i and bias b_i of each convolutional layer. In the embodiment of the present application, minimizing the loss function can be taken as the training target, and model training is performed using sample images to obtain the neural network model. A sample image is a projection image formed when the projection light of the projection module passes a specified object and projects the reference image onto the projection surface, where the specified object belongs to the specified type. The sample images may be one frame or multiple frames; multiple frames means 2 frames or more, and the specific number can be set flexibly according to actual requirements. In the embodiment of the present application, the source of the sample images is not limited; a sample image may be: a pre-acquired projection image formed when the projection light of the projection module passes a specified object and projects the reference image onto a projection surface; or an image from another three-dimensional image database or depth image database; and so on.
The loss function is determined according to the probability that the specified object obtained by model training belongs to the specified type and the actual probability that the specified object belongs to the specified type. Where the actual probability that a given object belongs to a given type may be 1, i.e., 100%. Alternatively, the loss function may be an absolute value of a difference between a probability that the specified object belongs to the specified type and an actual probability that the specified object belongs to the specified type, which are obtained by model training.
In order to more clearly illustrate the above neural network model training process, the model training process provided by the present embodiment is exemplarily illustrated below with reference to fig. 1 d. As shown in fig. 1e, the main steps of the model training process are as follows:
s1: and taking the sample image as an input image of the initial neural network model, and inputting the sample image into the initial neural network model.
S2: using the initial neural network model, the probability of the deformation pattern contained in the sample image under each object type is calculated.
S3: and respectively substituting the probability of the deformation pattern contained in the sample image under each object type and the actual probability of the deformation pattern contained in the sample image under each object type into a loss function, and calculating a loss function value. The type and the number of the object types output by the neural network model can be determined by the richness of the sample image.
S4: judging whether the calculated loss function value is less than or equal to the calculated loss function value of the last W times; if yes, go to step S5; if the determination result is negative, step S6 is executed. Wherein, W is an integer greater than or equal to 1, and the specific value can be flexibly set. For example, W may be equal to 5, 8, 10, etc., but is not limited thereto.
S5: and adjusting the parameters in the neural network model along the negative gradient direction of the parameters in the initial neural network model, taking the adjusted neural network model as the initial neural network model, and returning to execute the step S1.
S6: and taking the neural network model with the smallest loss function value in the last W times as a final neural network model. That is, the weight and offset for each layer when the loss function value is the smallest in the last W times are set as the final weight and offset.
Further, in the case where it is determined that the target object A appears within the projection range of the projection module 12, the target object A may be tracked, that is, the deformation pattern caused by the target object A may be tracked and acquired. In order for the image acquisition device 14 to track and acquire the deformation pattern, that is, to track the target object A, in this embodiment the control module 15 may adjust the working state of the multi-axis pan/tilt head 13 according to the first projection image acquired by the image acquisition device 14, so that the multi-axis pan/tilt head 13 drives the image acquisition device 14 to track and acquire the deformation pattern.
In some embodiments, an adjustment period may be set in the control module 15, and a timer or a counter is started to time the adjustment period. Each time the adjustment period elapses, the control module 15 may adjust the working state of the multi-axis pan/tilt head 13 according to the first projection images acquired by the image acquisition device 14 in the current adjustment period, so as to drive the image acquisition device 14 to track and acquire the deformation pattern in the next adjustment period. That is, for the multi-axis pan/tilt head 13, the adjusted working state enables the image acquisition device 14 to acquire the deformation pattern of the target object A in the next adjustment period.
Further, the control module 15 may calculate the motion information of the target object a according to the first projection image acquired by the image acquisition device 14 in the current adjustment cycle. The motion information of the target object a may include: at least one of displacement information, moving speed, moving direction, and acceleration information of the target object a.
Alternatively, the control module 15 may calculate the pixel difference of the target projection image and the initial projection image corresponding to the current adjustment cycle. The initial projection image corresponding to the current adjustment period may be a first projection image of a first frame acquired by the image acquisition device 14 in the current adjustment period, or may be a projection image formed by pixel averages of first projection images of previous N frames initially acquired by the image acquisition device 14 in the current adjustment period, where N is greater than or equal to 2 and is an integer. The target projection image is the other projection image acquired by the image acquisition device 14 in the current adjustment cycle in addition to the initial projection image. The number of target projection images may be 1 frame or more. The multi-frame means 2 frames or more than 2 frames. Further, the control module 15 may calculate the motion information of the target object a according to the pixel difference between the target projection image and the initial projection image and the pose of the image capturing apparatus 14 at the current adjustment period.
Under the condition that the number of the target projection images is multiple frames, the control module 15 can calculate the displacement change of the target object A according to the pixel difference between the two adjacent frames of target projection images and the initial projection image corresponding to the current adjustment period and the pose of the image acquisition equipment in the current adjustment period; and calculates the movement velocity and/or acceleration of the target object a based on the displacement change of the target object a and the sampling period of the image pickup device 14.
For the control module 15, the initial projection image acquired by the image acquisition device 14 in the current adjustment period may be buffered in memory and denoted as I_0. Further, the control module 15 may calculate the inter-frame pixel difference between each other target projection image acquired by the image acquisition device 14 in the current adjustment period and the initial projection image I_0: ΔI_i = I_i − I_0, where I_i is the i-th frame of first projection image acquired by the image acquisition device 14 in the current adjustment period, i.e., the (i − 1)-th frame of target projection image; i = 2, 3, …, M; and M is the total number of first projection images acquired by the image acquisition device 14 in the current adjustment period. From ΔI_i = I_i − I_0, a closed contour of the target object A may be acquired. Further, the control module 15 may calculate the displacement change (Δx, Δy) of the target object A from the pixel differences of two adjacent frames of target projection images relative to the initial projection image, i.e., from ΔI_{i+1} and ΔI_i, together with the pose of the image acquisition device 14 in the current adjustment period. Further, from the displacement change of the target object A and the sampling period T of the image acquisition device 14, the movement speed of the target object A can be calculated as

v = (Δx / T, Δy / T)

and the acceleration as

a = Δv / T.
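The displacement, velocity, and acceleration computation just described can be sketched as follows. The centroid of the thresholded difference region stands in for the closed contour, and the threshold value is an assumption; the camera-pose term is omitted for brevity:

```python
import numpy as np

def contour_centroid(frame, baseline, threshold=10.0):
    """Centroid (x, y) of the region where a target projection image I_i
    differs from the initial projection image I_0, i.e. where
    |Delta I_i| = |I_i - I_0| exceeds the threshold."""
    mask = np.abs(frame.astype(float) - baseline.astype(float)) > threshold
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def motion_info(frames, baseline, sampling_period):
    """Per-step displacement (dx, dy), velocity v = (dx/T, dy/T), and
    acceleration a = dv/T from consecutive target projection images."""
    centers = np.array([contour_centroid(f, baseline) for f in frames])
    displacements = np.diff(centers, axis=0)
    velocities = displacements / sampling_period
    accelerations = np.diff(velocities, axis=0) / sampling_period
    return displacements, velocities, accelerations
```

For instance, a deformation region that shifts two pixels rightward per frame at a 0.1 s sampling period yields a constant velocity and zero acceleration.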
Further, the control module 15 may adjust the working state of the multi-axis pan/tilt head according to the motion information of the target object a, so as to drive the image acquisition device 14 to track and acquire the deformation pattern in the next adjustment period. That is, for the multi-axis pan/tilt head 13, the adjusted operating state thereof enables the image capturing device 14 to capture the deformation pattern of the target object a in the next adjustment period.
Further, the control module 15 may calculate a target motion parameter value of the motor in the multi-axis pan/tilt head 13 according to the motion information of the target object a, and adjust the motion parameter of the motor in the multi-axis pan/tilt head 13 to the target motion parameter value, so as to adjust the working state of the multi-axis pan/tilt head 13. For the multi-axis pan/tilt head 13, the adjusted operating state thereof enables the image capturing device 14 to capture a deformation pattern of the target object a in the next adjustment period.
For the motor in the multi-axis pan/tilt head 13, the motion parameters may include: at least one of the acceleration, angular acceleration, and rotational speed of the motor in the multi-axis pan/tilt head 13. Accordingly, the target motion parameter values of the motor in the multi-axis pan/tilt head 13 may include: at least one of the target acceleration, target angular acceleration, and target rotational speed of the motor in the multi-axis pan/tilt head 13.
Further, the control module 15 may predict the position to which the target object a moves in the next adjustment period according to the motion information of the target object a; calculating the position of the generated deformation pattern according to the position to which the target object A moves in the next adjustment period; further, the control module 15 may calculate the pose of the image capturing device 14 corresponding to the next adjustment period according to the position of the generated deformation pattern; and calculating a target motion parameter value of a motor in the multi-axis pan/tilt head 13 according to the pose of the image acquisition device 14 corresponding to the next adjustment period and the pose of the image acquisition device 14 in the current adjustment period. Further, the control module 15 may control the multi-axis pan/tilt head 13 to adjust the motion parameter of the motor to the target motion parameter value, so as to adjust the working state of the multi-axis pan/tilt head 13, so that the adjusted working state may enable the image acquisition device 14 to track and acquire a deformation pattern of the target object a in the next adjustment period, thereby implementing tracking of the target object a in the next adjustment period.
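The predict-then-adjust step can be sketched as below; the constant-acceleration motion model and the angle-based motor command are illustrative assumptions, not the embodiment's actual control law:

```python
def predict_position(pos, velocity, acceleration, dt):
    """Position to which target object A is predicted to move after one
    adjustment period dt, assuming constant acceleration."""
    return tuple(p + v * dt + 0.5 * a * dt ** 2
                 for p, v, a in zip(pos, velocity, acceleration))

def target_motor_speed(current_angle_deg, required_angle_deg, adjustment_period_s):
    """Target rotational speed (deg/s) that moves one pan/tilt axis from its
    current pose to the pose required for the next adjustment period."""
    return (required_angle_deg - current_angle_deg) / adjustment_period_s
```

The predicted position would then be mapped to the expected location of the deformation pattern, from which the required pose of the image acquisition device 14, and hence the motor command, follows.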
It should be noted that, as shown in fig. 1b, the tracking device provided in the embodiment of the present application may further include: power supply components 17 and heat sink components 18, etc. The basic components and the structures of the basic components contained in different tracking devices are different, and the embodiments of the present application are only some examples.
In addition to the tracking device described above, embodiments of the present application also provide a target tracking system. As shown in fig. 2a, the system comprises: a projection module 22, a tracking device S20, and a projection surface S21 disposed in the physical environment in which the tracking device S20 is located. A projection module 22 for projecting a reference image having a predetermined pattern onto the projection surface.
In this embodiment, as shown in fig. 2a, the tracking device includes: a body 21, a multi-axis pan-tilt 23, and a control module 25. And the machine body 21 is used for mounting a multi-axis tripod head 23. And the multi-axis cloud platform 23 is used for carrying the image acquisition equipment 24 and driving the image acquisition equipment 24 to rotate. The image acquisition device 24 is configured to acquire a first projection image corresponding to the reference image. The reference image is projected outward by the projection module 22, and has a predetermined pattern. The first projection image includes a deformation pattern corresponding to the predetermined pattern. The deformation pattern is generated based on the target object. And the control module 25 is electrically connected to the multi-axis pan/tilt head 23 and is used for adjusting the working state of the multi-axis pan/tilt head 23 according to the first projection image acquired by the image acquisition equipment 24 so as to drive the image acquisition equipment to track and acquire the deformation pattern.
In the present embodiment, the relationship between the projection module 22 and the tracking device is not limited. In some embodiments, the projection module 22 is a stand-alone projection device and is disposed in the physical environment in which the tracking device is located. The projection module 22 is communicatively connected to the control module 25. The control module 25 may instruct the projection module 22 to project a reference image outward, the reference image having a predetermined pattern.
In another embodiment, the body 21 may be used to mount the projection module 22 and the multi-axis pan/tilt head 23. The projection module 22 may be fixed on the body 21. Regarding the fixing manner of the projection module 22 on the body 21, reference may be made to the related description of fig. 1a, and details are not repeated herein.
In the present embodiment, the multi-axis pan/tilt head 23 is rotatably connected to the body 21. The multi-axis pan/tilt head 23 refers to a pan/tilt head having a plurality of rotation axes, where a plurality means 2 or more. In the present embodiment, the multi-axis pan/tilt head 23 is used to mount the image acquisition device 24. As to the manner of mounting the image acquisition device 24 on the multi-axis pan/tilt head 23, reference may be made to the related description of fig. 1a, which is not repeated herein.
In the present embodiment, in a case where the image capturing apparatus 24 is mounted on the multi-axis pan/tilt head 23, the image capturing apparatus 24 may rotate along with the rotation of the multi-axis pan/tilt head 23, that is, the multi-axis pan/tilt head 23 may drive the image capturing apparatus 24 to rotate. Regarding the working principle and the implementation form of the multi-axis pan/tilt head 23 and the implementation form of the image capturing device 24, reference may be made to the relevant contents of the above embodiments, which are not described herein again.
In this embodiment, the tracking device further includes: a control module 25. For a detailed implementation of the control module 25, reference is made to the above-mentioned embodiments of the tracking device. In this embodiment, the computer instructions of the tracking device S20 are executed primarily by the control module 25.
In this embodiment, the control module 25 may instruct the projection module 22 to project a reference image outward, the reference image having a predetermined pattern. The predetermined pattern may be any pattern. For example, the predetermined pattern may be a stripe pattern, a code pattern, a predetermined character pattern, etc., but is not limited thereto.
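As one hypothetical example of a predetermined pattern, a stripe (fringe) reference image could be generated as follows; the resolution and stripe period are arbitrary illustrative choices, not values specified by the embodiment:

```python
import numpy as np

def stripe_pattern(height, width, period_px=16):
    """Reference image with vertical stripes: alternating dark (0) and
    bright (255) bands, each band period_px // 2 pixels wide."""
    cols = np.arange(width)
    bands = ((cols // (period_px // 2)) % 2) * 255
    return np.tile(bands, (height, 1)).astype(np.uint8)
```

A coding pattern or a predetermined character pattern would serve equally well, as the text notes; what matters is that the pattern deforms measurably when an object intercepts the projection light.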
Alternatively, the control module 25 may be communicatively coupled to the projection module 22. Accordingly, the control module 25 may instruct the projection module 22 to project the reference image having the predetermined pattern outward by an instruction. Alternatively, the control module 25 may send a projection instruction to the projection module 22, the projection instruction being used to instruct the projection module 22 to project the reference image having the predetermined pattern outward. Accordingly, the projection module 22 projects the reference image having the predetermined pattern outward upon receiving the projection instruction.
Alternatively, the control module 25 is electrically connected to the projection module 22. Accordingly, the control module 25 may instruct the projection module 22 to project the reference image having the predetermined pattern outward through the electric signal. Wherein the electrical signal may be a high level or a low level signal. Alternatively, the control module 25 may output an electrical signal to the projection module 22, the electrical signal being used to instruct the projection module 22 to project the reference image having the predetermined pattern outward. Accordingly, the projection module 22 projects a reference image having a predetermined pattern outward upon receiving the electrical signal.
Regardless of the manner in which the control module 25 is connected to the projection module 22, the projection module 22 can project a reference image having the predetermined pattern. In the embodiment of the present application, the specific implementation form of the projection module 22 is not limited. For a description of the implementation form and the operation principle of the projection module 22, reference may be made to the related contents of the above-mentioned tracking device embodiment, and details are not described herein again.
In practical applications, in the case where the projection module 22 projects the reference image outward, if no object appears on the projection light of the projection module 22, the projection image of the reference image on the projection surface S21 has the predetermined pattern. If an object appears on the projection light of the projection module 22, the projection image corresponding to the predetermined pattern is deformed. For convenience of description, in the embodiment of the present application, the pattern formed by the deformation of the projection image corresponding to the predetermined pattern is defined as a deformation pattern. The deformation pattern is generated based on the object appearing on the projection light of the projection module 22, and moves along with the movement of the object. Based on this, in the embodiment of the present application, tracking of an object appearing on the projection light of the projection module 22 can be realized based on the deformation pattern. For convenience of description, an object appearing on the projection light of the projection module 22 is defined as a target object A. The target object A may be a moving object.
Based on the above analysis, in the present embodiment, the image acquisition device 24 may acquire a projection image corresponding to the reference image. The projection image corresponding to the reference image may include: a projection image formed by directly projecting the reference image onto the projection surface S21. Of course, the projection image corresponding to the reference image may also include: in the case where an object (target object A) appears on the projection light of the projection module 22, a projection image formed on the projection surface S21 by the projection light of the projection module 22 projecting the reference image through the target object A. In this embodiment, the projection surface S21 may be a projection screen, such as a projection curtain; or another object surface in the current environment, such as, but not limited to, a wall, a floor, or a furniture surface. Fig. 2 illustrates only the projection screen as the projection surface S21, but the present invention is not limited thereto.
In the following embodiments, for convenience of description and distinction, in the case where an object (target object A) appears on the projection light of the projection module 22, a projection image formed by the projection light of the projection module 22 projecting the reference image through the target object A onto a certain projection surface is defined as a first projection image; and a projection image formed by directly projecting the reference image onto a certain projection surface, in the case where no object appears on the projection light of the projection module 22, is defined as a second projection image. The first projection image includes the deformation pattern, and the second projection image does not. Therefore, the first projection image can reflect information of the target object A, whereas the second projection image cannot, and the target object A cannot be tracked based on the second projection image. Accordingly, the following embodiments focus on how the control module 25 realizes tracking of the target object A based on the first projection image acquired by the image acquisition device 24.
The control module 25 is in communication connection with the image acquisition device 24, and the control module 25 is electrically connected with the multi-axis pan-tilt 23. Wherein the image acquisition device 24 may provide the acquired first projection image to the control module 25. Accordingly, the control module 25 may adjust the operating state of the multi-axis pan/tilt head 23 according to the first projection image that has been acquired by the image acquisition device 24. When the working state of the multi-axis tripod head 23 is changed, the multi-axis tripod head 23 can rotate. The multi-axis pan-tilt 23 rotates to drive the image capturing device 24 to rotate, so that the pose of the image capturing device 24 can be adjusted, and the image capturing device 24 can track and capture the deformation pattern. Wherein the pose of image capture device 24 includes the position and orientation of image capture device 24. Because the deformation pattern is caused by the target object A, the deformation pattern is tracked and collected, and the target object A can be tracked. Moreover, the multi-axis pan-tilt 23 drives the image acquisition device 24 to rotate so as to adjust the pose of the image acquisition device 24, which is beneficial to expanding the tracking range of the target object A.
In the embodiment of the present application, in order to achieve tracking acquisition of the deformation pattern caused by the target object A, the image acquisition device 24 may be an image acquisition device with a high sampling rate. Preferably, the image acquisition device 24 may acquire a plurality of frames of the first projection image containing the deformation pattern during the movement of the target object A within the projection range of the projection module 22. Accordingly, the sampling period of the image acquisition device 24 is less than the movement time of the target object A within the projection range of the projection module 22. That is, the movement time within the projection range of the projection module 22 may be Q times the sampling period of the image acquisition device 24, where Q is greater than or equal to 2. In this way, the image acquisition device 24 may acquire a plurality of frames of the first projection image.
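The sampling-rate condition above can be sketched as a simple check; the function name and example values are illustrative, not taken from this application:

```python
def sampling_ratio(dwell_time_s: float, sampling_period_s: float) -> float:
    """Q from the text: the target's movement time within the projection
    range, expressed in multiples of the camera's sampling period.
    Acquiring multiple first projection images requires Q >= 2."""
    if sampling_period_s <= 0:
        raise ValueError("sampling period must be positive")
    return dwell_time_s / sampling_period_s

# A target that stays in the projection range for 1 s, sampled every 0.25 s:
q = sampling_ratio(1.0, 0.25)
print(q, q >= 2)  # 4.0 True
```

Note that Q need not be an integer; the text later gives 30.5 as a valid value.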
In the embodiment of the present application, the implementation form of the target object A is not limited. In some embodiments, the target object A may be any moving object that appears on the projection light of the projection module 22. Based on this, before acquiring the first projection image containing the deformation pattern caused by the target object A, the image acquisition device 24 may further acquire a second projection image formed by directly projecting the reference image onto a certain projection surface, and buffer the second projection image in the memory of the control module 25. Further, the control module 25 may determine, according to a third projection image currently acquired by the image acquisition device 24 and the second projection image, whether the third projection image is deformed compared with the second projection image; if so, the control module 25 determines that the target object A has entered the projection range of the projection module, and takes the third projection image as the first projection image. Alternatively, the third projection image may be taken as the first frame of the first projection image. After that, the control module 25 starts to track the target object A, i.e., starts to track and acquire the deformation pattern. The process by which the control module 25 tracks and acquires the deformation pattern will be described in detail in the following embodiments, and is not detailed here.
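The deformation judgment between the currently acquired (third) projection image and the buffered second projection image can be sketched as a per-pixel comparison. This is a minimal numpy sketch assuming grayscale frames; the threshold values are illustrative tuning parameters, not values given in this application:

```python
import numpy as np

def is_deformed(baseline: np.ndarray, current: np.ndarray,
                diff_threshold: int = 10, pixel_fraction: float = 0.01) -> bool:
    """Judge whether `current` (the third projection image) is deformed
    compared with `baseline` (the buffered second projection image).
    Both are grayscale frames of identical shape."""
    diff = np.abs(current.astype(np.int32) - baseline.astype(np.int32))
    changed = np.count_nonzero(diff > diff_threshold)
    # Deformed if enough pixels deviate noticeably from the baseline.
    return changed > pixel_fraction * diff.size

# A periodic stripe baseline vs. a frame with a locally shifted region:
base = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (8, 8))   # shape (8, 64)
frame = base.copy()
frame[2:6, 10:30] = np.roll(frame[2:6, 10:30], 3, axis=1)       # local shift
print(is_deformed(base, frame), is_deformed(base, base))  # True False
```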
In other embodiments, an object appearing on the projection light of the projection module 22 is tracked only when the object is a specified object or an object of a specified type. Based on this, before acquiring the first projection image containing the deformation pattern caused by the target object A, the image acquisition device 24 may also acquire a second projection image formed by directly projecting the reference image onto a certain projection surface, and buffer the second projection image in the memory of the control module 25. Further, the control module 25 may determine, according to a third projection image currently acquired by the image acquisition device 24 and the second projection image, whether the third projection image is deformed compared with the second projection image; if so, the control module 25 inputs the third projection image into a neural network model. Then, in the neural network model, the object type corresponding to the deformation pattern contained in the third projection image is calculated; if that object type is the specified type, it is determined that the target object A has entered the projection range of the projection module, and the third projection image is taken as the first projection image. Alternatively, the third projection image may be taken as the first frame of the first projection image. After that, the control module 25 starts to track the target object A, i.e., starts to track and acquire the deformation pattern. The process by which the control module 25 tracks and acquires the deformation pattern will be described in detail in the following embodiments, and is not detailed here.
It should be noted that, in the embodiment of the present application, before analyzing the object type of the deformation pattern included in the third projection image by using the neural network model, the neural network model needs to be trained.
In the embodiment of the application, the loss function can be minimized to be a training target, and model training is performed by using the sample image to obtain the neural network model. The sample image comprises a projection image formed by projecting the reference image on the projection surface by the projection light of the projection module through the specified object. The loss function is determined according to the probability that the specified object obtained by model training belongs to the specified type and the actual probability that the specified object belongs to the specified type. Where the actual probability that a given object belongs to a given type may be 1, i.e., 100%. Alternatively, the loss function may be an absolute value of a difference between a probability that the specified object belongs to the specified type and an actual probability that the specified object belongs to the specified type, which are obtained by model training.
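As a minimal sketch, the absolute-difference loss described above can be written as follows (names are illustrative):

```python
def classification_loss(predicted_prob: float, actual_prob: float = 1.0) -> float:
    """Absolute difference between the probability the model assigns to
    the specified type and the actual probability (1, i.e. 100%, for a
    sample known to contain the specified object)."""
    return abs(predicted_prob - actual_prob)

# Training minimizes this loss; a confident correct prediction gives 0:
print(classification_loss(0.25))  # 0.75
print(classification_loss(1.0))   # 0.0
```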
For the description of the network architecture of the sample image and the neural network model and the description of the specific process of training the neural network model, reference may be made to the related contents of the above-mentioned tracking device embodiment, which are not described herein again.
Further, in the case that it is determined that the target object a appears within the projection range of the projection module 22, the target object a may be tracked, that is, a deformation pattern caused by the target object a may be tracked and collected. In order to realize the tracking and acquisition of the image acquisition device 24 on the deformation pattern, that is, to realize the tracking on the target object a, in this embodiment, the control module 25 may adjust the working state of the multi-axis pan/tilt head 23 according to the first projection image acquired by the image acquisition device 24, so that the multi-axis pan/tilt head 23 drives the image acquisition device 24 to perform the tracking and acquisition on the deformation pattern.
In some embodiments, an adjustment period may be set in the control module 25, and a timer or a counter may be started to time the adjustment period. Each time the adjustment period is reached, the control module 25 may adjust the working state of the multi-axis pan/tilt head 23 according to the first projection image acquired by the image acquisition device 24 in the current adjustment period, so as to drive the image acquisition device 24 to perform tracking acquisition of the deformation pattern in the next adjustment period. That is, for the multi-axis pan/tilt head 23, the adjusted working state enables the image acquisition device 24 to acquire the deformation pattern caused by the target object A in the next adjustment period.
Further, the control module 25 may calculate the motion information of the target object a based on the first projection image acquired by the image acquisition device 24 in the current adjustment period. The motion information of the target object a may include: at least one of displacement information, moving speed, moving direction, and acceleration information of the target object a.
Alternatively, the control module 25 may calculate the pixel difference between the target projection image and the initial projection image corresponding to the current adjustment period. The initial projection image corresponding to the current adjustment period may be the first frame of the first projection image acquired by the image acquisition device 24 in the current adjustment period, or may be a projection image formed by the per-pixel averages of the first N frames of the first projection image initially acquired by the image acquisition device 24 in the current adjustment period, where N is an integer greater than or equal to 2. The target projection image is any projection image acquired by the image acquisition device 24 in the current adjustment period other than the initial projection image. The number of target projection images may be one frame or multiple frames, where multiple frames means 2 frames or more. Further, the control module 25 may calculate the motion information of the target object A according to the pixel difference between the target projection image and the initial projection image and the pose of the image acquisition device 24 in the current adjustment period.
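The two ways of forming the initial projection image, and the pixel difference against a target projection image, can be sketched as follows. This is a minimal numpy sketch with illustrative frame contents; the real frames would be camera captures:

```python
import numpy as np

def initial_projection_image(frames: list, n: int = 1) -> np.ndarray:
    """Initial projection image for the current adjustment period.

    n == 1: use the first frame acquired in the period directly.
    n >= 2: use the per-pixel mean of the first n frames, as the text allows.
    """
    if n < 1 or n > len(frames):
        raise ValueError("n out of range")
    stack = np.stack([f.astype(np.float64) for f in frames[:n]])
    return stack.mean(axis=0)

def pixel_difference(target: np.ndarray, initial: np.ndarray) -> np.ndarray:
    """Per-pixel difference between a target projection image and the
    initial projection image."""
    return target.astype(np.float64) - initial

# Illustrative 4x4 frames with constant brightness 10, 20, 60:
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 20, 60)]
init = initial_projection_image(frames, n=2)   # mean of the first 2 frames
diff = pixel_difference(frames[2], init)
print(float(init[0, 0]), float(diff[0, 0]))  # 15.0 45.0
```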
In the case where the number of target projection images is multiple frames, the control module 25 may calculate the displacement change of the target object A according to the pixel differences between two adjacent frames of the target projection images and the initial projection image corresponding to the current adjustment period, together with the pose of the image acquisition device in the current adjustment period; and may calculate the moving speed and/or acceleration of the target object A according to the displacement change of the target object A and the sampling period of the image acquisition device 24.
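The speed and acceleration computation from per-frame displacement changes and the sampling period can be sketched as follows (units and sample values are illustrative):

```python
def speed_and_acceleration(displacements: list, sampling_period_s: float):
    """Estimate motion information from displacement changes.

    displacements: displacement of the target between two adjacent
    target projection images, one value per frame pair (e.g. metres).
    Speed is displacement over one sampling period; acceleration is the
    change in speed over one sampling period.
    """
    speeds = [d / sampling_period_s for d in displacements]
    accels = [(b - a) / sampling_period_s for a, b in zip(speeds, speeds[1:])]
    return speeds, accels

# 0.25 m, then 0.5 m, of displacement between frames sampled every 0.125 s:
speeds, accels = speed_and_acceleration([0.25, 0.5], 0.125)
print(speeds)  # [2.0, 4.0] m/s
print(accels)  # [16.0] m/s^2
```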
Further, the control module 25 may adjust the working state of the multi-axis pan/tilt head according to the motion information of the target object a, so as to drive the image acquisition device 24 to track and acquire the deformation pattern in the next adjustment period. That is, for the multi-axis pan/tilt head 23, the adjusted operating state thereof enables the image capturing device 24 to capture the deformation pattern of the target object a in the next adjustment period.
Further, the control module 25 may calculate a target motion parameter value of the motor in the multi-axis pan/tilt head 23 according to the motion information of the target object a, and adjust the motion parameter of the motor in the multi-axis pan/tilt head 23 to the target motion parameter value, so as to adjust the working state of the multi-axis pan/tilt head 23. For the multi-axis pan/tilt head 23, the adjusted operating state thereof enables the image capturing device 24 to capture a deformation pattern of the target object a in the next adjustment period.
For the motor in the multi-axis pan/tilt head 23, the motion parameters thereof may include: at least one of acceleration, angular acceleration, and rotational speed of the motor in the multi-axis pan/tilt head 23. Accordingly, the target motion parameter values of the motor in the multi-axis pan/tilt head 23 may include: at least one of a target acceleration, a target angular acceleration, and a target rotational speed of the motor in the multi-axis pan/tilt head 23.
Further, the control module 25 may predict the position to which the target object A will move in the next adjustment period according to the motion information of the target object A; calculate the position of the resulting deformation pattern according to the position to which the target object A will move in the next adjustment period; further, the control module 25 may calculate the pose of the image capturing device 24 corresponding to the next adjustment period according to the position of the resulting deformation pattern; and calculate the target motion parameter value of the motor in the multi-axis pan/tilt head 23 according to the pose of the image capturing device 24 corresponding to the next adjustment period and the pose of the image capturing device 24 in the current adjustment period. Further, the control module 25 may control the multi-axis pan/tilt head 23 to adjust the motion parameter of the motor to the target motion parameter value, so as to adjust the working state of the multi-axis pan/tilt head 23, so that the adjusted working state enables the image capturing device 24 to track and capture the deformation pattern caused by the target object A in the next adjustment period, thereby achieving tracking of the target object A in the next adjustment period.
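The prediction-and-adjustment step can be sketched under the simplifying assumptions of constant-velocity motion and a single yaw axis; this planar geometry is an illustration, not the method claimed in this application:

```python
import math

def predict_position(pos, velocity, dt):
    """Constant-velocity prediction of where the target (and hence the
    deformation pattern) will be at the end of the next adjustment period."""
    return tuple(p + v * dt for p, v in zip(pos, velocity))

def target_yaw_speed(current_yaw_rad, predicted_xy, camera_xy, dt):
    """Yaw speed the pan-tilt motor should hold so the camera faces the
    predicted pattern position after one adjustment period."""
    dx = predicted_xy[0] - camera_xy[0]
    dy = predicted_xy[1] - camera_xy[1]
    desired_yaw = math.atan2(dy, dx)            # pose needed next period
    return (desired_yaw - current_yaw_rad) / dt  # target motion parameter

# Pattern at (1, 0) m moving at 0.5 m/s along +y; adjustment period 2 s:
nxt = predict_position((1.0, 0.0), (0.0, 0.5), 2.0)   # -> (1.0, 1.0)
w = target_yaw_speed(0.0, nxt, (0.0, 0.0), 2.0)
print(round(w, 4))  # 0.3927 rad/s (pi/4 swept over 2 s)
```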
In order to facilitate understanding of the above target tracking process, the predetermined pattern is taken as a stripe pattern, and the target tracking process is exemplarily described below with reference to fig. 2 b.
As shown in fig. 2b, when a ball moves within the projection range of the projection module 22, the projection module projects a stripe pattern a, and the stripe pattern a passes through the ball and is projected onto the projection surface S21 to form a projection image D. The projection image D contains a pattern of the stripe pattern a deformed by passing through the ball. The image acquisition device 24 acquires the projection image D. The control module 25 may input the projection image D into a neural network model, and recognize through the neural network model that the object is spherical. Optionally, the control module 25 may also detect a dimple point B on the ball and a crack C on the ball. Meanwhile, the control module 25 may control the motion of the multi-axis pan-tilt according to the acquired projection image D, so as to drive the image acquisition device 24 to track and acquire the deformation pattern. Since the deformation pattern is caused by the ball, the motion trajectory of the deformation pattern reflects the motion trajectory of the ball; tracking and acquiring the deformation pattern thus realizes tracking of the ball.
In addition to the above tracking device and system embodiments, the present application embodiment also provides a target tracking method, and the following exemplarily illustrates the target tracking method provided by the present application embodiment from the perspective of the above control module.
Fig. 3 is a schematic flowchart of a target tracking method according to an embodiment of the present application. As shown in fig. 3, the method includes:
301. controlling the projection module to project the reference image outwards; the reference image has a predetermined pattern.
302. Controlling image acquisition equipment carried on the multi-axis pan-tilt head to acquire a first projection image corresponding to the reference image; the first projection image includes: a deformed pattern of the predetermined pattern; the deformation pattern is generated based on the target object.
303. And adjusting the working state of the multi-axis holder according to the first projection image acquired by the image acquisition equipment so as to drive the image acquisition equipment to track and acquire the deformation pattern.
For the implementation and connection structure of the projection module, the multi-axis pan-tilt, and the image capturing device, reference may be made to the related contents of the above tracking device embodiment, which is not described herein again.
The multi-axis tripod head can rotate around the rotating shaft thereof, and the rotatable direction is determined by the rotating direction of the rotating shaft contained in the multi-axis tripod head. Because the image acquisition equipment is carried on the multi-axis tripod head, the image acquisition equipment also rotates along with the rotation of the multi-axis tripod head.
In this embodiment, the projection module may be controlled to project a reference image outward, the reference image having a predetermined pattern. The predetermined pattern may be any pattern. For example, the predetermined pattern may be a stripe pattern, a code pattern, a predetermined character pattern, etc., but is not limited thereto. For the implementation of generating the predetermined pattern and controlling the projection module by the control module, reference may be made to the related contents of the above embodiments, which are not described herein again.
In practical applications, in the case where the projection module projects the reference image outward, if no object appears on the projection light of the projection module, the projection image of the reference image on the projection surface also has the predetermined pattern. If an object appears on the projection light of the projection module, the projection image corresponding to the predetermined pattern is deformed. For convenience of description, in the embodiment of the present application, the pattern formed by the deformation of the projection image corresponding to the predetermined pattern is defined as a deformation pattern. The deformation pattern is generated based on the object appearing on the projection light of the projection module, and moves along with the movement of the object. Based on this, in the embodiment of the present application, tracking of an object appearing on the projection light of the projection module can be realized based on the deformation pattern. For convenience of description, an object appearing on the projection light of the projection module is defined as a target object. The target object may be a moving object.
Based on the above analysis, in the present embodiment, the image acquisition device may be controlled to acquire a projection image corresponding to the reference image. The projection image corresponding to the reference image may include: a projection image formed by directly projecting the reference image onto a certain projection surface, i.e., a projection image formed in the case where no object appears on the projection light of the projection module.
Of course, the projection image corresponding to the reference image may also include: in the case where an object (target object) appears on the projection light of the projection module, a projection image formed on a certain projection surface by the projection light of the projection module projecting the reference image through the target object. For the implementation of the projection surface, reference may be made to the related contents of the above embodiments of the tracking device, which are not described herein again.
In the following embodiments, for convenience of description and distinction, in the case where an object (target object) appears on the projection light of the projection module, a projection image formed by the projection light of the projection module projecting the reference image through the target object onto a certain projection surface is defined as a first projection image; and a projection image formed by directly projecting the reference image onto a certain projection surface, in the case where no object appears on the projection light of the projection module, is defined as a second projection image. The first projection image includes the deformation pattern, and the second projection image does not. Therefore, the first projection image can reflect information of the target object, whereas the second projection image cannot, and the target object cannot be tracked based on the second projection image. Accordingly, the following embodiments focus on realizing tracking of the target object based on the first projection image acquired by the image acquisition device.
In this embodiment, the working state of the multi-axis pan/tilt head may be adjusted according to the first projection image acquired by the image acquisition device. When the working state of the multi-axis tripod head is changed, the multi-axis tripod head can rotate. The multi-axis cradle head can drive the image acquisition equipment to rotate, so that the pose of the image acquisition equipment can be adjusted, and the image acquisition equipment can track and acquire the deformation pattern. Wherein the pose of the image capture device comprises the position and orientation of the image capture device. The deformation pattern is caused by the target object, so that the deformation pattern is tracked and collected, and the target object can be tracked. Moreover, the multi-axis cloud deck drives the image acquisition equipment to rotate, so that the pose of the image acquisition equipment can be adjusted, and the target object tracking range can be expanded.
For the multi-axis tripod head, the adjusted working state can adjust the pose of the image acquisition equipment to acquire the deformation pattern caused by the target object at the subsequent moment.
In the embodiment of the present application, in order to achieve tracking acquisition of a deformation pattern caused by a target object, an image acquisition device with a high sampling rate may be adopted as the image acquisition device. Preferably, the image acquisition device can acquire a plurality of frames of first projection images containing the deformation pattern during the movement of the target object within the projection range of the projection module. Accordingly, the sampling period of the image acquisition device is smaller than the movement time of the target object within the projection range of the projection module. Namely, the moving time in the projection range of the projection module can be Q times of the sampling period of the image acquisition equipment, and Q is more than or equal to 2. In the present embodiment, a specific value of Q is not limited. For example, Q can be 3, 8, 10, 20, or 30.5, and so forth. In this way, the image capturing device may capture a plurality of frames of the first projection image.
In the embodiment of the present application, the implementation form of the target object is not limited. In some embodiments, the target object may be any moving object that appears on the projection light of the projection module. Based on this, before acquiring the first projection image containing the deformation pattern caused by the target object, the image acquisition device may also acquire a second projection image formed by directly projecting the reference image onto a certain projection surface. Further, it may be determined, according to a third projection image currently acquired by the image acquisition device and the second projection image, whether the third projection image is deformed compared with the second projection image; if so, it is determined that the target object has entered the projection range of the projection module, and the third projection image is taken as the first projection image. Alternatively, the third projection image may be taken as the first frame of the first projection image. After that, tracking of the target object begins, i.e., tracking acquisition of the deformation pattern begins. The process of tracking and acquiring the deformation pattern will be described in detail in the following embodiments, and is not detailed here.
In other embodiments, an object appearing on the projection light of the projection module is tracked only when the object is a specified object or an object of a specified type. Based on this, before acquiring the first projection image containing the deformation pattern caused by the target object, the image acquisition device may also acquire a second projection image formed by directly projecting the reference image onto a certain projection surface. Further, it may be determined, according to a third projection image currently acquired by the image acquisition device and the second projection image, whether the third projection image is deformed compared with the second projection image; if so, the third projection image is input into a neural network model. Then, in the neural network model, the object type corresponding to the deformation pattern contained in the third projection image is calculated; if that object type is the specified type, it is determined that the target object has entered the projection range of the projection module, and the third projection image is taken as the first projection image. Alternatively, the third projection image may be taken as the first frame of the first projection image. After that, tracking of the target object begins, i.e., tracking acquisition of the deformation pattern begins. The process of tracking and acquiring the deformation pattern will be described in detail in the following embodiments, and is not detailed here.
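The two-stage gating described above (deformation check first, then type check by the neural network model) can be sketched as follows; the `classify` callable is a stand-in for the trained model, and all names and thresholds are illustrative:

```python
import numpy as np

def should_start_tracking(baseline: np.ndarray, current: np.ndarray,
                          classify, specified_type: str,
                          diff_threshold: int = 10) -> bool:
    """Gate tracking on both conditions from the text: the current frame
    is deformed relative to the baseline second projection image, and
    the deformation corresponds to an object of the specified type."""
    diff = np.abs(current.astype(np.int32) - baseline.astype(np.int32))
    if not np.any(diff > diff_threshold):
        return False   # no deformation: still a second projection image
    return classify(current) == specified_type

base = np.zeros((8, 8), dtype=np.uint8)
frame = base.copy()
frame[2:5, 2:5] = 200                    # deformed region
fake_model = lambda img: "ball"          # stand-in for the neural network
print(should_start_tracking(base, frame, fake_model, "ball"))  # True
```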
It should be noted that, in the embodiment of the present application, before analyzing the object type of the deformation pattern included in the third projection image by using the neural network model, the neural network model needs to be trained.
In the embodiment of the application, the loss function can be minimized to be a training target, and model training is performed by using the sample image to obtain the neural network model. The sample image comprises a projection image formed by projecting the reference image on the projection surface by the projection light of the projection module through the specified object. The sample image can be one frame or a plurality of frames, the plurality of frames refer to 2 frames or more than 2 frames, and the specific values of the number of the frames can be flexibly set according to actual requirements. The loss function is determined according to the probability that the specified object obtained by model training belongs to the specified type and the actual probability that the specified object belongs to the specified type. Where the actual probability that a given object belongs to a given type may be 1, i.e., 100%. Alternatively, the loss function may be an absolute value of a difference between a probability that the specified object belongs to the specified type and an actual probability that the specified object belongs to the specified type, which are obtained by model training.
Further, in the case that it is determined that the target object appears within the projection range of the projection module, the target object may be tracked, that is, a deformation pattern caused by the target object may be tracked and collected. In order to realize the tracking and acquisition of the image acquisition equipment on the deformation pattern, namely realize the tracking of the target object, in this embodiment, the working state of the multi-axis cradle head can be adjusted according to the first projection image acquired by the image acquisition equipment, so that the multi-axis cradle head drives the image acquisition equipment to track and acquire the deformation pattern.
In some embodiments, an adjustment period may be set, and a timer or counter may be started to time it. When an adjustment period ends, the working state of the multi-axis pan-tilt is adjusted according to the first projection images acquired by the image acquisition device during the current adjustment period, so as to drive the image acquisition device to track and acquire the deformation pattern in the next adjustment period. That is, the adjusted working state of the multi-axis pan-tilt enables the image acquisition device to acquire the deformation pattern caused by the target object in the next adjustment period.
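The period-by-period adjustment described above can be sketched as a loop. The snippet below is an illustration only: the camera and gimbal stubs and the `compute_state` placeholder are hypothetical interfaces, and a frame counter stands in for the timer so the sketch stays deterministic.

```python
class StubCamera:
    """Hypothetical camera: capture() yields successive first projection images."""
    def __init__(self, frames):
        self._frames = iter(frames)
    def capture(self):
        return next(self._frames)

class StubGimbal:
    """Hypothetical multi-axis gimbal: apply() records each working state."""
    def __init__(self):
        self.states = []
    def apply(self, state):
        self.states.append(state)

def compute_state(frames):
    # Placeholder for the real computation (motion information of the
    # target object -> motor parameter values); here it just summarises
    # the frames collected in the period.
    return {"n_frames": len(frames)}

def tracking_loop(camera, gimbal, frames_per_period, n_periods):
    # Counter-based stand-in for the timer/counter in the text: each
    # adjustment period collects that period's first projection images,
    # then the gimbal's working state is adjusted so the camera can
    # track and acquire the deformation pattern in the next period.
    for _ in range(n_periods):
        frames = [camera.capture() for _ in range(frames_per_period)]
        gimbal.apply(compute_state(frames))
```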
Further, motion information of the target object may be calculated from the first projection images acquired by the image acquisition device in the current adjustment period. The motion information of the target object may include at least one of displacement information, movement speed, movement direction, and acceleration information of the target object.
Optionally, pixel differences between the initial projection image corresponding to the current adjustment period and each target projection image may be calculated. The initial projection image corresponding to the current adjustment period may be the first frame of the first projection images acquired by the image acquisition device in the current adjustment period, or a projection image formed by the per-pixel average of the first N frames initially acquired by the image acquisition device in the current adjustment period, where N is an integer greater than or equal to 2. A target projection image is any projection image, other than the initial projection image, acquired by the image acquisition device in the current adjustment period; there may be one or more target projection images, where "more" refers to 2 or more frames. Further, the motion information of the target object can be calculated from the pixel differences between the target projection images and the initial projection image, together with the pose of the image acquisition device in the current adjustment period.
When there are multiple frames of target projection images, the displacement change of the target object can be calculated from the pixel differences between each of two adjacent frames of target projection images and the initial projection image corresponding to the current adjustment period, together with the pose of the image acquisition device in the current adjustment period; the movement speed and/or acceleration of the target object can then be calculated from the displacement change of the target object and the sampling period of the image acquisition device.
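As an illustration only, the displacement/speed/acceleration chain above might look as follows. Using the centroid of changed pixels as a displacement proxy, and a flat metres-per-pixel scale in place of the pose-based conversion, are simplifying assumptions made for this sketch, not details taken from the text.

```python
def changed_pixel_centroid(initial, target, threshold=10):
    # Centroid of the pixels whose grey value differs from the initial
    # projection image by more than a threshold (the "pixel difference").
    # Images are 2-D lists of grey values.
    xs, ys = [], []
    for y, (row_i, row_t) in enumerate(zip(initial, target)):
        for x, (pi, pt) in enumerate(zip(row_i, row_t)):
            if abs(pt - pi) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def motion_info(initial, targets, sampling_period_s, metres_per_pixel=1.0):
    # Displacement change between adjacent target frames, then movement
    # speed and acceleration from the camera's sampling period.
    # metres_per_pixel is a flat stand-in for the pose-based scale;
    # frames with no changed pixels are skipped.
    centroids = [c for c in (changed_pixel_centroid(initial, t) for t in targets)
                 if c is not None]
    disps = []
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        disps.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * metres_per_pixel)
    speeds = [d / sampling_period_s for d in disps]
    accels = [(v1 - v0) / sampling_period_s for v0, v1 in zip(speeds, speeds[1:])]
    return disps, speeds, accels
```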
Furthermore, the working state of the multi-axis pan-tilt can be adjusted according to the motion information of the target object, so as to drive the image acquisition device to track and acquire the deformation pattern in the next adjustment period. That is, the adjusted working state of the multi-axis pan-tilt enables the image acquisition device to acquire the deformation pattern caused by the target object in the next adjustment period.
Further, a target motion parameter value of a motor in the multi-axis pan-tilt can be calculated from the motion information of the target object, and the motion parameter of the motor can be adjusted to the target motion parameter value, thereby adjusting the working state of the multi-axis pan-tilt. The adjusted working state enables the image acquisition device to acquire the deformation pattern caused by the target object in the next adjustment period.
For a motor in the multi-axis pan-tilt, the motion parameters may include at least one of the acceleration, angular acceleration, and rotational speed of the motor. Accordingly, the target motion parameter values may include at least one of a target acceleration, a target angular acceleration, and a target rotational speed of the motor.
Further, the position to which the target object will move in the next adjustment period may be predicted from the motion information of the target object; the position where the deformation pattern will be generated is calculated from that predicted position; the pose of the image acquisition device in the next adjustment period is then calculated from the position of the deformation pattern; and a target motion parameter value of a motor in the multi-axis pan-tilt is calculated from the pose of the image acquisition device in the next adjustment period and its pose in the current adjustment period. The multi-axis pan-tilt can then be controlled to adjust the motion parameter of its motor to the target motion parameter value, thereby adjusting its working state so that the image acquisition device can track and acquire the deformation pattern caused by the target object in the next adjustment period, thus tracking the target object in the next adjustment period.
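The prediction chain of this paragraph can be sketched in a simplified 2-D geometry. Constant-velocity prediction, a vertical projection surface, and a single yaw motor are all assumptions made for illustration; the patent itself does not fix any of these choices.

```python
import math

def predict_next_position(pos, velocity, period_s):
    # Constant-velocity prediction of the target's position next period.
    return (pos[0] + velocity[0] * period_s, pos[1] + velocity[1] * period_s)

def pattern_position(target_pos, projector_pos, surface_x):
    # Where the deformation pattern lands: intersect the projector-to-target
    # ray with a vertical projection surface at x = surface_x (toy 2-D model).
    px, py = projector_pos
    tx, ty = target_pos
    t = (surface_x - px) / (tx - px)
    return (surface_x, py + (ty - py) * t)

def required_yaw(camera_pos, pattern_pos):
    # Pose the camera needs in the next period: the yaw that aims it
    # at the predicted position of the deformation pattern.
    return math.atan2(pattern_pos[1] - camera_pos[1],
                      pattern_pos[0] - camera_pos[0])

def motor_speed(current_yaw, target_yaw, period_s):
    # Target rotational speed that moves the gimbal from the current pose
    # to the required pose within one adjustment period.
    return (target_yaw - current_yaw) / period_s
```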
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subject of steps 301 and 302 may be device a; for another example, the execution subject of step 301 may be device a, and the execution subject of step 302 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 301, 302, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the above-mentioned target tracking method.
It should be noted that the descriptions of "first", "second", etc. in this document are used to distinguish different messages, devices, modules, and the like; they neither represent a sequential order nor require the "first" and "second" items to be of different types.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (43)

1. A tracking device, comprising:
the machine body is used for mounting the multi-axis pan-tilt;
the multi-axis pan-tilt is used for carrying an image acquisition device and driving the image acquisition device to rotate; the image acquisition device is used for acquiring a first projection image corresponding to the reference image; the reference image is projected outwards by a projection module and has a predetermined pattern; the first projection image comprises a deformation pattern corresponding to the predetermined pattern; the deformation pattern is generated based on the target object;
and the control module is electrically connected with the multi-axis pan-tilt and is used for adjusting the working state of the multi-axis pan-tilt according to the first projection image acquired by the image acquisition device, so as to drive the image acquisition device to track and acquire the deformation pattern.
2. The apparatus of claim 1, wherein the control module is electrically connected to the projection module and instructs the projection module to project the reference image outward.
3. The apparatus of claim 1, wherein a sampling period of the image acquisition device is less than a movement time of the target object within a projection range of the projection module.
4. The device according to claim 1, wherein the control module, when adjusting the operating state of the multi-axis head, is specifically configured to:
and adjusting the working state of the multi-axis pan-tilt according to a first projection image acquired by the image acquisition device in the current adjustment period, so as to drive the image acquisition device to track and acquire the deformation pattern in the next adjustment period.
5. The device according to claim 4, wherein the control module, when adjusting the operating state of the multi-axis head, is specifically configured to:
calculating motion information of the target object according to a first projection image acquired by the image acquisition equipment in a current adjustment period;
and adjusting the working state of the multi-axis pan-tilt according to the motion information of the target object, so as to drive the image acquisition device to track and acquire the deformation pattern in the next adjustment period.
6. The device according to claim 5, wherein the control module, when adjusting the operating state of the multi-axis head, is specifically configured to:
calculating a target motion parameter value of a motor in the multi-axis pan-tilt according to the motion information of the target object;
and adjusting the motion parameter of the motor in the multi-axis pan-tilt to the target motion parameter value, so as to adjust the working state of the multi-axis pan-tilt.
7. The apparatus according to claim 6, wherein the control module, when calculating the target motion parameter value of the motor in the multi-axis head, is specifically configured to:
predicting the position to which the target object moves in the next adjustment period according to the motion information of the target object;
calculating the position of generating the deformation pattern according to the position to which the target object moves in the next adjustment period;
calculating the corresponding pose of the image acquisition equipment in the next adjustment period according to the position of the generated deformation pattern;
and calculating a target motion parameter value of a motor in the multi-axis pan-tilt according to the corresponding pose of the image acquisition equipment in the next adjustment period and the pose of the image acquisition equipment in the current adjustment period.
8. The apparatus of claim 6, wherein the motion parameters of the motor in the multi-axis pan/tilt head comprise: at least one of an acceleration, an angular acceleration, and a rotational speed of a motor in the multi-axis pan/tilt head.
9. The device according to claim 5, wherein the control module, when calculating the motion information of the target object, is specifically configured to:
calculating pixel differences between each target projection image and the initial projection image corresponding to the current adjustment period;
calculating the motion information of the target object according to the pixel difference and the pose of the image acquisition equipment in the current adjustment period;
wherein the target projection image is other projection images acquired by the image acquisition device in the current adjustment cycle except the initial projection image.
10. The device according to claim 9, wherein the control module, when calculating the motion information of the target object, is specifically configured to:
calculating the displacement change of the target object according to the pixel difference between two adjacent frames of target projection images and the initial projection image corresponding to the current adjustment period and the pose of the image acquisition equipment in the current adjustment period;
and calculating the movement speed and/or acceleration of the target object according to the displacement change of the target object and the sampling period of the image acquisition equipment.
11. The apparatus according to any one of claims 1 to 10, wherein the first projection image is formed by projecting the reference image on a projection surface by the projection light of the projection module passing through the target object;
the image acquisition device is further configured to: acquiring a second projection image formed by directly projecting the reference image on the projection surface before acquiring the first projection image; (ii) a
The control module is configured to: judging whether the third projection image is deformed compared with the second projection image or not according to the third projection image and the second projection image which are currently acquired by the image acquisition equipment; if the judgment result is yes, determining that the target object enters the projection range of the projection module; and using the third projection image as the first projection image.
12. The apparatus according to any one of claims 1 to 10, wherein the first projection image is formed by projecting the reference image on a projection surface by the projection light of the projection module passing through the target object;
the image acquisition device is further configured to: acquiring a second projection image in which the reference image is projected directly on a projection surface before acquiring the first projection image;
the control module is configured to: judging whether the third projection image is deformed compared with the second projection image or not according to the third projection image and the second projection image which are currently acquired by the image acquisition equipment; if the judgment result is yes, inputting the third projection image into a neural network model; calculating the object type of the deformation pattern contained in the third projection image in the neural network model; if the object type is a designated type, determining that the target object enters the projection range of the projection module; and using the third projection image as the first projection image.
13. The apparatus of claim 12, wherein the control module is further configured to:
performing model training by using a sample image with a loss function minimization as a training target to obtain the neural network model; the sample image comprises a projection image formed by projecting the reference image on a projection surface by the projection light of the projection module passing through a specified object; the specified object belongs to the specified type;
the loss function is determined according to the probability that the specified object obtained by model training belongs to the specified type and the actual probability that the specified object belongs to the specified type.
14. The apparatus of any of claims 1-10, wherein the predetermined pattern is a stripe pattern.
15. The apparatus of any of claims 1-10, wherein the projection module is a digital light processing projection apparatus.
16. The apparatus of claim 15, wherein the digital light processing projection apparatus comprises: a light source, a color wheel, a digital micromirror device, and a projection lens; wherein the color wheel is optically connected between the light source and the digital micromirror device; the digital micromirror device is optically connected with the projection lens; the color wheel and the digital micromirror device are also electrically connected with the control module respectively;
wherein the light emitted by the light source is incident on the color wheel; the color wheel filters the light into monochromatic light under the control of the control module and projects the monochromatic light to the digital micromirror device; and the digital micromirror device modulates the predetermined pattern with the monochromatic light under the control of the control module, and projects a reference image having the predetermined pattern outwards through the projection lens.
17. A target tracking method, comprising:
controlling the projection module to project the reference image outwards; the reference image has a predetermined pattern;
controlling image acquisition equipment carried on the multi-axis pan-tilt head to acquire a first projection image corresponding to the reference image; the first projection image includes: a deformed pattern of the predetermined pattern; the deformation pattern is generated based on the target object;
and adjusting the working state of the multi-axis pan-tilt according to the first projection image acquired by the image acquisition device, so as to drive the image acquisition device to track and acquire the deformation pattern.
18. The method of claim 17, wherein a sampling period of the image acquisition device is less than a movement time of the target object within a projection range of the projection module.
19. The method according to claim 17, wherein said adjusting the operating state of the multi-axis pan/tilt head according to the first projection image acquired by the image acquisition device comprises:
and adjusting the working state of the multi-axis pan-tilt according to a first projection image acquired by the image acquisition device in the current adjustment period, so as to drive the image acquisition device to track and acquire the deformation pattern in the next adjustment period.
20. The method of claim 19, wherein said adjusting the operating state of the multi-axis pan/tilt head according to the first projection image acquired by the image acquisition device in the current adjustment cycle comprises:
calculating motion information of the target object according to a first projection image acquired by the image acquisition equipment in a current adjustment period;
and adjusting the working state of the multi-axis pan-tilt according to the motion information of the target object, so as to drive the image acquisition device to track and acquire the deformation pattern in the next adjustment period.
21. The method according to claim 20, wherein the adjusting the operating state of the multi-axis pan/tilt head according to the motion information of the target object comprises:
calculating a target motion parameter value of a motor in the multi-axis pan-tilt according to the motion information of the target object;
and adjusting the motion parameter of the motor in the multi-axis pan-tilt to the target motion parameter value, so as to adjust the working state of the multi-axis pan-tilt.
22. The method of claim 21, wherein said calculating a target motion parameter value for a motor in the multi-axis pan/tilt head from motion information of the target object comprises:
predicting the position to which the target object moves in the next adjustment period according to the motion information of the target object;
calculating the position of generating the deformation pattern according to the position to which the target object moves in the next adjustment period;
calculating the corresponding pose of the image acquisition equipment in the next adjustment period according to the position of the generated deformation pattern;
and calculating a target motion parameter value of a motor in the multi-axis pan-tilt according to the pose of the image acquisition device in the next adjustment period and the pose of the image acquisition device in the current adjustment period.
23. The method of claim 21, wherein the motion parameters of the motor in the multi-axis pan/tilt head comprise: at least one of an acceleration, an angular acceleration, and a rotational speed of a motor in the multi-axis pan/tilt head.
24. The method of claim 20, wherein calculating motion information of the target object from the first projection image acquired by the image acquisition device during the current adjustment cycle comprises:
calculating pixel differences between each target projection image and the initial projection image corresponding to the current adjustment period;
calculating the motion information of the target object according to the pixel difference and the pose of the image acquisition equipment in the current adjustment period;
wherein the target projection image is other projection images acquired by the image acquisition device in the current adjustment cycle except the initial projection image.
25. The method of claim 24, wherein calculating motion information of the target object based on the pixel difference and a pose of the image acquisition device at a current adjustment period comprises:
calculating the displacement change of the target object according to the pixel difference between two adjacent frames of target projection images and the initial projection image corresponding to the current adjustment period and the pose of the image acquisition equipment in the current adjustment period;
and calculating the movement speed and/or acceleration of the target object according to the displacement change of the target object and the sampling period of the image acquisition equipment.
26. The method according to any one of claims 17 to 25, wherein the first projection image is formed by projecting the reference image on a projection surface by the projection light of the projection module passing through the target object; the method further comprises the following steps:
controlling the image acquisition equipment to acquire a second projection image formed by directly projecting the reference image on the projection surface;
judging whether the third projection image is deformed compared with the second projection image or not according to the third projection image and the second projection image which are currently acquired by the image acquisition equipment;
if the judgment result is yes, determining that the target object enters the projection range of the projection module; and using the third projection image as the first projection image.
27. The method according to any one of claims 17 to 25, wherein the first projection image is formed by projecting the reference image on a projection surface by the projection light of the projection module passing through the target object; the method further comprises the following steps:
controlling the image acquisition equipment to acquire a second projection image formed by directly projecting the reference image on the projection surface;
judging whether the third projection image is deformed compared with the second projection image or not according to the third projection image and the second projection image which are currently acquired by the image acquisition equipment;
if so, inputting the third projection image into a neural network model; calculating the object type of the deformation pattern contained in the third projection image in the neural network model;
if the object type is a designated type, determining that the target object enters the projection range of the projection module; and using the third projection image as the first frame of the first projection image.
28. The method of claim 27, wherein prior to inputting the third projection image into a neural network model, the method further comprises:
performing model training by using a sample image, with minimization of the loss function as the training target, to obtain the neural network model; the sample image comprises a projection image formed by projecting the reference image on a projection surface by the projection light of the projection module passing through a specified object; the specified object belongs to the specified type;
the loss function is determined according to the probability that the specified object obtained by model training belongs to the specified type and the actual probability that the specified object belongs to the specified type.
29. The method of any one of claims 17-25, wherein the predetermined pattern is a stripe pattern.
30. An object tracking system, comprising: the system comprises a projection module, tracking equipment and a projection surface arranged in a physical environment where the tracking equipment is located;
the projection module is used for projecting a reference image to the projection surface, and the reference image is provided with a preset pattern;
the tracking device includes:
the machine body is used for mounting the multi-axis pan-tilt;
the multi-axis pan-tilt is used for carrying an image acquisition device and driving the image acquisition device to rotate; the image acquisition device is used for acquiring a first projection image corresponding to a reference image on the projection surface, the first projection image comprising a deformation pattern corresponding to the predetermined pattern; the deformation pattern is generated based on the target object;
and the control module is electrically connected with the multi-axis pan-tilt and is used for adjusting the working state of the multi-axis pan-tilt according to the first projection image acquired by the image acquisition device, so as to drive the image acquisition device to track and acquire the deformation pattern.
31. The system of claim 30, wherein the control module is electrically connected to the projection module and instructs the projection module to project the reference image outward.
32. The system of claim 30, wherein a sampling period of the image acquisition device is less than a movement time of the target object within a projection range of the projection module.
33. The system according to claim 30, wherein the control module, when adjusting the operating state of the multi-axis head, is specifically configured to:
and adjusting the working state of the multi-axis pan-tilt according to a first projection image acquired by the image acquisition device in the current adjustment period, so as to drive the image acquisition device to track and acquire the deformation pattern in the next adjustment period.
34. The system according to claim 33, wherein the control module, when adjusting the operating state of the multi-axis head, is specifically configured to:
calculating motion information of the target object according to a first projection image acquired by the image acquisition equipment in a current adjustment period;
and adjusting the working state of the multi-axis pan-tilt according to the motion information of the target object, so as to drive the image acquisition device to track and acquire the deformation pattern in the next adjustment period.
35. The system according to claim 34, wherein the control module, when adjusting the working state of the multi-axis gimbal, is specifically configured to:
calculate a target motion parameter value of a motor in the multi-axis gimbal according to the motion information of the target object; and
adjust the motion parameter of the motor in the multi-axis gimbal to the target motion parameter value, so as to adjust the working state of the multi-axis gimbal.
36. The system according to claim 35, wherein the control module, when calculating the target motion parameter value of the motor in the multi-axis gimbal, is specifically configured to:
predict the position to which the target object moves in the next adjustment period according to the motion information of the target object;
calculate the position at which the deformation pattern is generated according to the position to which the target object moves in the next adjustment period;
calculate the corresponding pose of the image acquisition device in the next adjustment period according to the position at which the deformation pattern is generated; and
calculate the target motion parameter value of the motor in the multi-axis gimbal according to the corresponding pose of the image acquisition device in the next adjustment period and the pose of the image acquisition device in the current adjustment period.
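The four-step chain of claim 36 (predict target position, locate the deformation pattern, derive the required camera pose, derive the motor target from the pose change) can be sketched with deliberately simplified geometry: a single pan axis, constant-velocity prediction, and a flat projection surface. All function names and the geometry are assumptions for illustration only.

```python
import math

def predict_position(pos, velocity, period):
    # Step 1: constant-velocity prediction of where the target moves
    # during the next adjustment period.
    return pos + velocity * period

def pattern_position(target_pos, surface_offset=0.0):
    # Step 2: the deformation pattern appears where the target intersects
    # the projection; here a simple offset on a flat surface.
    return target_pos + surface_offset

def required_pan_angle(pattern_pos, camera_distance):
    # Step 3: pose (pan angle) needed so the camera keeps the pattern
    # centered, given its lateral position and the camera's distance.
    return math.atan2(pattern_pos, camera_distance)

def target_motor_value(next_pose, current_pose, period):
    # Step 4: target rotational speed that moves the gimbal from the
    # current pose to the next pose within one adjustment period.
    return (next_pose - current_pose) / period
```

Chaining the four functions reproduces the claim's dependency order: the motor parameter depends on both the predicted next-period pose and the current-period pose.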
37. The system of claim 35, wherein the motion parameter of the motor in the multi-axis gimbal comprises at least one of an acceleration, an angular acceleration, and a rotational speed of the motor in the multi-axis gimbal.
38. The system of claim 34, wherein the control module, when calculating the motion information of the target object, is specifically configured to:
calculate, for each target projection image, the pixel differences with respect to the initial projection image corresponding to the current adjustment period; and
calculate the motion information of the target object according to the pixel differences and the pose of the image acquisition device in the current adjustment period;
wherein the target projection images are the projection images, other than the initial projection image, acquired by the image acquisition device in the current adjustment period.
39. The system of claim 38, wherein the control module, when calculating the motion information of the target object, is specifically configured to:
calculate the displacement change of the target object according to the pixel differences between two adjacent frames of target projection images and the initial projection image corresponding to the current adjustment period, and the pose of the image acquisition device in the current adjustment period; and
calculate the movement speed and/or acceleration of the target object according to the displacement change of the target object and the sampling period of the image acquisition device.
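A minimal sketch of claims 38–39: pixels that differ from the initial projection image localize the deformation pattern in each frame, the centroid shift between two adjacent frames gives the displacement change, and dividing by the sampling period gives the speed. Images are plain nested lists of grayscale values indexed as `frame[y][x]`; the helper names and the centroid heuristic are assumptions, not the patented method.

```python
def diff_centroid(frame, initial, threshold=10):
    """Centroid of the pixels that differ from the initial projection image
    by more than `threshold` -- a crude localization of the deformation
    pattern. Returns None if no pixel differs."""
    pts = [(x, y)
           for y, (row, row0) in enumerate(zip(frame, initial))
           for x, (p, p0) in enumerate(zip(row, row0))
           if abs(p - p0) > threshold]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def motion_from_frames(frame_a, frame_b, initial, sampling_period):
    """Displacement change between two adjacent target projection frames,
    and the resulting speed per claim 39 (displacement / sampling period).
    Units here are pixels; a real system would convert via the camera pose."""
    ca = diff_centroid(frame_a, initial)
    cb = diff_centroid(frame_b, initial)
    disp = (cb[0] - ca[0], cb[1] - ca[1])
    speed = (disp[0] / sampling_period, disp[1] / sampling_period)
    return disp, speed
```

Acceleration would follow the same pattern one level up: the change in speed between successive frame pairs, again divided by the sampling period.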
40. The system of any one of claims 30-39, wherein the control module is further configured to:
control the image acquisition device to acquire a second projection image formed by directly projecting the reference image on the projection surface;
judge, according to a third projection image currently acquired by the image acquisition device and the second projection image, whether the third projection image is deformed compared with the second projection image; and
if the judgment result is yes, determine that the target object has entered the projection range of the projection module, and use the third projection image as the first projection image.
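The entry test of claim 40 compares the currently acquired third projection image against the second projection image (the reference image on the bare projection surface): if enough pixels differ, the pattern is deemed deformed and a target has entered the projection range. The sketch below uses assumed threshold values and nested-list grayscale images; it is an illustration, not the claimed implementation.

```python
def is_deformed(third_image, second_image, pixel_tol=10, min_changed=5):
    """Count pixels of the third projection image that differ from the
    second projection image by more than pixel_tol; declare deformation
    when at least min_changed pixels changed (both thresholds assumed)."""
    changed = sum(1
                  for row3, row2 in zip(third_image, second_image)
                  for p3, p2 in zip(row3, row2)
                  if abs(p3 - p2) > pixel_tol)
    return changed >= min_changed

def detect_target(third_image, second_image):
    """Claim 40's decision: on deformation, the target object is deemed to
    have entered the projection range, and the third projection image is
    reused as the first projection image for tracking."""
    if is_deformed(third_image, second_image):
        return True, third_image
    return False, None
```

When nothing occludes the projection, the two images match and no tracking is triggered; any occluding object bends the pattern and trips the pixel-count test.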
41. The system of any one of claims 30-39, wherein the control module is further configured to:
control the image acquisition device to acquire a second projection image formed by directly projecting the reference image on the projection surface;
judge, according to a third projection image currently acquired by the image acquisition device and the second projection image, whether the third projection image is deformed compared with the second projection image;
if the judgment result is yes, input the third projection image into a neural network model, and calculate, in the neural network model, the object type of the deformation pattern contained in the third projection image; and
if the object type is a designated type, determine that the target object has entered the projection range of the projection module, and use the third projection image as the first projection image.
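Claim 41 adds a classification gate on top of the deformation test: a deformed third projection image is fed to the neural network model, and tracking begins only if the predicted object type matches the designated type. In the sketch below the "model" is a trivial brightness-heuristic stub standing in for the trained network; both the stub and the threshold are assumptions.

```python
def classify(image):
    """Stand-in for the neural network model of claim 41: a total-brightness
    heuristic pretending to recognize the object type of the deformation
    pattern. A real system would run a trained classifier here."""
    total = sum(sum(row) for row in image)
    return "person" if total > 100 else "background"

def type_gate(third_image, designated_type="person"):
    """Start tracking only when the classified object type matches the
    designated type; the third projection image then becomes the first
    projection image."""
    object_type = classify(third_image)
    if object_type == designated_type:
        return True, third_image
    return False, None
```

This gate filters out deformations caused by objects of no interest (for example, debris drifting through the projection), so the gimbal only tracks the designated type.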
42. The system of claim 41, wherein the control module is further configured to:
perform model training with sample image pairs, taking minimization of a loss function as the training objective, to obtain the neural network model; wherein the sample images comprise projection images formed on the projection surface by the projection light of the projection module, carrying the reference image, after passing a specified object, the specified object belonging to the designated type; and
the loss function is determined according to the probability, obtained by model training, that the specified object belongs to the designated type and the actual probability that the specified object belongs to the designated type.
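Claim 42 defines the loss from the predicted probability that the specified object belongs to the designated type versus the actual probability. The claim names no specific loss; binary cross-entropy, shown below, is one standard choice consistent with that description and is presented here purely as an assumed example.

```python
import math

def binary_cross_entropy(predicted_prob, actual_prob, eps=1e-12):
    """Loss between the model's predicted probability and the actual
    (label) probability that the object belongs to the designated type.
    eps clamps the prediction away from 0 and 1 for numerical safety."""
    p = min(max(predicted_prob, eps), 1.0 - eps)
    return -(actual_prob * math.log(p)
             + (1.0 - actual_prob) * math.log(1.0 - p))
```

The loss shrinks as the predicted probability approaches the label, so minimizing it over the sample image pairs drives the network toward correct type predictions, which is the training objective stated in the claim.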
43. A computer-readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 17-29.
CN202080005912.1A 2020-06-30 2020-06-30 Target tracking method, device, system and storage medium Pending CN112955844A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/099161 WO2022000242A1 (en) 2020-06-30 2020-06-30 Target tracking method, device, and system, and storage medium

Publications (1)

Publication Number Publication Date
CN112955844A 2021-06-11

Family

ID=76236244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080005912.1A Pending CN112955844A (en) 2020-06-30 2020-06-30 Target tracking method, device, system and storage medium

Country Status (2)

Country Link
CN (1) CN112955844A (en)
WO (1) WO2022000242A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630374B (en) * 2023-07-24 2023-09-19 贵州翰凯斯智能技术有限公司 Visual tracking method, device, storage medium and equipment for target object

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030026475A1 (en) * 2001-08-01 2003-02-06 Akira Yahashi Three-dimensional measuring method and device, and computer program
WO2008120457A1 (en) * 2007-03-29 2008-10-09 School Juridical Person Of Fukuoka Kogyo Daigaku Three-dimensional image measurement apparatus, three-dimensional image measurement method, and three-dimensional image measurement program of non-static object
CN101297192A (en) * 2005-09-09 2008-10-29 萨克米伊莫拉机械合作社合作公司 Method and device for directly monitoring object
CN102074045A (en) * 2011-01-27 2011-05-25 深圳泰山在线科技有限公司 System and method for projection reconstruction
US20130088575A1 (en) * 2011-10-05 2013-04-11 Electronics And Telecommunications Research Institute Method and apparatus for obtaining depth information using optical pattern
CN103366360A (en) * 2012-04-03 2013-10-23 佳能株式会社 Information processing apparatus and information processing method
KR20140032665A (en) * 2012-09-07 2014-03-17 주식회사 인스펙토 3D shape measurement method and device by using amplitude of projection grating
KR20140041012A (en) * 2012-09-27 2014-04-04 오승태 Multi 3-dimension camera using multi pattern beam and method of the same
US20140098222A1 (en) * 2012-09-04 2014-04-10 Kabushiki Kaisha Toshiba Area identifying device, area identifying method, and computer readable medium
KR20150107423A (en) * 2014-03-14 2015-09-23 벨로스테크놀로지 주식회사 Moving object tracking control system for pan-tilt camera
CN107065935A (en) * 2017-03-23 2017-08-18 广东思锐光学股份有限公司 A kind of cloud platform control method, device and Target Tracking System positioned for light stream
CN107894423A (en) * 2017-11-08 2018-04-10 安吉汽车物流股份有限公司 Bodywork surface matter damages automatic checkout equipment and method, Vehicular intelligent detecting system
CN108647636A (en) * 2018-05-09 2018-10-12 深圳阜时科技有限公司 Identification authentication method, identification authentication device and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430326A (en) * 2015-11-03 2016-03-23 中国电子科技集团公司第二十八研究所 Smooth CCTV (Closed Circuit Television System) ship video tracking method
US10412371B1 (en) * 2017-05-18 2019-09-10 Facebook Technologies, Llc Thin film acousto-optic structured light generator
CN108037512B (en) * 2017-11-24 2019-09-17 上海机电工程研究所 Half active correlation imaging tracking detection system of laser and method


Also Published As

Publication number Publication date
WO2022000242A1 (en) 2022-01-06

Similar Documents

Publication Publication Date Title
KR100967826B1 (en) Image processing device and method, program, program recording medium, data structure, and data recording medium
US10674139B2 (en) Methods and systems for human action recognition using 3D integral imaging
JP6090786B2 (en) Background difference extraction apparatus and background difference extraction method
CN105657238B (en) Track focusing method and device
CN104113686B (en) Camera device and its control method
CN108076281A (en) A kind of auto focusing method and Pan/Tilt/Zoom camera
JP2010136302A5 (en) IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM
CN105594190A (en) Blurless image capturing system
CN110771143B (en) Control method of handheld gimbal, handheld gimbal and handheld device
CN112119627A (en) Target following method and device based on gimbal, gimbal and computer storage medium
CN102771121A (en) Camera platform system
WO2017156302A1 (en) Time multiplexing programmable field of view imaging
CN106888369A (en) Virtual telescope interactive device
CN105892668A (en) Equipment control method and device
CN112955844A (en) Target tracking method, device, system and storage medium
CN205792907U (en) A kind of indoor panoramic view data gathers dolly
CN111953964B (en) Ambiguity detection method, electronic device and storage medium
WO2019205077A1 (en) Image acquisition apparatus
CN110609576B (en) Gimbal control method, device and system, control device and storage medium
CN102625046A (en) Anti-shake device and method for photography
CN113841376B (en) Shooting control method and device
Sueishi et al. Mirror-based high-speed gaze controller calibration with optics and illumination control
CN207515743U (en) Active high speed three-dimensional sighting device based on Digital Micromirror Device
CN106292166A (en) A kind of three-dimensional panorama camera
Ishikawa High-speed image processing devices and its applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination