WO2022000242A1 - Target tracking method, device, system and storage medium - Google Patents
Target tracking method, device, system and storage medium
- Publication number
- WO2022000242A1 (PCT/CN2020/099161)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: projection, image, acquisition device, target object, projection image
Classifications
- G05D1/0242: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means, using non-visible light signals, e.g. IR or UV signals
- G05D1/0223: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory, involving speed control of the vehicle
- G05D1/0251: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means, using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
- G05D1/0285: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle, using signals transmitted via a public communication network, e.g. GSM network
Definitions
- the present application relates to the technical field of target detection and tracking, and in particular, to a target tracking method, device, system and storage medium.
- Target detection and tracking technology plays an increasingly important role in modern security, medicine, civil and other fields.
- in existing solutions, a camera captures images of an object, machine learning is then used to recognize the target in the images, and the moving object is tracked within the camera's acquisition field of view.
- Various aspects of the present application provide a target tracking method, device, system, and storage medium for tracking moving targets.
- the embodiment of the present application provides a tracking device, including:
- a body used to install the multi-axis gimbal;
- the multi-axis pan/tilt is used for carrying image acquisition equipment and can drive the image acquisition equipment to rotate;
- the image acquisition equipment is used for acquiring the first projection image corresponding to the reference image;
- the reference image is projected outward by the projection module, and the reference image has a predetermined pattern;
- the first projection image includes a deformation pattern corresponding to the predetermined pattern;
- the deformation pattern is generated based on the target object;
- the control module is electrically connected to the multi-axis pan/tilt, and is used for adjusting the working state of the multi-axis pan/tilt according to the first projection image collected by the image acquisition device, so as to drive the image acquisition device to track and collect the deformation pattern.
- the embodiment of the present application also provides a target tracking method, including:
- controlling the projection module to project a reference image outward; the reference image has a predetermined pattern;
- controlling the image acquisition device mounted on the multi-axis pan/tilt head to collect the first projection image corresponding to the reference image;
- the first projection image includes: a deformation pattern of the predetermined pattern; the deformation pattern is generated based on the target object;
- adjusting the working state of the multi-axis pan/tilt head according to the first projection image, so as to drive the image acquisition device to track and collect the deformation pattern.
- Embodiments of the present application further provide a target tracking system, including: a projection module, a tracking device, and a projection surface disposed in a physical environment where the tracking device is located;
- the projection module is used for projecting a reference image to the projection surface, the reference image having a predetermined pattern;
- the tracking device includes: a body for installing a multi-axis pan/tilt;
- a multi-axis pan/tilt head is used to carry an image acquisition device and can drive the image acquisition device to rotate; the image acquisition device is used to acquire a first projection image corresponding to the reference image on the projection surface; the first projection image includes a deformation pattern corresponding to the predetermined pattern; the deformation pattern is generated based on the target object;
- the control module is electrically connected to the multi-axis pan/tilt, and is used for adjusting the working state of the multi-axis pan/tilt according to the first projection image collected by the image acquisition device, so as to drive the image acquisition device to track and collect the deformation pattern.
- the embodiments of the present application further provide a computer-readable storage medium storing computer instructions; when the computer instructions are executed by one or more processors, the one or more processors are caused to execute the steps of the above target tracking method.
- the target tracking method, device, system and storage medium provided by the embodiments of the present application can realize the tracking of the target object and help to expand the tracking range of the target object.
- FIG. 1a is a schematic structural diagram of a tracking device provided by an embodiment of the present application.
- FIG. 1b and FIG. 1c are structural block diagrams of a tracking device provided by an embodiment of the present application.
- FIG. 1d is a schematic diagram of the working principle of the digital micromirror device provided by the embodiment of the application;
- FIG. 1e is a schematic diagram of a training process of a neural network model provided by an embodiment of the application.
- FIG. 2a is a schematic structural diagram of a target tracking system provided by an embodiment of the application.
- FIG. 2b is a schematic diagram of a working process of a target tracking system provided by an embodiment of the application.
- FIG. 3 is a schematic flowchart of a target tracking method provided by an embodiment of the present application.
- the tracking device includes: a body and a multi-axis gimbal set on the body.
- the body can also be equipped with a projection module, and the multi-axis PTZ can be equipped with image acquisition equipment.
- the projection module may project a reference image having a predetermined pattern outward. The projection image of the reference image will be deformed because the target object appears on the projection light of the projection module, and the generated deformation pattern moves with the movement of the target object.
- the image acquisition device can collect the projection image that generates the deformation pattern
- the control module can adjust the working state of the multi-axis pan/tilt according to the projection image collected by the image acquisition device, so as to adjust the pose of the image acquisition device
- the image acquisition device can track and acquire the deformation pattern. Since the deformation pattern is caused by the target object, the tracking acquisition of the deformation pattern can realize the tracking of the target object, and the pose of the image acquisition device is adjustable, which helps to expand the tracking range of the target object.
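The feedback loop summarized above can be sketched as follows. Note that `StubCamera`, `StubGimbal`, and the `detect` callback are hypothetical stand-ins for the image acquisition device, the multi-axis pan/tilt, and the deformation-pattern detector; the application does not specify any of them at the code level.

```python
# Hypothetical sketch of one tracking iteration: locate the deformation
# pattern in the captured projection image and command the gimbal so the
# pattern is driven back toward the image center. All names here are
# illustrative assumptions, not part of the application.

class StubCamera:
    """Stands in for the image acquisition device."""
    def capture(self):
        return {"size": (640, 480)}  # a captured projection image

class StubGimbal:
    """Stands in for the multi-axis pan/tilt."""
    def __init__(self):
        self.last_command = None
    def rotate(self, yaw, pitch):
        self.last_command = (yaw, pitch)

def track_step(camera, gimbal, detect, gain=0.1):
    """One control iteration: the offset of the deformation-pattern
    centroid from the image center is fed back as a proportional
    gimbal command."""
    frame = camera.capture()
    cx, cy = detect(frame)                 # pattern centroid (pixels)
    w, h = frame["size"]
    err_x, err_y = cx - w / 2, cy - h / 2  # pixel error from image center
    gimbal.rotate(yaw=-gain * err_x, pitch=gain * err_y)
    return err_x, err_y

gimbal = StubGimbal()
err = track_step(StubCamera(), gimbal, detect=lambda f: (400, 300))
```

Repeating this step as new projection images arrive keeps the deformation pattern, and hence the target object, within the acquisition field of view.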
- FIG. 1a is a schematic structural diagram of a tracking device provided by an embodiment of the present application.
- the tracking device includes: a body 11 , a multi-axis pan/tilt 13 , and a control module 15 .
- the body 11 is used to install the multi-axis pan/tilt;
- the multi-axis pan/tilt 13 is used to carry the image acquisition device 14 and can drive the image acquisition device 14 to rotate; wherein, the image acquisition device 14 is used to acquire the first projection corresponding to the reference image an image; the reference image is projected outward by the projection module, and the reference image has a predetermined pattern.
- the first projection image includes a deformation pattern corresponding to a predetermined pattern; the deformation pattern is generated based on the target object;
- the control module 15 adjusts the working state of the multi-axis pan/tilt 13 according to the first projection image, so as to drive the image acquisition device 14 to track and acquire the deformation pattern.
- the relationship between the projection module 12 and the tracking device is not limited.
- the projection module 12 is an independent projection device and is disposed in the physical environment where the tracking device is located.
- the projection module 12 is connected in communication with the control module 15 .
- the control module 15 may instruct the projection module 12 to project a reference image outward, the reference image having a predetermined pattern.
- the body 11 can be used to install the projection module 12 and the multi-axis pan/tilt head 13 .
- the projection module 12 can be fixed on the body 11 .
- the projection module 12 can be fixed on the body 11 in a detachable manner.
- the body 11 is provided with a fixing member, and the fixing member is used for fixing the projection module 12 .
- the fixing member on the body 11 may be a fixture or the like; with a fixture, the projection module 12 can be fixed on the body 11 without disposing a corresponding fixing member on the projection module 12.
- the fixing member on the body 11 and the fixing member on the projection module 12 can also cooperate with each other to fix the projection module 12 on the body 11 .
- the fixing member on the body 11 and the fixing member on the projection module 12 can be combined to realize a buckle or a lock, or they can be realized as pits and flanges, and so on.
- a number of pits can be set on the body 11 with a corresponding number of flanges on the projection module 12, or a number of flanges can be set on the body 11 with a corresponding number of pits on the projection module 12.
- the multi-axis gimbal 13 is rotatably connected to the body 11 .
- the multi-axis pan/tilt 13 refers to a pan/tilt with multiple rotation axes, where "multiple" means two or more.
- the multi-axis PTZ 13 may be a two-axis PTZ, a three-axis PTZ, a four-axis PTZ, and so on.
- the multi-axis pan/tilt 13 is used to carry the image acquisition device 14 .
- the multi-axis pan/tilt head 13 is provided with a fixing member for fixing the image capturing device 14 .
- the fixing member on the multi-axis pan/tilt head 13 may be a clamp or the like.
- with a clamp, the image capturing device 14 need not be provided with a corresponding fixing member; that is, the image capturing device 14 can be fixed directly on the multi-axis pan/tilt 13.
- the fixing member on the multi-axis platform 13 and the fixing member on the image capturing device 14 can also be used to cooperate with each other to fix the image capturing device 14 on the multi-axis platform 13 .
- the fixing member on the multi-axis pan/tilt 13 and the fixing member on the image acquisition device 14 may be combined to realize a snap or a lock, or they may be realized as pits and flanges, and so on.
- several pits may be set on the multi-axis pan/tilt 13 with a corresponding number of flanges on the image acquisition device 14, or several flanges may be set on the multi-axis pan/tilt 13 with a corresponding number of pits on the image acquisition device 14.
- when the image acquisition device 14 is mounted on the multi-axis pan/tilt 13, it rotates with the rotation of the multi-axis pan/tilt 13; that is, the multi-axis pan/tilt 13 can drive the image acquisition device 14 to rotate.
- the multi-axis pan/tilt 13 of the tracking device may be equipped with the image acquisition device 14 when it leaves the factory; alternatively, the image acquisition device 14 may be fixed on the multi-axis pan/tilt 13 afterwards.
- the multi-axis pan-tilt 13 can rotate around its rotation axis, and the rotatable direction is determined by the rotation direction of the rotary shaft included in the multi-axis pan-tilt 13 .
- taking a three-axis gimbal as an example, it includes rotation axes in three directions: pitch, roll, and yaw, so it can realize pitch rotation, roll rotation, and yaw rotation. Since the image acquisition device 14 is mounted on the three-axis pan/tilt, the image acquisition device 14 can also realize pitch rotation, roll rotation, and yaw rotation with the rotation of the three-axis pan/tilt.
- the implementation form of the image acquisition device 14 that can be mounted on the multi-axis pan/tilt head 13 is not limited.
- the image capture device 14 may be any device capable of image capture.
- the image capturing device 14 may be a terminal device such as a mobile phone, a tablet computer, a wearable device, etc. with a photographing function, and may also be a camera, a video camera, a camera, and the like.
- the realization forms of the image acquisition device 14 are different, and the structure and size of the fixing member on the multi-axis pan/tilt head 13 can be adjusted adaptively.
- the tracking device further includes: a control module 15 .
- the control module 15 may include a processor 15a, a memory, a peripheral circuit of the processor 15a, and the like.
- the processor may be a central processing unit (CPU), a graphics processing unit (GPU), or a microcontroller unit (MCU); it may also be a programmable device such as a field-programmable gate array (FPGA), programmable array logic (PAL), generic array logic (GAL), or complex programmable logic device (CPLD); or it may be an advanced RISC machine (ARM) processor or a system on chip (SoC), and so on, but is not limited thereto.
- the memory may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as memory banks 15b1 and 15b2, static random access memory (SRAM) 15b3, electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disks, or optical disks.
- the control module 15 may instruct the projection module 12 to project a reference image outward, and the reference image has a predetermined pattern.
- the predetermined pattern can be any pattern.
- the predetermined pattern may be a striped pattern, a coding pattern, a predetermined character pattern, etc., but is not limited thereto.
- the tracking device is preset with a predetermined pattern.
- a predetermined pattern may be acquired from the memory, and the projection module 12 may be instructed to project a reference image having the predetermined pattern outward.
- the tracking device is preset with pattern generation rules.
- a predetermined pattern can be generated according to the preset pattern generation rule in response to a power-on operation of the tracking device. Further, the control module 15 may instruct the projection module 12 to project the reference image having the predetermined pattern outward.
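As one concrete possibility for such a pattern generation rule (the application does not fix one), a binary stripe pattern derived from a Gray code could be produced at power-on; the choice of a Gray-code stripe here is purely illustrative.

```python
# Sketch of a pattern generation rule. A Gray-code stripe pattern is
# assumed only for illustration; the application leaves the rule open.

def gray_code(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def stripe_pattern(width: int, height: int, bit: int):
    """width x height binary pattern whose columns encode the given
    bit of the Gray code of the column index (vertical stripes)."""
    row = [(gray_code(x) >> bit) & 1 for x in range(width)]
    return [row[:] for _ in range(height)]

pattern = stripe_pattern(8, 4, bit=1)
```

Stripe patterns of this kind make column positions recoverable from the projected image, which is convenient when later measuring how the pattern deforms.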
- control module 15 may be connected in communication with the projection module 12 .
- the control module 15 and the projection module 12 can be connected through a mobile communication network; correspondingly, the network standard of the mobile network can be 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMAX, and so on.
- when deployed on different physical machines, they can also be connected by means of Bluetooth, WiFi, or infrared.
- control module 15 may instruct the projection module 12 to project a reference image with a predetermined pattern outward through an instruction.
- the control module 15 may send a projection instruction to the projection module 12, where the projection instruction is used to instruct the projection module 12 to project a reference image with a predetermined pattern outward.
- the projection module 12 projects a reference image with a predetermined pattern outwards when receiving the projection instruction.
- control module 15 is electrically connected to the projection module 12 .
- the control module 15 may instruct the projection module 12 to project a reference image with a predetermined pattern outward through an electrical signal.
- the electrical signal can be a high-level or low-level signal.
- the control module 15 may output an electrical signal to the projection module 12, where the electrical signal is used to instruct the projection module 12 to project a reference image with a predetermined pattern outward.
- the projection module 12 projects a reference image with a predetermined pattern outwards in the case of receiving the electrical signal.
- the projection module 12 can project a reference image outward, and the reference image has the above-mentioned predetermined pattern.
- the specific implementation form of the projection module 12 is not limited.
- the projection module 12 may be a digital light processing (Digital Light Processing, DLP) projection device or the like.
- the structure and working principle of the projection module 12 are exemplarily described below by taking the projection module 12 as a DLP projection device as an example.
- the DLP projection device includes: a light source 12a, a color wheel 12b, a digital micromirror device (Digital Micromirror Device, DMD) 12c, and a projection lens 12d.
- the color wheel 12b may be a six-segment color wheel or the like.
- the color wheel 12b is optically connected between the light source 12a and the DMD device 12c; the DMD device 12c is optically connected to the projection lens 12d; and the color wheel 12b and the DMD device 12c are each electrically connected to the control module 15.
- the light emitted by the light source 12a is incident on the color wheel 12b.
- the color wheel 12b filters the received light into monochromatic light, and projects the monochromatic light to the DMD device 12c.
- the DMD device 12c modulates the above-mentioned predetermined pattern with monochromatic light, and projects a reference image having the predetermined pattern outward through the projection lens 12d.
- the color wheel 12b may include: a condenser lens 12b1, a filter 12b2, and a shaping lens 12b3.
- the filter 12b2 is optically connected between the condenser lens 12b1 and the shaping lens 12b3.
- the condenser lens 12b1 is optically connected to the light source 12a
- the shaping lens 12b3 is optically connected to the DMD device 12c.
- the control module 15 is also electrically connected to the filter 12b2.
- the color wheel 12b filters the received light into monochromatic light under the control of the control module 15, and projects the monochromatic light to the DMD device 12c.
- the control module 15 controls the filter 12b2 in the color wheel 12b to divide the received light into monochromatic lights, which are transmitted to the DMD device 12c through the shaping lens 12b3 in the color wheel 12b.
- the DMD device 12c modulates the above-mentioned predetermined pattern with monochromatic light, and projects a reference image having the predetermined pattern outward through the projection lens 12d.
- the DMD device 12c is a photoelectric-conversion micro-electromechanical system. Each micromirror and its associated structure in the DMD device 12c controls one pixel. As shown in FIG. 1d, the DMD device 12c has three operating states. (1) When the digital micromirror is rotated by 0°, it is the state shown by the symbol (1) in FIG. 1d, which is the flat state. (2) When the micromirror rotates to a positive set angle (e.g. +12°), it is the state shown by the symbol (2) in FIG. 1d, which means the open state.
- in the open state, the light emitted by the light source is incident on the mirror surface of the digital micromirror and is reflected by it toward the projection surface, so the projection surface displays a "bright state".
- (3) When the micromirror rotates to a negative set angle (e.g. -12°), it is the state shown by the symbol (3) in FIG. 1d, which means the off state.
- in the off state, the light emitted by the light source is incident on the mirror surface of the digital micromirror and is reflected away from the projection surface, so the projection surface displays a "dark state".
- the control module 15 can control the flip of the digital micromirror through the SRAM in the DMD device 12c and the address electrodes and hinges on both sides of the DMD device 12c.
- the bias voltage can be converted into force to control the rotation of the hinge, thereby driving the digital micromirror to turn over.
- the flip angle of the digital micromirror can be adjusted by the magnitude of the bias voltage.
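The three mirror states described above can be summarized as a small per-pixel mapping. The ±12° values follow the example angles given in the text; the function itself is an illustrative model only, not the actual bias-voltage drive electronics.

```python
# Illustrative model of the three DMD mirror states described above.
FLAT, ON, OFF = 0.0, 12.0, -12.0  # mirror tilt angles in degrees

def mirror_angle(pixel_on: bool, powered: bool = True) -> float:
    """Unpowered mirrors stay in the flat state; a '1' pixel tilts
    toward the projection lens (bright state), while a '0' pixel
    tilts away from it (dark state)."""
    if not powered:
        return FLAT
    return ON if pixel_on else OFF
```

Modulating each mirror between the bright and dark states at the pixel level is what lets the DMD imprint the predetermined pattern onto the monochromatic light.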
- the projection image presented by the reference image on the projection surface also has a predetermined pattern. If an object appears on the projection light of the projection module 12, the projection image corresponding to the predetermined pattern is deformed.
- the pattern formed by deforming the projection image corresponding to the predetermined pattern is defined as a deformation pattern.
- the deformation pattern is generated based on the object appearing on the projection light of the projection module 12 , and the deformation pattern moves with the movement of the object. Based on this, in the embodiment of the present application, the object appearing on the projection light of the projection module 12 can be tracked based on the deformation pattern.
- the object appearing on the projection light of the projection module 12 is defined as the target object A.
- the target object A may be a moving object.
- the image acquisition device 14 may acquire the projection image corresponding to the reference image.
- the projection image corresponding to the reference image may include: a projection image formed by directly projecting the reference image onto a certain projection surface, that is, the projection image formed when no object appears in the projection light of the projection module 12.
- the projection surface can be a projection screen; it can also be another object surface in the current environment, such as a wall, floor, or furniture surface, but is not limited thereto.
- in FIG. 1a, the projection surface is illustrated as a projection screen by way of example only, which does not constitute a limitation.
- the projection image corresponding to the reference image may also include: when an object (target object A) appears in the projection light of the projection module 12, the projection light of the projection module 12 passes through the target object A to project the reference image on a certain The projected image formed on the projection surface.
- the projection surface 16 may be the surface of the target object A, a projection screen, or another object surface in the current environment, such as a wall, floor, or furniture surface, but is not limited thereto.
- in FIG. 1a, the projection surface is illustrated as a projection screen by way of example only, which does not constitute a limitation.
- the projection surface may be the surface of the target object, or the surface of another object in the environment where the tracking device is currently located. Therefore, compared with existing solutions for 3D visual detection and tracking based on LCD technology, the tracking device provided by the embodiment of the present application is lighter and easier to maintain. In the prior art, the transistors on an LCD panel are not light-transmissive, so there are gaps between pixels, resulting in poor detail in dark parts. Moreover, equipment designed with LCD technology is bulky and easily disturbed by environmental dust, so existing LCD-based 3D visual detection and tracking equipment is not easy to maintain.
- the surface of the target object or the surface of other objects in the environment where the tracking device is currently located may be used as the projection surface, which not only reduces the cost of the device, but also does not require maintenance of the projection surface.
- there is no gap between the pixels of the projection image obtained in the embodiment of the present application, which helps to improve the accuracy of target detection.
- the projection image formed when the projection light of the projection module 12 passes the target object A and projects the reference image onto a certain projection surface is defined as the first projection image; the projection image formed by directly projecting the reference image onto a certain projection surface when no object appears in the projection light of the projection module 12 is defined as the second projection image.
- the first projection image includes the above-mentioned deformation pattern;
- the second projection image does not include the deformation pattern. Therefore, the first projection image can reflect the information of the target object A, while the second projection image cannot, and the target object A cannot be tracked based on the second projection image. The following embodiments therefore focus on how the control module 15 implements the tracking process of the target object A based on the first projection image collected by the image acquisition device 14.
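One simple way to isolate the deformation pattern, assuming a second projection image (no object present) is available as a reference, is per-pixel differencing. This is only a sketch: a real system would add thresholding and noise filtering, which the application does not detail.

```python
# Sketch: locate the deformation region by differencing the first
# projection image (object present) against the second projection
# image (object absent). Images are 2D lists of pixel intensities.

def deformation_mask(first, second, tol=0):
    """Binary mask that is 1 wherever the two projection images
    differ by more than tol."""
    return [[1 if abs(a - b) > tol else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(first, second)]

first = [[0, 9, 0], [0, 9, 9]]
second = [[0, 0, 0], [0, 0, 9]]
mask = deformation_mask(first, second)
```

The nonzero region of the mask marks where the predetermined pattern has been deformed by the target object, and its centroid can then drive the pan/tilt adjustment.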
- control module 15 is connected in communication with the image acquisition device 14 , and the control module 15 is electrically connected with the multi-axis pan/tilt 13 .
- for the communication connection between the control module 15 and the image acquisition device 14, reference may be made to the communication connection between the control module 15 and the projection module 12, which will not be repeated here.
- the image acquisition device 14 may provide the acquired first projection image to the control module 15 .
- the control module 15 can adjust the working state of the multi-axis pan/tilt head 13 according to the first projection image that has been collected by the image collection device 14 .
- the multi-axis pan/tilt 13 can be rotated.
- the rotation of the multi-axis pan/tilt 13 can drive the image acquisition device 14 to rotate, and then the pose of the image acquisition device 14 can be adjusted, so that the image acquisition device 14 can track and acquire the deformation pattern.
- the pose of the image capture device 14 includes the position and orientation of the image capture device 14 .
- the multi-axis pan/tilt 13 drives the image acquisition device 14 to rotate, and the pose of the image acquisition device 14 can be adjusted, which helps to expand the tracking range of the target object A.
- adjusting the working state of the multi-axis pan/tilt head 13 includes: adjusting the state of the multi-axis pan/tilt head 13 in at least one direction.
- the three-axis gimbal can be adjusted to rotate in at least one direction of pitch, roll, and yaw.
- the image acquisition device 14 can rotate in at least one direction among pitch rotation, roll rotation and yaw rotation along with the rotation of the three-axis pan/tilt head.
- the pose of the image capture device 14 can be adjusted to capture the deformation pattern caused by the target object A at subsequent moments.
- the image acquisition device 14 may adopt an image acquisition device with a high sampling rate.
- the image capturing device 14 can capture multiple frames of the first projection images including the deformation pattern.
- the sampling period of the image acquisition device 14 is smaller than the moving time of the target object A within the projection range of the projection module 12. That is, the moving time of the target object A within the projection range of the projection module 12 may be Q times the sampling period of the image acquisition device 14, and Q ≥ 2.
- the specific value of Q is not limited.
- Q can be 3, 8, 10, 20, or 30.5, and so on. In this way, the image acquisition device 14 can acquire multiple frames of the first projection images.
- the realization form of the target object A is not limited.
- the target object A may be any moving object that appears on the projection light of the projection module 12 .
- the tracking device may be implemented as a hand-held pan-tilt for carrying the image acquisition device 14, such as a hand-held mobile phone mount, a hand-held camera mount, or the like. The user can use the hand-held pan/tilt head to track any moving object that appears on the projection light of the projection module 12.
- before acquiring the first projection image containing the deformation pattern caused by the target object A, the image acquisition device 14 may also acquire a second projection image formed by directly projecting the reference pattern onto a certain projection surface, and the second projection image is buffered in the memory of the control module 15. Further, the control module 15 can determine, according to the third projection image currently collected by the image acquisition device 14 and the second projection image, whether the third projection image is deformed compared to the second projection image; if the determination result is yes, it is determined that the target object A enters the projection range of the projection module, and the third projection image is used as the first projection image. Optionally, the third projection image may be used as the first frame of the first projection image.
- control module 15 starts to track the target object A, that is, starts to track and acquire the deformation pattern.
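The deformation check described above (comparing the currently collected third projection image against the cached second projection image) can be sketched as follows; the function name, thresholds, and the simple pixel-difference rule are illustrative assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def deformation_detected(second_img, third_img, diff_threshold=12.0, area_ratio=0.01):
    """Hypothetical check: report whether enough pixels differ between the
    reference projection image (second_img) and the currently collected
    projection image (third_img) to indicate a deformation pattern."""
    diff = np.abs(third_img.astype(np.float32) - second_img.astype(np.float32))
    changed = np.count_nonzero(diff > diff_threshold)
    return changed / diff.size >= area_ratio

# A flat reference projection vs. one with a region displaced by an object.
second = np.zeros((120, 160), dtype=np.uint8)
third = second.copy()
third[40:80, 50:110] = 200  # deformation caused by an object in the light path
print(deformation_detected(second, third))   # large changed region -> True
print(deformation_detected(second, second))  # identical images -> False
```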
- the specific implementation of tracking and collecting the deformation pattern by the control module 15 will be described in detail in the following embodiments, and will not be described in detail here.
- the object is tracked.
- the tracking device can be implemented as a monitoring device, and the monitoring device can be deployed in the monitoring area.
- the tracking device can be implemented as a medical detection device such as a microscope.
- before the image acquisition device 14 acquires the first projection image containing the deformation pattern caused by the target object A, it can also acquire the second projection image formed by directly projecting the reference pattern onto a certain projection surface, and the second projection image is buffered in the memory of the control module 15. Further, the control module 15 can determine, according to the third projection image currently collected by the image acquisition device 14 and the second projection image, whether the third projection image is deformed compared to the second projection image; if the determination result is yes, the third projection image is fed into the neural network model.
- the neural network model calculates the object type of the deformation pattern contained in the third projection image; if the object type of the deformation pattern contained in the third projection image is the specified type, it is determined that the target object A enters the projection range of the projection module, and the third projection image is used as the first projection image.
- the third projection image may be used as the first frame of the first projection image.
- compared with recognizing the object from an image of the object captured by the image acquisition device, the above-mentioned approach of determining the object type of the target object from the deformation pattern in the projection image can reduce the workload of image recognition.
- an image of the object captured directly by an ordinary monocular image acquisition device cannot identify the three-dimensional features of the object, and if a depth camera or a binocular camera is used to capture the object so as to obtain its three-dimensional features, the cost of the image acquisition device undoubtedly increases.
- in the embodiment of the present application, target detection is performed using the deformation pattern in the projection image.
- the deformation pattern may include depth information, which may be used to measure the three-dimensional feature of the target object. Therefore, in the embodiment of the present application, the three-dimensional feature of the target object can be measured by using the deformation pattern in the projection image, which helps to reduce the requirements of the image acquisition device, thereby helping to reduce the cost of the image acquisition device.
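As a rough illustration of how a deformation pattern can carry depth information, the sketch below assumes a rectified structured-light setup in which a stripe's pixel shift relates to depth by simple triangulation; the formula, function name, and parameter values are assumptions for illustration only, not the patent's specified method.

```python
def depth_from_stripe_shift(shift_px, baseline_m, focal_px):
    """Illustrative structured-light relation (assumed rectified geometry):
    a stripe shifted by `shift_px` pixels between the reference projection
    and the deformed projection corresponds to depth z = baseline * focal / shift."""
    if shift_px <= 0:
        raise ValueError("no measurable shift")
    return baseline_m * focal_px / shift_px

# Assumed 10 cm projector-camera baseline and 500 px focal length:
print(depth_from_stripe_shift(shift_px=25.0, baseline_m=0.1, focal_px=500.0))  # 2.0 (metres)
```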
- before using the neural network model to analyze the object type of the deformation pattern contained in the third projection image, the neural network model needs to be trained.
- the model structure of the neural network model is not limited.
- the neural network model may include: a convolution layer, a pooling layer, and an activation function layer.
- Sigmoid function, tanh function or Relu function can be used in the activation function layer.
- the number of convolutional and pooling layers is equal.
- the specific numbers of convolution layers and pooling layers are not limited.
- the number of convolutional and pooling layers can be 2, 3 or 4 or even more.
- the network architecture of the initial neural network model can be preset.
- the network architecture of the initial neural network model includes: convolutional layers, pooling layers, the number and setting order of these convolutional layers and pooling layers, and the hyperparameters of each convolutional layer and pooling layer.
- the hyperparameters of the convolutional layer include: the size of the convolution kernel K (kernel size), the size of the edge expansion of the feature map P (padding size), and the stride size S (stride size).
- the hyperparameters of the pooling layer are the size K of the pooling operation kernel and the stride size S, etc.
- the activation function layer can be a Relu function: Relu(x) = max(0, x).
- the output of each convolutional layer can be expressed as: x_(i+1) = Relu(w_i * x_i + b_i). Among them, w_i and b_i are the parameters of the neural network model to be trained, representing the weights and biases of each layer, respectively; x_i represents the input vector of the i-th layer (for example, the input image of this layer).
- the input image I can be convolved with the convolution kernel K, which can be expressed as: S(m, n) = (I * K)(m, n) = Σ_i Σ_j I(m + i, n + j) · K(i, j).
- M represents the number of rows of pixels of the input image
- N represents the number of columns of pixels of the input image
- m is an integer and 0 ≤ m < M
- n is an integer and 0 ≤ n < N.
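The convolution above can be implemented directly. This sketch follows the cross-correlation form commonly used in convolutional layers, with no padding and a stride of 1, over the valid region of the input image.

```python
import numpy as np

def conv2d(I, K):
    """Direct implementation of S(m, n) = sum_i sum_j I(m+i, n+j) * K(i, j)
    over the valid region of the input image I (no padding, stride 1)."""
    kh, kw = K.shape
    out_h = I.shape[0] - kh + 1
    out_w = I.shape[1] - kw + 1
    S = np.zeros((out_h, out_w))
    for m in range(out_h):
        for n in range(out_w):
            # Element-wise product of the kernel with the window at (m, n).
            S[m, n] = np.sum(I[m:m + kh, n:n + kw] * K)
    return S

I = np.arange(16, dtype=float).reshape(4, 4)
K = np.ones((2, 2))
print(conv2d(I, K)[0, 0])  # 0 + 1 + 4 + 5 = 10.0
```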
- the process of training the neural network model can be understood as a process of optimizing the initial parameters w_i and b_i of the neural network model, to obtain the weights w_i and biases b_i of each convolutional layer.
- the loss function can be minimized as the training target, and the model is trained by using the sample image to obtain the neural network model.
- the sample image includes a projection image formed by the projection light of the projection module passing through the designated object and projecting the reference image on the projection surface.
- the specified object belongs to the specified type.
- the sample image may be one frame or multiple frames, and multiple frames refer to two or more frames, and the specific value of the number can be flexibly set according to actual needs.
- the source of the sample image is not limited. The sample image may be: a pre-collected projection image formed by the projection light of the projection module passing through the specified object and projecting the reference image onto the projection surface; it may also be an image from another three-dimensional image database or depth image database; and so on.
- the loss function is determined according to the probability that the specified object belongs to the specified type obtained by the model training and the actual probability that the specified object belongs to the specified type.
- the actual probability that the specified object belongs to the specified type may be 1, that is, 100%.
- the loss function may be the absolute value of the difference between the probability that the specified object belongs to the specified type obtained by model training and the actual probability that the specified object belongs to the specified type.
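A minimal sketch of that loss, assuming a single specified type whose actual probability is 1 (the function name is illustrative):

```python
def abs_prob_loss(predicted_prob, actual_prob=1.0):
    """Loss described here: absolute value of the difference between the
    probability the model assigns to the specified type and the actual
    probability that the specified object belongs to that type (1, i.e. 100%)."""
    return abs(predicted_prob - actual_prob)

print(abs_prob_loss(0.85))  # shrinks toward 0 as the model improves
print(abs_prob_loss(1.0))   # a perfect prediction gives zero loss
```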
- the model training process provided in this embodiment is exemplarily described below with reference to FIG. 1d.
- the main steps of the model training process are as follows:
- S1 Take the sample image as the input image of the initial neural network model, and input it into the initial neural network model.
- S2 Obtain, from the output of the initial neural network model, the probability of the deformation pattern contained in the sample image under each object type.
- S3 Bring the probability of the deformation pattern contained in the sample image under each object type and the actual probability of the deformation pattern contained in the sample image under each object type into the loss function, and calculate the loss function value.
- the types and quantities of object types output by the neural network model can be determined by the richness of the sample images.
- step S4 Determine whether the loss function value calculated this time is less than or equal to the loss function values calculated in the last W times; if the determination result is yes, go to step S5; if the determination result is no, go to step S6.
- W is an integer greater than or equal to 1, and its specific value can be flexibly set according to actual needs. For example, W may be equal to 5, 8, 10, etc., but is not limited thereto.
- step S5 Adjust the parameters in the neural network model along the negative gradient direction of the parameters in the initial neural network model, use the adjusted neural network model as the initial neural network model, and return to step S1.
- step S6 End the training, and use the current neural network model as the trained neural network model.
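The S1-S6 loop can be sketched with a stand-in one-parameter model in place of the neural network; the toy model (y = w·x fitted to y = 2x), learning rate, and stopping window below are illustrative assumptions.

```python
def train_scalar(w, lr=0.01, window=3, max_iters=200):
    """Toy model y = w * x fitted to y = 2x with squared loss, following the
    S1-S6 loop: keep stepping along the negative gradient while the new loss
    is <= every loss in the last `window` iterations, otherwise stop."""
    xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
    history = []
    for _ in range(max_iters):
        loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys))      # S1-S3
        if history and any(loss > h for h in history[-window:]):  # S4 fails
            break                                                 # S6: stop
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))   # S5: step
        w -= lr * grad                                            # along -gradient
        history.append(loss)
    return w

print(round(train_scalar(0.0), 3))  # converges near 2.0
```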
- when it is determined that the target object A appears within the projection range of the projection module 12, the target object A can be tracked, that is, the deformation pattern caused by the target object A can be tracked and collected.
- the control module 15 can adjust the working state of the multi-axis pan/tilt 13 according to the first projection image collected by the image acquisition device 14, so that the multi-axis pan/tilt 13 drives the image acquisition device 14 to track and collect the deformation pattern.
- an adjustment period can be set in the control module 15, and a timer or a counter can be started to time the adjustment period.
- the control module 15 can, according to the first projection image acquired by the image acquisition device 14 in the current adjustment period, adjust the working state of the multi-axis pan/tilt head 13, so as to drive the image acquisition device 14 to track and acquire the deformation pattern in the next adjustment period. That is, for the multi-axis pan/tilt head 13, the adjusted working state enables the image acquisition device 14 to capture the deformation pattern caused by the target object A in the next adjustment cycle.
- control module 15 may calculate the motion information of the target object A according to the first projection image acquired by the image acquisition device 14 in the current adjustment period.
- the motion information of the target object A may include at least one of displacement information, motion speed, motion direction, and acceleration information of the target object A.
- the control module 15 may calculate the pixel difference between the target projection image and the initial projection image corresponding to the current adjustment period.
- the initial projection image corresponding to the current adjustment period may be the first frame of the first projection image collected by the image acquisition device 14 during the current adjustment period, or may be the first N frames initially collected by the image acquisition device 14 during the current adjustment period.
- the target projection image is other projection images except the initial projection image acquired by the image acquisition device 14 during the current adjustment period.
- the number of target projection images can be one or more frames. Multi-frame refers to 2 or more frames.
- the control module 15 may calculate the motion information of the target object A according to the pixel difference between the target projection image and the initial projection image and the pose of the image acquisition device 14 in the current adjustment period.
- control module 15 may, according to the pixel difference between the target projection images of two adjacent frames and the initial projection images corresponding to the current adjustment period, and the pose of the image acquisition device in the current adjustment period, Calculate the displacement change of the target object A; and calculate the movement speed and/or acceleration of the target object A according to the displacement change of the target object A and the sampling period of the image acquisition device 14 .
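A simplified sketch of recovering motion information from per-frame pixel shifts of the deformation pattern, assuming a fixed image scale (metres per pixel) so that pixel differences map linearly to displacement; the function name and parameter values are assumptions, and a real implementation would also fold in the pose of the image acquisition device.

```python
def motion_from_shifts(pixel_shifts, metres_per_pixel, sampling_period_s):
    """Hypothetical recovery of motion information: convert per-frame pixel
    shifts of the deformation pattern (relative to the initial projection
    image) into displacements, speeds and accelerations."""
    displacements = [s * metres_per_pixel for s in pixel_shifts]
    speeds = [d / sampling_period_s for d in displacements]
    accels = [(v2 - v1) / sampling_period_s for v1, v2 in zip(speeds, speeds[1:])]
    return displacements, speeds, accels

# Shifts of 10, 20, 30 px at 5 mm/px with a 0.1 s sampling period:
d, v, a = motion_from_shifts([10, 20, 30], metres_per_pixel=0.005, sampling_period_s=0.1)
print(v)  # speeds in m/s between consecutive frames
print(a)  # accelerations in m/s^2
```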
- the control module 15 can adjust the working state of the multi-axis pan/tilt according to the motion information of the target object A, so as to drive the image acquisition device 14 to track and acquire the deformation pattern in the next adjustment period. That is, for the multi-axis pan/tilt head 13, its adjusted working state enables the image acquisition device 14 to capture the deformation pattern caused by the target object A in the next adjustment period.
- the control module 15 can calculate the target motion parameter value of the motor in the multi-axis pan/tilt 13 according to the motion information of the target object A, and adjust the motion parameter of the motor in the multi-axis pan/tilt 13 to the target motion parameter value, thereby adjusting the working state of the multi-axis gimbal 13.
- the adjusted working state enables the image capture device 14 to capture the deformation pattern caused by the target object A in the next adjustment cycle.
- for the motor in the multi-axis pan/tilt head 13, its motion parameters may include at least one of the acceleration, angular acceleration, and rotational speed of the motor in the multi-axis pan/tilt head 13.
- the target motion parameter value of the motor in the multi-axis pan/tilt head 13 may include at least one of: the target acceleration, target angular acceleration, and target rotational speed of the motor in the multi-axis pan/tilt head 13.
- the control module 15 can predict the position to which the target object A will move in the next adjustment period according to the motion information of the target object A; calculate the position where the deformation pattern will be generated according to that predicted position; further, calculate the pose corresponding to the image acquisition device 14 in the next adjustment period according to the position where the deformation pattern will be generated; and, according to that pose, calculate the target motion parameter value of the motor in the multi-axis gimbal 13.
- the control module 15 can control the multi-axis pan/tilt 13 to adjust the motion parameters of its motor to the target motion parameter value, so as to adjust the working state of the multi-axis pan/tilt 13, so that the adjusted working state enables the image acquisition device 14 to track and collect the deformation pattern caused by the target object A in the next adjustment period, and the target object A can thus be tracked in the next adjustment period.
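The prediction-and-adjustment step can be illustrated with an assumed planar geometry: the target moves laterally at constant velocity, and the gimbal's yaw must follow so the camera keeps pointing at the deformation pattern. The geometry, function name, and parameter values are hypothetical.

```python
import math

def gimbal_yaw_target(x_m, v_mps, period_s, camera_dist_m):
    """Sketch under assumed geometry: predict where the target will be after
    one adjustment period (constant-velocity model), then compute the yaw
    angle and the average yaw rate (a target motion parameter for the motor)
    needed to point the camera at the predicted position."""
    x_next = x_m + v_mps * period_s             # predicted lateral position
    yaw_now = math.atan2(x_m, camera_dist_m)
    yaw_next = math.atan2(x_next, camera_dist_m)
    yaw_rate = (yaw_next - yaw_now) / period_s  # target motor speed (rad/s)
    return yaw_next, yaw_rate

# Target 2 m away, moving at 0.5 m/s, with a 0.2 s adjustment period:
yaw, rate = gimbal_yaw_target(x_m=0.0, v_mps=0.5, period_s=0.2, camera_dist_m=2.0)
print(round(yaw, 4), round(rate, 4))
```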
- the tracking device provided by the embodiment of the present application may further include: a power supply component 17 , a heat dissipation component 18 , and the like.
- the basic components included in different tracking devices and the composition of the basic components are different, and the embodiments of the present application enumerate only some examples.
- the embodiment of the present application also provides a target tracking system.
- the system includes: a projection module 22, a tracking device S20 and a projection surface S21 arranged in the physical environment where the tracking device S20 is located.
- the projection module 22 is used for projecting a reference image to the projection surface, and the reference image has a predetermined pattern.
- the tracking device includes: a body 21 , a multi-axis pan/tilt 23 , and a control module 25 .
- the body 21 is used to install the multi-axis pan/tilt 23.
- the multi-axis pan/tilt 23 is used to carry the image acquisition device 24 and can drive the image acquisition device 24 to rotate.
- the image acquisition device 24 is used for acquiring the first projection image corresponding to the reference image.
- the reference image is projected outward by the projection module 22, and the reference image has a predetermined pattern.
- the first projected image includes a deformation pattern corresponding to the predetermined pattern. The deformation pattern is generated based on the target object.
- the control module 25 is electrically connected to the multi-axis pan-tilt 23, and is used for adjusting the working state of the multi-axis pan-tilt 23 according to the first projection image collected by the image acquisition device 24, so as to drive the image acquisition device to track and collect the deformation pattern.
- the relationship between the projection module 22 and the tracking device is not limited.
- the projection module 22 is an independent projection device and is disposed in the physical environment where the tracking device is located.
- the projection module 22 is connected in communication with the control module 25.
- the control module 25 may instruct the projection module 22 to project a reference image outward, the reference image having a predetermined pattern.
- the body 21 can be used to install the projection module 22 and the multi-axis pan/tilt head 23 .
- the projection module 22 can be fixed on the body 21 .
- the multi-axis pan/tilt 23 is rotatably connected to the body 21 .
- the multi-axis pan/tilt 23 refers to a pan/tilt with multiple rotation axes. Plural means two or more.
- the multi-axis pan/tilt 23 is used to carry the image acquisition device 24.
- for the manner in which the image acquisition device 24 is mounted on the multi-axis pan/tilt head 23, reference may be made to the relevant description in FIG. 1a above, which will not be repeated here.
- when the image acquisition device 24 is mounted on the multi-axis pan-tilt 23, the image acquisition device 24 can rotate with the rotation of the multi-axis pan-tilt 23; that is, the multi-axis pan-tilt 23 can drive the image acquisition device 24 to rotate.
- the realization form of the multi-axis pan/tilt head 23 and the realization form of the image acquisition device 24 reference can be made to the relevant contents of the above-mentioned embodiments, which will not be repeated here.
- the tracking device further includes: a control module 25 .
- for the specific implementation of the control module 25, reference may be made to the related content of the above-mentioned embodiment of the tracking device.
- the computer instructions of the tracking device S20 are mainly executed by the control module 25 .
- control module 25 may instruct the projection module 22 to project a reference image outward, and the reference image has a predetermined pattern.
- the predetermined pattern can be any pattern.
- the predetermined pattern may be a striped pattern, a coding pattern, a predetermined character pattern, etc., but is not limited thereto.
- control module 25 may be connected in communication with the projection module 22 .
- control module 25 may instruct the projection module 22 to project a reference image with a predetermined pattern outward through an instruction.
- control module 25 may send a projection instruction to the projection module 22, where the projection instruction is used to instruct the projection module 22 to project a reference image with a predetermined pattern outward.
- the projection module 22 projects a reference image with a predetermined pattern outwards when receiving the projection instruction.
- control module 25 is electrically connected to the projection module 22 .
- the control module 25 may instruct the projection module 22 to project a reference image with a predetermined pattern outward through an electrical signal.
- the electrical signal can be a high-level or low-level signal.
- the control module 25 may output an electrical signal to the projection module 22, where the electrical signal is used to instruct the projection module 22 to project a reference image with a predetermined pattern outward.
- the projection module 22 projects a reference image with a predetermined pattern outwards under the condition of receiving the electrical signal.
- the projection module 22 can project a reference image outward, and the reference image has the above-mentioned predetermined pattern.
- the specific implementation form of the projection module 22 is not limited.
- the projection module 22 projects the reference image outward, if no object appears on the projection light of the projection module 22, the projection image presented by the reference image on the projection surface S21 has a predetermined pattern. If an object appears on the projection light of the projection module 22, the projection image corresponding to the predetermined pattern is deformed.
- the pattern formed by deforming the projection image corresponding to the predetermined pattern is defined as deformation pattern.
- the deformation pattern is generated based on the object appearing on the projection light of the projection module 22, and the deformation pattern will move with the movement of the object. Based on this, in the embodiment of the present application, the tracking of the object appearing on the projection light of the projection module 22 may be implemented based on the deformation pattern.
- the object appearing on the projection light of the projection module 22 is defined as the target object A.
- the target object A may be a moving object.
- the image acquisition device 24 may acquire the projection image corresponding to the reference image.
- the projection image corresponding to the reference image may include: a projection image formed by directly projecting the reference image onto the projection surface S21.
- the projection image corresponding to the reference image may also include: when an object (target object A) appears in the projection light of the projection module 22, the projection image formed on the projection surface S21 by the projection light of the projection module 22 passing through the target object A and projecting the reference image onto the projection surface S21. The projection surface S21 may be a projection screen, such as a projection curtain, etc.; or other object surfaces in the current environment, such as walls, floors, or furniture surfaces, etc., but is not limited to this.
- in FIG. 2, only the case where the projection surface S21 is a projection screen is illustrated, but it is not limited thereto.
- the projection image formed when the projection light of the projection module 22 passes through the target object A and projects the reference image onto a certain projection surface is defined as the first projection image; and the projection image formed by directly projecting the reference image onto a certain projection surface when no object appears in the projection light of the projection module 22 is defined as the second projection image.
- the first projection image includes the above-mentioned deformation pattern, and the second projection image does not include the deformation pattern.
- the first projection image can reflect the information of the target object A, but the second projection image cannot reflect the information of the target object A, and therefore the target object A cannot be tracked based on the second projection image. Accordingly, the following embodiments focus on how the control module 25 tracks the target object A based on the first projection image collected by the image acquisition device 24.
- the control module 25 is connected in communication with the image acquisition device 24 , and the control module 25 is electrically connected with the multi-axis pan/tilt 23 .
- the image acquisition device 24 may provide the acquired first projection image to the control module 25 .
- the control module 25 can adjust the working state of the multi-axis pan/tilt head 23 according to the first projection image that has been collected by the image collection device 24 .
- the multi-axis pan/tilt 23 can be rotated.
- the rotation of the multi-axis pan/tilt 23 can drive the image acquisition device 24 to rotate, and then the pose of the image acquisition device 24 can be adjusted, so that the image acquisition device 24 can track and acquire the deformation pattern.
- the pose of the image capture device 24 includes the position and orientation of the image capture device 24 . Since the deformation pattern is caused by the target object A, tracking and collecting the deformation pattern can realize the tracking of the target object A. Moreover, the multi-axis pan/tilt 23 drives the image acquisition device 24 to rotate, and the pose of the image acquisition device 24 can be adjusted, which helps to expand the tracking range of the target object A.
- the image acquisition device 24 may adopt an image acquisition device with a high sampling rate.
- the image capturing device 24 can capture multiple frames of the first projection images including the deformation pattern.
- the sampling period of the image acquisition device 24 is smaller than the moving time of the target object A within the projection range of the projection module 22. That is, the moving time of the target object A within the projection range of the projection module 22 may be Q times the sampling period of the image acquisition device 24, and Q ≥ 2. In this way, the image acquisition device 24 can acquire multiple frames of the first projection images.
- the realization form of the target object A is not limited.
- the target object A may be any moving object that appears on the projection light of the projection module 22. Based on this, before the image acquisition device 24 acquires the first projection image including the deformation pattern caused by the target object A, it can also acquire the second projection image formed by directly projecting the reference pattern onto a certain projection surface, and the second projection image is buffered in the memory of the control module 25.
- the control module 25 can determine, according to the third projection image currently collected by the image acquisition device 24 and the second projection image, whether the third projection image is deformed compared to the second projection image; if the determination result is yes, it is determined that the target object A enters the projection range of the projection module, and the third projection image is used as the first projection image.
- the third projection image may be used as the first frame of the first projection image.
- the control module 25 starts to track the target object A, that is, starts to track and acquire the deformation pattern. The specific implementation of tracking and collecting the deformation pattern by the control module 25 will be described in detail in the following embodiments, and will not be described in detail here.
- the image acquisition device 24 may also acquire a second projection image formed by directly projecting the reference pattern onto a projection surface, and the second projection image is buffered in the memory of the control module 25. Further, the control module 25 can determine, according to the third projection image currently collected by the image acquisition device 24 and the second projection image, whether the third projection image is deformed compared to the second projection image; if the determination result is yes, the third projection image is fed into the neural network model.
- the neural network model calculates the object type of the deformation pattern included in the third projection image; if the object type of the deformation pattern included in the third projection image is a specified type, it is determined that the target object A enters the projection range of the projection module, and the third projection image is used as the first projection image.
- the third projection image may be used as the first frame of the first projection image.
- before using the neural network model to analyze the object type of the deformation pattern contained in the third projection image, the neural network model needs to be trained.
- the loss function can be minimized as the training target, and the model is trained by using the sample image to obtain the neural network model.
- the sample image includes a projection image formed by the projection light of the projection module passing through the designated object and projecting the reference image on the projection surface.
- the loss function is determined according to the probability that the specified object belongs to the specified type obtained by the model training and the actual probability that the specified object belongs to the specified type.
- the actual probability that the specified object belongs to the specified type may be 1, that is, 100%.
- the loss function may be the absolute value of the difference between the probability that the specified object belongs to the specified type obtained by model training and the actual probability that the specified object belongs to the specified type.
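The loss described in the last few items can be written down directly. A sketch, assuming a single positive sample whose actual probability of belonging to the specified type is 1; the function name is illustrative:

```python
def type_loss(predicted_prob, actual_prob=1.0):
    """Per-sample loss: the absolute value of the difference between the
    model's probability that the specified object belongs to the
    specified type and the actual probability (1, i.e. 100%, for a
    positive sample). Training minimizes this over the sample images."""
    return abs(predicted_prob - actual_prob)
```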
- when it is determined that the target object A appears within the projection range of the projection module 22, the target object A can be tracked, that is, the deformation pattern caused by the target object A can be tracked and captured.
- the control module 25 can adjust the working state of the multi-axis gimbal 23 according to the first projection image collected by the image acquisition device 24, so that the multi-axis gimbal 23 drives the image acquisition device 24 to track and capture the deformation pattern.
- an adjustment period can be set in the control module 25, and a timer or a counter can be started to time the adjustment period.
- whenever the adjustment period arrives, the control module 25 can adjust the working state of the multi-axis gimbal 23 according to the first projection image captured by the image acquisition device 24 in the current adjustment period, so as to drive the image acquisition device 24 to track and capture the deformation pattern in the next adjustment period. That is, for the multi-axis gimbal 23, the adjusted working state enables the image acquisition device 24 to capture the deformation pattern caused by the target object A in the next adjustment period.
- control module 25 may calculate the motion information of the target object A according to the first projection image acquired by the image acquisition device 24 in the current adjustment period.
- the motion information of the target object A may include at least one of displacement information, motion speed, motion direction, and acceleration information of the target object A.
- the control module 25 may calculate the pixel difference between the target projection image and the initial projection image corresponding to the current adjustment period.
- the initial projection image corresponding to the current adjustment period may be the first frame of the first projection image collected by the image acquisition device 24 during the current adjustment period, or may be a projection image composed of the pixel average of the first N frames initially collected by the image acquisition device 24 during the current adjustment period, where N ≥ 2 and is an integer.
- the target projection image is other projection images except the initial projection image acquired by the image acquisition device 24 in the current adjustment period.
- the number of target projection images can be one or more frames. Multi-frame refers to 2 or more frames.
- the control module 25 can calculate the motion information of the target object A according to the pixel difference between the target projection image and the initial projection image and the pose of the image acquisition device 24 in the current adjustment period.
- control module 25 may, according to the pixel difference between two adjacent frames of the target projection image and the initial projection image corresponding to the current adjustment period, and the pose of the image acquisition device in the current adjustment period, calculate the displacement change of the target object A; and calculate the movement speed and/or acceleration of the target object A according to the displacement change of the target object A and the sampling period of the image acquisition device 24.
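The speed and acceleration computation above can be sketched as follows. The sketch assumes the pixel differences have already been converted into metric displacements of the deformation pattern using the pose of the image acquisition device; the function name is illustrative:

```python
def motion_from_displacements(offsets_m, sampling_period_s):
    """offsets_m: displacement (in metres) of the deformation pattern in
    successive target projection images relative to the initial projection
    image, assumed to have been derived upstream from the pixel differences
    and the pose of the image acquisition device. Returns the speed over
    each sampling interval and the acceleration between intervals."""
    speeds = [(b - a) / sampling_period_s for a, b in zip(offsets_m, offsets_m[1:])]
    accels = [(v2 - v1) / sampling_period_s for v1, v2 in zip(speeds, speeds[1:])]
    return speeds, accels
```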
- control module 25 can adjust the working state of the multi-axis gimbal 23 according to the motion information of the target object A, so as to drive the image acquisition device 24 to track and capture the deformation pattern in the next adjustment period. That is, for the multi-axis gimbal 23, the adjusted working state enables the image acquisition device 24 to capture the deformation pattern caused by the target object A in the next adjustment period.
- control module 25 can calculate the target motion parameter values of the motor in the multi-axis gimbal 23 according to the motion information of the target object A, and adjust the motion parameters of the motor in the multi-axis gimbal 23 to the target motion parameter values, thereby adjusting the working state of the multi-axis gimbal 23.
- the adjusted working state enables the image acquisition device 24 to capture the deformation pattern caused by the target object A in the next adjustment period.
- the motion parameters of the motor in the multi-axis gimbal 23 may include at least one of the acceleration, angular acceleration, and rotational speed of the motor.
- the target motion parameter values of the motor in the multi-axis gimbal 23 may include at least one of a target acceleration, a target angular acceleration, and a target rotational speed of the motor in the multi-axis gimbal 23.
- control module 25 can predict the position to which the target object A will move in the next adjustment period according to the motion information of the target object A, and calculate the position where the deformation pattern will be generated according to that predicted position; further, the control module 25 can calculate the pose required of the image acquisition device 24 in the next adjustment period according to the position where the deformation pattern will be generated, and calculate the target motion parameter values of the motor in the multi-axis gimbal 23 according to that required pose and the pose of the image acquisition device 24 in the current adjustment period.
- control module 25 can control the multi-axis gimbal 23 to adjust the motion parameters of its motor to the target motion parameter values, so as to adjust the working state of the multi-axis gimbal 23, so that the adjusted working state enables the image acquisition device 24 to track and capture the deformation pattern caused by the target object A in the next adjustment period, thereby tracking the target object A in the next adjustment period.
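The prediction step above can be sketched as a constant-velocity extrapolation. The scalar `metres_per_radian` factor stands in for the full pose geometry relating the deformation pattern's position to a gimbal angle, and all names are illustrative:

```python
def plan_gimbal(target_pos_m, target_speed_mps, period_s,
                current_angle_rad, metres_per_radian):
    """Predict where the target will be after the next adjustment period
    (constant-velocity assumption), derive the gimbal angle that keeps the
    deformation pattern in the camera's view, and return the motor's
    target angular speed for the adjustment."""
    predicted_pos = target_pos_m + target_speed_mps * period_s
    target_angle = predicted_pos / metres_per_radian
    target_angular_speed = (target_angle - current_angle_rad) / period_s
    return predicted_pos, target_angle, target_angular_speed
```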
- in the following, the predetermined pattern is taken to be a stripe pattern, and the target tracking process is described by way of example with reference to FIG. 2b.
- the projection module projects a fringe pattern A. After the fringe pattern A passes through the ball, it is projected onto the projection surface S21 to form a projection image D.
- the projected image D contains the deformed pattern of the fringe pattern A due to the passing of the ball.
- the image acquisition device 24 captures the projection image D.
- the control module 25 can input the projection image D into the neural network model, and the neural network model recognizes that the object is spherical.
- the control module 25 can also detect the pit B on the ball and the crack C on the ball.
- control module 25 can control the movement of the multi-axis pan/tilt according to the acquired projection pattern D, so as to drive the image acquisition device 24 to track and acquire the deformation pattern. Since the deformation pattern is caused by the ball, the movement trajectory of the deformation pattern can reflect the movement trajectory of the ball. Therefore, tracking and collecting the deformation pattern realizes the tracking of the ball.
- the embodiments of the present application also provide a target tracking method.
- the following is an exemplary description of the target tracking method provided by the embodiments of the present application from the perspective of the above-mentioned control module.
- FIG. 3 is a schematic flowchart of a target tracking method provided by an embodiment of the present application. As shown in Figure 3, the method includes:
- the multi-axis pan/tilt can be rotated around its rotation axis, and the direction of rotation is determined by the rotation direction of the rotary shaft contained in the multi-axis pan/tilt. Since the image acquisition device is mounted on the multi-axis PTZ, the image acquisition device also rotates with the rotation of the multi-axis PTZ.
- the projection module can be controlled to project a reference image outward, and the reference image has a predetermined pattern.
- the predetermined pattern can be any pattern.
- the predetermined pattern may be a striped pattern, a coding pattern, a predetermined character pattern, etc., but is not limited thereto.
- for the specific implementation of the control module controlling the projection module, reference can be made to the relevant contents of the above-mentioned embodiments, and details are not repeated here.
- the projection image presented by the reference image on the projection surface also has a predetermined pattern. If an object appears on the projection light of the projection module, the projection image corresponding to the predetermined pattern is deformed.
- for ease of description, the pattern formed by the deformation of the projection image corresponding to the predetermined pattern is defined as the deformation pattern.
- the deformation pattern is generated based on the object appearing on the projection light of the projection module, and the deformation pattern moves with the movement of the object. Based on this, in the embodiment of the present application, the tracking of the object appearing on the projection light of the projection module can be implemented based on the deformation pattern.
- the object appearing on the projection light of the projection module is defined as the target object.
- the target object can be a moving object.
- the image acquisition device can be controlled to acquire the projection image corresponding to the reference image.
- the projection image corresponding to the reference image may include: a projection image formed by directly projecting the reference image onto a certain projection surface, that is, when no object appears in the projection light of the projection module, the reference image is directly projected onto a certain projection surface. The resulting projected image.
- the projection image corresponding to the reference image may also include: when an object (target object) appears in the projection light of the projection module, the projection light of the projection module passes through the target object A to project the reference image on a certain projection surface The resulting projected image.
- the projection surface reference may be made to the relevant content of the above-mentioned embodiment of the tracking device, which will not be repeated here.
- for ease of description and distinction, the projection image formed when the projection light of the projection module passes through the target object and projects the reference image onto a certain projection surface is defined as the first projection image; and the projection image formed by directly projecting the reference image onto a certain projection surface when no object appears in the projection light of the projection module is defined as the second projection image.
- the first projection image includes the above-mentioned deformation pattern
- the second projection image does not include the deformation pattern. Therefore, the first projection image can reflect the information of the target object, but the second projection image cannot reflect the information of the target object, and thus the target object cannot be tracked based on the second projection image. Therefore, in the following embodiments, the focus is on the implementation of the tracking process of the target object based on the first projection image collected by the image collection device.
- the working state of the multi-axis pan/tilt head can be adjusted according to the first projection image that has been collected by the image collection device.
- the multi-axis pan/tilt can be rotated.
- the rotation of the multi-axis pan/tilt can drive the image acquisition device to rotate, and then the pose of the image acquisition device can be adjusted, so that the image acquisition device can track and acquire the deformation pattern.
- the pose of the image acquisition device includes the position and orientation of the image acquisition device. Since the deformation pattern is caused by the target object, the tracking and acquisition of the deformation pattern can realize the tracking of the target object.
- the multi-axis pan/tilt drives the image acquisition device to rotate, and the pose of the image acquisition device can be adjusted, which helps to expand the tracking range of the target object.
- the adjusted working state can adjust the pose of the image acquisition device to capture the deformation pattern caused by the target object at subsequent moments.
- the image acquisition device may adopt an image acquisition device with a high sampling rate.
- the image capturing device may capture multiple frames of the first projection images including the deformation pattern.
- the sampling period of the image acquisition device is shorter than the time the target object spends moving within the projection range of the projection module. That is, the moving time within the projection range of the projection module may be Q times the sampling period of the image acquisition device, where Q ≥ 2.
- the specific value of Q is not limited.
- Q can be 3, 8, 10, 20, or 30.5, and so on. In this way, the image acquisition device can acquire multiple frames of the first projection image.
- the realization form of the target object is not limited.
- the target object may be any moving object that appears on the projection ray of the projection module.
- the third projection image may be used as the first frame of the first projection image. After that, start tracking the target object, that is, start tracking and collecting the deformation pattern. The specific implementation of tracking and collecting the deformation pattern will be described in detail in the following embodiments, and will not be described in detail here.
- the image acquisition device may also acquire the second projection image formed by directly projecting the reference image onto a certain projection surface. Further, according to the third projection image currently collected by the image acquisition device and the second projection image, it can be determined whether the third projection image is deformed compared with the second projection image; if so, the third projection image is fed into the neural network model.
- the object type of the deformation pattern contained in the third projection image is then analyzed; if the object type of the deformation pattern contained in the third projection image is the specified type, it is determined that the target object enters the projection range of the projection module, and the third projection image is used as the first projection image.
- the third projection image may be used as the first frame of the first projection image.
- before using the neural network model to analyze the object type of the deformation pattern contained in the third projection image, the neural network model needs to be trained.
- the loss function can be minimized as the training target, and the model is trained by using the sample image to obtain the neural network model.
- the sample image includes a projection image formed by the projection light of the projection module passing through the designated object and projecting the reference image on the projection surface.
- the sample image can be one frame or multiple frames, and multiple frames refer to two or more frames, and the specific value of the number can be flexibly set according to actual needs.
- the loss function is determined according to the probability that the specified object belongs to the specified type obtained by the model training and the actual probability that the specified object belongs to the specified type.
- the actual probability that the specified object belongs to the specified type may be 1, that is, 100%.
- the loss function may be the absolute value of the difference between the probability that the specified object belongs to the specified type obtained by model training and the actual probability that the specified object belongs to the specified type.
- when it is determined that the target object appears within the projection range of the projection module, the target object can be tracked, that is, the deformation pattern caused by the target object can be tracked and captured.
- the working state of the multi-axis gimbal can be adjusted according to the first projection image acquired by the image acquisition device, so that the multi-axis gimbal drives the image acquisition device to track and capture the deformation pattern.
- an adjustment period can be set, and a timer or a counter can be started to time the adjustment period. Whenever the adjustment period arrives, the working state of the multi-axis gimbal can be adjusted according to the first projection image collected by the image acquisition device in the current adjustment period, so as to drive the image acquisition device to track and capture the deformation pattern in the next adjustment period. That is, for the multi-axis gimbal, the adjusted working state enables the image acquisition device to capture the deformation pattern caused by the target object in the next adjustment period.
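The adjustment-period loop described above can be sketched as follows; the four callables are illustrative placeholders for the image acquisition device, the analysis step, the gimbal interface, and a stop condition:

```python
import time

def run_adjustment_cycles(period_s, collect_frames, compute_state,
                          apply_state, should_stop):
    """Each adjustment period: take the first projection images collected
    during the current period, compute a new gimbal working state from
    them, and apply it so the camera can follow the deformation pattern in
    the next period; then wait out the remainder of the period."""
    while not should_stop():
        started = time.monotonic()
        frames = collect_frames()           # images of the current period
        apply_state(compute_state(frames))  # state used in the next period
        remaining = period_s - (time.monotonic() - started)
        if remaining > 0:
            time.sleep(remaining)
```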
- the motion information of the target object may be calculated according to the first projection image acquired by the image acquisition device in the current adjustment period.
- the movement information of the target object may include at least one of displacement information, movement speed, movement direction and acceleration information of the target object.
- the pixel difference between the target projection image and the initial projection image corresponding to the current adjustment period may be calculated.
- the initial projection image corresponding to the current adjustment period may be the first frame of the first projection image collected by the image acquisition device during the current adjustment period, or a projection image composed of the pixel average of the first N frames initially collected by the image acquisition device during the current adjustment period, where N ≥ 2 and is an integer.
- the target projection image is other projection images except the initial projection image acquired by the image acquisition device during the current adjustment period.
- the number of target projection images can be one or more frames. Multi-frame refers to 2 or more frames.
- the motion information of the target object can be calculated according to the pixel difference between the target projection image and the initial projection image and the pose of the image acquisition device in the current adjustment period.
- the displacement change of the target object can be calculated according to the pixel difference between two adjacent frames of the target projection image and the initial projection image corresponding to the current adjustment period, and the pose of the image acquisition device in the current adjustment period; and the movement speed and/or acceleration of the target object can be calculated according to the displacement change of the target object and the sampling period of the image acquisition device.
- the working state of the multi-axis gimbal can be adjusted according to the motion information of the target object, so as to drive the image acquisition device to track and capture the deformation pattern in the next adjustment period. That is, for the multi-axis gimbal, the adjusted working state enables the image acquisition device to capture the deformation pattern caused by the target object in the next adjustment period.
- according to the motion information of the target object, the target motion parameter values of the motor in the multi-axis gimbal can be calculated, and the motion parameters of the motor can be adjusted to the target motion parameter values, so that the working state of the multi-axis gimbal can be adjusted.
- the adjusted working state enables the image acquisition device to capture the deformation pattern caused by the target object in the next adjustment period.
- the motion parameters of the motor in the multi-axis gimbal may include at least one of the acceleration, angular acceleration, and rotational speed of the motor.
- the target motion parameter values of the motor in the multi-axis gimbal may include at least one of a target acceleration, a target angular acceleration, and a target rotational speed of the motor in the multi-axis gimbal.
- the position to which the target object will move in the next adjustment period can be predicted according to the motion information of the target object; the position where the deformation pattern will be generated can be calculated according to that predicted position; the pose required of the image acquisition device in the next adjustment period can be calculated according to the position where the deformation pattern will be generated; and the target motion parameter values of the motor in the multi-axis gimbal can be calculated according to that required pose and the pose of the image acquisition device in the current adjustment period.
- the multi-axis gimbal can be controlled to adjust the motion parameters of its motor to the target motion parameter values, thereby adjusting the working state of the multi-axis gimbal, so that the adjusted working state enables the image acquisition device to track and capture the deformation pattern caused by the target object in the next adjustment period, thereby tracking the target object in the next adjustment period.
- the execution subject of each step of the method provided by the above embodiments may be the same device, or the method may also be executed by different devices.
- the execution body of steps 301 and 302 may be device A; for another example, the execution body of step 301 may be device A, and the execution body of step 302 may be device B; and so on.
- the embodiments of the present application also provide a computer-readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to execute the steps in the above target tracking method.
- embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, the instruction means implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
- Memory may include forms of non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer readable media, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
- Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology.
- Information may be computer readable instructions, data structures, modules of programs, or other data.
- Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
- computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.
Abstract
Embodiments of the present application provide a target tracking method, device, system, and storage medium. The tracking device includes: a body, a multi-axis gimbal, and a control module. The body is used to mount the multi-axis gimbal; the multi-axis gimbal can drive an image acquisition device to rotate; the image acquisition device is used to acquire a first projection image corresponding to a reference image, the first projection image including a deformation pattern corresponding to a predetermined pattern; the deformation pattern is generated based on a target object; the control module is used to adjust the working state of the multi-axis gimbal according to the first projection image already acquired by the image acquisition device, so as to drive the image acquisition device to track and capture the deformation pattern. The target tracking method, device, system, and storage medium provided by the embodiments of the present application can realize tracking of the target object.
Description
The present application relates to the technical field of target detection and tracking, and in particular to a target tracking method, device, system, and storage medium.
Target detection and tracking technology plays an increasingly important role in modern security, medical, civil, and other fields. In existing target detection and tracking technology, a camera captures images of an object, machine learning is used to recognize the object in the images, and the moving object is then tracked within the camera's field of view.
However, since the orientation of the camera is fixed, the range within which the camera can track a moving object is rather limited.
Summary of the Invention
Various aspects of the present application provide a target tracking method, device, system, and storage medium for realizing tracking of a moving target.
An embodiment of the present application provides a tracking device, including:
a body for mounting a multi-axis gimbal;
the multi-axis gimbal, configured to carry an image acquisition device and capable of driving the image acquisition device to rotate; the image acquisition device is configured to acquire a first projection image corresponding to a reference image; the reference image is projected outward by a projection module and has a predetermined pattern; the first projection image includes a deformation pattern corresponding to the predetermined pattern; and the deformation pattern is generated based on a target object; and
a control module, electrically connected to the multi-axis gimbal and configured to adjust the working state of the multi-axis gimbal according to the first projection image already acquired by the image acquisition device, so as to drive the image acquisition device to track and capture the deformation pattern.
An embodiment of the present application further provides a target tracking method, including:
controlling a projection module to project a reference image outward, the reference image having a predetermined pattern;
controlling an image acquisition device carried on a multi-axis gimbal to acquire a first projection image corresponding to the reference image, the first projection image including a deformation pattern of the predetermined pattern, the deformation pattern being generated based on a target object; and
adjusting the working state of the multi-axis gimbal according to the first projection image already acquired by the image acquisition device, so as to drive the image acquisition device to track and capture the deformation pattern.
An embodiment of the present application further provides a target tracking system, including: a projection module, a tracking device, and a projection surface arranged in the physical environment in which the tracking device is located;
the projection module is configured to project a reference image onto the projection surface, the reference image having a predetermined pattern;
the tracking device includes: a body for mounting a multi-axis gimbal;
the multi-axis gimbal, configured to carry an image acquisition device and capable of driving the image acquisition device to rotate; the image acquisition device is configured to acquire a first projection image corresponding to the reference image on the projection surface, the first projection image including a deformation pattern corresponding to the predetermined pattern, the deformation pattern being generated based on a target object; and
a control module, electrically connected to the multi-axis gimbal and configured to adjust the working state of the multi-axis gimbal according to the first projection image already acquired by the image acquisition device, so as to drive the image acquisition device to track and capture the deformation pattern.
An embodiment of the present application further provides a computer-readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to execute the steps in the above target tracking method.
The target tracking method, device, system, and storage medium provided by the embodiments of the present application can realize tracking of a target object and help expand the tracking range of the target object.
The accompanying drawings described here are used to provide a further understanding of the present application and constitute a part of the present application. The illustrative embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application. In the drawings:
FIG. 1a is a schematic structural diagram of a tracking device provided by an embodiment of the present application;
FIG. 1b and FIG. 1c are structural block diagrams of a tracking device provided by an embodiment of the present application;
FIG. 1d is a schematic diagram of the working principle of a digital micromirror device provided by an embodiment of the present application;
FIG. 1e is a schematic diagram of the training process of a neural network model provided by an embodiment of the present application;
FIG. 2a is a schematic structural diagram of a target tracking system provided by an embodiment of the present application;
FIG. 2b is a schematic diagram of the working process of a target tracking system provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a target tracking method provided by an embodiment of the present application.
To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below in conjunction with specific embodiments of the present application and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The orientation of an existing camera is fixed, which limits the range within which the camera can track a moving object. To address this technical problem, some embodiments of the present application provide a tracking device. The tracking device includes a body and a multi-axis gimbal arranged on the body. The body may also mount a projection module, and the multi-axis gimbal may carry an image acquisition device. The projection module can project outward a reference image having a predetermined pattern. The projection image of the reference image deforms when a target object appears on the projection light of the projection module, and the resulting deformation pattern moves as the target object moves. In this embodiment, the image acquisition device can acquire the projection image containing the deformation pattern, and the control module can adjust the working state of the multi-axis gimbal according to the projection images already acquired by the image acquisition device, thereby adjusting the pose of the image acquisition device so that it can track and capture the deformation pattern. Since the deformation pattern is caused by the target object, tracking and capturing the deformation pattern realizes tracking of the target object; moreover, the adjustable pose of the image acquisition device helps expand the tracking range of the target object.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
FIG. 1a is a schematic structural diagram of a tracking device provided by an embodiment of the present application. As shown in FIG. 1a, the tracking device includes: a body 11, a multi-axis gimbal 13, and a control module 15. The body 11 is used to mount the multi-axis gimbal; the multi-axis gimbal 13 is used to carry an image acquisition device 14 and can drive the image acquisition device 14 to rotate. The image acquisition device 14 is used to acquire a first projection image corresponding to a reference image; the reference image is projected outward by a projection module and has a predetermined pattern. The first projection image includes a deformation pattern corresponding to the predetermined pattern; the deformation pattern is generated based on a target object. The control module 15 is electrically connected to the multi-axis gimbal 13 and is used to adjust the working state of the multi-axis gimbal 13 according to the first projection image already acquired by the image acquisition device 14, so as to drive the image acquisition device 14 to track and capture the deformation pattern.
In this embodiment, the relationship between the projection module 12 and the tracking device is not limited. In some embodiments, the projection module 12 is an independent projection device arranged in the physical environment in which the tracking device is located. The projection module 12 is communicatively connected to the control module 15. The control module 15 can instruct the projection module 12 to project outward a reference image having a predetermined pattern.
In another embodiment, the body 11 may be used to mount the projection module 12 and the multi-axis gimbal 13. FIG. 1a only illustrates the case where the projection module 12 is mounted on the body 11. The projection module 12 may be fixed on the body 11. Optionally, the projection module 12 may be detachably fixed on the body 11. Accordingly, the body 11 is provided with a fixing member for fixing the projection module 12. Optionally, the fixing member on the body 11 may be a clamp or the like; for a clamp, the projection module 12 can be fixed on the body 11 without providing a corresponding fixing member on the projection module 12.
Alternatively, in some embodiments, a fixing member on the body 11 may cooperate with a fixing member on the projection module 12 to fix the projection module 12 on the body 11. For example, the fixing member on the body 11 and the fixing member on the projection module 12 may be combined to form a snap or a lock, or may be implemented as recesses and flanges. Optionally, several recesses may be provided on the body 11 and a corresponding number of flanges provided on the projection module 12; or several flanges may be provided on the body 11 and a corresponding number of recesses provided on the projection module 12, and so on.
In this embodiment, the multi-axis gimbal 13 is rotatably connected to the body 11. A multi-axis gimbal is a gimbal having multiple rotation axes, where "multiple" means two or more. For example, the multi-axis gimbal 13 may be a two-axis gimbal, a three-axis gimbal, a four-axis gimbal, or a gimbal with more axes. In this embodiment, the multi-axis gimbal 13 is used to carry the image acquisition device 14. Accordingly, the multi-axis gimbal 13 is provided with a fixing member for fixing the image acquisition device 14. Optionally, the fixing member on the multi-axis gimbal 13 may be a clamp or the like; for a clamp, the image acquisition device 14 can be fixed without providing a corresponding fixing member on the image acquisition device 14.
Alternatively, in some embodiments, a fixing member on the multi-axis gimbal 13 may cooperate with a fixing member on the image acquisition device 14 to fix the image acquisition device 14 on the multi-axis gimbal 13. For example, the two fixing members may be combined to form a snap or a lock, or may be implemented as recesses and flanges. Optionally, several recesses may be provided on the multi-axis gimbal 13 and a corresponding number of flanges provided on the image acquisition device 14, or several flanges may be provided on the multi-axis gimbal 13 and a corresponding number of recesses provided on the image acquisition device 14, and so on.
In this embodiment, when the image acquisition device 14 is carried on the multi-axis gimbal 13, the image acquisition device 14 can rotate with the rotation of the multi-axis gimbal 13; that is, the multi-axis gimbal 13 can drive the image acquisition device 14 to rotate. The tracking device may leave the factory with the image acquisition device 14 already carried on its multi-axis gimbal 13; alternatively, it may leave the factory without the image acquisition device 14, which is then fixed on the multi-axis gimbal 13 by the user.
The multi-axis gimbal 13 can rotate around its rotation axes, and the directions in which it can rotate are determined by the rotation directions of the axes it contains. For example, a three-axis gimbal includes rotation axes in three directions: pitch, roll, and yaw; therefore, a three-axis gimbal can perform pitch, roll, and yaw rotation. Since the image acquisition device 14 is carried on the three-axis gimbal, it too can perform pitch, roll, and yaw rotation as the gimbal rotates.
In this embodiment, the implementation form of the image acquisition device 14 that the multi-axis gimbal 13 can carry is not limited. The image acquisition device 14 may be any device capable of image acquisition. For example, it may be a terminal device with a photographing function, such as a mobile phone, tablet computer, or wearable device, or it may be a camera, video camera, webcam, or the like. For different implementation forms of the image acquisition device 14, the structure and size of the fixing member on the multi-axis gimbal 13 can be adapted accordingly.
In this embodiment, the tracking device further includes the control module 15. In the embodiment of the present application, as shown in FIG. 1b, the control module 15 may include a processor 15a, a memory, and peripheral circuits of the processor 15a. The processor may be a central processing unit (CPU), a graphics processing unit (GPU), or a microcontroller unit (MCU); it may also be a programmable device such as a field-programmable gate array (FPGA), programmable array logic (PAL), general array logic (GAL), or complex programmable logic device (CPLD); or an advanced RISC machine (ARM) processor or a system on chip (SoC), or the like, but is not limited thereto.
The memory may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as memory modules 15b1 and 15b2, static random-access memory (SRAM) 15b3, electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
In this embodiment, as shown in FIG. 1b and FIG. 1c, the control module 15 can instruct the projection module 12 to project a reference image outward, the reference image having a predetermined pattern. The predetermined pattern may be a pattern of any form; for example, it may be a stripe pattern, a coding pattern, a predetermined character pattern, and so on, but is not limited thereto.
In some embodiments, the tracking device is preset with the predetermined pattern. In response to a power-on operation of the tracking device, the control module 15 can obtain the predetermined pattern from the memory and instruct the projection module 12 to project outward a reference image having the predetermined pattern.
In other embodiments, the tracking device is preset with a pattern generation rule. In response to a power-on operation of the tracking device, the control module 15 can generate the predetermined pattern according to the preset pattern generation rule, and further instruct the projection module 12 to project outward a reference image having the predetermined pattern.
Optionally, the control module 15 may be communicatively connected to the projection module 12. In this embodiment, the control module 15 and the projection module 12 may be communicatively connected through a mobile network; accordingly, the network standard of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMax, and the like. Optionally, they may also be communicatively connected through Bluetooth, WiFi, infrared, or the like.
Accordingly, the control module 15 can instruct the projection module 12 via an instruction to project outward a reference image having the predetermined pattern. Optionally, the control module 15 may send a projection instruction to the projection module 12, the projection instruction being used to instruct the projection module 12 to project outward a reference image having the predetermined pattern. Accordingly, upon receiving the projection instruction, the projection module 12 projects outward a reference image having the predetermined pattern.
Alternatively, the control module 15 is electrically connected to the projection module 12. Accordingly, the control module 15 can instruct the projection module 12 via an electrical signal to project outward a reference image having the predetermined pattern, where the electrical signal may be a high-level or low-level signal. Optionally, the control module 15 may output to the projection module 12 an electrical signal used to instruct the projection module 12 to project outward a reference image having the predetermined pattern. Accordingly, upon receiving the electrical signal, the projection module 12 projects outward the reference image having the predetermined pattern.
无论控制模块15与投影模块12采用何种方式连接,投影模块12均可向外投射基准图像,该基准图像具有上述预定图案。在本申请实施例中,不限定投影模块12的具体实现形式。可选地,投影模块12可为数字光处理(Digital Light Processing,DLP)投影设备等。下面以投影模块12为DLP投影设备为例,对投影模块12的结构和工作原理进行示例性说明。
如图1b所示,DLP投影设备包括:光源12a、色轮12b、数字微镜器件(Digital Micromirror Device,DMD)12c以及投影透镜12d。可选地,色轮12b可为六段色轮等。其中,所述色轮12b光连接于所述光源12a与DMD器件12c之间;DMD器件12c与所述投影透镜12d光连接;且所述色轮12b和DMD器件12c还分别与所述控制模块15电连接。
其中,光源12a发出的光入射至色轮12b。色轮12b在所述控制模块15 的控制下,将接收到的光过滤为单色光,并将单色光投射至DMD器件12c。DMD器件12c在控制模块15的控制下,利用单色光调制出上述预定图案,并经投影透镜12d向外投射具有该预定图案的基准图像。
可选地,如图1b所示,色轮12b可包括:聚光透镜12b1、滤光片12b2以及整形透镜12b3。其中,滤光片12b2光连接于聚光透镜12b1与整形透镜12b3之间。聚光透镜12b1与光源12a光连接,整形透镜12b3与DMD器件12c光连接。进一步,控制模块15还与滤光片12b2电连接。其中,色轮12b在控制模块15的控制下,将接收到的光过滤为单色光,并将单色光投射至DMD器件12c。
当光源12a发出的光入射至色轮12b中的聚光透镜12b1时,控制模块15控制色轮12b中的滤光片12b2将接收到的光分为多种单色光,并通过色轮12b中的整形透镜12b3传输至DMD器件12c。DMD器件12c在控制模块15的控制下,利用单色光调制出上述预定图案,并经投影透镜12d向外投射具有该预定图案的基准图像。
其中,如图1d所示,DMD器件12c是一种光电转换的微机电系统。DMD器件12c中每一个微镜和相关结构控制一个像素。如图1d所示,DMD器件12c存在3种工作状态。(1)当数字微镜转动0°时,即为图1d中标号(1)所示的状态,为平态。(2)当微镜转动到正的设定角度(如+12°)时,即为图1d中标号(2)所示的状态,表示开状态。在该状态下,光源发出的光入射至数字微镜镜面,并通过数字微镜镜面向投影面反射等,投影面显示为“亮状态”。(3)当微镜转动到负的设定角度(如-12°)时,即图1d中标号(3)所示的状态,表示关状态。在该状态下,光源发出的光入射至数字微镜镜面,并通过数字微镜镜面反射至投影面之外,投影面显示为“暗状态”。基于图1d所示的DMD器件12c的工作原理,控制模块15可通过DMD器件12c中的SRAM,通过DMD器件12c中的两侧寻址电极和铰链控制数字微镜翻转。可选地,对于两侧寻址电极,可通过偏置电压转化成力,控制铰链转动,进而带动数字微镜翻转。相应地,对于数字 微镜翻转的角度可通过偏置电压的大小进行调节。
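上述数字微镜三种工作状态与投影面亮暗的对应关系,可用下面一段示意性的 Python 代码表示(其中 ±12° 的开/关角度取自文中示例值,函数名为说明用的假设):

```python
def mirror_display_state(angle_deg, on_angle=12.0):
    """根据数字微镜的偏转角返回投影面的显示状态(示意)。
    angle_deg: 微镜偏转角(度);on_angle: 开状态的设定角度,文中示例为 +12°。"""
    if angle_deg == 0:
        return "平态"        # 状态(1):数字微镜不偏转
    if angle_deg >= on_angle:
        return "亮状态"      # 状态(2):开状态,光经镜面反射至投影面
    return "暗状态"          # 状态(3):关状态,光被反射至投影面之外

print(mirror_display_state(0))    # 平态
print(mirror_display_state(12))   # 亮状态
print(mirror_display_state(-12))  # 暗状态
```

实际器件中偏转角由偏置电压经两侧寻址电极与铰链控制,此处仅示意状态与亮暗的映射关系。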
在实际应用中,在投影模块12向外投射基准图像的情况下,若未有物体出现在投影模块12的投影光线上,则基准图像在投影面上所呈现的投影图像也具有预定图案。若有物体出现在投影模块12的投影光线上,则预定图案对应的投影图像会发生形变。为了便于描述,在本申请实施例中,将预定图案对应的投影图像发生形变所形成的图案,定义为形变图案。其中,形变图案是基于出现在投影模块12的投影光线上的物体产生的,且形变图案会随着物体的移动而移动。基于此,在本申请实施例中,可基于形变图案实现对出现在投影模块12的投影光线上的物体进行跟踪。为了便于描述,将出现在投影模块12的投影光线上的物体,定义为目标对象A。目标对象A可为移动对象。
基于上述分析,在本实施例中,图像采集设备14可采集基准图像对应的投影图像。基准图像对应的投影图像可包括:基准图像直接投影至某一投影面所形成的投影图像,即在没有物体出现在投影模块12的投影光线的情况下,基准图像直接投影至某一投影面上所形成的投影图像。对于无物体出现在投影模块12的投影光线的情况,投影面可以为投影屏,例如投影幕布等;也可以为当前所处环境中的其它物体表面,例如墙壁、地面或家具表面等等,但不限于此。图1a中仅以投影面为投影幕布进行图示,但不构成限定。
当然,基准图像对应的投影图像也可包括:在有物体(目标对象A)出现在投影模块12的投影光线的情况下,投影模块12的投影光线经过目标对象A而将基准图像投射在某一投影面上形成的投影图像。对于有目标对象A出现在投影模块12的投影光线的情况,投影面16可以为目标对象A的表面,也可以为投影屏,例如投影幕布等;或者为当前所处环境中的其它物体表面,例如墙壁、地面或家具表面等等,但不限于此。图1a中仅以投影面为投影幕布进行图示,但不构成限定。
在本实施例中,由于投影面可为目标对象的表面,或者跟踪设备当前所处环境中的其它物体表面,因此,相较于现有基于LCD技术进行3D视觉检测跟踪的方案,本申请实施例提供的跟踪设备具备更轻巧的体积、更好的可维护性。这是因为现有技术中,由于LCD液晶板上晶体管不具备透光性,因此像素之间存在间隙,并且衍生出暗部细节差的缺点。另一方面,采用LCD技术设计的设备体积大,且易受环境粉尘干扰,所以现有基于LCD技术进行3D视觉检测跟踪所使用的设备不易维护。而本申请实施例中,可将目标对象的表面,或者跟踪设备当前所处环境中的其它物体表面作为投影面,不仅可降低设备成本,而且无需对投影面进行维护。另一方面,在本申请实施例得到的投影图像的像素之间不存在间隙,有助于提高目标检测的准确性。
在下述实施例中,为了便于描述和区分,将有物体(目标对象A)出现在投影模块12的投影光线的情况下,投影模块12的投影光线经过目标对象A而将基准图像投射在某一投影面上形成的投影图像,定义为第一投影图像;并将在没有物体出现在投影模块12的投影光线的情况下,基准图像直接投影至某一投影面上所形成的投影图像,定义为第二投影图像。其中,第一投影图像包含上述形变图案,第二投影图像不包含形变图案。因此,第一投影图像可以反映目标对象A的信息,而第二投影图像无法反映目标对象A的信息,也就无法基于第二投影图像对目标对象A进行跟踪。因此,在下述实施例中,重点以控制模块15基于图像采集设备14采集到的第一投影图像,实现对目标对象A的跟踪过程进行示例性说明。
如图1b和图1c所示,控制模块15与图像采集设备14通信连接,控制模块15与多轴云台13电连接。其中,控制模块15与图像采集设备14的通信连接方式可参见上述控制模块15与投影模块12的通信连接方式,在此不再赘述。
在本实施例中,图像采集设备14可将采集到的第一投影图像提供给控制模块15。相应地,控制模块15可根据图像采集设备14已采集到的第一投影图像,调整多轴云台13的工作状态。多轴云台13在工作状态改变时,多轴云台13可进行转动。多轴云台13转动可带动图像采集设备14进行转 动,进而可调整图像采集设备14的位姿,使图像采集设备14可对形变图案进行跟踪采集。其中,图像采集设备14的位姿包括图像采集设备14的位置和朝向。由于形变图案是由目标对象A引起的,因此对形变图案进行跟踪采集,可实现对目标对象A的跟踪。而且,多轴云台13带动图像采集设备14转动可调整图像采集设备14的位姿,有助于扩大对目标对象A的跟踪范围。
在本申请实施例中,对多轴云台13的工作状态进行调整包括:对多轴云台13在至少一个方向上的状态进行调整。例如,对于三轴云台来说,可使三轴云台在俯仰、翻滚和偏航中的至少一个方向上进行转动。相应地,图像采集设备14也就可随三轴云台的转动,实现俯仰、翻滚和偏航中至少一个方向上的转动。对于多轴云台13来说,其调整后的工作状态,可将图像采集设备14的位姿调整为可以采集后续时刻由目标对象A引起的形变图案。
在本申请实施例中,为了实现对目标对象A引起的形变图案的跟踪采集,图像采集设备14可采用具有高采样率的图像采集设备。优选地,图像采集设备14在目标对象A在投影模块12的投射范围内移动过程中,可采集到多帧包含形变图案的第一投影图像。相应地,图像采集设备14的采样周期小于目标对象A在投影模块12的投射范围内的移动时间。即目标对象A在投影模块12的投射范围内的移动时间可以为图像采集设备14的采样周期的Q倍,Q≥2。在本实施例中,不限定Q的具体取值。例如,Q可以为3、8、10、20或30.5等等。这样,图像采集设备14可采集到多帧第一投影图像。
在本申请实施例中,不限定目标对象A的实现形态。在一些实施例中,目标对象A可以为出现在投影模块12的投影光线上的任何移动对象。例如,跟踪设备可实现为手持式云台,该手持式云台用于搭载图像采集设备14,如手持式手机支架、手持式相机支架或手持式摄像机支架等。用户可利用手持式云台对出现在投影模块12的投影光线上的任何移动物体进行跟踪。基于此,图像采集设备14在采集到包含目标对象A引起的形变图案的第一投影图像之前,还可采集基准图像直接投射在某一投影面上所形成的第二投影图像,并将该第二投影图像缓存在控制模块15的内存中。进一步,控制模块15可根据图像采集设备14当前采集到的第三投影图像与第二投影图像,判断第三投影图像相较于第二投影图像是否发生形变;若判断结果为是,则确定目标对象A进入投影模块的投射范围内,并将第三投影图像作为第一投影图像。可选地,可将第三投影图像作为第一帧第一投影图像。之后,控制模块15开始对目标对象A进行跟踪,即开始对形变图案进行跟踪采集。其中,关于控制模块15对形变图案进行跟踪采集的具体实施方式,将在下述实施例中进行详细说明,在此暂不详述。
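上述"比较第三投影图像与第二投影图像是否发生形变"的判断过程,可用如下示意性的 Python 片段说明(阈值与函数名均为说明用的假设,实际实现可采用更鲁棒的图像差异度量):

```python
def has_deformation(img_ref, img_cur, threshold=10.0):
    """判断当前采集的投影图像相对于缓存的第二投影图像是否发生形变(示意)。
    img_ref / img_cur: 二维灰度像素列表;threshold: 平均绝对差阈值(假设值)。"""
    diff, n = 0, 0
    for row_r, row_c in zip(img_ref, img_cur):
        for p_r, p_c in zip(row_r, row_c):
            diff += abs(p_r - p_c)
            n += 1
    return diff / n > threshold  # 平均像素差超过阈值则认为出现形变图案

# 第二投影图像(无物体的条纹)与第三投影图像(物体导致部分条纹偏移)
second = [[0, 255, 0, 255]] * 4
third = [[0, 255, 0, 255], [255, 0, 255, 0], [255, 0, 255, 0], [0, 255, 0, 255]]
print(has_deformation(second, second))  # False
print(has_deformation(second, third))   # True
```

判断为真时,即可将该第三投影图像作为第一帧第一投影图像并启动跟踪。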
在另一些实施例中,出现在投影模块12的投影光线上的物体为指定对象或者指定类型的对象的时候,才会对该物体进行跟踪。例如,在安防领域,在出现在投影模块12的投影光线上的物体为人的时候,才将该物体作为目标对象A进行跟踪。在该应用场景中,跟踪设备可实现为监控设备,监控设备可部署在监控区域中。又例如,在生物检测领域,在出现在投影模块12的投影光线上的物体为指定类型的细胞的时候,才将该物体作为目标对象A进行跟踪;等等。在该应用场景中,跟踪设备可实现为显微镜等医学检测类设备等。
基于上述分析,图像采集设备14在采集到包含目标对象A引起的形变图案的第一投影图像之前,还可采集基准图像直接投射在某一投影面上所形成的第二投影图像,并将该第二投影图像缓存在控制模块15的内存中。进一步,控制模块15可根据图像采集设备14当前采集到的第三投影图像与第二投影图像,判断第三投影图像相较于第二投影图像是否发生形变;若判断结果为是,则将第三投影图像输入神经网络模型中。之后,在神经网络模型中,计算第三投影图像包含的形变图案的对象类型;若第三投影图像包含的形变图案的对象类型为指定类型,则确定目标对象A进入投影模块的投射范围内,并将第三投影图像作为第一投影图像。可选地,可将第三投影图像作为第一帧第一投影图像。之后,控制模块15开始对目标对象A进行跟踪,即开始对形变图案进行跟踪采集。其中,关于控制模块15对形变图案进行跟踪采集的具体实施方式,将在下述实施例中进行详细说明,在此暂不详述。
上述利用投影图像中的形变图案进行检测,确定目标对象所属的对象类型,相较于利用图像采集设备拍摄到的物体的图像,来对物体进行识别,可降低图像识别的工作量。另一方面,由于直接利用普通单目图像采集设备拍摄到的物体的图像无法识别物体的三维特征,如果利用深度摄像头或双目摄像头来对物体进行拍摄来获取物体的三维特征,则无疑会增加图像采集设备的成本。
在本申请实施例中,利用投影图像中的形变图案进行目标检测,形变图案可包含深度信息,可用于测量目标对象的三维特征。因此,本申请实施例中,利用投影图像中的形变图案,可测量目标对象的三维特征,有助于降低图像采集设备的要求,从而有助于降低图像采集设备的成本。
值得说明的是,在本申请实施例中,在利用神经网络模型分析第三投影图像包含的形变图案的对象类型之前,还需对神经网络模型进行训练。
在本实施例中,不限定神经网络模型的模型结构。可选地,神经网络模型可包括:卷积层、池化层以及激活函数层。激活函数层中可采用Sigmoid函数、tanh函数或Relu函数。可选地,卷积层和池化层的数量相等。在本实施例中,不限定卷积层和池化层的具体数量。例如,卷积层和池化层的数量可以为2个、3个或4个甚至更多。
在本申请实施例中,可预设初始神经网络模型的网络架构。可选地,初始神经网络模型的网络架构包括:卷积层、池化层、这些卷积层和池化层的数量和设置顺序以及每个卷积层和池化层的超参数。其中,卷积层的超参数包括:卷积核的大小K(kernel size)、特征图边缘扩充的大小P(padding size)和步幅大小S(stride size)等。池化层的超参数为池化操作核的大小K和步幅大小S等。激活函数层可为Relu函数:f(x)=max(0,x)。
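基于上述超参数K、P、S,卷积层(或池化层)输出特征图的边长可按通用结论 O=⌊(M+2P−K)/S⌋+1 计算(该公式为卷积神经网络中的通用结论,并非本申请特有),示意如下:

```python
def conv_output_size(input_size, kernel_size, padding, stride):
    """由输入边长M、卷积核大小K、边缘扩充P与步幅S计算输出特征图边长。
    O = floor((M + 2P - K) / S) + 1"""
    return (input_size + 2 * padding - kernel_size) // stride + 1

print(conv_output_size(28, 3, 1, 1))  # 28:K=3、P=1、S=1 时尺寸不变
print(conv_output_size(28, 2, 0, 2))  # 14:2x2、步幅2 的池化将尺寸减半
```

据此可在设置网络架构时核对各层输出尺寸是否满足设计要求。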
对于神经网络模型,每个卷积层的输出可表示为:y_i = f(w_i·x_i + b_i)(公式(2));其中,w_i和b_i为待训练的神经网络模型的参数,分别表示每层的权重和偏置;x_i表示第i层的输入向量(例如,该层的输入图像)。对于卷积层,可将输入的图像I与卷积核K进行卷积,可以表示为:

(I*K)(m,n) = Σ_j Σ_k I(m+j, n+k)·K(j,k)    (公式(3))

在公式(3)中,M表示输入图像的像素点的行数;N表示输入图像的像素点的列数;m为整数并且0<m<M,n为整数并且0<n<N。
在本实施例中,对神经网络模型进行训练的过程可理解为对初始神经网络模型中的参数w_i和b_i进行训练的过程,得到每个卷积层的权重w_i和偏置b_i。在本申请实施例中,可以损失函数最小化为训练目标,利用样本图像进行模型训练,得到神经网络模型。其中,样本图像包括投影模块的投影光线经过指定对象而将基准图像投射在投影面上形成的投影图像。其中,指定对象属于指定类型。样本图像可以为一帧或多帧,多帧是指2帧或2帧以上,其数量的具体取值可根据实际需求进行灵活设定。在本申请实施例中,不限定样本图像的来源,样本图像可以为:预先采集到的投影模块的投影光线经过指定对象而将基准图像投射在投影面上形成的投影图像;也可以为其它三维图像数据库或深度图像数据库中的图像;等等。
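公式(3)所述的图像与卷积核的卷积运算,可用如下纯 Python 代码示意(仅作原理演示,未做边缘扩充,步幅取1):

```python
def conv2d(image, kernel):
    """对二维灰度图像与方形卷积核做卷积(valid 模式,示意实现)。"""
    M, N = len(image), len(image[0])
    k = len(kernel)
    out = []
    for m in range(M - k + 1):
        row = []
        for n in range(N - k + 1):
            # 按 (I*K)(m,n) = sum_j sum_l I(m+j, n+l) * K(j,l) 累加
            s = 0
            for j in range(k):
                for l in range(k):
                    s += image[m + j][n + l] * kernel[j][l]
            row.append(s)
        out.append(row)
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ker = [[1, 0], [0, 1]]
print(conv2d(img, ker))  # [[6, 8], [12, 14]]
```

实际网络中该运算由深度学习框架的卷积算子完成,此处仅用于说明公式(3)的含义。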
其中,损失函数是根据模型训练得到的指定对象属于指定类型的概率与指定对象属于指定类型的实际概率确定的。其中,指定对象属于指定类型的实际概率可为1,即100%。可选地,损失函数可以为模型训练得到的指定对象属于指定类型的概率与指定对象属于指定类型的实际概率的差值的绝对值。
为了更清楚地说明上述神经网络模型训练过程,下面结合图1e对本实施例提供的模型训练过程进行示例性说明。如图1e所示,模型训练过程的主要步骤如下:
S1:将样本图像作为初始神经网络模型的输入图像,输入初始神经网络模型。
S2:利用初始神经网络模型,计算样本图像包含的形变图案在每个对象类型下的概率。
S3:将样本图像包含的形变图案在每个对象类型下的概率以及样本图像包含的形变图案在每个对象类型下的实际概率分别带入损失函数,计算损失函数值。其中,神经网络模型输出的对象类型的种类和数量可由样本图像的丰富度进行确定。
S4:判断本次计算出的损失函数值是否小于或等于最近W次计算出的损失函数值;若判断结果为是,则执行步骤S5;若判断结果为否,则执行步骤S6。其中,W为大于或等于1的整数,其具体取值可进行灵活设定。例如,W可等于5、8、10等,但不限于此。
S5:沿初始神经网络模型中的参数的负梯度方向调整神经网络模型中的参数,并将调整后的神经网络模型作为初始神经网络模型,并返回执行步骤S1。
S6:将上述最近W次中损失函数值最小时的神经网络模型作为最终的神经网络模型。即,将上述最近W次中损失函数值最小时的每层的权重和偏置,作为最终的权重和偏置。
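上述S1至S6的训练流程,可用下面一个极简的单参数示例演示(模型、样本取值与学习率均为说明用的假设,并非本申请的实际网络;损失取|p−1|,即指定类型的实际概率为1):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(x=2.0, lr=0.5, W=5, max_iter=100):
    """按 S1~S6 思路的最简训练循环(示意):单参数 w,
    模型输出概率 p = sigmoid(w*x),损失 = |p - 1|。"""
    w, history = 0.0, []
    for _ in range(max_iter):
        p = sigmoid(w * x)       # S1/S2:前向计算样本属于指定类型的概率
        loss = abs(p - 1.0)      # S3:计算损失函数值
        if history and loss > min(history[-W:]):
            break                # S4 不满足时转 S6:保留最近 W 次中损失最小的参数
        history.append(loss)
        # S5:沿负梯度方向调整参数;此处 dloss/dw = -x*p*(1-p),故 w 增大
        w += lr * x * p * (1 - p)
    return w, history[-1]        # S6:返回最终参数与最小损失

w, final_loss = train()
print(final_loss < 0.1)  # True:损失随训练不断下降
```

该示例仅演示"前向计算概率、代入损失函数、按负梯度更新、据最近W次损失决定停止"的控制流程,实际训练对象是各卷积层的权重w_i与偏置b_i。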
进一步,在确定目标对象A出现在投影模块12的投射范围内的情况下,可对目标对象A进行跟踪,即对目标对象A引起的形变图案进行跟踪采集。为了实现图像采集设备14对形变图案的跟踪采集,即实现对目标对象A的跟踪,在本实施例中,控制模块15可根据图像采集设备14已采集到的第一投影图像,调整多轴云台13的工作状态,以使多轴云台13带动图像采集设备14对形变图案进行跟踪采集。
在一些实施例中,可在控制模块15中设置调整周期,并启动一个定时器或计数器对调整周期进行计时。每当调整周期到达时,控制模块15可根据图像采集设备14在当前调整周期采集到的第一投影图像,调整多轴云台13的工作状态,以带动图像采集设备14在下一调整周期对形变图案进行跟踪采集。即对于多轴云台13来说,其调整后的工作状态可使图像采集设备14采集到目标对象A在下一调整周期引起的形变图案。
进一步,控制模块15可根据图像采集设备14在当前调整周期采集到的第一投影图像,计算目标对象A的运动信息。其中,目标对象A的运动信息可包括:目标对象A的位移信息、运动速度、运动方向以及加速度信息中的至少一种。
可选地,控制模块15可计算目标投影图像与当前调整周期对应的初始投影图像的像素差异。其中,当前调整周期对应的初始投影图像可为图像采集设备14在当前调整周期采集到的第一帧第一投影图像,也可为图像采集设备14在当前调整周期最初采集到的前N帧第一投影图像的像素平均值组成的投影图像,其中N≥2,且N为整数。目标投影图像则为图像采集设备14在当前调整周期采集到的除初始投影图像之外的其它投影图像。目标投影图像的数量可以为1帧或多帧。多帧是指2帧或2帧以上。进一步,控制模块15可根据目标投影图像与初始投影图像的像素差异以及图像采集设备14在当前调整周期的位姿,计算目标对象A的运动信息。
在目标投影图像的数量为多帧的情况下,控制模块15可根据相邻两帧目标投影图像与当前调整周期对应的初始投影图像的像素差异,以及图像采集设备在当前调整周期的位姿,计算目标对象A的位移变化;并根据目标对象A的位移变化以及图像采集设备14的采样周期,计算目标对象A的运动速度和/或加速度。
对于控制模块15来说,可将图像采集设备14在当前调整周期采集到的初始投影图像,缓存在内存条中,并将初始投影图像记为I_0。进一步,控制模块15可计算图像采集设备14在当前调整周期采集到的其它目标投影图像与初始投影图像I_0的帧间像素差异:ΔI_i = I_i - I_0;其中I_i是指图像采集设备14在当前调整周期采集到的第i帧第一投影图像,即第(i-1)帧目标投影图像;i=2,3,…,M;M为图像采集设备14在当前调整周期采集到的第一投影图像的总数量。控制模块15根据ΔI_i = I_i - I_0,可以获取目标对象A的封闭轮廓。进一步,控制模块15可根据相邻两帧目标投影图像分别与初始投影图像的像素差异,即根据ΔI_{i+1}与ΔI_i以及图像采集设备14在当前调整周期的位姿,计算目标对象A的位移变化(Δx,Δy)。进一步,根据目标对象A的位移变化以及图像采集设备14的采样周期,可计算目标对象A的运动速度和加速度。
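由相邻目标投影图像解出的位移变化与采样周期,估计运动速度与加速度的过程可示意如下(形变图案质心序列与采样周期均为假设的输入,实际实现需先从ΔI_i中提取轮廓/质心):

```python
def motion_info(centroids, T):
    """由相邻采样得到的形变图案质心序列估计位移、速度与加速度(示意)。
    centroids: [(x, y), ...] 质心坐标序列;T: 图像采集设备的采样周期(秒)。"""
    (x0, y0), (x1, y1), (x2, y2) = centroids[-3:]   # 取最近三次采样
    dx1, dy1 = x1 - x0, y1 - y0                      # 前一次位移变化
    dx2, dy2 = x2 - x1, y2 - y1                      # 本次位移变化
    v = (dx2 / T, dy2 / T)                           # 速度 = 位移变化 / 采样周期
    a = ((dx2 - dx1) / T ** 2, (dy2 - dy1) / T ** 2) # 加速度 = 速度变化 / 采样周期
    return (dx2, dy2), v, a

disp, v, a = motion_info([(0, 0), (2, 1), (5, 2)], T=0.5)
print(disp, v, a)  # (3, 1) (6.0, 2.0) (4.0, 0.0)
```

据此得到的运动信息(位移、速度、加速度)即可用于后续对多轴云台工作状态的调整。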
进一步,控制模块15可根据目标对象A的运动信息,调整多轴云台的工作状态,以带动图像采集设备14在下一调整周期对形变图案进行跟踪采集。即对于多轴云台13来说,其调整后的工作状态可使图像采集设备14采集到目标对象A在下一调整周期引起的形变图案。
进一步,控制模块15可根据目标对象A的运动信息,计算多轴云台13中的电机的目标运动参数值,并将多轴云台13中的电机的运动参数调整为目标运动参数值,从而可调整多轴云台13的工作状态。对于多轴云台13来说,其调整后的工作状态可使图像采集设备14采集到目标对象A在下一调整周期引起的形变图案。
对于多轴云台13中的电机来说,其运动参数可包括:多轴云台13中的电机的加速度、角加速度以及转速中的至少一种。相应地,多轴云台13中的电机的目标运动参数值可包括:多轴云台13中的电机的目标加速度、目标角加速度以及目标转速中的至少一种。
进一步,控制模块15可根据目标对象A的运动信息,预测目标对象A在下一调整周期移动到的位置;并根据目标对象A在下一调整周期移动到的位置,计算产生形变图案的位置;进一步,控制模块15可根据产生形变图案的位置,计算图像采集设备14在下一调整周期对应的位姿;并根据图像采集设备14在下一调整周期对应的位姿以及图像采集设备14在当前调整周期的位姿,计算多轴云台13中的电机的目标运动参数值。进一步,控制模块15可控制多轴云台13将其电机的运动参数调整为目标运动参数值,从而调整多 轴云台13的工作状态,使其调整后的工作状态能够使图像采集设备14对目标对象A在下一调整周期引起的形变图案进行跟踪采集,从而实现在下一调整周期对目标对象A进行跟踪。
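上述"预测下一调整周期的位置、计算形变图案位置、反解图像采集设备目标位姿"的流程,可用如下简化的几何示意说明(匀速运动假设,函数名与坐标均为说明用的假设,实际还需结合电机的加速度、角加速度与转速约束):

```python
import math

def predict_next_position(pos, v, period):
    """匀速假设下预测目标在下一调整周期移动到的位置(示意)。"""
    return tuple(p + vi * period for p, vi in zip(pos, v))

def target_yaw_pitch(cam, target):
    """由图像采集设备位置与产生形变图案的位置,计算指向目标所需的
    偏航角与俯仰角(假设的简化几何模型,单位:度)。"""
    dx, dy, dz = (t - c for t, c in zip(target, cam))
    yaw = math.degrees(math.atan2(dy, dx))                    # 偏航:水平面内方位角
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # 俯仰:相对水平面仰角
    return yaw, pitch

nxt = predict_next_position((1.0, 1.0, 0.0), (2.0, 0.0, 0.0), period=0.5)
print(nxt)                                      # (2.0, 1.0, 0.0)
print(target_yaw_pitch((0.0, 0.0, 0.0), nxt))   # 偏航约26.6°,俯仰0°
```

由目标位姿与当前位姿之差,即可进一步折算出各轴电机的目标运动参数值。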
需要说明的是,如图1b所示,本申请实施例提供的跟踪设备还可包括:电源组件17和散热组件18等等。不同的跟踪设备所包含的这些基本组件以及基本组件的构成均会有所不同,本申请实施例列举的仅是部分示例。
除了上述跟踪设备之外,本申请实施例还提供目标跟踪系统。如图2a所示,该系统包括:投影模块22,跟踪设备S20和设置于跟踪设备S20所处物理环境中的投影面S21。投影模块22,用于向投影面投射基准图像,所述基准图像具有预定图案。
在本实施例中,如图2a所示,跟踪设备包括:机体21,多轴云台23,和控制模块25。机体21,用于安装多轴云台23。多轴云台23,用于搭载图像采集设备24,并能够带动图像采集设备24转动。其中,图像采集设备24用于采集基准图像对应的第一投影图像。基准图像由投影模块22向外投射,并且基准图像具有预定图案。第一投影图像包括所述预定图案对应的形变图案。形变图案基于目标对象产生。控制模块25,电连接于多轴云台23,用于根据图像采集设备24已采集到的第一投影图像,调整多轴云台23的工作状态,以带动所述图像采集设备对所述形变图案进行跟踪采集。
在本实施例中,不限定投影模块22与跟踪设备的所属关系。在一些实施例中,投影模块22为独立的投影设备,且设置于跟踪设备所处的物理环境中。投影模块22与控制模块25通信连接。控制模块25可指示投影模块22向外投射基准图像,该基准图像具有预定图案。
在另一实施例中,机体21可用于安装投影模块22和多轴云台23。投影模块22可固定在机体21上。关于投影模块22在机体21上的固定方式,可参见上述图1a的相关描述内容,在此不再赘述。
在本实施例中,多轴云台23与机体21转动连接。多轴云台23是指具有多个转轴的云台。多个是指2个或2个以上。在本实施例中,多轴云台23用于搭载图像采集设备24。关于多轴云台23搭载图像采集设备24的方式,可参见上述图1a中的相关描述,在此不再赘述。
在本实施例中,在图像采集设备24搭载于多轴云台23上的情况下,图像采集设备24可随多轴云台23的转动而转动,即多轴云台23可带动图像采集设备24转动。关于多轴云台23的工作原理、实现形态以及图像采集设备24的实现形态,均可参见上述实施例的相关内容,在此不再赘述。
在本实施例中,跟踪设备还包括:控制模块25。关于控制模块25的具体实施方式可参见上述跟踪设备实施例的相关内容。在本实施例中,跟踪设备S20的计算机指令主要由控制模块25执行。
在本实施例中,控制模块25可指示投影模块22向外投射基准图像,该基准图像具有预定图案。其中,预定图案可以为任何形式的图案。例如预定图案可以为条纹图案、编码图案、预定的字符图案等等,但不限于此。
可选地,控制模块25可与投影模块22通信连接。相应地,控制模块25可通过指令指示投影模块22向外投射具有预定图案的基准图像。可选地,控制模块25可向投影模块22发送投影指令,该投影指令用于指示投影模块22向外投射具有预定图案的基准图像。相应地,投影模块22在接收到投影指令的情况下,向外投射具有预定图案的基准图像。
或者,控制模块25电连接于投影模块22。相应地,控制模块25可通过电信号指示投影模块22向外投射具有预定图案的基准图像。其中电信号可为高电平或低电平信号。可选地,控制模块25可向投影模块22输出电信号,该电信号用于指示投影模块22向外投射具有预定图案的基准图像。相应地,投影模块22在接收到电信号的情况下,向外投射具有预定图案的基准图像。
无论控制模块25与投影模块22采用何种方式连接,投影模块22均可向外投射基准图像,该基准图像具有上述预定图案。在本申请实施例中,不限定投影模块22的具体实现形式。关于投影模块22的实现形式及工作原理的描述,可参见上述跟踪设备实施例的相关内容,在此不再赘述。
在实际应用中,在投影模块22向外投射基准图像的情况下,若未有物体出现在投影模块22的投影光线上,则基准图像在投影面S21上所呈现的投影图像具有预定图案。若有物体出现在投影模块22的投影光线上,则预定图案对应的投影图像会发生形变。为了便于描述,在本申请实施例中,将预定图案对应的投影图像发生形变所形成的图案,定义为形变图案。其中,形变图案是基于出现在投影模块22的投影光线上的物体产生的,且形变图案会随着物体的移动而移动。基于此,在本申请实施例中,可基于形变图案实现对出现在投影模块22的投影光线上的物体进行跟踪。为了便于描述,将出现在投影模块22的投影光线上的物体,定义为目标对象A。目标对象A可为移动对象。
基于上述分析,在本实施例中,图像采集设备24可采集基准图像对应的投影图像。基准图像对应的投影图像可包括:基准图像直接投影至投影面S21所形成的投影图像。当然,基准图像对应的投影图像也可包括:在有物体(目标对象A)出现在投影模块22的投影光线的情况下,投影模块22的投影光线经过目标对象A而将基准图像投射在投影面S21上形成的投影图像。在本实施例中,投影面S21可以为投影屏,例如投影幕布等;或者为当前所处环境中的其它物体表面,例如墙壁、地面或家具表面等等,但不限于此。图2a中仅以投影面S21为投影幕布进行图示,但不构成限定。
在下述实施例中,为了便于描述和区分,将有物体(目标对象A)出现在投影模块22的投影光线的情况下,投影模块22的投影光线经过目标对象A而将基准图像投射在某一投影面上形成的投影图像,定义为第一投影图像;并将在没有物体出现在投影模块22的投影光线的情况下,基准图像直接投影至某一投影面上所形成的投影图像,定义为第二投影图像。其中,第一投影图像包含上述形变图案,第二投影图像不包含形变图案。因此,第一投影图像可以反映目标对象A的信息,而第二投影图像无法反映目标对象A的信息,也就无法基于第二投影图像对目标对象A进行跟踪。因此,在下述实施例中,重点以控制模块25基于图像采集设备24采集到的第一投影图像,实现对目标对象A的跟踪过程进行示例性说明。
控制模块25与图像采集设备24通信连接,控制模块25与多轴云台23电连接。其中,图像采集设备24可将采集到的第一投影图像提供给控制模块25。相应地,控制模块25可根据图像采集设备24已采集到的第一投影图像,调整多轴云台23的工作状态。多轴云台23在工作状态改变时,多轴云台23可进行转动。多轴云台23转动可带动图像采集设备24进行转动,进而可调整图像采集设备24的位姿,使图像采集设备24可对形变图案进行跟踪采集。其中,图像采集设备24的位姿包括图像采集设备24的位置和朝向。由于形变图案是由目标对象A引起的,因此对形变图案进行跟踪采集,可实现对目标对象A的跟踪。而且,多轴云台23带动图像采集设备24转动可调整图像采集设备24的位姿,有助于扩大对目标对象A的跟踪范围。
在本申请实施例中,为了实现对目标对象A引起的形变图案的跟踪采集,图像采集设备24可采用具有高采样率的图像采集设备。优选地,图像采集设备24在目标对象A在投影模块22的投射范围内移动过程中,可采集到多帧包含形变图案的第一投影图像。相应地,图像采集设备24的采样周期小于目标对象A在投影模块22的投射范围内的移动时间。即目标对象A在投影模块22的投射范围内的移动时间可以为图像采集设备24的采样周期的Q倍,Q≥2。这样,图像采集设备24可采集到多帧第一投影图像。
在本申请实施例中,不限定目标对象A的实现形态。在一些实施例中,目标对象A可以为出现在投影模块22的投影光线上的任何移动对象。基于此,图像采集设备24在采集到包含目标对象A引起的形变图案的第一投影图像之前,还可采集基准图像直接投射在某一投影面上所形成的第二投影图像,并将该第二投影图像缓存在控制模块25的内存中。进一步,控制模块25可根据图像采集设备24当前采集到的第三投影图像与第二投影图像,判断第三投影图像相较于第二投影图像是否发生形变;若判断结果为是,则确定目标对象A进入投影模块的投射范围内,并将第三投影图像作为第一投影图像。可选地,可将第三投影图像作为第一帧第一投影图像。之后,控制模块25开始对目标对象A进行跟踪,即开始对形变图案进行跟踪采集。其中,关于控制模块25对形变图案进行跟踪采集的具体实施方式,将在下述实施例中进行详细说明,在此暂不详述。
在另一些实施例中,出现在投影模块22的投影光线上的物体为指定对象或者指定类型的对象的时候,才会对该物体进行跟踪。基于此,图像采集设备24在采集到包含目标对象A引起的形变图案的第一投影图像之前,还可采集基准图像直接投射在某一投影面上所形成的第二投影图像,并将该第二投影图像缓存在控制模块25的内存中。进一步,控制模块25可根据图像采集设备24当前采集到的第三投影图像与第二投影图像,判断第三投影图像相较于第二投影图像是否发生形变;若判断结果为是,则将第三投影图像输入神经网络模型中。之后,在神经网络模型中,计算第三投影图像包含的形变图案的对象类型;若第三投影图像包含的形变图案的对象类型为指定类型,则确定目标对象A进入投影模块的投射范围内,并将第三投影图像作为第一投影图像。可选地,可将第三投影图像作为第一帧第一投影图像。之后,控制模块25开始对目标对象A进行跟踪,即开始对形变图案进行跟踪采集。其中,关于控制模块25对形变图案进行跟踪采集的具体实施方式,将在下述实施例中进行详细说明,在此暂不详述。
值得说明的是,在本申请实施例中,在利用神经网络模型分析第三投影图像包含的形变图案的对象类型之前,还需对神经网络模型进行训练。
在本申请实施例中,可以损失函数最小化为训练目标,利用样本图像进行模型训练,得到神经网络模型。其中,样本图像包括投影模块的投影光线经过指定对象而将基准图像投射在投影面上形成的投影图像。其中,损失函数是根据模型训练得到的指定对象属于指定类型的概率与指定对象属于指定类型的实际概率确定的。其中,指定对象属于指定类型的实际概率可为1,即100%。可选地,损失函数可以为模型训练得到的指定对象属于指定类型的概率与指定对象属于指定类型的实际概率的差值的绝对值。
其中,关于样本图像和神经网络模型的网络架构的描述以及神经网络模型训练的具体过程的描述,可参见上述跟踪设备实施例的相关内容,在此不再赘述。
进一步,在确定目标对象A出现在投影模块22的投射范围内的情况下,可对目标对象A进行跟踪,即对目标对象A引起的形变图案进行跟踪采集。为了实现图像采集设备24对形变图案的跟踪采集,即实现对目标对象A的跟踪,在本实施例中,控制模块25可根据图像采集设备24已采集到的第一投影图像,调整多轴云台23的工作状态,以使多轴云台23带动图像采集设备24对形变图案进行跟踪采集。
在一些实施例中,可在控制模块25中设置调整周期,并启动一个定时器或计数器对调整周期进行计时。每当调整周期到达时,控制模块25可根据图像采集设备24在当前调整周期采集到的第一投影图像,调整多轴云台23的工作状态,以带动图像采集设备24在下一调整周期对形变图案进行跟踪采集。即对于多轴云台23来说,其调整后的工作状态可使图像采集设备24采集到目标对象A在下一调整周期引起的形变图案。
进一步,控制模块25可根据图像采集设备24在当前调整周期采集到的第一投影图像,计算目标对象A的运动信息。其中,目标对象A的运动信息可包括:目标对象A的位移信息、运动速度、运动方向以及加速度信息中的至少一种。
可选地,控制模块25可计算目标投影图像与当前调整周期对应的初始投影图像的像素差异。其中,当前调整周期对应的初始投影图像可为图像采集设备24在当前调整周期采集到的第一帧第一投影图像,也可为图像采集设备24在当前调整周期最初采集到的前N帧第一投影图像的像素平均值组成的投影图像,其中N≥2,且N为整数。目标投影图像则为图像采集设备24在当前调整周期采集到的除初始投影图像之外的其它投影图像。目标投影图像的数量可以为1帧或多帧。多帧是指2帧或2帧以上。进一步,控制模块25可根据目标投影图像与初始投影图像的像素差异以及图像采集设备24在当前调整周期的位姿,计算目标对象A的运动信息。
在目标投影图像的数量为多帧的情况下,控制模块25可根据相邻两帧目标投影图像与当前调整周期对应的初始投影图像的像素差异,以及图像采集设备在当前调整周期的位姿,计算目标对象A的位移变化;并根据目标对象A的位移变化以及图像采集设备24的采样周期,计算目标对象A的运动速度和/或加速度。
进一步,控制模块25可根据目标对象A的运动信息,调整多轴云台的工作状态,以带动图像采集设备24在下一调整周期对形变图案进行跟踪采集。即对于多轴云台23来说,其调整后的工作状态可使图像采集设备24采集到目标对象A在下一调整周期引起的形变图案。
进一步,控制模块25可根据目标对象A的运动信息,计算多轴云台23中的电机的目标运动参数值,并将多轴云台23中的电机的运动参数调整为目标运动参数值,从而可调整多轴云台23的工作状态。对于多轴云台23来说,其调整后的工作状态可使图像采集设备24采集到目标对象A在下一调整周期引起的形变图案。
对于多轴云台23中的电机来说,其运动参数可包括:多轴云台23中的电机的加速度、角加速度以及转速中的至少一种。相应地,多轴云台23中的电机的目标运动参数值可包括:多轴云台23中的电机的目标加速度、目标角加速度以及目标转速中的至少一种。
进一步,控制模块25可根据目标对象A的运动信息,预测目标对象A在下一调整周期移动到的位置;并根据目标对象A在下一调整周期移动到的位置,计算产生形变图案的位置;进一步,控制模块25可根据产生形变图案的位置,计算图像采集设备24在下一调整周期对应的位姿;并根据图像采集设备24在下一调整周期对应的位姿以及图像采集设备24在当前调整周期的位姿,计算多轴云台23中的电机的目标运动参数值。进一步,控制模块25可控制多轴云台23将其电机的运动参数调整为目标运动参数值,从而调整多轴云台23的工作状态,使其调整后的工作状态能够使图像采集设备24对目标对象A在下一调整周期引起的形变图案进行跟踪采集,从而实现在下一调整周期对目标对象A进行跟踪。
为了便于理解上述目标跟踪过程,下面以预定图案为条纹图案,并结合图2b对目标跟踪过程进行示例性说明。
如图2b所示,在一个球在投影模块22的投射范围内移动的过程中,投影模块投射出条纹图案A,条纹图案A经过球后,投影至投影面S21中形成投影图像D。投影图像D包含条纹图案A由于经过球而产生的形变图案。图像采集设备24获取投影图像D。控制模块25可将投影图像D输入神经网络模型,通过神经网络模型识别出该物体为球形。可选地,控制模块25还可检测到球上的凹陷点B和球上的裂纹C。同时,控制模块25可根据已采集到的投影图像D,控制多轴云台运动,以带动图像采集设备24对形变图案进行跟踪采集。由于形变图案是由球引起的,因此形变图案的运动轨迹可反映球的运动轨迹,因此对形变图案进行跟踪采集,也就实现了对球的跟踪。
除了上述跟踪设备及系统实施例之外,本申请实施例还提供目标跟踪方法,下面从上述控制模块的角度,对本申请实施例提供的目标跟踪方法进行示例性说明。
图3为本申请实施例提供的目标跟踪方法的流程示意图。如图3所示,该方法包括:
301、控制投影模块向外投射基准图像;基准图像具有预定图案。
302、控制搭载于多轴云台的图像采集设备采集基准图像对应的第一投影图像;第一投影图像包括:预定图案的形变图案;形变图案基于目标对象产生。
303、根据图像采集设备已采集到的第一投影图像,调整多轴云台的工作状态,以带动图像采集设备对形变图案进行跟踪采集。
其中,关于投影模块、多轴云台以及图像采集设备的实现方式和连接结构可参见上述跟踪设备实施例的相关内容,在此不再赘述。
多轴云台可绕其转轴进行转动,可转动的方向由多轴云台包含的转轴的 转动方向决定。由于图像采集设备搭载于多轴云台上,图像采集设备也就随多轴云台的转动而转动。
在本实施例中,可控制投影模块向外投射基准图像,该基准图像具有预定图案。其中,预定图案可以为任何形式的图案。例如预定图案可以为条纹图案、编码图案、预定的字符图案等等,但不限于此。其中关于生成预定图案以及控制模块控制投影模块的实施方式,均可参见上述实施例的相关内容,在此不再赘述。
在实际应用中,在投影模块向外投射基准图像的情况下,若未有物体出现在投影模块的投影光线上,则基准图像在投影面上所呈现的投影图像也具有预定图案。若有物体出现在投影模块的投影光线上,则预定图案对应的投影图像会发生形变。为了便于描述,在本申请实施例中,将预定图案对应的投影图像发生形变所形成的图案,定义为形变图案。其中,形变图案是基于出现在投影模块的投影光线上的物体产生的,且形变图案会随着物体的移动而移动。基于此,在本申请实施例中,可基于形变图案实现对出现在投影模块的投影光线上的物体进行跟踪。为了便于描述,将出现在投影模块的投影光线上的物体,定义为目标对象。目标对象可为移动对象。
基于上述分析,在本实施例中,可控制图像采集设备采集基准图像对应的投影图像。基准图像对应的投影图像可包括:基准图像直接投影至某一投影面所形成的投影图像,即在没有物体出现在投影模块的投影光线的情况下,基准图像直接投影至某一投影面上所形成的投影图像。
当然,基准图像对应的投影图像也可包括:在有物体(目标对象)出现在投影模块的投影光线的情况下,投影模块的投影光线经过目标对象而将基准图像投射在某一投影面上形成的投影图像。其中关于投影面的实现形态可参见上述跟踪设备实施例的相关内容,在此不再赘述。
在下述实施例中,为了便于描述和区分,将有物体(目标对象)出现在投影模块的投影光线的情况下,投影模块的投影光线经过目标对象而将基准图像投射在某一投影面上形成的投影图像,定义为第一投影图像;并将在没有物体出现在投影模块的投影光线的情况下,基准图像直接投影至某一投影面上所形成的投影图像,定义为第二投影图像。其中,第一投影图像包含上述形变图案,第二投影图像不包含形变图案。因此,第一投影图像可以反映目标对象的信息,而第二投影图像无法反映目标对象的信息,也就无法基于第二投影图像对目标对象进行跟踪。因此,在下述实施例中,重点以基于图像采集设备采集到的第一投影图像,实现对目标对象的跟踪过程进行示例性说明。
在本实施例中,可根据图像采集设备已采集到的第一投影图像,调整多轴云台的工作状态。多轴云台在工作状态改变时,多轴云台可进行转动。多轴云台转动可带动图像采集设备进行转动,进而可调整图像采集设备的位姿,使图像采集设备可对形变图案进行跟踪采集。其中,图像采集设备的位姿包括图像采集设备的位置和朝向。由于形变图案是由目标对象引起的,因此对形变图案进行跟踪采集,可实现对目标对象的跟踪。而且,多轴云台带动图像采集设备转动可调整图像采集设备的位姿,有助于扩大对目标对象的跟踪范围。
对于多轴云台来说,其调整后的工作状态,可将图像采集设备的位姿调整为可以采集后续时刻由目标对象引起的形变图案。
在本申请实施例中,为了实现对目标对象引起的形变图案的跟踪采集,图像采集设备可采用具有高采样率的图像采集设备。优选地,图像采集设备在目标对象在投影模块的投射范围内移动过程中,可采集到多帧包含形变图案的第一投影图像。相应地,图像采集设备的采样周期小于目标对象在投影模块的投射范围内的移动时间。即目标对象在投影模块的投射范围内的移动时间可以为图像采集设备的采样周期的Q倍,Q≥2。在本实施例中,不限定Q的具体取值。例如,Q可以为3、8、10、20或30.5等等。这样,图像采集设备可采集到多帧第一投影图像。
在本申请实施例中,不限定目标对象的实现形态。在一些实施例中,目标对象可以为出现在投影模块的投影光线上的任何移动对象。基于此,图像采集设备在采集到包含目标对象引起的形变图案的第一投影图像之前,还可采集基准图像直接投射在某一投影面上所形成的第二投影图像。进一步,可根据图像采集设备当前采集到的第三投影图像与第二投影图像,判断第三投影图像相较于第二投影图像是否发生形变;若判断结果为是,则确定目标对象进入投影模块的投射范围内,并将第三投影图像作为第一投影图像。可选地,可将第三投影图像作为第一帧第一投影图像。之后,开始对目标对象进行跟踪,即开始对形变图案进行跟踪采集。其中,关于对形变图案进行跟踪采集的具体实施方式,将在下述实施例中进行详细说明,在此暂不详述。
在另一些实施例中,出现在投影模块的投影光线上的物体为指定对象或者指定类型的对象的时候,才会对该物体进行跟踪。基于此,图像采集设备在采集到包含目标对象引起的形变图案的第一投影图像之前,还可采集基准图像直接投射在某一投影面上所形成的第二投影图像。进一步,可根据图像采集设备当前采集到的第三投影图像与第二投影图像,判断第三投影图像相较于第二投影图像是否发生形变;若判断结果为是,则将第三投影图像输入神经网络模型中。之后,在神经网络模型中,计算第三投影图像包含的形变图案的对象类型;若第三投影图像包含的形变图案的对象类型为指定类型,则确定目标对象进入投影模块的投射范围内,并将第三投影图像作为第一投影图像。可选地,可将第三投影图像作为第一帧第一投影图像。之后,开始对目标对象进行跟踪,即开始对形变图案进行跟踪采集。其中,关于对形变图案进行跟踪采集的具体实施方式,将在下述实施例中进行详细说明,在此暂不详述。
值得说明的是,在本申请实施例中,在利用神经网络模型分析第三投影图像包含的形变图案的对象类型之前,还需对神经网络模型进行训练。
在本申请实施例中,可以损失函数最小化为训练目标,利用样本图像进行模型训练,得到神经网络模型。其中,样本图像包括投影模块的投影光线经过指定对象而将基准图像投射在投影面上形成的投影图像。样本图像可以为一帧或多帧,多帧是指2帧或2帧以上,其数量的具体取值可根据实际需 求进行灵活设定。其中,损失函数是根据模型训练得到的指定对象属于指定类型的概率与指定对象属于指定类型的实际概率确定的。其中,指定对象属于指定类型的实际概率可为1,即100%。可选地,损失函数可以为模型训练得到的指定对象属于指定类型的概率与指定对象属于指定类型的实际概率的差值的绝对值。
进一步,在确定目标对象出现在投影模块的投射范围内的情况下,可对目标对象进行跟踪,即对目标对象引起的形变图案进行跟踪采集。为了实现图像采集设备对形变图案的跟踪采集,即实现对目标对象的跟踪,在本实施例中,可根据图像采集设备已采集到的第一投影图像,调整多轴云台的工作状态,以使多轴云台带动图像采集设备对形变图案进行跟踪采集。
在一些实施例中,可设置调整周期,并启动一个定时器或计数器对调整周期进行计时。每当调整周期到达时,可根据图像采集设备在当前调整周期采集到的第一投影图像,调整多轴云台的工作状态,以带动图像采集设备在下一调整周期对形变图案进行跟踪采集。即对于多轴云台来说,其调整后的工作状态可使图像采集设备采集到目标对象在下一调整周期引起的形变图案。
进一步,可根据图像采集设备在当前调整周期采集到的第一投影图像,计算目标对象的运动信息。其中,目标对象的运动信息可包括:目标对象的位移信息、运动速度、运动方向以及加速度信息中的至少一种。
可选地,可计算目标投影图像与当前调整周期对应的初始投影图像的像素差异。其中,当前调整周期对应的初始投影图像可为图像采集设备在当前调整周期采集到的第一帧第一投影图像,也可为图像采集设备在当前调整周期最初采集到的前N帧第一投影图像的像素平均值组成的投影图像,其中N≥2,且N为整数。目标投影图像则为图像采集设备在当前调整周期采集到的除初始投影图像之外的其它投影图像。目标投影图像的数量可以为1帧或多帧。多帧是指2帧或2帧以上。进一步,可根据目标投影图像与初始投影图像的像素差异以及图像采集设备在当前调整周期的位姿,计算目标对象的运动信息。
在目标投影图像的数量为多帧的情况下,可根据相邻两帧目标投影图像与当前调整周期对应的初始投影图像的像素差异,以及图像采集设备在当前调整周期的位姿,计算目标对象的位移变化;并根据目标对象的位移变化以及图像采集设备的采样周期,计算目标对象的运动速度和/或加速度。
进一步,可根据目标对象的运动信息,调整多轴云台的工作状态,以带动图像采集设备在下一调整周期对形变图案进行跟踪采集。即对于多轴云台来说,其调整后的工作状态可使图像采集设备采集到目标对象在下一调整周期引起的形变图案。
进一步,可根据目标对象的运动信息,计算多轴云台中的电机的目标运动参数值,并将多轴云台中的电机的运动参数调整为目标运动参数值,从而可调整多轴云台的工作状态。对于多轴云台来说,其调整后的工作状态可使图像采集设备采集到目标对象在下一调整周期引起的形变图案。
对于多轴云台中的电机来说,其运动参数可包括:多轴云台中的电机的加速度、角加速度以及转速中的至少一种。相应地,多轴云台中的电机的目标运动参数值可包括:多轴云台中的电机的目标加速度、目标角加速度以及目标转速中的至少一种。
进一步,可根据目标对象的运动信息,预测目标对象在下一调整周期移动到的位置;并根据目标对象在下一调整周期移动到的位置,计算产生形变图案的位置;进一步,可根据产生形变图案的位置,计算图像采集设备在下一调整周期对应的位姿;并根据图像采集设备在下一调整周期对应的位姿以及图像采集设备在当前调整周期的位姿,计算多轴云台中的电机的目标运动参数值。进一步,可控制多轴云台将其电机的运动参数调整为目标运动参数值,从而调整多轴云台的工作状态,使其调整后的工作状态能够使图像采集设备对目标对象在下一调整周期引起的形变图案进行跟踪采集,从而实现在下一调整周期对目标对象进行跟踪。
需要说明的是,上述实施例所提供方法的各步骤的执行主体均可以是同一设备,或者,该方法也可由不同设备作为执行主体。比如,步骤301和302的执行主体可以为设备A;又比如,步骤301的执行主体可以为设备A,步骤302的执行主体可以为设备B;等等。
另外,在上述实施例及附图中的描述的一些流程中,包含了按照特定顺序出现的多个操作,但是应该清楚了解,这些操作可以不按照其在本文中出现的顺序来执行或并行执行,操作的序号如301、302等,仅仅是用于区分开各个不同的操作,序号本身不代表任何的执行顺序。另外,这些流程可以包括更多或更少的操作,并且这些操作可以按顺序执行或并行执行。
相应地,本申请实施例还提供一种存储有计算机指令的计算机可读存储介质,当计算机指令被一个或多个处理器执行时,致使一个或多个处理器执行上述目标跟踪方法中的步骤。
需要说明的是,本文中的“第一”、“第二”等描述,是用于区分不同的消息、设备、模块等,不代表先后顺序,也不限定“第一”和“第二”是不同的类型。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体,可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带、磁带磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过 程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。
Claims (43)
- 一种跟踪设备,其特征在于,包括:机体,用于安装多轴云台;多轴云台,用于搭载图像采集设备,并能够带动所述图像采集设备转动;其中,所述图像采集设备用于采集基准图像对应的第一投影图像;所述基准图像由投影模块向外投射,且所述基准图像具有预定图案;所述第一投影图像包括所述预定图案对应的形变图案;所述形变图案基于目标对象产生;控制模块,电连接于所述多轴云台,用于根据所述图像采集设备已采集到的第一投影图像,调整所述多轴云台的工作状态,以带动所述图像采集设备对所述形变图案进行跟踪采集。
- 根据权利要求1所述的设备,其特征在于,所述控制模块电连接于所述投影模块,并指示所述投影模块向外投射所述基准图像。
- 根据权利要求1所述的设备,其特征在于,所述图像采集设备的采样周期小于所述目标对象在所述投影模块的投射范围内的移动时间。
- 根据权利要求1所述的设备,其特征在于,所述控制模块在调整所述多轴云台的工作状态时,具体用于:根据所述图像采集设备在当前调整周期采集到的第一投影图像,调整所述多轴云台的工作状态,以带动所述图像采集设备在下一调整周期对所述形变图案进行跟踪采集。
- 根据权利要求4所述的设备,其特征在于,所述控制模块在调整所述多轴云台的工作状态时,具体用于:根据所述图像采集设备在当前调整周期采集到的第一投影图像,计算所述目标对象的运动信息;根据所述目标对象的运动信息,调整所述多轴云台的工作状态,以带动所述图像采集设备在下一调整周期对所述形变图案进行跟踪采集。
- 根据权利要求5所述的设备,其特征在于,所述控制模块在调整所述多轴云台的工作状态时,具体用于:根据所述目标对象的运动信息,计算所述多轴云台中的电机的目标运动参数值;将所述多轴云台中的电机的运动参数调整为所述目标运动参数值,以调整所述多轴云台的工作状态。
- 根据权利要求6所述的设备,其特征在于,所述控制模块在计算所述多轴云台中的电机的目标运动参数值时,具体用于:根据所述目标对象的运动信息,预测所述目标对象在下一调整周期移动到的位置;根据所述目标对象在下一调整周期移动到的位置,计算产生所述形变图案的位置;根据所述产生所述形变图案的位置,计算所述图像采集设备在所述下一调整周期对应的位姿;根据所述图像采集设备在所述下一调整周期对应的位姿以及所述图像采集设备在当前调整周期的位姿,计算所述多轴云台中的电机的目标运动参数值。
- 根据权利要求6所述的设备,其特征在于,所述多轴云台中的电机的运动参数包括:所述多轴云台中的电机的加速度、角加速度以及转速中的至少一种。
- 根据权利要求5所述的设备,其特征在于,所述控制模块在计算所述目标对象的运动信息时,具体用于:计算目标投影图像分别与当前调整周期对应的初始投影图像的像素差异;根据所述像素差异以及所述图像采集设备在当前调整周期的位姿,计算所述目标对象的运动信息;其中,所述目标投影图像为所述图像采集设备在当前调整周期采集到的除所述初始投影图像之外的其它投影图像。
- 根据权利要求9所述的设备,其特征在于,所述控制模块在计算所 述目标对象的运动信息时,具体用于:根据相邻两帧目标投影图像与当前调整周期对应的初始投影图像的像素差异,以及所述图像采集设备在当前调整周期的位姿,计算所述目标对象的位移变化;根据所述目标对象的位移变化以及所述图像采集设备的采样周期,计算所述目标对象的运动速度和/或加速度。
- 根据权利要求1-10任一项所述的设备,其特征在于,所述第一投影图像是所述投影模块的投影光线经过所述目标对象而将所述基准图像投射在投影面上形成的;所述图像采集设备,还用于:在采集所述第一投影图像之前,采集所述基准图像直接投射在所述投影面上形成的第二投影图像;所述控制模块,用于:根据所述图像采集设备当前采集到的第三投影图像与所述第二投影图像,判断所述第三投影图像相较于所述第二投影图像是否发生形变;若判断结果为是,则确定所述目标对象进入所述投影模块的投射范围内;并将所述第三投影图像作为所述第一投影图像。
- 根据权利要求1-10任一项所述的设备,其特征在于,所述第一投影图像是所述投影模块的投影光线经过所述目标对象而将所述基准图像投射在投影面上形成的;所述图像采集设备,还用于:在采集所述第一投影图像之前,采集所述基准图像直接投射在投影面上的第二投影图像;所述控制模块,用于:根据所述图像采集设备当前采集到的第三投影图像与所述第二投影图像,判断所述第三投影图像相较于所述第二投影图像是否发生形变;若判断结果为是,则将所述第三投影图像输入神经网络模型;在所述神经网络模型中,计算所述第三投影图像包含的形变图案的对象类型;若所述对象类型为指定类型,则确定所述目标对象进入所述投影模块的投射范围内;并将所述第三投影图像作为所述第一投影图像。
- 根据权利要求12所述的设备,其特征在于,所述控制模块,还用于:以损失函数最小化为训练目标,利用样本图像进行模型训练,得到所述神经网络模型;所述样本图像包括所述投影模块的投影光线经过指定对象而将所述基准图像投射在投影面上形成的投影图像;所述指定对象属于所述指定类型;所述损失函数是根据模型训练得到的所述指定对象属于所述指定类型的概率与所述指定对象属于所述指定类型的实际概率确定的。
- 根据权利要求1-10任一项所述的设备,其特征在于,所述预定图案为条纹图案。
- 根据权利要求1-10任一项所述的设备,其特征在于,所述投影模块为数字光处理投影设备。
- 根据权利要求15所述的设备,其特征在于,所述数字光处理投影设备包括:光源、色轮、数字微镜器件以及投影透镜;其中,所述色轮光连接于所述光源与所述数字微镜器件之间;所述数字微镜器件与所述投影透镜光连接;且所述色轮和所述数字微镜器件还分别与所述控制模块电连接;其中,所述光源发出的光入射至所述色轮;所述色轮在所述控制模块的控制下,将所述光过滤为单色光,并将所述单色光投射至所述数字微镜器件;所述数字微镜器件在所述控制模块的控制下,利用所述单色光调制出所述预定图案,并经所述投影透镜向外投射具有所述预定图案的基准图像。
- 一种目标跟踪方法,其特征在于,包括:控制投影模块向外投射基准图像;所述基准图像具有预定图案;控制搭载于多轴云台的图像采集设备采集所述基准图像对应的第一投影图像;所述第一投影图像包括:所述预定图案的形变图案;所述形变图案基于目标对象产生;根据所述图像采集设备已采集到的第一投影图像,调整所述多轴云台的工作状态,以带动所述图像采集设备对所述形变图案进行跟踪采集。
- 根据权利要求17所述的方法,其特征在于,所述图像采集设备的采样周期小于所述目标对象在所述投影模块的投射范围内的移动时间。
- 根据权利要求17所述的方法,其特征在于,所述根据所述图像采集设备已采集到的第一投影图像,调整多轴云台的工作状态,包括:根据所述图像采集设备在当前调整周期采集到的第一投影图像,调整所述多轴云台的工作状态,以带动所述图像采集设备在下一调整周期对所述形变图案进行跟踪采集。
- 根据权利要求19所述的方法,其特征在于,所述根据所述图像采集设备在当前调整周期采集到的第一投影图像,调整所述多轴云台的工作状态,包括:根据所述图像采集设备在当前调整周期采集到的第一投影图像,计算所述目标对象的运动信息;根据所述目标对象的运动信息,调整所述多轴云台的工作状态,以带动所述图像采集设备在下一调整周期对所述形变图案进行跟踪采集。
- 根据权利要求20所述的方法,其特征在于,所述根据所述目标对象的运动信息,调整所述多轴云台的工作状态,包括:根据所述目标对象的运动信息,计算所述多轴云台中的电机的目标运动参数值;将所述多轴云台中的电机的运动参数调整为所述目标运动参数值,以调整所述多轴云台的工作状态。
- 根据权利要求21所述的方法,其特征在于,所述根据所述目标对象的运动信息,计算所述多轴云台中的电机的目标运动参数值,包括:根据所述目标对象的运动信息,预测所述目标对象在下一调整周期移动到的位置;根据所述目标对象在下一调整周期移动到的位置,计算产生所述形变图案的位置;根据所述产生所述形变图案的位置,计算所述图像采集设备在所述下一调整周期对应的位姿;根据所述图像采集设备在所述下一调整周期对应的位姿以及在当前调整周期的位姿,计算所述多轴云台中的电机的目标运动参数值。
- 根据权利要求21所述的方法,其特征在于,所述多轴云台中的电机的运动参数包括:所述多轴云台中的电机的加速度、角加速度以及转速中的至少一种。
- 根据权利要求20所述的方法,其特征在于,所述根据所述图像采集设备在当前调整周期采集到的第一投影图像,计算所述目标对象的运动信息,包括:计算目标投影图像分别与当前调整周期对应的初始投影图像的像素差异;根据所述像素差异以及所述图像采集设备在当前调整周期的位姿,计算所述目标对象的运动信息;其中,所述目标投影图像为所述图像采集设备在当前调整周期采集到的除所述初始投影图像之外的其它投影图像。
- 根据权利要求24所述的方法,其特征在于,所述根据所述像素差异以及所述图像采集设备在当前调整周期的位姿,计算所述目标对象的运动信息,包括:根据相邻两帧目标投影图像与当前调整周期对应的初始投影图像的像素差异,以及所述图像采集设备在当前调整周期的位姿,计算所述目标对象的位移变化;根据所述目标对象的位移变化以及所述图像采集设备的采样周期,计算所述目标对象的运动速度和/或加速度。
- 根据权利要求17-25任一项所述的方法,其特征在于,所述第一投影图像是所述投影模块的投影光线经过所述目标对象而将所述基准图像投射在投影面上形成的;所述方法还包括:控制所述图像采集设备采集所述基准图像直接投射在所述投影面上形成的第二投影图像;根据所述图像采集设备当前采集到的第三投影图像与所述第二投影图 像,判断所述第三投影图像相较于所述第二投影图像是否发生形变;若判断结果为是,则确定所述目标对象进入所述投影模块的投射范围内;并将所述第三投影图像作为所述第一投影图像。
- 根据权利要求17-25任一项所述的方法,其特征在于,所述第一投影图像是所述投影模块的投影光线经过所述目标对象而将所述基准图像投射在投影面上形成的;所述方法还包括:控制所述图像采集设备采集所述基准图像直接投射在所述投影面上形成的第二投影图像;根据所述图像采集设备当前采集到的第三投影图像与所述第二投影图像,判断所述第三投影图像相较于所述第二投影图像是否发生形变;若判断结果为是,将所述第三投影图像输入神经网络模型;并在所述神经网络模型中,计算所述第三投影图像包含的形变图案的对象类型;若所述对象类型为指定类型,则确定所述目标对象进入所述投影模块的投射范围内;并将所述第三投影图像作为第一帧第一投影图像。
- 根据权利要求27所述的方法,其特征在于,在将所述第三投影图像输入神经网络模型之前,所述方法还包括:以损失函数最小化为训练目标,利用样本图像进行模型训练,得到所述神经网络模型;所述样本图像包括所述投影模块的投影光线经过指定对象而将所述基准图像投射在投影面上形成的投影图像;所述指定对象属于所述指定类型;所述损失函数是根据模型训练得到的所述指定对象属于所述指定类型的概率与所述指定对象属于所述指定类型的实际概率确定的。
- 根据权利要求17-25任一项所述的方法,其特征在于,所述预定图案为条纹图案。
- 一种目标跟踪系统,其特征在于,包括:投影模块、跟踪设备以及设置于所述跟踪设备所处物理环境中的投影面;所述投影模块,用于向所述投影面投射基准图像,所述基准图像具有预 定图案;所述跟踪设备包括:机体,用于安装多轴云台;多轴云台,用于搭载图像采集设备,并能够带动所述图像采集设备转动;所述图像采集设备用于采集基准图像在所述投影面上对应的第一投影图像,所述第一投影图像包括所述预定图案对应的形变图案;所述形变图案基于目标对象产生;控制模块,电连接于所述多轴云台,用于根据所述图像采集设备已采集到的第一投影图像,调整所述多轴云台的工作状态,以带动所述图像采集设备对所述形变图案进行跟踪采集。
- 根据权利要求30所述的系统,其特征在于,所述控制模块电连接于所述投影模块,并指示所述投影模块向外投射所述基准图像。
- 根据权利要求30所述的系统,其特征在于,所述图像采集设备的采样周期小于所述目标对象在所述投影模块的投射范围内的移动时间。
- 根据权利要求30所述的系统,其特征在于,所述控制模块在调整所述多轴云台的工作状态时,具体用于:根据所述图像采集设备在当前调整周期采集到的第一投影图像,调整所述多轴云台的工作状态,以带动所述图像采集设备在下一调整周期对所述形变图案进行跟踪采集。
- 根据权利要求33所述的系统,其特征在于,所述控制模块在调整所述多轴云台的工作状态时,具体用于:根据所述图像采集设备在当前调整周期采集到的第一投影图像,计算所述目标对象的运动信息;根据所述目标对象的运动信息,调整所述多轴云台的工作状态,以带动所述图像采集设备在下一调整周期对所述形变图案进行跟踪采集。
- 根据权利要求34所述的系统,其特征在于,所述控制模块在调整所述多轴云台的工作状态时,具体用于:根据所述目标对象的运动信息,计算所述多轴云台中的电机的目标运动参数值;将所述多轴云台中的电机的运动参数调整为所述目标运动参数值,以调整所述多轴云台的工作状态。
- 根据权利要求35所述的系统,其特征在于,所述控制模块在计算所述多轴云台中的电机的目标运动参数值时,具体用于:根据所述目标对象的运动信息,预测所述目标对象在下一调整周期移动到的位置;根据所述目标对象在下一调整周期移动到的位置,计算产生所述形变图案的位置;根据所述产生所述形变图案的位置,计算所述图像采集设备在所述下一调整周期对应的位姿;根据所述图像采集设备在所述下一调整周期对应的位姿以及在当前调整周期的位姿,计算所述多轴云台中的电机的目标运动参数值。
- 根据权利要求35所述的系统,其特征在于,所述多轴云台中的电机的运动参数包括:所述多轴云台中的电机的加速度、角加速度以及转速中的至少一种。
- 根据权利要求34所述的系统,其特征在于,所述控制模块在计算所述目标对象的运动信息时,具体用于:计算目标投影图像分别与当前调整周期对应的初始投影图像的像素差异;根据所述像素差异以及所述图像采集设备在当前调整周期的位姿,计算所述目标对象的运动信息;其中,所述目标投影图像为所述图像采集设备在当前调整周期采集到的除所述初始投影图像之外的其它投影图像。
- 根据权利要求38所述的系统,其特征在于,所述控制模块在计算所述目标对象的运动信息时,具体用于:根据相邻两帧目标投影图像与当前调整周期对应的初始投影图像的像素差异,以及所述图像采集设备在当前调整周期的位姿,计算所述目标对象的位移变化;根据所述目标对象的位移变化以及所述图像采集设备的采样周期,计算所述目标对象的运动速度和/或加速度。
- 根据权利要求30-39任一项所述的系统,其特征在于,所述控制模块,还用于:控制所述图像采集设备采集所述基准图像直接投射在所述投影面上形成的第二投影图像;根据所述图像采集设备当前采集到的第三投影图像与所述第二投影图像,判断所述第三投影图像相较于所述第二投影图像是否发生形变;若判断结果为是,则确定所述目标对象进入所述投影模块的投射范围内;并将所述第三投影图像作为所述第一投影图像。
- 根据权利要求30-39任一项所述的系统,其特征在于,所述控制模块,还用于:控制所述图像采集设备采集所述基准图像直接投射在投影面上的第二投影图像;根据所述图像采集设备当前采集到的第三投影图像与所述第二投影图像,判断所述第三投影图像相较于所述第二投影图像是否发生形变;若判断结果为是,则将所述第三投影图像输入神经网络模型;在所述神经网络模型中,计算所述第三投影图像包含的形变图案的对象类型;若所述对象类型为指定类型,则确定所述目标对象进入所述投影模块的投射范围内,并将所述第三投影图像作为所述第一投影图像。
- 根据权利要求41所述的系统,其特征在于,所述控制模块还用于:以损失函数最小化为训练目标,利用样本图像进行模型训练,得到所述神经网络模型;所述样本图像包括所述投影模块的投影光线经过指定对象而将所述基准图像投射在投影面上形成的投影图像;所述指定对象属于所述指定类型;所述损失函数是根据模型训练得到的所述指定对象属于所述指定类型的概率与所述指定对象属于所述指定类型的实际概率确定的。
- 一种存储有计算机指令的计算机可读存储介质,其特征在于,当所述计算机指令被一个或多个处理器执行时,致使所述一个或多个处理器执行权利要求17-29任一项所述方法中的步骤。
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202080005912.1A CN112955844A (zh) | 2020-06-30 | 2020-06-30 | 目标跟踪方法、设备、系统及存储介质 |
PCT/CN2020/099161 WO2022000242A1 (zh) | 2020-06-30 | 2020-06-30 | 目标跟踪方法、设备、系统及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022000242A1 true WO2022000242A1 (zh) | 2022-01-06 |
Family
ID=76236244
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116630374A (zh) * | 2023-07-24 | 2023-08-22 | 贵州翰凯斯智能技术有限公司 | 目标对象的视觉跟踪方法、装置、存储介质及设备 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105430326A (zh) * | 2015-11-03 | 2016-03-23 | 中国电子科技集团公司第二十八研究所 | 一种cctv船舶视频平滑跟踪方法 |
CN107065935A (zh) * | 2017-03-23 | 2017-08-18 | 广东思锐光学股份有限公司 | 一种用于光流定位的云台控制方法、装置及目标跟踪系统 |
CN108037512A (zh) * | 2017-11-24 | 2018-05-15 | 上海机电工程研究所 | 激光半主动关联成像跟踪探测系统及方法 |
US10412371B1 (en) * | 2017-05-18 | 2019-09-10 | Facebook Technologies, Llc | Thin film acousto-optic structured light generator |
Legal Events

Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20943052; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 20943052; Country of ref document: EP; Kind code of ref document: A1