Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the examples provided herein are merely illustrative of the present disclosure and are not intended to limit it. In addition, the embodiments provided below are some, but not all, of the embodiments for implementing the disclosure, and the technical solutions described in these embodiments may be implemented in any combination absent conflict.
It should be noted that, in the embodiments of the present disclosure, the terms "comprises," "comprising," and any variations thereof are intended to cover a non-exclusive inclusion, so that a method or apparatus including a series of elements includes not only the explicitly recited elements but also other elements not explicitly listed, or elements inherent to the method or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other relevant elements (e.g., steps in a method, or parts of an apparatus such as circuit portions, processors, programs, software, etc.) in the method or apparatus that includes the element.
For example, the image synchronization method provided by the embodiments of the present disclosure includes a series of steps, but is not limited to the described steps; similarly, the image synchronization apparatus provided by the embodiments of the present disclosure includes a series of modules, but is not limited to the explicitly described modules, and may also include modules configured to acquire related information or to perform processing based on such information.
The term "and/or" herein merely describes an association between objects, indicating that three relationships may exist; for example, "A and/or C" may mean: A exists alone, A and C exist simultaneously, or C exists alone. In addition, the term "at least one" herein means any one of, or any combination of, a plurality of items; for example, "including at least one of A, C, and D" may mean including any one or more elements selected from the set formed by A, C, and D.
The disclosed embodiments may be implemented in computer systems comprising terminals and/or servers, and may operate with numerous other general-purpose or special-purpose computing system environments or configurations. Here, the terminal may be a thin client, a thick client, a hand-held or laptop device, a microprocessor-based system, a set-top box, programmable consumer electronics, a network personal computer, a minicomputer system, etc., and the server may be a server computer system, a minicomputer system, a mainframe computer system, a distributed cloud computing environment including any of the above, etc.
The electronic devices of the terminal, server, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
In some embodiments of the present disclosure, an image synchronization method is provided, which may be applied to an electronic device integrated with a multi-view camera system; the embodiments of the present disclosure are applicable to scenarios such as security monitoring, contactless human body temperature measurement, unmanned aerial vehicle photography, and rapid-temperature-measurement access-control passages.
Fig. 1 is a flowchart of an image synchronization method according to an embodiment of the present disclosure; as shown in Fig. 1, the flow may include the following steps:
Step 101: obtain at least two images captured by at least two image acquisition devices, where each of the at least two images is captured by a different image acquisition device.
In the embodiments of the present disclosure, the at least two image acquisition devices may be devices in a multi-view camera, and may include at least two of the following: a device for capturing Red-Green-Blue (RGB) images, an infrared camera, and a thermal imager.
In the embodiments of the present disclosure, the at least two images are images captured by different image acquisition devices. For example, when the at least two images are two images, the image acquisition devices corresponding to the two images are any two of a device for capturing RGB images, an infrared camera, and a thermal imager; when the at least two images are three images, the image acquisition devices corresponding to the three images are a device for capturing RGB images, an infrared camera, and a thermal imager, respectively.
In an embodiment of the present disclosure, the at least two image capturing devices are configured to capture images of a same scene, and the images captured by the at least two image capturing devices may include a same object, and the same object may be an object in a moving state or an object in a static state.
In some embodiments, in the electronic device integrated with the multi-view camera system, a shooting instruction may be issued to the at least two image acquisition devices at the same time, and the devices capture images according to the received instruction. It can be understood that, owing to factors such as jitter in the communication links of the components in the electronic device, a time delay may exist between the acquired images; in this case, the delay time between images must be determined so that the at least two image acquisition devices can be synchronized, and images subsequently acquired from them remain time-synchronized.
In some embodiments, after the at least two images are acquired, the at least two images may be stored in a memory of the electronic device for later recall.
Step 102: perform target recognition on the at least two images, and determine the positions of the same moving target in the at least two images.
In the embodiments of the present disclosure, the same moving target is the same object appearing in the images; in some embodiments, it may be a pre-specified object, for example a pointer, a remote-control car, or another object that can move according to a preset rule.
In some embodiments, object recognition may be performed on each of the at least two images to identify the location of the same moving object in each image.
In some embodiments, features may be extracted from each image based on machine vision techniques, such that the same moving object is identified from the extracted features in each image, and the position of the same moving object in each image is determined.
For example, each image may be input to a pre-trained neural network, and based on the processing of each image by the neural network, a corresponding moving object may be identified, thereby determining the position of the moving object in each image.
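As a minimal sketch of the recognition step, the snippet below locates a bright pointer-like target by thresholding and taking the foreground pixel farthest from the rotation center. The function name, the threshold, and the brightest-pixel heuristic are illustrative assumptions, not part of the disclosure; a real system would use a trained neural network or corner detection as described above.

```python
import numpy as np

def locate_pointer_tip(image, center, threshold=128):
    """Return (row, col) of the pointer tip: the foreground pixel
    farthest from the turntable's center of rotation."""
    rows, cols = np.nonzero(image > threshold)   # foreground pixels
    if rows.size == 0:
        return None                              # no target found
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    i = np.argmax(d2)                            # farthest pixel = tip
    return int(rows[i]), int(cols[i])

# Synthetic test image: a bright pointer from center (50, 50) out to (50, 90).
img = np.zeros((100, 100), dtype=np.uint8)
img[50, 50:91] = 255
print(locate_pointer_tip(img, center=(50, 50)))  # -> (50, 90)
```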
Step 103: determine the delay time between the at least two images according to the positions of the same moving target in the at least two images and predetermined motion state information of the same moving target.
In the embodiments of the present disclosure, the motion state information of the same moving target may be motion velocity information, motion rate information, motion position information, and the like; in some embodiments, the manner of movement of the moving target may be specified in advance, so that the velocity, rate, or position of the moving target at each time instant can be determined.
In one example, the target may be specified to perform uniform motion at a preset rate, where uniform motion means constant-rate motion whose direction may be fixed or varying; for example, the uniform motion may be uniform circular motion, uniform linear motion, or other uniform motion along a regular route.
In another example, the target may be specified to move according to a preset speed-variation profile.
For example, the target may be driven by a motor or similar actuator; by issuing a corresponding driving instruction to the motor, the motor can drive the target to move in the specified manner.
In some embodiments, when the number of the at least two images is 2, the delay time between the two images may be determined; when the number is greater than 2, the delay time between the at least two images refers to the delay time between every two of the images.
Step 104: perform time synchronization processing on the at least two image acquisition devices according to the delay time.
In the embodiment of the present disclosure, time synchronization processing may be performed between the at least two image capturing devices according to the delay time, so that time consistency of images subsequently captured by the at least two image capturing devices can be maintained, that is, time synchronization of images subsequently acquired from the at least two image capturing devices can be maintained.
For example, if the image captured by image acquisition device E lags behind the image captured by image acquisition device F, and the delay of E's image relative to F's image is 30 μs, then the images subsequently captured by device F may be delayed by 30 μs, thereby achieving time synchronization between devices E and F.
As another example, if the image captured by image acquisition device P precedes the image captured by image acquisition device Q, and the delay of Q's image relative to P's image is 40 μs, then the images subsequently captured by device P may be delayed by 40 μs, thereby achieving time synchronization between devices P and Q.
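The compensation described in these examples can be sketched as follows; `synchronization_offsets` is a hypothetical helper (not named in the disclosure) that, given each device's measured lag relative to a common reference, returns the extra delay to apply to the faster devices so that all streams align with the slowest one.

```python
def synchronization_offsets(delays_us):
    """Given each device's measured lag in microseconds relative to a
    common reference (larger value = lags more), return the additional
    delay to apply to each device so all streams align with the slowest."""
    slowest = max(delays_us.values())
    return {dev: slowest - d for dev, d in delays_us.items()}

# Device E lags device F by 30 µs (E's lag = 30, F's lag = 0), so F's
# frames must be delayed by 30 µs while E needs no extra delay:
offsets = synchronization_offsets({"E": 30.0, "F": 0.0})
print(offsets)
```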
In practical applications, steps 101 to 104 may be implemented by a processor in an electronic device, where the processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor.
It can be seen that, according to the embodiments of the present disclosure, the delay time between at least two images can be determined from the position difference of the same moving target in the images together with the target's predetermined motion state information, thereby achieving time synchronization of the at least two image acquisition devices. Because the image delay between different image acquisition devices need not be determined by counting video frames or by human observation, the delay time can be determined quickly and accurately, improving the efficiency of measuring the image delay between devices; compared with frame-counting or visual-observation schemes in the related art, the method is accurate and easy to implement.
In some embodiments, the motion state information of the same moving target may indicate uniform motion.
It can be understood that, because the law of uniform motion is simple, the delay time between at least two images is easy to determine from the motion state information of a target moving uniformly.
In some embodiments, obtaining the motion state information of the same moving target in uniform motion may include: obtaining first rate information of the same moving target in uniform circular motion, and using the first rate information as the motion state information.
For example, the first rate information is preset rate information. It can be seen that, in the embodiments of the present disclosure, by controlling the moving target to perform uniform circular motion at the first rate, the motion state information of the moving target can be conveniently determined before the at least two image acquisition devices are time-synchronized.
In some embodiments, the determining the delay time between the at least two images according to the position of the same moving object in the at least two images and the predetermined motion state information of the same moving object may include:
determining the polar angle of the same moving target in each of the at least two images according to the position of the same moving target in the at least two images, where the polar angle of the same moving target in each image represents the included angle between a reference direction and the line connecting the target's position in that image to the motion center point of the uniform circular motion;
determining the polar angle difference between the two polar angles of the same moving target in any two of the at least two images; and
a delay time between at least two images is determined based on the polar angle difference and the first rate information.
In the embodiment of the present disclosure, the reference direction may represent a direction of a line passing through a motion center point of the preset uniform circular motion, and in a polar coordinate system, the reference direction may represent a direction of a polar axis.
For example, referring to Fig. 2A, the line connecting the first moving target 201 to the motion center point of the uniform circular motion in the image captured by image acquisition device A1 is a first line 203, and the included angle between the first line 203 and the polar axis 202 is 36 degrees; referring to Fig. 2B, the line connecting the first moving target 201 to the motion center point in the image captured by image acquisition device A2 is a second line 204, and the included angle between the second line 204 and the polar axis 202 is 74 degrees. The polar angles of the first moving target 201 in the two images are therefore 36 degrees and 74 degrees, so the polar angle difference of the first moving target 201 between the two images is 38 degrees.
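Assuming the target's position is available as 2-D coordinates, the polar angle and the polar angle difference can be computed as sketched below; the helper names are illustrative, and the difference is wrapped into (−180°, 180°] so that crossing the polar axis does not produce a spurious jump.

```python
import math

def polar_angle_deg(point, center):
    """Polar angle (degrees, counterclockwise from the polar axis) of the
    line from the motion center point to the target's position (x, y)."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def polar_angle_difference(theta1_deg, theta2_deg):
    """Signed difference theta1 - theta2, wrapped into (-180, 180]."""
    d = (theta1_deg - theta2_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

# Polar angles of 36 and 74 degrees (as in Figs. 2A/2B) give a
# 38-degree polar angle difference:
print(polar_angle_difference(74.0, 36.0))  # -> 38.0
```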
In some embodiments, the delay time between any two images may be calculated according to the following formula:
Δt = θ / (360 × r)  (1)
where Δt represents the delay time between any two images, θ represents the polar angle difference (in degrees) of the moving target's position in the two images relative to the motion center point of the uniform circular motion, and r represents the rotation rate in revolutions per second (the unit used in the worked example below, where r = 1000 revolutions per second).
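Formula (1) can be evaluated as in the following sketch, with the rotation rate expressed in revolutions per second; the function name is illustrative.

```python
def delay_from_polar_angle(theta_deg, rev_per_s):
    """Formula (1): dt = theta / (360 * r). One revolution takes 1/r
    seconds, so theta degrees correspond to (theta / 360) of a period."""
    return theta_deg / (360.0 * rev_per_s)

# A -18 degree polar angle difference at 1000 rev/s gives -50 microseconds,
# matching the worked RGB-vs-infrared example later in this section:
dt = delay_from_polar_angle(-18.0, 1000.0)
print(f"{dt * 1e6:.1f}")  # -> -50.0 (microseconds)
```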
It is understood that if the time of one rotation of the moving object is less than the time delay of any two images, the polar angle difference cannot accurately reflect the time delay of the two images.
To calculate the polar angle difference accurately and reliably, the time for the moving target to complete one revolution must be greater than or equal to the delay between any two images. For example, the range of the delay between any two images can be estimated from historical data, and the rotation rate of the uniform circular motion set according to that range, so that one revolution of the moving target takes at least as long as the delay between any two images.
It can be seen that the embodiments of the present disclosure can determine the delay time between at least two images from the polar angle difference and the first rate information; compared with related-art schemes that determine the image delay between different image acquisition devices by counting video frames or by human observation, this approach is both accurate and efficient.
In some embodiments, the at least two image acquisition devices include an RGB camera, an infrared camera, and a thermal infrared imager. Before image synchronization, a background heat source may be turned on and a turntable capable of uniform circular motion placed in front of it, so that the RGB camera, the infrared camera, and the thermal infrared imager can directly capture the turntable. In the embodiments of the present disclosure, referring to Figs. 3A to 3C, a pointer 302 indicating the rotation angle is disposed on the turntable 301, and the turntable 301 may also bear scale marks to facilitate reading the pointer's angle; here, the pointer 302 serves as the same moving target.
While the turntable 301 is controlled to rotate uniformly at a set rotation rate, the RGB camera, the infrared camera, and the thermal infrared imager are controlled to capture images, where the image captured by the RGB camera is an RGB image, the image captured by the infrared camera is an infrared image, and the image captured by the thermal infrared imager is a thermodynamic diagram; for example, the set rotation rate may be 1000 revolutions per second, i.e., one revolution of the turntable takes 1 millisecond.
Target recognition and corner detection are performed on the RGB image to determine the position of the pointer 302 in the RGB image; referring to Fig. 3A, the pointer 302 is rotated 36 degrees from the due-north direction. Target recognition and corner detection are performed on the infrared image to determine the position of the pointer 302 in the infrared image; referring to Fig. 3B, the pointer 302 is rotated 54 degrees from the due-north direction. Target recognition is performed on the thermodynamic diagram to determine the position of the pointer 302 in the thermodynamic diagram; referring to Fig. 3C, the pointer 302 is rotated 72 degrees from the due-north direction.
After determining the position of the pointer 302 in the RGB image, the infrared image, and the thermodynamic diagram, the polar angle difference of the pointer may be determined between two of the RGB image, the infrared image, and the thermodynamic diagram, and further, the delay time between two of the RGB image, the infrared image, and the thermodynamic diagram may be determined.
For example, referring to Figs. 3A to 3C, the polar angle difference of the pointer in the RGB image relative to the infrared image, with respect to the motion center point, is −18 degrees; with the set rotation rate of 1000 revolutions per second, it can be determined from formula (1) that the delay of the RGB image relative to the infrared image is −50 microseconds, that is, each frame from the RGB camera is 50 microseconds behind the corresponding frame from the infrared camera.
The polar angle difference of the pointer in the RGB image relative to the thermodynamic diagram, with respect to the motion center point, is −36 degrees; with the set rotation rate of 1000 revolutions per second, it can be determined from formula (1) that the delay of the RGB image relative to the thermodynamic diagram is −100 microseconds, that is, each frame from the RGB camera is 100 microseconds behind the corresponding frame from the thermal infrared imager.
The polar angle difference of the pointer in the infrared image relative to the thermodynamic diagram, with respect to the motion center point, is −18 degrees; with the set rotation rate of 1000 revolutions per second, it can be determined from formula (1) that the delay of the infrared image relative to the thermodynamic diagram is −50 microseconds, that is, each frame from the infrared camera is 50 microseconds behind the corresponding frame from the thermal infrared imager.
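The three pairwise delays in this worked example can be reproduced with a short sketch; the device names and the helper function are illustrative, not from the disclosure.

```python
from itertools import combinations

def pairwise_delays(angles_deg, rev_per_s):
    """Delay (seconds) between every pair of images, from the pointer's
    polar angle in each image, via formula (1): dt = theta / (360 * r)."""
    out = {}
    for (name_a, th_a), (name_b, th_b) in combinations(angles_deg.items(), 2):
        out[(name_a, name_b)] = (th_a - th_b) / (360.0 * rev_per_s)
    return out

# Pointer angles of 36, 54, and 72 degrees at 1000 rev/s reproduce the
# -50, -100, and -50 microsecond delays of the worked example:
angles = {"rgb": 36.0, "infrared": 54.0, "thermal": 72.0}
for pair, dt in pairwise_delays(angles, 1000.0).items():
    print(pair, f"{dt * 1e6:+.0f} us")
```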
In some embodiments, second rate information of the same moving target in uniform linear motion may be obtained, and the second rate information may be used as the motion state information.
For example, the second rate information is preset rate information. It can be seen that, in the embodiments of the present disclosure, by controlling the moving target to perform uniform linear motion at the second rate, the motion state information of the moving target can be conveniently determined before the at least two image acquisition devices are time-synchronized.
In some embodiments, the determining the delay time between the at least two images according to the position of the same moving object in the at least two images and the predetermined motion state information of the same moving object may include:
determining the relative distance of the same moving target in any two images of the at least two images according to the positions of the same moving target in the at least two images;
a delay time between any two of the at least two images is determined based on the relative distance and the second rate information.
In some embodiments, the delay time between any two images may be calculated according to the following formula:
Δt = l / v  (2)
where Δt represents the delay time between any two images, l represents the relative distance of the same moving target in the two images, and v represents the second rate.
In some embodiments, the at least two image acquisition devices include an RGB camera, an infrared camera, and a thermal infrared imager, and accordingly the at least two images are an RGB image, an infrared image, and a thermodynamic diagram; the distance of the moving target from its starting point is l1 in the RGB image, l2 in the infrared image, and l3 in the thermodynamic diagram.
It can be seen that the relative distance of the moving target between the RGB image and the infrared image is l1 − l2, so the delay time between the RGB image and the infrared image can be calculated according to formula (2); the relative distance between the RGB image and the thermodynamic diagram is l1 − l3, so the delay time between the RGB image and the thermodynamic diagram can be calculated according to formula (2); and the relative distance between the infrared image and the thermodynamic diagram is l2 − l3, so the delay time between the infrared image and the thermodynamic diagram can be calculated according to formula (2).
It can be seen that the embodiments of the present disclosure can determine the delay time between at least two images from the relative distance and the second rate information; compared with related-art schemes that determine the image delay between different image acquisition devices by counting video frames or by human observation, this approach is both accurate and efficient.
In some embodiments, the movement speed change information of the same moving object may be acquired, and the movement speed change information may be used as the movement state information.
For example, the movement pattern of the moving object at each time may be specified, so that the speed of the moving object at each time may be determined in advance, that is, the movement speed variation information of the moving object may be determined.
It can be seen that, in the embodiments of the present disclosure, by controlling the moving target to move according to preset movement-speed-variation information, the motion state information of the moving target can be conveniently determined before the at least two image acquisition devices are time-synchronized, which in turn facilitates determining the delay time between the at least two images.
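As an illustrative sketch of using speed-variation information: if the target starts from rest and accelerates uniformly (an assumed motion profile, not one specified by the disclosure), the capture time implied by each observed position can be recovered by inverting the displacement function s(t) = a·t²/2, and the delay is the difference of those times.

```python
import math

def time_at_position(s, accel):
    """Invert s(t) = 0.5 * a * t^2 for a target that starts from rest and
    accelerates uniformly (a simple instance of known speed-change info)."""
    return math.sqrt(2.0 * s / accel)

def delay_from_positions(s1, s2, accel):
    """Delay between two images = difference of the capture times implied
    by the target's position in each image."""
    return time_at_position(s1, accel) - time_at_position(s2, accel)

# With a = 2 m/s^2, positions of 0.25 m and 0.16 m imply capture times of
# 0.5 s and 0.4 s respectively, i.e. a 0.1 s delay:
print(delay_from_positions(0.25, 0.16, 2.0))
```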
In some embodiments, where the at least two image acquisition devices include a thermal imager, the moving target may include a hollowed-out portion. Before image synchronization, a background heat source may be turned on and the moving target placed in front of it; because the moving target includes a hollowed-out portion, its position can be observed intuitively in the image captured by the thermal imager.
For example, when the moving target is located on the turntable, hollowed-out scale marks may be added to the turntable, so that the position of the moving target can be observed intuitively in the image captured by the thermal imager.
In access-control passage, security, and rapid temperature measurement scenarios, a multi-view camera system may be integrated into an electronic device to obtain RGB images, infrared images, and thermodynamic diagrams. Time delays exist among these images, and fast-moving objects suffer from binocular ranging discrepancies and abnormal target mapping, which in turn cause abnormalities in the temperature measurement, liveness detection, and distance/position estimation functions; therefore, the time delay among the images acquired by the multi-view camera needs to be determined and subsequently handled. To address this problem, in the embodiments of the present disclosure, the delay time between at least two images is determined from the position difference of the same moving target in the images and the target's predetermined motion state information, thereby achieving time synchronization of the at least two image acquisition devices.
It will be understood by those skilled in the art that, in the above method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
On the basis of the image synchronization method provided by the foregoing embodiment, the embodiment of the present disclosure provides an image synchronization apparatus.
Fig. 4 is a schematic diagram illustrating a configuration of an image synchronization apparatus according to an embodiment of the disclosure, and as shown in fig. 4, the apparatus may include an obtaining module 400, a first processing module 401, and a second processing module 402, wherein,
an obtaining module 400, configured to obtain at least two images captured by at least two image acquisition devices, where each of the at least two images is captured by a different image acquisition device;
a first processing module 401, configured to perform target identification on at least two images, and determine positions of a same moving target in the at least two images; determining the delay time between at least two images according to the positions of the same moving target in at least two images and the predetermined motion state information of the same moving target;
and a second processing module 402, configured to perform time synchronization processing on the at least two image capturing devices according to the delay time.
In some embodiments, the first processing module 401 is further configured to obtain motion state information of the same moving object in uniform motion.
In some embodiments, the first processing module 401 is configured to obtain motion state information of the same moving object in uniform motion, and includes:
obtaining first rate information of the same moving target in uniform circular motion, and using the first rate information as the motion state information.
In some embodiments, the first processing module 401 is configured to determine a delay time between at least two images according to a position of the same moving object in the at least two images and predetermined motion state information of the same moving object, and includes:
determining the polar angle of the same moving object in each image of the at least two images according to the position of the same moving object in the at least two images, wherein the polar angle of the same moving object in each image represents: an included angle between a connecting line of motion center points of uniform circular motion from the same moving object in each image and a reference direction;
determining a polar angle difference of two polar angles according to the two polar angles of any two images of the same moving target in at least two images;
a delay time between any two of the at least two images is determined based on the polar angle difference and the first rate information.
In some embodiments, the first processing module 401 is configured to obtain motion state information of the same moving object in uniform motion, and includes:
obtaining second rate information of the same moving target in uniform linear motion, and using the second rate information as the motion state information.
In some embodiments, the first processing module 401 is configured to determine a delay time between at least two images according to a position of the same moving object in the at least two images and predetermined motion state information of the same moving object, and includes:
determining the relative distance of the same moving target in any two images of the at least two images according to the positions of the same moving target in the at least two images;
a delay time between any two of the at least two images is determined based on the relative distance and the second rate information.
In some embodiments, the first processing module 401 is further configured to obtain movement speed variation information of the same moving object, and use the movement speed variation information as the movement state information.
In some embodiments, the at least two image capture devices comprise thermal imagers and the moving object comprises a hollowed-out portion.
In practical applications, the obtaining module 400, the first processing module 401, and the second processing module 402 may all be implemented by a processor in an electronic device, where the processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
In addition, the functional modules in this embodiment may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional module.
Based on such understanding, the technical solution of this embodiment, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Specifically, the computer program instructions corresponding to the image synchronization method in the present embodiment may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive; when the computer program instructions corresponding to the image synchronization method in the storage medium are read and executed by an electronic device, any of the image synchronization methods of the foregoing embodiments is implemented.
Based on the same technical concept as the foregoing embodiments, and referring to Fig. 5, an electronic device 5 provided by an embodiment of the present disclosure is shown, which may include: a memory 501 and a processor 502, wherein:
the memory 501 is used for storing computer programs and data;
the processor 502 is configured to execute the computer program stored in the memory to implement any one of the image synchronization methods of the foregoing embodiments.
In practical applications, the memory 501 may be a volatile memory, such as a RAM; or a non-volatile memory, such as a ROM, a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); or a combination of the above types of memories, and it provides instructions and data to the processor 502.
The processor 502 may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor. It can be understood that other electronic devices may also be used to implement the functions of the above processor, and the embodiments of the present disclosure are not specifically limited in this respect.
In some embodiments, functions of, or modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments; for their specific implementation, reference may be made to the description of the above method embodiments, which, for brevity, is not repeated here.
The foregoing description of the various embodiments tends to emphasize the differences between the embodiments; for the same or similar parts, reference may be made to one another, and they are not repeated herein for brevity.
The methods disclosed in the method embodiments provided by the present disclosure may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in the various product embodiments provided by the disclosure may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the various method or apparatus embodiments provided by the present disclosure may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware; in many cases, however, the former is the preferable implementation. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (e.g., a ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present disclosure.
While the embodiments of the present disclosure have been described with reference to the drawings, the present disclosure is not limited to the specific embodiments described above, which are intended to be illustrative rather than limiting; in light of the present disclosure, it will be apparent to those of ordinary skill in the art that many modifications can be made without departing from the spirit of the disclosure and the scope of the appended claims.