CN114862828A - Light spot searching method and device, computer readable medium and electronic equipment - Google Patents


Info

Publication number: CN114862828A
Application number: CN202210600750.7A
Authority: CN (China)
Prior art keywords: light spot, spot, light, depth, image
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN114862828B (granted publication)
Inventors: 侯烨, 顾标平
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd (the listed assignees may be inaccurate)
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, with priority to CN202210600750.7A
Publication of CN114862828A; application granted; publication of CN114862828B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The disclosure provides a light spot searching method, a light spot searching apparatus, a computer-readable medium, and an electronic device, and relates to the technical field of ranging. The method comprises the following steps: acquiring a historical depth reference image corresponding to a current light intensity image collected by a speckle TOF camera; and determining a search area on the current light intensity image based on the historical depth reference image, and performing a light spot search within the search area. Because the search area is determined in advance and the search is confined to it, rather than searching the current light intensity image pixel by pixel, the computational load of the light spot search is reduced, the search is faster, and the time consumed by the light spot search is correspondingly shortened.

Description

Light spot searching method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of ranging technologies, and in particular, to a light spot searching method, a light spot searching apparatus, a computer-readable medium, and an electronic device.
Background
In the field of three-dimensional vision, the mature ranging schemes currently in use are binocular ranging, structured-light ranging, and Time-of-Flight (TOF) camera ranging. TOF cameras offer a wide measurement range, high measurement precision, a simple depth calculation method, and good adaptability to measurement scenes, and are therefore widely applied in ranging scenarios on devices such as mobile phones and AR equipment.
As an active-light depth detection device, a TOF camera comprises a transmitting end, a collimating mirror, a receiving lens, and a receiving end. Referring to fig. 1, during distance detection the transmitting end actively emits modulated infrared light, which passes through the collimating mirror and is reflected by an object in the ranging scene; the receiving lens and receiving end then receive the reflected infrared light, and the distance between the camera and the measured object is determined by calculating the time difference or phase relationship between transmission and reception, from which a depth image of the ranging scene is generated. According to the type of infrared light emitted by the transmitting end, TOF cameras can be divided into flood TOF cameras and speckle TOF cameras.
The transmitting end of a speckle TOF camera emits multiple discrete beams of infrared light; after these beams are reflected by objects in the ranging scene, the receiving end receives a light spot corresponding to each beam together with its intensity. The received light intensity image must then be searched for spots to determine the position of the spot corresponding to each beam. In the related art, determining the spot position for each beam usually requires a pixel-by-pixel search over the entire light intensity image, which is computationally expensive.
Disclosure of Invention
The present disclosure is directed to a light spot searching method, a light spot searching apparatus, a computer-readable medium, and an electronic device that reduce, at least to a certain extent, the computational load of light spot searching and increase its speed.
According to a first aspect of the present disclosure, there is provided a light spot searching method applied to a terminal device, where the terminal device includes a speckle TOF camera, including: acquiring a historical depth reference image corresponding to a current light intensity image acquired by a speckle TOF camera; determining a search area on the current light intensity image based on the historical depth reference image, and searching light spots in the search area; the number of pixels contained in the search area is smaller than that of the pixels contained in the current light intensity image.
According to a second aspect of the present disclosure, there is provided a light spot searching apparatus applied to a terminal device, where the terminal device includes a speckle TOF camera, including: the reference acquisition module is used for acquiring a historical depth reference image corresponding to the current light intensity image acquired by the speckle TOF camera; the light spot searching module is used for determining a searching area on the current light intensity image based on the historical depth reference image and searching light spots in the searching area; the number of pixels contained in the search area is smaller than that of the pixels contained in the current light intensity image.
According to a third aspect of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the above-mentioned method.
According to a fourth aspect of the present disclosure, there is provided an electronic device, comprising: one or more processors; and memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method described above.
According to the light spot searching method provided by the embodiment of the disclosure, a searching area is determined on a current light intensity image by obtaining a historical depth reference image corresponding to the current light intensity image acquired by a speckle TOF camera, and light spot searching is performed in the searching area.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 schematically illustrates an imaging principle schematic diagram of a speckle TOF camera in an exemplary embodiment of the disclosure;
FIG. 2 illustrates a schematic diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
fig. 3 schematically illustrates a flow chart of a spot search method in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a method of determining a first search area in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a principle schematic diagram of a speckle TOF camera with spot position shift in an exemplary embodiment of the disclosure;
FIG. 6 is a schematic diagram schematically illustrating a downward shift in spot position in an exemplary embodiment of the disclosure;
FIG. 7 schematically illustrates a flow chart of a method of determining a first offset in an exemplary embodiment of the disclosure;
FIG. 8 schematically illustrates a flow chart of a method of determining a second search area in an exemplary embodiment of the present disclosure;
FIG. 9 schematically illustrates a flow chart of another method of determining a second search area in an exemplary embodiment of the present disclosure;
FIG. 10 is a schematic diagram schematically illustrating a distance between an object and a speckle TOF camera versus a pixel shift number in an exemplary embodiment of the present disclosure;
FIG. 11 schematically illustrates a flow chart of a method of determining a third search area in an exemplary embodiment of the present disclosure;
FIG. 12 schematically illustrates a flow chart of another method of determining a third search area in an exemplary embodiment of the present disclosure;
FIG. 13 schematically illustrates a flow chart of a method of determining a fourth search area in an exemplary embodiment of the present disclosure;
FIG. 14 schematically illustrates a flow chart of a method of conducting a supplemental search in an exemplary embodiment of the present disclosure;
fig. 15 is a schematic diagram illustrating a composition of a spot searching apparatus according to an exemplary embodiment of the present disclosure;
fig. 16 shows a schematic diagram of an electronic device to which an embodiment of the present disclosure may be applied.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 2 is a schematic diagram illustrating a system architecture of an exemplary application environment to which the spot searching method and apparatus according to the embodiment of the present disclosure may be applied.
As shown in fig. 2, the system architecture 200 may include one or more of terminal devices 201, 202, 203, a network 204, and a server 205. The network 204 serves as a medium for providing communication links between the terminal devices 201, 202, 203 and the server 205. Network 204 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. The terminal devices 201, 202, 203 may be terminal devices configured with speckle TOF cameras including, but not limited to, desktop computers, portable computers, smart phones, tablets, AR devices, and the like. It should be understood that the number of terminal devices, networks, and servers in fig. 2 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 205 may be a server cluster composed of a plurality of servers.
The spot search method provided by the embodiments of the present disclosure is generally executed by the terminal devices 201, 202, and 203, and accordingly the spot search apparatus is generally disposed in the terminal devices 201, 202, and 203. However, as those skilled in the art will readily understand, the spot search method may also be executed by the server 205, in which case the spot search apparatus may be disposed in the server 205; this exemplary embodiment imposes no particular limitation. For example, in an exemplary embodiment, the server 205 may obtain, through the network 204, the current light intensity image acquired by a speckle TOF camera configured in the terminal devices 201, 202, 203 together with its corresponding historical depth reference image, determine a search area on the current light intensity image based on the historical depth reference image, perform a light spot search within that area, and feed the search result back to the terminal devices 201, 202, 203 through the network 204.
The present exemplary embodiment provides a spot search method, which may include the following steps S310 and S320, as shown in fig. 3:
in step S310, a historical depth reference image corresponding to the current light intensity image collected by the speckle TOF camera is obtained.
In a speckle TOF camera, the infrared light emitted by the transmitting end is not flood (surface) light but a plurality of discrete beams.
The historical depth reference image may comprise an image, acquired by the speckle TOF camera within some past time interval, that characterizes the scene depth features captured by each light spot. Specifically, when the speckle TOF camera generates a depth image, only the areas illuminated by light spots have valid depth values; the remaining areas have invalid depth values (i.e., dark-area depth), and any depth values that do appear there are usually noise. (Each light spot may illuminate several pixels simultaneously, but after algorithmic processing those pixels usually yield only one depth value, that is, one light spot produces a depth value for exactly one pixel.) Therefore, the depth value corresponding to each reference light spot in the historical depth reference image characterizes the scene depth feature captured by that reference light spot within a certain historical time interval.
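As an illustration of this property, the valid per-spot depth values can be extracted from a depth image with a simple mask. This is a minimal sketch, assuming (hypothetically) that invalid dark-area pixels are stored as zero and that each spot has already been reduced to a single valid pixel:

```python
import numpy as np

def extract_spot_depths(depth_image):
    """Return (row, col, depth) for every pixel holding a valid depth value.

    Assumes invalid (dark-area) pixels are stored as 0 and that each light
    spot contributes exactly one valid pixel after the camera's processing.
    """
    rows, cols = np.nonzero(depth_image)
    return [(int(r), int(c), float(depth_image[r, c])) for r, c in zip(rows, cols)]

# A toy 4x4 depth image with two spots (depth values in millimetres).
depth = np.zeros((4, 4))
depth[1, 2] = 850.0
depth[3, 0] = 1200.0
spots = extract_spot_depths(depth)
```

Only the two spot pixels survive the mask; every other pixel is treated as dark-area depth and ignored.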
For example, the historical depth reference image may be the frame of historical depth image immediately preceding the current depth image corresponding to the current light intensity image; it may be a historical depth image acquired some fixed interval before the current depth image; or it may be an image determined from multiple frames of historical depth images preceding the depth image corresponding to the current light intensity image, characterizing the scene depth features of those frames.
In an exemplary embodiment, when acquiring the historical depth reference image corresponding to the current light intensity image acquired by the speckle TOF camera, at least one frame of historical depth image acquired by the speckle TOF camera may be acquired, and then the corresponding historical depth reference image may be generated based on the at least one frame of historical depth image.
Wherein the historical depth image may comprise a depth image determined based on a historically collected historical light intensity image; and relative to the current light intensity image, the historical acquisition time of the historical light intensity image corresponding to the historical depth image is before the current acquisition time of the current light intensity image.
Furthermore, to ensure real-time performance, that is, to ensure that the scene depth features represented by the historical depth reference image still provide a useful reference for the light spot positions in the current light intensity image, historical depth images collected closer to the current moment may be used as the basis for generating the historical depth reference image. Specifically, the historical acquisition time may be required to lie within a preset time interval of the current acquisition time.
In addition, when generating depth values, the weaker the infrared light received by the receiving end of the TOF camera, the less reliable the calculated depth value generally is. Within a certain light intensity range (provided no overexposure occurs), the stronger the received infrared light, the more reliable the calculated depth value. Therefore, when the depth image is generated, the light intensity of the infrared light used to calculate each depth value may also be recorded; that is, a confidence image corresponding to the depth image may also be generated. The historical depth reference image may then be generated in conjunction with this confidence image.
In an exemplary embodiment, only one frame of historical depth image may be collected and used directly as the historical depth reference image. For example, the previous frame of historical depth image acquired by the TOF camera may be used directly as the historical depth reference image; alternatively, any one of the previous N frames of historical depth images may be used.
In an exemplary embodiment, multiple frames of historical depth images may also be acquired, and the historical depth reference image corresponding to them generated according to a preset calculation mode. For example, the previous frame and the frame before it may be acquired and compared: for pixels whose depths are consistent, the depth value is retained as the depth reference value, while for pixels whose depths are inconsistent, the average of the depth values may be taken, yielding the historical depth reference image. Alternatively, for pixels with inconsistent depths, the depth value with the higher confidence may be retained as the depth reference value, or the confidences may be used as weights to compute a weighted depth reference value. In addition, when generating the historical depth reference image from multiple frames, the frequency with which each depth value is produced by the same light spot can be counted, and the depth reference value for each light spot determined from that frequency. Time-domain filtering may also be applied: the raw images corresponding to the multiple historical depth frames are filtered to obtain historical depth values with random interference noise removed, producing a more accurate historical depth reference image.
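The two-frame merging mode just described can be sketched as follows. The agreement tolerance `tol` is an assumed parameter for illustration; the disclosure does not fix a particular threshold for depth consistency:

```python
import numpy as np

def merge_history(depth_a, conf_a, depth_b, conf_b, tol=10.0):
    """Merge two historical depth frames into one reference frame.

    Where the two depths agree within `tol` (assumed units), keep the value;
    where they disagree, take a confidence-weighted average, one of the
    calculation modes described above.
    """
    consistent = np.abs(depth_a - depth_b) <= tol
    weights = conf_a + conf_b
    weights = np.where(weights == 0, 1.0, weights)  # guard against zero confidence
    weighted = (depth_a * conf_a + depth_b * conf_b) / weights
    return np.where(consistent, depth_a, weighted)

d1 = np.array([[1000.0, 500.0]])
d2 = np.array([[1004.0, 700.0]])  # first pixel consistent, second not
c1 = np.array([[1.0, 3.0]])
c2 = np.array([[1.0, 1.0]])
ref = merge_history(d1, c1, d2, c2)
```

The consistent pixel keeps its depth (1000), while the inconsistent one becomes the confidence-weighted average (500·3 + 700·1) / 4 = 550.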
It should be noted that, to ensure that the scene depth features represented by the historical depth reference image provide a useful reference for the spot positions in the current light intensity image, a corresponding historical depth reference image needs to be obtained each time a current light intensity image is acquired. Specifically, the historical depth reference image corresponding to the current light intensity image may be re-derived in any of the ways described above. For example, when multiple frames of historical depth images are used: if the depth image generated from the current light intensity image is the 6th frame, the 1st to 5th frames may be used to determine the historical depth reference image; when the depth image generated from the current light intensity image is the 7th frame, the 2nd to 6th frames may be used. When a single frame is used: if the depth image generated from the current light intensity image is the 6th frame, the 5th frame may serve as the historical depth reference image; when it is the 7th frame, the 6th frame may be used directly.
Furthermore, to reduce the computation needed to obtain the historical depth reference image, the historical depth reference image corresponding to the current light intensity image can be generated directly from the previous frame's historical depth reference image and the depth image corresponding to the previous light intensity image. Continuing the example above, when multiple frames of historical depth images are used and the depth image generated from the current light intensity image is the 6th frame, the 1st to 5th frames determine the historical depth reference image; when the depth image generated from the current light intensity image is the 7th frame, the historical depth reference image corresponding to the 6th-frame depth image already contains the depth information of frames 1 to 5, so the 6th-frame depth image can be treated as the newly added historical depth image and combined directly with that reference image to generate the historical depth reference image for the current light intensity image.
When the historical depth reference image corresponding to the current light intensity image is generated directly from the previous frame's historical depth reference image and the depth image corresponding to the previous light intensity image, any generation mode may be used for the calculation; the present disclosure does not specifically limit this. However, to preserve real-time performance, the depth image corresponding to the previous light intensity image can be given greater influence on the result. For example, the previous historical depth reference image and the depth image corresponding to the previous light intensity image may be combined by weighted averaging, with a larger weight assigned to the depth image of the previous light intensity image, so that scene depth information closer to the current frame is retained in the historical depth reference image to a greater extent.
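This rolling update can be sketched as a simple weighted average. The weight `alpha` favoring the newest frame is an illustrative assumption; the disclosure only requires that the newer depth image carry the larger weight:

```python
import numpy as np

def update_reference(prev_reference, latest_depth, alpha=0.7):
    """Roll the historical depth reference forward by one frame.

    `alpha` (assumed value) weights the newest depth image more heavily,
    so scene depth closer to the current frame dominates the reference.
    """
    return alpha * latest_depth + (1.0 - alpha) * prev_reference

reference = np.array([[1000.0]])
new_frame = np.array([[1100.0]])
reference = update_reference(reference, new_frame)
```

With alpha = 0.7, the updated reference is 0.7·1100 + 0.3·1000 = 1070, pulled mostly toward the newest frame.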
In step S320, a search area is determined on the current intensity image based on the historical depth reference image, and a spot search is performed within the search area.
The search area determined on the current light intensity image from the historical depth reference image contains fewer pixels than the current light intensity image. That is, a smaller search range can be determined based on the historical depth reference image, which reduces the computational load of spot searching and increases its speed.
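The overall idea can be sketched as follows: instead of scanning every pixel, each spot is sought only inside a small window centered on its reference position. The window half-size and the brightest-pixel criterion are illustrative assumptions, not the disclosure's exact search procedure:

```python
import numpy as np

def search_spots(intensity, ref_positions, half=2):
    """Find each spot in a small window around its reference position.

    `ref_positions` are (row, col) spot positions taken from the historical
    depth reference image; `half` is an assumed window half-size. Returns
    the brightest pixel of each window instead of scanning the whole image.
    """
    h, w = intensity.shape
    found = []
    for r, c in ref_positions:
        r0, r1 = max(r - half, 0), min(r + half + 1, h)
        c0, c1 = max(c - half, 0), min(c + half + 1, w)
        window = intensity[r0:r1, c0:c1]
        dr, dc = np.unravel_index(np.argmax(window), window.shape)
        found.append((r0 + int(dr), c0 + int(dc)))
    return found

img = np.zeros((8, 8))
img[3, 4] = 255.0                 # actual spot, shifted from reference (2, 4)
spots = search_spots(img, [(2, 4)])
```

Each window covers only (2·half + 1)² pixels, so the cost scales with the number of spots rather than the full image size.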
In an exemplary embodiment, when determining the search region on the current light intensity image based on the historical depth reference image, the determination may be performed according to the first position template and the preset offset function, and as shown in fig. 4, the steps S410 to S430 may be included:
in step S410, a first position template and a preset offset function are acquired.
The first position template comprises at least one first light spot position acquired at a first acquisition distance, i.e., the position in the light intensity image of at least one light spot of the speckle TOF camera when the scene distance equals the first acquisition distance; this position can generally be represented by pixel coordinates.
Referring to fig. 5, once a particular speckle TOF camera has been designed, a fixed base distance exists between its transmitting end and receiving end, so the field of view may shift: after an emitted infrared beam is reflected by object 1 and object 2, which lie at different distances from the camera, the coordinates at which it lands on the sensor's active area change, and the corresponding spot position shifts by different amounts. For example, in a certain speckle TOF camera, if the distances between object 1 and object 2 in fig. 5 and the camera are d1 and d2 respectively, the position of spot A formed by the same beam is shifted visibly downward (see fig. 6). Therefore, when the first, second, and third position templates are acquired, the spot positions corresponding to the depth image generated when all objects in the scene lie at one specific distance (the first, second, or third acquisition distance) can be used as the position template; the template contains the position coordinates of each spot on the depth image at the corresponding acquisition distance.
It should be noted that, in some embodiments, each pixel on the historical depth image corresponds to each pixel on the confidence image in a one-to-one manner. That is, the pixel position of the historical depth value generated by a certain light spot on the historical depth image is consistent with the pixel position on the corresponding confidence image. At this time, when the historical depth reference image is generated by combining the confidence image, the confidence corresponding to each depth value can be directly obtained through the pixel position. At this time, the actual spot position may coincide with the spot position shown by the historical depth image.
In other embodiments, a spot may be represented by more than one pixel on the confidence image. For example, a certain spot may cause 9 pixels on the confidence image to be illuminated, and at the same time, the 9 pixels on the confidence image have confidence, but when the historical depth image is generated, only 1 depth value is solved through the spot, that is, the depth value needs to be assigned to a certain 1 pixel on the depth map. At this time, 1 assigned pixel may be determined among the 9 pixels, and a depth value is given to a pixel on the depth image corresponding to the assigned pixel. For example, the assigned pixel may be the pixel with the highest confidence among 9 pixels, or may be the pixel corresponding to the luminance gravity center position of the light spot. At this time, the actual spot position may be slightly different from the spot position shown in the historical depth image. In this case, the method used to obtain the first position template, the second position template, and the third position template should be consistent with the way the historical depth reference image is obtained. For example, in determining the historical depth reference image, the assigned pixel is the highest-confidence pixel of the 9 pixels. Correspondingly, when the position template is obtained, the assigned pixel should also be the pixel with the highest confidence among the 9 pixels.
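The highest-confidence policy for choosing the assigned pixel can be sketched as follows, using a hypothetical 3x3 patch of 9 illuminated pixels; the brightness-centroid policy mentioned above would be an alternative choice:

```python
import numpy as np

def assign_pixel(confidences, pixels):
    """Pick the single pixel that receives the spot's depth value.

    Of the pixels a spot illuminates on the confidence image, return the
    one with the highest confidence (one of the two policies described).
    """
    best = int(np.argmax(confidences))
    return pixels[best]

# A hypothetical spot illuminating a 3x3 patch of 9 pixels.
pixels = [(r, c) for r in range(3) for c in range(3)]
confs = np.array([0.1, 0.2, 0.1, 0.3, 0.9, 0.3, 0.1, 0.2, 0.1])
assigned = assign_pixel(confs, pixels)
```

Whichever policy is chosen, the same one must be used both when building the position templates and when building the historical depth reference image, as the text above requires.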
The preset offset function may include a function for characterizing a relationship between a distance (depth) between an object in the scene and the speckle TOF camera and a position offset of the position of the light spot. It should be noted that the position offset is a relative quantity, so that a fixed spot position is required to be used as a reference, and the fixed spot position may be a position of each spot when the distance between the object and the speckle TOF camera is a fixed distance. When the preset offset function is obtained, the determination may be performed in different manners, which is not particularly limited in this disclosure. For example, theoretical calculation can be performed according to a fixed base distance existing between the transmitting end and the receiving end of the speckle TOF camera; as another example, calibration may be performed experimentally.
Specifically, when theoretical calculation is performed based on the fixed baseline distance, theoretical analysis shows that the position offset (including the offset direction and the offset distance) is related to the distance between the speckle TOF camera and the object, the fixed baseline distance between the transmitting end and the receiving end, the focal length of the receiving-end lens, and similar factors, and follows an inverse proportional relationship with the distance. Once a speckle TOF camera is designed, the fixed baseline distance between the transmitting end and the receiving end and the focal length of the receiving-end lens are usually fixed, so the position offset is a function whose sole independent variable is the distance between the speckle TOF camera and the object. The following equation can then be obtained:
Δ = f_r · b · (1/d − 1/d3)
where f_r represents the focal length of the receiving-end lens; b represents the fixed baseline distance between the transmitting end and the receiving end; d3 represents the distance between the speckle TOF camera and a certain plane 3; d represents the distance between the speckle TOF camera and the object; and Δ represents the position offset of the object's spot relative to the spot position corresponding to plane 3, including the offset direction and the offset distance, which can generally be expressed as a number of offset pixels.
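As an illustration, the relationship above can be sketched in code. The helper below is a hypothetical implementation that evaluates Δ from f_r (expressed in pixels), b, d3, and d; the function name, argument names, and units are assumptions for illustration, not the patent's implementation.

```python
def position_offset(fr_px, b, d3, d):
    """Position offset (in pixels) of an object's spot relative to the
    spot position observed for reference plane 3.

    fr_px: receiving-end lens focal length expressed in pixels
    b:     fixed baseline between transmitting and receiving ends (m)
    d3:    distance from the camera to reference plane 3 (m)
    d:     distance from the camera to the object (m)
    """
    # Delta = f_r * b * (1/d - 1/d3); the sign encodes the offset direction.
    return fr_px * b * (1.0 / d - 1.0 / d3)

# An object nearer than plane 3 shifts one way, a farther object the other.
near = position_offset(fr_px=600.0, b=0.05, d3=1.0, d=0.5)
far = position_offset(fr_px=600.0, b=0.05, d3=1.0, d=2.0)
```

Note that the offset grows much faster at short range, consistent with the inverse proportional relationship discussed above.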
Specifically, when calibration is performed experimentally, the speckle TOF camera may be used to capture objects at different distances, so as to obtain the depth images corresponding to those distances, and the position offsets in the depth images are calculated. In theory, only 2 distances are needed for fitting; for accuracy, however, multiple groups may be selected, for example, 5 groups or more. In general, the same distance difference causes a larger position offset at a short distance than at a long distance, so more short distances may be included when selecting the different distances. A certain light spot, the light spots in a certain area, or a plurality of randomly selected light spots are then position-tracked, so as to fit the functional relationship of the position offset with respect to the distance between the speckle TOF camera and the object.
It should be noted that, when calibration is performed experimentally, the objects at the different distances are generally large planes with a certain flatness, which approximately satisfy a diffuse reflection model and whose reflectivity is greater than a set reflectivity threshold, so as to ensure that the acquired depth data is accurate and reliable. The position of the speckle TOF camera or of the large plane is then moved, and the corresponding depth image is captured. During capture, the speckle TOF camera must face the plane squarely, that is, the imaging surface of the receiving-end sensor is completely parallel to the large plane, or the included angle between the imaging surface and the large plane is smaller than an included-angle threshold; the deviation between the actual shooting distance and the distance adopted for calibration must also be smaller than a distance threshold. The included-angle threshold and the distance threshold can be obtained empirically and from the performance of the speckle TOF camera.
Meanwhile, one frame or multiple frames may be collected for calibration at the same distance. When multiple frames are collected for calibration, the original images (raw images) corresponding to the multiple depth images can be filtered by temporal filtering, so as to obtain depth values with random interference noise removed.
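A minimal sketch of the temporal filtering step, assuming the raw frames are simple per-pixel intensity grids: averaging corresponding pixels across frames suppresses random interference noise. The data layout is hypothetical.

```python
def temporal_filter(frames):
    """Average corresponding pixels over a list of raw frames.

    frames: list of equal-shaped 2-D lists (one per captured frame).
    Returns a single 2-D list of temporally averaged values.
    """
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

# Three noisy captures of the same 1x3 raw line; the mean removes jitter.
frames = [[[10, 20, 30]], [[12, 18, 30]], [[11, 19, 30]]]
filtered = temporal_filter(frames)  # [[11.0, 19.0, 30.0]]
```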
In addition, when a light spot is selected for position tracking, whether the position offset corresponding to that light spot is accurate can be judged from the confidence of the depth value calculated from the light spot. For example, a confidence threshold may be set; when the confidence of a light spot is higher than the confidence threshold, the position offset calculated based on that light spot is considered reliable. If the confidence of the light spot is lower than the confidence threshold at a certain distance, calibration may proceed by increasing the infrared emission power, increasing the integration time of the receiving end, or substituting a plane with higher reflectivity. Compared with obtaining the preset offset function through theoretical calculation, the preset offset function obtained through experimental calibration covers more error sources and better matches actual conditions.
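Experimental calibration can be sketched as fitting the measured offsets against inverse distance. Under the assumed model Δ = k/d + c (consistent with the inverse proportional form discussed above), an ordinary least-squares fit over the calibration pairs recovers k and c. The data, the model form, and the function name are illustrative assumptions.

```python
def fit_offset_function(samples):
    """Fit Delta = k / d + c by least squares over (d, Delta) pairs.

    samples: list of (distance, measured_offset) calibration points.
    Returns (k, c).
    """
    xs = [1.0 / d for d, _ in samples]      # regress on inverse distance
    ys = [delta for _, delta in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = my - k * mx
    return k, c

# Five calibration planes, more of them at short range where the offset
# changes fastest; the data lie on Delta = 30 / d exactly.
data = [(0.3, 100.0), (0.5, 60.0), (0.8, 37.5), (1.5, 20.0), (3.0, 10.0)]
k, c = fit_offset_function(data)
```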
In step S420, a first offset corresponding to each first spot position in the first position template is calculated based on the historical depth reference image and a preset offset function.
In an exemplary embodiment, the depth value corresponding to each reference spot in the historical depth reference image can characterize the depth of the scene acquired by the speckle TOF camera within a certain historical time interval; therefore, the position offset relative to the first acquisition distance can be calculated based on the historical depth reference value and the preset offset function.
Specifically, when calculating the first offset, as shown in fig. 7, the following steps S710 and S720 may be included:
in step S710, the depth value corresponding to each reference spot in the historical depth reference image is obtained, and the first spot position corresponding to each reference spot is determined in the first position template.
Specifically, the number of infrared light beams emitted by the speckle TOF camera is fixed, and the receiving end generates one light spot for each beam of reflected light it receives. Therefore, the reference light spots in the historical depth reference image correspond one-to-one with the first light spot positions in the first position template. For example, if the speckle TOF camera emits 9 light beams, then 9 reference light spots of the historical depth reference image are generated, the first position template contains the first light spot positions corresponding to the 9 light spots, and the reference light spot generated by a given light beam corresponds to a determined first light spot position. Thus, the first light spot position of each reference light spot can be determined in the first position template based on this correspondence.
In step S720, a position offset corresponding to each reference spot is calculated according to a preset offset function and the depth value, and the position offset is determined as a first offset corresponding to a first spot position corresponding to the reference spot.
In an exemplary embodiment, after the depth value corresponding to each reference light spot is obtained, the position offset relative to the first light spot position corresponding to each reference light spot may be calculated according to the depth value, the first acquisition distance, and the preset offset function, and this position offset is then determined as the first offset corresponding to the first light spot position of that reference light spot. It should be noted that the position offset calculated by the preset offset function is a relative quantity, and the position offset to be calculated here is the offset relative to the first light spot position. Therefore, in the calculation, the fixed spot position used as reference in the preset offset function may be the first light spot position, that is, the fixed spot position may be the spot position obtained when the distances of all objects in the scene equal the first acquisition distance.
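The per-spot computation of steps S710 and S720 can be sketched as follows, using the illustrative offset model from the calibration discussion with the first acquisition distance d1 as the reference; the spot identifiers, argument names, and model form are assumptions for illustration.

```python
def first_offsets(depths, d1, fr_px, b):
    """Compute the first offset for each reference spot.

    depths: dict mapping spot id -> depth value from the historical
            depth reference image.
    d1:     first acquisition distance (reference for the fixed positions).
    Returns dict mapping spot id -> offset in pixels relative to the
    corresponding first spot position.
    """
    return {sid: fr_px * b * (1.0 / d - 1.0 / d1)
            for sid, d in depths.items()}

# Spots at exactly the first acquisition distance need no shift.
offsets = first_offsets({0: 0.5, 1: 1.0, 2: 2.0}, d1=1.0,
                        fr_px=600.0, b=0.05)
```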
In step S430, the first light spot positions are shifted according to the first shift amount, and a first search area corresponding to the shifted first light spot positions is determined on the current light intensity image.
In an exemplary embodiment, after obtaining the first offset corresponding to each light spot in the first position template, the first light spot position corresponding to each light spot may be offset to obtain an offset position, and then the first search area corresponding to the offset first light spot position is determined on the current light intensity image.
Specifically, the first light spot position may be represented by pixel coordinates, and the offset distance of the first offset may be represented by a number of offset pixels, so the shifted first light spot position can also be represented by pixel coordinates. Therefore, when the first search area corresponding to the shifted first light spot position is determined on the current light intensity image, the pixel coordinate corresponding to the shifted first light spot position can be located directly on the current light intensity image, and the first search area on the current light intensity image is then determined based on that pixel coordinate.
In an exemplary embodiment, the first search area may be determined from the pixel coordinate corresponding to the shifted first light spot position together with a preset strategy, the preset strategy being a preset rule for determining a region based on a pixel coordinate. For example, the preset strategy may be to take the region of 3 × 3 pixels or 5 × 5 pixels centered on a given pixel. In this case, the region of 3 × 3 pixels or 5 × 5 pixels centered on the pixel coordinate corresponding to the shifted first light spot position may be used as the first search area.
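The preset strategy of taking an n × n window around the shifted pixel coordinate can be sketched as below; clamping the window to the image bounds is an added assumption so that windows near the border remain valid.

```python
def search_area(center, size, width, height):
    """Pixel coordinates of a size x size window centered on `center`,
    clipped to a width x height image. `size` is assumed odd (e.g. 3 or 5)."""
    cx, cy = center
    half = size // 2
    return [(x, y)
            for y in range(max(0, cy - half), min(height, cy + half + 1))
            for x in range(max(0, cx - half), min(width, cx + half + 1))]

# A 3x3 window fully inside the image yields 9 candidate pixels; in a
# corner it shrinks instead of reading out of bounds.
inner = search_area((10, 10), 3, 640, 480)
corner = search_area((0, 0), 3, 640, 480)
```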
In some scenes, it is inevitable that the receiving end fails to receive the light spots corresponding to some light beams. These beams then cannot generate valid and reliable depth values; that is, in the historical depth reference image, these reference light spots have no corresponding depth value. Consequently, the first offset corresponding to the first light spot position of such a reference light spot cannot be calculated, and only the area corresponding to the unshifted first light spot position can be determined. Two situations may then occur: if the current scene is similar to the historical scene, a light spot search based on the unshifted first light spot position will, with high probability, find the wrong light spot; if the current scene differs greatly from the historical scene, the accuracy of a light spot found based on the unshifted first light spot position is unknown. In particular, if the position offset of a certain light spot is greater than or equal to the spacing between two light spots, two or more light spots may be found within a small area. According to theoretical analysis and actual tests, this situation is most likely when the distance between the object and the speckle TOF camera is small while the background of the scene is far away. That is, when the distance between the object and the speckle TOF camera is small, the scheme of calculating the first offset from the historical depth reference image and the preset offset function may fail.
Further, before calculating the first offset corresponding to each first light spot position in the first position template based on the historical depth reference image and the preset offset function, the historical depth reference image may be judged first, so as to avoid the above problem when the distance between the object and the speckle TOF camera is small. Specifically, as shown in fig. 8, the method may include steps S810 to S830:
in step S810, the depth values corresponding to the reference light spots in the historical depth reference image are counted, and the light spot ratio of the target light spot with the depth value smaller than the depth threshold is determined.
In an exemplary embodiment, the depth values corresponding to the reference light spots in the historical depth reference image may first be counted to determine the proportion, among all light spots, of target light spots whose depth value is smaller than the depth threshold. For example, given a total of 10 light spots, of which 7 are target light spots with depth values smaller than the depth threshold, the light spot proportion is 70%. The depth threshold is an empirically set value; in different application scenarios it may be set differently, which is not limited by the present disclosure.
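The statistic of step S810 amounts to a simple proportion; a sketch under the assumption that the spot depths are available as a flat list (the data shape and the 0.6 m threshold are hypothetical):

```python
def target_spot_ratio(depths, depth_threshold):
    """Fraction of reference spots whose depth is below the threshold."""
    targets = [d for d in depths if d < depth_threshold]
    return len(targets) / len(depths)

# 7 of 10 spots are closer than the (empirical) 0.6 m threshold -> 0.7.
ratio = target_spot_ratio(
    [0.3, 0.4, 0.5, 0.2, 0.35, 0.45, 0.55, 2.0, 3.0, 2.5], 0.6)
```

Comparing `ratio` against the proportion threshold then selects between the branches of steps S820 and S830.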
In step S820, when the ratio of the light spots is greater than the ratio threshold, a second search area corresponding to the target light spot is determined on the current light intensity image.
In an exemplary embodiment, when the light spot proportion is greater than the proportion threshold, it indicates that, within a certain historical time interval, a large proportion of the scene acquired by the speckle TOF camera consists of short-distance objects, that is, the scheme of calculating the first offset from the historical depth reference image and the preset offset function may fail.
In an exemplary embodiment, when determining the second search area corresponding to the target spot on the current intensity image, as shown in fig. 9, the method may include steps S910 to S930:
in step S910, neighboring light spots corresponding to the target light spots are obtained, and a second offset corresponding to the target light spots is calculated based on a preset offset function and depth values corresponding to the neighboring light spots.
In an exemplary embodiment, when two light spots are close to each other, they can be taken to represent the depth of the same object, so the depth value corresponding to an adjacent light spot can be used as an estimate of the depth value corresponding to the target light spot. In this case, for each target light spot, the adjacent light spot corresponding to the target light spot may be obtained first, and the second offset corresponding to the target light spot is then calculated from that depth value using the preset offset function.
In step S920, a first target spot position corresponding to each target spot is determined in the first position template.
In an exemplary embodiment, since the reference spot generated by the same beam and the determined first spot position correspond to each other, the first target spot position corresponding to each target spot may be determined in the first position template according to the mutual correspondence relationship.
In step S930, the first target light spot position corresponding to each target light spot is shifted according to the second shift amount, and a second search area corresponding to the shifted first target light spot position is determined on the current light intensity image.
The process of shifting the first target light spot position corresponding to each target light spot by the second offset, and of determining the second search area corresponding to the shifted first target light spot position on the current light intensity image, is similar to the process in step S430 of shifting each first light spot position by the first offset and determining the first search area corresponding to the shifted first light spot position on the current light intensity image; for details not disclosed here, reference may be made to that embodiment, and the description is not repeated.
It should be noted that, because the second offset is still a position offset relative to the first light spot position, when the second offset is calculated, the fixed light spot position used for reference in the preset offset function may also be the first light spot position, that is, the fixed light spot position may be a light spot position when distances of all objects in the scene are the first acquisition distance.
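Steps S910 to S930 can be sketched as follows: the target spot borrows the depth of its geometrically nearest neighboring spot that has a valid depth, and the second offset is evaluated relative to the first acquisition distance. The nearest-neighbor choice, the coordinate layout, and the offset model are illustrative assumptions.

```python
def second_offset(target_pos, neighbors, d1, fr_px, b):
    """Estimate the second offset for a target spot lacking a depth value.

    target_pos: (x, y) first position of the target spot.
    neighbors:  dict mapping (x, y) spot position -> valid depth value.
    d1:         first acquisition distance (reference position distance).
    """
    # Borrow the depth of the geometrically nearest valid neighbor.
    nearest = min(neighbors,
                  key=lambda p: (p[0] - target_pos[0]) ** 2 +
                                (p[1] - target_pos[1]) ** 2)
    d = neighbors[nearest]
    return fr_px * b * (1.0 / d - 1.0 / d1)

# The neighbor at (11, 10) is closest, so its 0.5 m depth is borrowed.
off = second_offset((10, 10), {(11, 10): 0.5, (40, 40): 2.0},
                    d1=1.0, fr_px=600.0, b=0.05)
```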
However, the method of using the depth value of an adjacent light spot as an estimate of the depth value of the target light spot has limitations. When a large number of invalid light spots exist, estimating in this way easily causes distortion; and even when two light spots are adjacent, it is difficult to ensure that they were both reflected by objects at the same distance.
In order to avoid the above limitation, in an exemplary embodiment, when the spot ratio is greater than the ratio threshold, the spot search on a pixel-by-pixel basis may be directly performed by using the entire current intensity image as the second search region.
Furthermore, if the entire current light intensity image is used directly as the second search area, the amount of calculation remains large. Therefore, the invalid areas where the large number of invalid light spots are located may first be determined in the historical depth reference image; the areas corresponding to those invalid areas are then determined on the current light intensity image as the second search area, which is searched pixel by pixel to find the pixels matching the light spot characteristics. Searching only the areas corresponding to the invalid areas pixel by pixel reduces part of the calculation compared with searching the entire current light intensity image pixel by pixel.
In step S830, when the spot ratio is smaller than or equal to the ratio threshold, a first offset corresponding to each first spot position in the first position template is calculated based on the historical depth reference image and a preset offset function.
In an exemplary embodiment, when the light spot proportion is smaller than or equal to the proportion threshold, it indicates that, within a certain historical time interval, only a small proportion of short-distance objects exist in the scene acquired by the speckle TOF camera; that is, the scheme of calculating the first offset from the historical depth reference image and the preset offset function does not fail. The first offset corresponding to each first light spot position in the first position template may then be calculated based on the historical depth reference image and the preset offset function, so as to determine the first search area for the light spot search.
Within the working range of the speckle TOF camera, the relationship between the object-to-camera distance and the number of offset pixels is not linear; as shown in fig. 10, the same distance difference causes a larger position offset at a short distance than at a long distance. For example, for a certain speckle TOF camera, when the object-to-camera distance changes by the same value a, the position offset of a light spot may reach dozens of pixels at a short distance, but only two or three pixels at a long distance. To determine the offset more accurately, the light spot positions collected at multiple distances may therefore be selected as templates for determining the search area.
Specifically, referring to fig. 11, determining a search region on the current intensity image based on the historical depth reference image may include steps S1110 to S1140:
in step S1110, a set of position templates is acquired.
The position template set comprises a plurality of second position templates; the second acquisition distances corresponding to the second position templates differ from one another, and each second position template comprises at least one second light spot position acquired at its corresponding second acquisition distance. That is, each second position template records the position, in the light intensity image, of at least one light spot of the speckle TOF camera when the scene distance equals the corresponding second acquisition distance, and may generally be represented by pixel coordinates.
In some embodiments, the second acquisition distances may also be determined to be uniformly distributed within the range of the speckle TOF camera to acquire the corresponding second position template, which is not particularly limited by the present disclosure.
It should be noted that, to determine the offset more accurately, the second acquisition distances corresponding to the second position templates may be set differently in different ranges of the object-to-camera distance. Specifically, for a certain speckle TOF camera, when the object-to-camera distance changes by the same value a: at long distances, the position offset of a light spot may amount to only two or three pixels, so a small number of second acquisition distances suffices for the second light spot positions to largely cover the range of possible spot offsets, and larger or smaller search areas can be obtained by setting different preset strategies rather than adding more second acquisition distances; at short distances, however, the position offset may amount to dozens of pixels and varies sharply, so a larger number of second acquisition distances must be set, that is, more second position templates must be added, to ensure that the second light spot positions largely cover the range of possible spot offsets. By arranging different numbers of second position templates in different distance ranges, more templates are placed where the position offset changes sharply, so that the position offset can be determined more accurately.
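One way to realize "more templates where the offset changes sharply" is to space the second acquisition distances uniformly in inverse depth rather than in depth, since the offset is approximately proportional to 1/d. This spacing rule is an illustrative assumption, not something mandated by the disclosure:

```python
def template_distances(d_min, d_max, count):
    """Second acquisition distances spaced uniformly in 1/d, so that
    adjacent templates differ by roughly equal pixel offsets."""
    inv_max, inv_min = 1.0 / d_min, 1.0 / d_max
    step = (inv_max - inv_min) / (count - 1)
    return [1.0 / (inv_max - i * step) for i in range(count)]

# Distances run from 0.3 m to 4.0 m and are packed densely at short range.
dists = template_distances(0.3, 4.0, 8)
```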
In step S1120, the depth values corresponding to the reference light spots in the historical depth reference image are obtained, and the target second position templates corresponding to the reference light spots are determined in the position template set based on the depth values.
In an exemplary embodiment, a target second position template corresponding to each reference light spot may be determined in the position template set from the depth value corresponding to that reference light spot in the historical depth reference image. For example, if the depth value corresponding to a certain reference spot 1 in the historical depth reference image is 1.035 m, the second position template whose second acquisition distance is 1.035 m can be selected as the target second position template corresponding to reference spot 1.
It should be noted that, in some embodiments, there may be no second position template whose second acquisition distance exactly equals the depth value corresponding to the reference light spot; in that case, the second position template whose second acquisition distance has the smallest difference from the depth value may be selected as the target second position template. For example, if the depth value corresponding to a certain reference spot 1 in the historical depth reference image is 1.035 m and the position template set contains only second position templates with second acquisition distances of 1 m and 1.5 m, the second position template with the second acquisition distance of 1 m is selected as the target second position template.
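The nearest-template selection of step S1120 can be sketched as a minimum over the absolute distance difference (the template distances and depth value are the illustrative figures from the example above):

```python
def pick_target_template(depth, template_distances):
    """Choose the second acquisition distance closest to the spot's depth."""
    return min(template_distances, key=lambda d2: abs(d2 - depth))

# With templates only at 1 m and 1.5 m, a 1.035 m spot maps to the 1 m one.
chosen = pick_target_template(1.035, [1.0, 1.5])
```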
In step S1130, a second target spot position corresponding to each reference spot is determined in the target second position template corresponding to each reference spot.
In an exemplary embodiment, after the target second position templates corresponding to the reference light spots are determined, the second target light spot position corresponding to each reference light spot needs to be determined in its target second position template.
Specifically, since the reference light spot generated by a given light beam corresponds to a determined light spot position, the second target light spot position corresponding to each reference light spot can be determined in its target second position template according to this correspondence. For example, if the depth value corresponding to a certain reference spot 1 in the historical depth reference image is 1.035 m, then after the second position template with the second acquisition distance of 1.035 m is selected as the target second position template corresponding to reference spot 1, the position on the target second position template of the light beam d that generates reference spot 1 can be used as the second target light spot position.
In step S1140, a third search area corresponding to the second target spot position is determined on the current intensity image.
The process of determining the third search area corresponding to the second target light spot position on the current light intensity image is similar to the process in step S430 of determining the first search area corresponding to the shifted first light spot position on the current light intensity image; for details not disclosed here, reference may be made to that embodiment, and the description is not repeated.
In addition, in some embodiments, a plurality of second position templates may be selected as target second position templates for a single reference light spot; the second target light spot positions corresponding to the reference light spot are then determined in each target second position template, and the third search areas corresponding to each second target light spot position are determined on the current light intensity image and searched separately. After searching, the second target light spot position with the highest confidence, or the one best matching the light spot characteristics, can be selected as the position at which the light beam corresponding to the reference light spot is projected on the current light intensity image; that is, the light spot position formed by that light beam on the current light intensity image is the second target light spot position with the highest confidence or the best match to the light spot characteristics. The manner of selecting among the plurality of second target light spot positions is not particularly limited by this disclosure.
For example, if the depth value corresponding to a certain reference spot 1 in the historical depth reference image is 1.035 m, the second position templates with second acquisition distances of 0.9 m, 1.0 m, and 1.5 m can be selected simultaneously as target second position templates, and the second target light spot positions corresponding to reference spot 1 are determined in each of the three target second position templates, giving three second target light spot positions. Three third search areas are then determined on the current light intensity image from the three second target light spot positions, and each is searched for the light spot. After the light spot search, the position best matching the light spot characteristics can be selected as the projection position, on the current light intensity image, of the light beam corresponding to reference spot 1.
Further, in an exemplary embodiment, when there are multiple templates, the number of templates can be reduced by combining them with the preset offset function. Specifically, the second acquisition distances corresponding to the second position templates may be set to be sparsely distributed over the entire ranging range of the speckle TOF camera. In this case, determining the third search area corresponding to the second target light spot position on the current light intensity image, as shown in fig. 12, may include the following steps S1210 and S1220:
in step S1210, a preset offset function is obtained, and a third offset corresponding to the second target spot position is calculated according to the preset offset function and the depth values corresponding to the reference spots.
In an exemplary embodiment, after the second target spot position is obtained, a third offset corresponding to the second target spot position determined by the reference spot may be calculated based on a preset offset function and a depth value corresponding to the reference spot.
It should be noted that, because the third offset is a position offset relative to the second target light spot position, when the third offset is calculated, the fixed spot position used as reference in the preset offset function may also be the second target light spot position; that is, the fixed spot position may be the spot position obtained when the distances of all objects in the scene equal the second acquisition distance, namely the second acquisition distance corresponding to the target second position template in which the second target light spot is located.
In step S1220, the positions of the second target light spots are shifted according to the third shift amount, and a third search area corresponding to the shifted positions of the second target light spots is determined on the current light intensity image.
In an exemplary embodiment, after the third offset is obtained, the corresponding second target light spot position may be shifted by the third offset, and the third search area corresponding to the shifted second target light spot position is determined on the current light intensity image for the light spot search. The process of determining the third search area corresponding to the shifted second target light spot position on the current light intensity image is similar to the process in step S430 of determining the first search area corresponding to the shifted first light spot position; for details not disclosed here, reference may be made to that embodiment, and the description is not repeated.
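Combining a sparse template set with the preset offset function (steps S1210 and S1220) can be sketched as follows: the nearest template supplies the coarse spot position, and the residual third offset is evaluated relative to that template's second acquisition distance. Only the x-coordinate is shown for brevity; the names and the offset model are illustrative assumptions.

```python
def shifted_target_position(depth, templates, fr_px, b):
    """Coarse position from the nearest template plus a fine residual shift.

    templates: dict mapping second acquisition distance -> x-coordinate of
               this beam's spot in the corresponding second position template.
    depth:     depth value of the reference spot.
    """
    d2 = min(templates, key=lambda t: abs(t - depth))
    third_offset = fr_px * b * (1.0 / depth - 1.0 / d2)  # relative to d2
    return templates[d2] + round(third_offset)

# Sparse templates at 0.5 m and 2.0 m; a 0.55 m spot starts from the
# 0.5 m template and is nudged by the small residual offset.
pos = shifted_target_position(0.55, {0.5: 120, 2.0: 100},
                              fr_px=600.0, b=0.05)
```

Because the residual offset spans only the gap to the nearest template distance, it stays small even where the offset function varies sharply, which is why a sparse template set suffices here.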
In addition, when a plurality of templates are selected as the target second position templates, the calculation and the offset based on the preset offset function may be performed for each target second position template, and the third search area corresponding to each second target spot position may be determined on the current light intensity image. After the spot search, the second target spot position with the highest confidence coefficient or the best fit spot feature can be selected as the position of the reference spot corresponding to the beam projected on the current intensity image.
By arranging a plurality of second position templates in the position template set, the amount of calculation for the light spot search of the speckle TOF camera can be reduced at the cost of the storage space occupied by the templates in the speckle TOF camera. In addition, because the plurality of templates can cover the possible position offsets of the light spots to a greater extent, the light spots found are more accurate and reliable than with a single template.
In an exemplary embodiment, when determining the search region on the current light intensity image based on the historical depth reference image, the search region may also be determined directly depending on the historical depth reference image. Specifically, as shown in fig. 13, the method may include steps S1310 and S1320:
in step S1310, the reference spot position corresponding to each reference spot is determined according to the historical depth reference image.
In an exemplary embodiment, after the historical depth reference image is obtained, the reference light spot position corresponding to each reference light spot in the historical depth reference image can be directly determined. Specifically, when the speckle TOF camera generates a depth image, only the areas illuminated by the light spots can generate depth values; therefore, in the historical depth reference image, the positions with depth values are the reference light spot positions corresponding to the reference light spots.
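A minimal sketch of this step, assuming the historical depth reference image is a NumPy array in which pixels not illuminated by a spot hold 0 as an invalid-depth marker (an assumed convention):

```python
import numpy as np

def reference_spot_positions(depth_image, invalid_value=0):
    """Return the (row, col) coordinates of all pixels carrying a valid
    depth value; in a speckle TOF depth image these are exactly the
    positions illuminated by the reference spots."""
    rows, cols = np.nonzero(depth_image != invalid_value)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy 4x4 historical depth reference image containing two spots.
depth = np.zeros((4, 4))
depth[1, 2] = 1.5   # depth in metres at one spot
depth[3, 0] = 0.8
print(reference_spot_positions(depth))  # [(1, 2), (3, 0)]
```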
In step S1320, a fourth search area corresponding to the reference spot position is determined on the current intensity image.
In an exemplary embodiment, after the reference spot positions are determined, the fourth search areas corresponding to the reference spot positions may be determined on the current intensity image. It should be noted that the process of determining the fourth search area corresponding to a reference spot position on the current light intensity image is similar to the process of determining the first search area corresponding to the offset first spot position on the current light intensity image in step S430; for details not disclosed here, reference may be made to that embodiment, and they are not repeated.
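One plausible realization of "determine a search area corresponding to a spot position" is a small rectangular window centred on the position and clamped to the image bounds; the radius below is an illustrative choice, not a value from the disclosure:

```python
def search_window(center, image_shape, radius=3):
    """Rectangular search area of at most (2*radius+1)^2 pixels centred
    on a predicted spot position -- far fewer pixels than the full light
    intensity image, which is the point of the method."""
    r, c = center
    h, w = image_shape
    r0, r1 = max(0, r - radius), min(h, r + radius + 1)
    c0, c1 = max(0, c - radius), min(w, c + radius + 1)
    return (r0, r1, c0, c1)   # half-open row/col bounds

print(search_window((5, 5), (480, 640)))    # (2, 9, 2, 9)
print(search_window((0, 639), (480, 640)))  # clamped at the border: (0, 4, 636, 640)
```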
Further, in an exemplary embodiment, some spots may fail to generate depth values in the historical depth reference image or the generated depth values may be invalid due to objects being at an excessive distance or objects having a low reflectivity. Accordingly, a supplemental search may be conducted based on the template. Specifically, as shown in fig. 14, the method may include step S1410 and step S1420:
in step S1410, a third position template is acquired.
The third position template comprises at least one third light spot position acquired at a third acquisition distance, that is, the position (generally expressed in pixel coordinates) of at least one light spot of the speckle TOF camera in the light intensity image when the scene distance is the third acquisition distance.
In step S1420, a supplementary search is performed on the current intensity image according to the respective reference spot and the third spot positions.
In an exemplary embodiment, after performing the spot search on the fourth search area determined based on the historical depth reference image, some spots may not generate depth values in the historical depth reference image or the generated depth values are invalid due to the object being at an ultra-far distance or the reflectivity of the object being low, and therefore the fourth search area may not search for spots generated by all light beams emitted by the speckle TOF camera. At this time, a supplementary search may be performed on the current intensity image based on the reference spot and the third spot position.
In an exemplary embodiment, a part of the spot positions may already be determined from the reference spots in the historical depth reference image, so the third spot positions corresponding to the spots for which no depth value was generated may be determined first, and the supplementary search then performed for those spots. Specifically, the third target spot positions corresponding to the reference spots present in the historical depth reference image may be determined in the third position template; the third spot positions in the third position template other than the third target spot positions are then taken as the third remaining spot positions, supplementary search areas corresponding to the third remaining spot positions are determined on the current light intensity image, and the supplementary search for light spots is performed in those areas.
For example, assume a speckle TOF camera emits 9 infrared beams in total, so that there are theoretically 9 valid depth values in the historical depth reference image, and the third position template includes the third spot positions corresponding to the 9 spots. However, only 6 spots appear in the historical depth reference image because some objects in the scene are too far from the speckle TOF camera, or because some objects have too low a reflectivity. In this case, based on the one-to-one correspondence between light beams and light spots, the third target light spot positions corresponding to the 6 found spots are determined in the third position template, the remaining 3 third light spot positions in the third position template are taken as the third remaining light spot positions, and the supplementary search areas are determined from these positions and the supplementary search performed.
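The 9-beam example can be sketched as follows, with the third position template represented as a hypothetical mapping from beam id to spot position:

```python
def remaining_template_positions(template_positions, found_beam_ids):
    """Given a third position template (beam id -> spot position) and the
    beam ids whose spots were found via the historical depth reference
    image, return the 'third remaining spot positions' that still need a
    supplementary search."""
    return {beam: pos for beam, pos in template_positions.items()
            if beam not in found_beam_ids}

# 9 beams in the template; spots for only 6 beams produced depth values.
template = {i: (10 * i, 10 * i) for i in range(9)}
found = {0, 1, 2, 3, 4, 5}
remaining = remaining_template_positions(template, found)
print(sorted(remaining))  # [6, 7, 8]
```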
Compared with the scheme of searching directly based on the position template, this light spot searching method of first searching the fourth search area determined from the historical depth reference image and then performing a supplementary search based on the third position template can save a certain amount of calculation, and is suitable for scenes with a higher frame rate, a higher real-time requirement, and few ultra-far-distance or low-reflectivity objects, or scenes in which the object distances change little.
In an exemplary embodiment, when performing the spot search on a search area, the maximum value of the light intensity in the search area may be searched directly to determine the position of the spot, or the position of the spot may be determined based on whether the maximum value and the light intensity distribution around it conform to a Gaussian profile; the present disclosure does not limit this. The search area on which the spot search is actually performed may be determined according to the scheme actually adopted; for example, it may be the first search area, the second search area, the third search area, the fourth search area, the supplementary search area, or the fourth search area together with the supplementary search area, which is not particularly limited in this disclosure.
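A sketch of the in-region search; the peak-above-neighbourhood check below is a simplified stand-in for the Gaussian-profile test mentioned above, not the disclosure's exact criterion:

```python
import numpy as np

def find_spot(intensity, region, min_peak=0.0):
    """Locate a spot inside a search region by taking the intensity
    maximum, then apply a simple plausibility check: a real spot
    should dominate the mean of its 8-neighbourhood."""
    r0, r1, c0, c1 = region
    sub = intensity[r0:r1, c0:c1]
    idx = np.unravel_index(np.argmax(sub), sub.shape)
    r, c = r0 + idx[0], c0 + idx[1]
    peak = intensity[r, c]
    if peak <= min_peak:
        return None
    neigh = intensity[max(0, r - 1):r + 2, max(0, c - 1):c + 2]
    if peak <= neigh.mean():
        return None
    return (r, c)

img = np.zeros((8, 8))
img[3, 4] = 10.0              # a bright spot
img[2, 4] = img[4, 4] = 3.0   # Gaussian-like falloff
print(find_spot(img, (1, 6, 2, 7)))            # (3, 4)
print(find_spot(np.zeros((8, 8)), (0, 8, 0, 8)))  # None
```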
It should be added that, because the above light spot searching process is realized based on the historical depth reference image, the historical depth reference image is missing when the speckle TOF camera has just started; moreover, when the first frame after startup is a short-distance scene, not only is the historical depth reference image missing, but the light spot positions are also far from those shown by the template, and the depth is likely to be unsolvable because the light spots cannot be found. Therefore, when the speckle TOF camera has just started, pixel-by-pixel light spot searching may first be performed on a preset number of frames of light intensity images to obtain a certain amount of historical data, after which the light spot search is performed by way of the historical depth reference image.
In addition, in the scheme of performing the light spot search based on a position template set comprising a plurality of second position templates, besides performing pixel-by-pixel light spot searching on a preset number of frames of light intensity images to obtain historical data, the historical data may also be obtained, when the speckle TOF camera has just started, by traversing the light spot positions shown by the plurality of second position templates. Specifically, the light spot positions shown by the plurality of second position templates can be traversed, and for a given light spot, the position along its possible offset path that best matches the spot features can be taken as the search result.
In an exemplary embodiment, the first position template, the second position template and the third position template may be obtained in a variety of manners, for example, the first position template, the second position template and the third position template may be directly determined by theoretical calculation, may be directly acquired by a speckle TOF camera, may be obtained by calculation based on a preset offset function and data acquired by the speckle TOF camera, and the like, which is not limited in this disclosure. In different embodiments, the acquisition distances corresponding to the first position template, the second position template and the third position template can be set by themselves, and the acquisition distance is not specially limited by the present disclosure.
Specifically, when the position template is determined through theoretical calculation, calculation and software simulation can be performed directly from the design parameters of the speckle TOF camera to obtain the theoretical position distribution of the light spots on the receiving-end sensor, from which the light spot positions on the light intensity image, i.e., the position template, are determined. It should be noted that, in order to make the position template better match a real speckle TOF camera, tolerances in the assembly process may be taken into account during the theoretical calculation, that is, data from joint intrinsic and extrinsic parameter calibration, covering errors such as possible offset, tilt, and rotation introduced during assembly.
Specifically, a large plane can be shot at a certain fixed distance by a speckle TOF camera to obtain a corresponding depth image, and then the position of a light spot, namely the position of the light spot on the light intensity image, is determined based on the depth image. When shooting is performed by a speckle TOF camera, it is necessary to keep the imaging plane of the speckle TOF camera parallel to the large plane to be shot.
In addition, the fixed distance can in theory be any distance within the range of the speckle TOF camera, but in practical application a better value can be selected based on the spot-offset behavior of the specific camera. Specifically, within the camera's range, the position offset of a light spot produced by a 1-unit change in object distance is not uniform across the range; an intermediate distance within the range can therefore be selected through theoretical calculation such that the per-unit position offset of the light spot is relatively balanced on either side of that distance.
For example, for a certain speckle TOF camera, when the fixed distance is 10cm the coordinate position of a certain light spot is (1,1), when it is 10m the position is (1,100), and when it is 1m the position is (1,50); acquiring the template at a fixed distance of 1m is therefore suitable, because the absolute distances by which the spot shifts backward or forward are then close, which facilitates the spot search of the subsequent algorithm. Furthermore, in some applications, the template may also be acquired directly at 10m or 10cm, but with somewhat reduced efficiency.
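The choice of a balanced acquisition distance can be sketched as follows, assuming a hypothetical inverse-distance law for the spot column (the disclosure does not specify the offset function, so the law below is purely illustrative):

```python
def balanced_acquisition_distance(x_of_d, d_min, d_max, steps=10000):
    """Scan the working range for the distance whose spot position lies
    midway between the positions at the range extremes, so that the spot
    offsets toward the near and far ends are balanced in pixels."""
    target = 0.5 * (x_of_d(d_min) + x_of_d(d_max))
    best_d, best_err = d_min, float("inf")
    for i in range(steps + 1):
        d = d_min + (d_max - d_min) * i / steps
        err = abs(x_of_d(d) - target)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# Hypothetical offset law: spot column 1 at 0.1 m, column 100 at 10 m.
x = lambda d: 101.0 - 10.0 / d
d_star = balanced_acquisition_distance(x, 0.1, 10.0)
print(round(d_star, 3))  # 0.198: pixel offsets toward 0.1 m and 10 m are equal
```

With a different (e.g. linear) offset law the balanced distance changes accordingly, which is why the disclosure leaves the value to theoretical calculation per camera.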
The method of shooting at a certain fixed distance with the speckle TOF camera requires the template to be acquired once before the camera leaves the factory. Meanwhile, influencing factors such as the physical structure and the working environment of the speckle TOF camera need to be kept from changing significantly in subsequent use, so as to prevent the template from failing. These influencing factors may include the baseline distance of the speckle TOF camera, the position of the lens, and the difference between the working temperature and the temperature at which the template was acquired exceeding a certain range (for example, the template was acquired at -50°C but the camera works in a 56°C environment, i.e., the temperature difference is too large); other influencing factors are also possible, which the present disclosure does not particularly limit.
Further, on the basis of shooting at a certain fixed distance by the speckle TOF camera, the template can be determined by calculating through a preset offset function. Specifically, under the conditions that the field angle of the speckle TOF camera is large, or a large plane required by shooting cannot be provided, a depth image at any distance can be acquired first, the position of a light spot is determined based on the depth image, and then the obtained position of the light spot is subjected to offset calculation through a preset offset function, so that the position of the light spot at a specific distance is determined to serve as a template. The specific distance may be a distance such that the amount of positional deviation of the light spot per unit distance change is relatively uniform before and after the distance.
Furthermore, in some embodiments, the templates may also be acquired adaptively during application of the speckle TOF camera. Specifically, a specific distance required for acquiring the template is set in the speckle TOF camera. When a user uses the speckle TOF camera to measure distance, if a depth value corresponding to a certain light spot is equal to a specific distance and the confidence corresponding to the depth value is greater than a confidence threshold value, the depth value can be considered to be credible, and the pixel coordinate corresponding to the light spot at the moment is recorded as the position of the light spot at the specific distance, namely the position of the light spot corresponding to the light spot in the position template. By the method, the light spot position corresponding to each light spot is continuously acquired in the using process of the user, and a complete template is generated in a self-adaptive manner.
Before a complete template has been generated, pixel-by-pixel searching needs to be performed on several frames of light intensity images to determine the light spot positions and generate the complete template. In addition, during use it is likely that the depth value corresponding to a light spot is not exactly equal to the specific distance; therefore, when the depth value falls within the range of the specific distance ± a first distance deviation, the depth value corresponding to the light spot may be approximately regarded as equal to the specific distance, and the template is then acquired.
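The adaptive acquisition rule with its distance tolerance can be sketched as below; the target distance, tolerance, and confidence threshold are illustrative assumptions, not values from the disclosure:

```python
def try_record_template_position(template, beam_id, depth, confidence, pixel,
                                 target_distance=1.0, distance_tol=0.05,
                                 conf_threshold=0.8):
    """Adaptively grow a position template during normal use: when a
    spot's depth lies within target_distance ± distance_tol and its
    confidence clears the threshold, record the spot's pixel coordinate
    as its position in the template."""
    if abs(depth - target_distance) <= distance_tol and confidence > conf_threshold:
        template[beam_id] = pixel
        return True
    return False

template = {}
try_record_template_position(template, 3, 1.02, 0.9, (120, 240))  # recorded
try_record_template_position(template, 5, 2.40, 0.9, (130, 250))  # wrong distance
try_record_template_position(template, 7, 1.00, 0.5, (140, 260))  # low confidence
print(template)  # {3: (120, 240)}
```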
In addition, after the light spot position corresponding to a certain light spot at a specific distance is acquired, the light spot position can be continuously optimized based on a subsequent using process. Specifically, in the using process of the user, if a situation that the depth value corresponding to a certain spot is equal to the specific distance and the confidence corresponding to the depth value is greater than the confidence threshold is encountered, the spot position of the spot may be updated based on the confidence corresponding to the depth value.
For example, if the confidence of the depth value obtained this time differs little from the previously recorded confidence, i.e., falls within the range of the previously recorded confidence ± a preset confidence deviation, the current spot position may be recorded and averaged with the previously recorded position to obtain a new spot position. If the confidence of the depth value obtained this time is far smaller than the previously recorded confidence, the current spot position is considered unreliable compared with the previous one, and no record is made. If the confidence of the depth value obtained this time is far greater than the previously recorded one but does not indicate overexposure (for example, 1.5 times the previous confidence rather than 10 times), the previously recorded pixel coordinates may be replaced, or the spot positions from both records may be kept, with the number of times and the probability of the spot appearing at each position continuously recorded as the user keeps using the camera. In subsequent spot searching, the spot position with the higher frequency or probability may be searched, or every recorded spot position may be searched and the spot best matching the spot features selected as the search result.
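The three-way update rule above can be sketched as follows; the tolerance and overexposure factor are illustrative assumptions:

```python
def update_spot_position(record, new_pos, new_conf,
                         conf_tol=0.2, overexposure_factor=10.0):
    """Update a recorded (position, confidence) pair: average when
    confidences are comparable, keep the old record when the new one is
    much weaker, replace when it is much stronger but not overexposed."""
    pos, conf = record
    if abs(new_conf - conf) <= conf_tol * conf:
        merged = tuple((a + b) / 2 for a, b in zip(pos, new_pos))
        return (merged, max(conf, new_conf))
    if new_conf < conf:
        return record                      # much less reliable: ignore
    if new_conf < overexposure_factor * conf:
        return (new_pos, new_conf)         # clearly better, not overexposed
    return record                          # suspiciously bright: ignore

rec = ((10.0, 20.0), 0.8)
rec = update_spot_position(rec, (12.0, 22.0), 0.82)  # comparable: averaged
print(rec)       # ((11.0, 21.0), 0.82)
rec = update_spot_position(rec, (30.0, 40.0), 0.2)   # much weaker: ignored
print(rec[0])    # (11.0, 21.0)
```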
By the mode of self-adaptively acquiring the template, the template can be generated in a self-adaptive manner along with the use of the user, and is continuously updated in subsequent use, so that the template is more in line with the use preference of the user.
It should be noted that, when acquiring a plurality of second position templates in the position template set, the acquisition manner of each second position template may be any one of the above-mentioned manners, and the present disclosure does not particularly limit this.
In an exemplary embodiment, after the position templates, including the first position template, the second position templates, and the third position template, are obtained by the methods above, the position templates may be further updated based on the user's subsequent use. Specifically, the spot positions in a position template may be updated according to the confidence, the frequency with which spots appear at each position in the template, and other influencing factors.
For example, a record table may be established for a position template to characterize the reliability of the spot position of each spot in the template. Suppose a position template is established when the distance between the object and the speckle TOF camera is 1.5m; during the user's use, whenever the depth value corresponding to a certain light spot is 1.5m, the actual position of the spot at that time (which may deviate somewhat from the template position) is recorded along with the corresponding confidence and reflectivity, and the number of times the spot appears at that actual position, together with its confidence and reflectivity, is taken as the reliability of the spot appearing at that position. When the position template is next used for spot searching, the positions with higher reliability in the template can be preferentially selected to determine the search areas.
It should be noted that, similar to adaptive template acquisition, during use it is likely that the depth value corresponding to a light spot is not exactly equal to the acquisition distance corresponding to the position template; therefore, when the depth value falls within the range of the acquisition distance ± a second distance deviation, the depth value may be approximately regarded as equal to the acquisition distance, and the corresponding record is then made. The value of the second distance deviation may depend on the specific parameters and usage scenario of the speckle TOF camera.
Furthermore, for a position template set comprising a plurality of second position templates, the position template set itself may also be updated, i.e., second position templates may be automatically added or removed. For example, if over time a large proportion of spots are found at actual positions that deviate considerably from the positions shown in the templates, the second position templates contained in the current set may no longer suit the current ranging scenario, and a new template may be created. Concretely, if the proportion of pixels whose depth value is d4 in a certain depth image is greater than a preset proportion threshold, a new second position template whose second acquisition distance is d4 may be created. Of course, the new second position template may also be continuously updated in subsequent use.
Furthermore, the method of adaptively obtaining position templates may be combined with the method of updating them: without acquiring any template in advance, a complete template is established during the user's use, and the adaptively established template is continuously updated thereafter. For example, one or more position templates may be created based on the depth values that occur most often in the current usage scenario, with reliability tables created correspondingly; as the ranging scene of the speckle TOF camera changes, position templates are adaptively removed, added, or updated. Of course, before a position template of sufficient reliability has been established, pixel-by-pixel light spot searching must be performed on multiple frames of light intensity images until a complete, highly reliable position template is obtained.
In the process of establishing the position template, in order to obtain the position template with higher reliability, a reliability condition can be set for the position template, that is, when the position template meets the reliability condition, it is determined that the reliability of the position template is higher, and the position template can be directly used in the subsequent use process. Specifically, the reliability condition may be set differently according to different ranging scenarios.
For example, the reliability condition may be that, among the spots searched based on the current position template, the ratio of the number of valid spots to the theoretical number of spots (or to the number of spots found by pixel-by-pixel search) is greater than a certain set value; alternatively, it may be that the ratio of the number of spot positions shown by the established template to the theoretical number of spots is greater than a certain set value; or it may be that the reliability corresponding to each spot in the template is higher than a certain set value, and so on, which the present disclosure does not particularly limit.
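The first reliability condition above reduces to a ratio check; the 0.9 threshold here is an assumption:

```python
def template_is_reliable(found_spots, theoretical_spots, min_ratio=0.9):
    """One possible reliability condition: the share of valid spots found
    with the template must reach a set value."""
    return len(found_spots) / theoretical_spots >= min_ratio

print(template_is_reliable(list(range(9)), theoretical_spots=9))  # True
print(template_is_reliable(list(range(6)), theoretical_spots=9))  # False
```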
The position template is continuously updated in the above mode, so that the template has the functions of learning and memorizing, and the speckle TOF camera light spot searching speed and accuracy can be improved along with the increase of the service time.
In addition, when spot searching is performed, the position template may not suit some ranging scenes, so that searching based on the template misses many spots or finds many wrong ones. The suitability of the position template can therefore be checked before it is used, by setting an adaptability condition for it: when the position template meets the adaptability condition, its suitability is confirmed and the template continues to be used for spot searching; when it does not, the light spot search may be carried out by other means that do not use the position template, and the suitability of the template may be judged further.
For example, the adaptability condition may specify that when the ratio of the number of spots found based on the position template to the theoretical number of spots is smaller than a certain set value, the template does not satisfy the condition. In that case, a pixel-by-pixel search may be performed on the current light intensity image or the next frame; if the pixel-by-pixel result is similar to the template-based result, part of the spots are evidently unfindable because of the scene itself, for example because it exceeds the range of the speckle TOF camera, while if the pixel-by-pixel result differs considerably from the template-based result, the position template is not suitable for the current ranging scene. The pixel-by-pixel mode may then be used for the current scene, or the template may be updated and relearned until it meets the adaptability condition, after which it is used again for spot searching.
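The decision logic above can be sketched as follows; the ratio threshold and similarity tolerance are illustrative assumptions:

```python
def template_adaptability(template_hits, theoretical_spots, pixelwise_hits,
                          min_ratio=0.7, similarity_tol=1):
    """If the template finds enough spots, keep using it; otherwise
    compare with a pixel-by-pixel search: a similar count blames the
    scene, a much larger count blames the template."""
    if template_hits / theoretical_spots >= min_ratio:
        return 'template ok'
    if abs(pixelwise_hits - template_hits) <= similarity_tol:
        return 'scene-limited'      # e.g. objects beyond the camera's range
    return 'template unsuitable'    # update/relearn the template first

print(template_adaptability(8, 9, 8))  # template ok
print(template_adaptability(4, 9, 4))  # scene-limited
print(template_adaptability(4, 9, 9))  # template unsuitable
```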
The position template is judged adaptively through the method, namely a protection mechanism is added for the search result based on the position template, and even if errors occur in the light spot searching mode based on the position template, the search result can be corrected based on a pixel-by-pixel searching mode or other modes without the position template, and the normal work of the speckle TOF camera cannot be influenced.
In summary, the exemplary embodiment provides a speckle TOF camera spot searching method in which a search area containing fewer pixels than the current light intensity image is determined on the current light intensity image by means of a historical depth reference image and position templates acquired at different distances; compared with performing a pixel-by-pixel spot search over the whole current light intensity image, the amount of computation can be greatly reduced.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 15, an exemplary embodiment of the present disclosure provides a light spot searching apparatus 1500, which includes a reference obtaining module 1510 and a light spot searching module 1520. Wherein:
the reference acquisition module 1510 may be configured to acquire a historical depth reference image corresponding to the current light intensity image acquired by the speckle TOF camera.
The light spot searching module 1520 may be configured to determine a search region on the current intensity image based on the historical depth reference image, and perform a light spot search in the search region; the number of pixels contained in the search area is smaller than that of the pixels contained in the current light intensity image.
In an exemplary embodiment, the spot search module 1520 may be configured to obtain a first position template and a preset offset function; the first position template comprises at least one first light spot position acquired at a first acquisition distance; calculating a first offset corresponding to each first light spot position in the first position template based on the historical depth reference image and a preset offset function; and offsetting each first light spot position according to the first offset, and determining a first search area corresponding to the offset first light spot position on the current light intensity image.
In an exemplary embodiment, the light spot searching module 1520 may be configured to obtain a depth value corresponding to each reference light spot in the historical depth reference image, and determine a first light spot position corresponding to each reference light spot in the first position template; and calculating the position offset corresponding to each reference light spot according to a preset offset function and the depth value, and determining the position offset as a first offset corresponding to the position of a first light spot corresponding to the reference light spot.
In an exemplary embodiment, the light spot searching module 1520 may be configured to count depth values corresponding to reference light spots in the historical depth reference image, and determine a light spot proportion of a target light spot whose depth value is smaller than a depth threshold; when the ratio of the light spots is larger than a ratio threshold value, determining a second search area corresponding to the target light spot on the current light intensity image; and when the light spot proportion is smaller than or equal to the proportion threshold value, calculating a first offset corresponding to each first light spot position in the first position template based on the historical depth reference image and a preset offset function.
In an exemplary embodiment, the light spot searching module 1520 may be configured to obtain neighboring light spots corresponding to each target light spot, and calculate a second offset corresponding to the target light spot based on a preset offset function and depth values corresponding to the neighboring light spots; determining a first target light spot position corresponding to each target light spot in a first position template; and offsetting the first target light spot position corresponding to each target light spot according to the second offset, and determining a second search area corresponding to the offset first target light spot position on the current light intensity image.
In an exemplary embodiment, the spot search module 1520 may be configured to obtain a set of position templates; the position template set comprises a plurality of second position templates, the second acquisition distances corresponding to the second position templates are different, and each second position template comprises at least one second light spot position acquired at the second acquisition distance corresponding to the second position template; acquiring a depth value corresponding to each reference light spot in the historical depth reference image, and determining a target second position template corresponding to each reference light spot in the position template set based on the depth value; determining a second target light spot position corresponding to each reference light spot in a target second position template corresponding to each reference light spot; and determining a third search area corresponding to the position of the second target light spot on the current light intensity image.
In an exemplary embodiment, the light spot searching module 1520 may be configured to obtain a preset offset function, and calculate a third offset corresponding to the second target light spot position according to the preset offset function and the depth value corresponding to each reference light spot; and offsetting the positions of the second target light spots according to the third offset, and determining a third search area corresponding to the offset second target light spot positions on the current light intensity image.
In an exemplary embodiment, the light spot searching module 1520 may be configured to determine a reference light spot position corresponding to each reference light spot according to the historical depth reference image; and determining a fourth search area corresponding to the position of the reference light spot on the current light intensity image.
In an exemplary embodiment, the light spot searching module 1520 may be configured to obtain a third position template, wherein the third position template comprises at least one third light spot position acquired at a third acquisition distance; and perform supplementary search on the current light intensity image according to the positions of the reference light spots and the third light spots.
In an exemplary embodiment, the light spot searching module 1520 may be configured to determine a third target light spot position corresponding to each reference light spot in the third position template; acquire third remaining light spot positions other than the third target light spot positions in the third position template; and determine a supplementary search area corresponding to the third remaining light spot positions on the current light intensity image, and search for light spots in the supplementary search area.
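The supplementary search step can be illustrated with a minimal sketch: template positions already covered by a reference light spot are treated as found, and the remaining third light spot positions define the supplementary search areas. The matching radius and all coordinates are assumptions for illustration only:

```python
# Illustrative sketch of the supplementary search: reference spots are
# matched to their nearest third-template position, and the unmatched
# (remaining) template positions define the extra search areas.

def supplemental_positions(reference_spots, template_positions, radius=5.0):
    """Return the third light spot positions with no reference spot
    within `radius` pixels (i.e. the third remaining light spot positions)."""
    remaining = []
    for tx, ty in template_positions:
        matched = any(
            (tx - rx) ** 2 + (ty - ry) ** 2 <= radius ** 2
            for rx, ry in reference_spots
        )
        if not matched:
            remaining.append((tx, ty))
    return remaining

template = [(10, 10), (50, 10), (90, 10)]   # third position template
refs = [(11, 9), (49, 12)]                  # reference spots already found
print(supplemental_positions(refs, template))  # -> [(90, 10)]
```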
In an exemplary embodiment, the reference acquisition module 1510 may be configured to acquire at least one frame of historical depth image collected by the speckle TOF camera, wherein the historical acquisition time of the historical light intensity image corresponding to the historical depth image is before the current acquisition time of the current light intensity image; and generate a corresponding historical depth reference image based on the at least one frame of historical depth image.
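How the historical depth reference image is generated from one or more frames is left open by the disclosure; a simple per-pixel fusion that averages valid samples is one possibility, sketched here with invented values:

```python
# Minimal sketch of fusing several historical depth frames into one
# reference image: each pixel takes the average of its valid (non-zero)
# samples. The fusion rule is an assumption; the patent only states that
# a reference image is generated from one or more historical frames.

def fuse_depth_frames(frames):
    """frames: list of equally sized 2-D lists of depth values (0 = invalid)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            samples = [f[r][c] for f in frames if f[r][c] > 0]
            fused[r][c] = sum(samples) / len(samples) if samples else 0.0
    return fused

a = [[600, 0], [580, 620]]    # frame 1: one invalid pixel
b = [[620, 610], [0, 640]]    # frame 2: a different invalid pixel
print(fuse_depth_frames([a, b]))  # -> [[610.0, 610.0], [580.0, 630.0]]
```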
The specific details of each module of the above apparatus have been described in detail in the method embodiments; for details not disclosed here, reference may be made to the corresponding description in the method embodiments, which is therefore not repeated.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device for implementing the light spot searching method is also provided in an exemplary embodiment of the present disclosure, and may be the terminal device 101, 102, 103 or the server 105 in fig. 1. The electronic device comprises at least a processor and a memory for storing executable instructions of the processor, wherein the processor is configured to perform the light spot searching method via execution of the executable instructions.
The following takes the mobile terminal 1600 in fig. 16 as an example to illustrate the configuration of the electronic device in the embodiment of the present disclosure. It will be appreciated by those skilled in the art that, apart from components specifically intended for mobile purposes, the configuration in fig. 16 can also be applied to fixed-type devices. In other embodiments, mobile terminal 1600 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interfacing relationship between the components is shown schematically and does not constitute a structural limitation of mobile terminal 1600. In other embodiments, the components of mobile terminal 1600 may also be interfaced differently than shown in fig. 16, or through a combination of multiple interfacing manners.
As shown in fig. 16, the mobile terminal 1600 may specifically include: a processor 1610, an internal memory 1621, an external memory interface 1622, a Universal Serial Bus (USB) interface 1630, a charging management module 1640, a power management module 1641, a battery 1642, an antenna 1, an antenna 2, a mobile communication module 1650, a wireless communication module 1660, an audio module 1670, a speaker 1671, a receiver 1672, a microphone 1673, an earphone interface 1674, a sensor module 1680, a display 1690, a camera module 1691, an indicator 1692, a motor 1693, keys 1694, and a Subscriber Identity Module (SIM) card interface 1695. Wherein the sensor module 1680 may include a depth sensor 16801, a pressure sensor 16802, a gyroscope sensor 16803, and the like.
Processor 1610 may include one or more processing units, such as: processor 1610 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory is disposed in processor 1610. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, the execution of which is controlled by processor 1610.
The wireless communication function of the mobile terminal 1600 can be implemented by the antenna 1, the antenna 2, the mobile communication module 1650, the wireless communication module 1660, the modem processor, the baseband processor, and the like. The antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals; the mobile communication module 1650 may provide a solution for wireless communication including 2G/3G/4G/5G applied to the mobile terminal 1600; the modem processor may include a modulator and a demodulator; the wireless communication module 1660 may provide a solution for wireless communication including Wireless Local Area Network (WLAN) (e.g., Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), and the like, applied to the mobile terminal 1600. In some embodiments, the antenna 1 is coupled with the mobile communication module 1650 and the antenna 2 is coupled with the wireless communication module 1660, so that the mobile terminal 1600 can communicate with networks and other devices via wireless communication techniques.
The mobile terminal 1600 implements a display function via the GPU, the display screen 1690, the application processor, and the like. The GPU is a microprocessor for image processing, connected to the display screen 1690 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1610 may include one or more GPUs that execute program instructions to generate or change display information.
The mobile terminal 1600 may implement a shooting function through the ISP, the camera module 1691, the video codec, the GPU, the display 1690, the application processor, and the like. The ISP is used for processing data fed back by the camera module 1691; the camera module 1691 is used for capturing still images or videos; the digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals; the video codec is used to compress or decompress digital video, and the mobile terminal 1600 may also support one or more video codecs. In some embodiments, the process of acquiring the historical depth image can be realized by the speckle TOF camera module, and the process of searching for the light spot can be realized in combination with the Image Signal Processor (ISP) and the like.
The external memory interface 1622 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the mobile terminal 1600. The external memory card communicates with the processor 1610 through the external memory interface 1622 to implement data storage functions. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 1621 may be used to store computer-executable program code, which includes instructions. The internal memory 1621 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like. The data storage area may store data (e.g., audio data, a phonebook, etc.) created during use of the mobile terminal 1600, and the like. In addition, the internal memory 1621 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like. The processor 1610 performs various functional applications and data processing of the mobile terminal 1600 by executing instructions stored in the internal memory 1621 and/or instructions stored in a memory provided in the processor.
Mobile terminal 1600 may implement audio functions via audio module 1670, speaker 1671, receiver 1672, microphone 1673, headset interface 1674, and an application processor, among other things. Such as music playing, recording, etc.
The depth sensor 16801 is used to obtain depth information of the scene. The pressure sensor 16802 is used to sense a pressure signal, which can be converted into an electrical signal. The gyroscope sensor 16803 can be used to determine the motion pose of mobile terminal 1600. In addition, other functional sensors, such as an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and a bone conduction sensor, may be provided in the sensor module 1680 according to actual needs.
Other devices that provide auxiliary functions may also be included in the mobile terminal 1600. For example, the keys 1694 include a power-on key, a volume key, etc., through which a user can generate key signal inputs related to user settings and function control of the mobile terminal 1600. Further examples include the indicator 1692, the motor 1693, the SIM card interface 1695, and the like.
Furthermore, the exemplary embodiments of the present disclosure also provide a computer-readable storage medium on which a program product capable of implementing the above-described method of the present specification is stored. In some possible embodiments, various aspects of the present disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section above in this specification, when the program product is run on the terminal device, for example, any one or more of the steps in fig. 3, fig. 4, fig. 7 to fig. 9, and fig. 11 to fig. 14 may be performed.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (14)

1. A light spot searching method is applied to a terminal device, wherein the terminal device comprises a speckle TOF camera, and the method comprises the following steps:
acquiring a historical depth reference image corresponding to a current light intensity image acquired by the speckle TOF camera;
determining a search area on the current light intensity image based on the historical depth reference image, and searching light spots in the search area; wherein the number of pixels included in the search area is smaller than the number of pixels included in the current light intensity image.
2. The method of claim 1, wherein determining a search area on the current intensity image based on the historical depth reference image comprises:
acquiring a first position template and a preset offset function; wherein the first position template comprises at least one first spot position acquired at a first acquisition distance;
calculating a first offset corresponding to each first light spot position in the first position template based on the historical depth reference image and the preset offset function;
and offsetting each first light spot position according to the first offset, and determining a first search area corresponding to the offset first light spot position on the current light intensity image.
3. The method according to claim 2, wherein the calculating a first offset corresponding to each first spot position in the first position template based on the historical depth reference image and the preset offset function comprises:
acquiring depth values corresponding to all reference light spots in the historical depth reference image, and determining first light spot positions corresponding to all the reference light spots in the first position template;
and calculating the position offset corresponding to each reference light spot according to the preset offset function and the depth value, and determining the position offset as a first offset corresponding to a first light spot position corresponding to the reference light spot.
4. The method according to claim 2, wherein before the calculating a first offset amount corresponding to each of the first spot positions in the first position template based on the historical depth reference image and the preset offset function, the method further comprises:
counting the depth values corresponding to the reference light spots in the historical depth reference image, and determining the light spot proportion of the target light spot of which the depth value is smaller than a depth threshold value;
when the light spot proportion is larger than the proportion threshold value, determining a second search area corresponding to the target light spot on the current light intensity image;
and when the light spot proportion is smaller than or equal to a proportion threshold value, calculating a first offset corresponding to each first light spot position in the first position template based on the historical depth reference image and the preset offset function.
5. The method according to claim 4, wherein the determining the second search area corresponding to the target spot on the current intensity image comprises:
acquiring adjacent light spots corresponding to the target light spots, and calculating second offset corresponding to the target light spots based on the preset offset function and the depth values corresponding to the adjacent light spots;
determining a first target light spot position corresponding to each target light spot in the first position template;
and offsetting the first target light spot position corresponding to each target light spot according to the second offset, and determining a second search area corresponding to the offset first target light spot position on the current light intensity image.
6. The method of claim 1, wherein determining a search area on the current intensity image based on the historical depth reference image comprises:
acquiring a position template set; the position template set comprises a plurality of second position templates, the second acquisition distances corresponding to the second position templates are different, and each second position template comprises at least one second light spot position acquired at the second acquisition distance corresponding to the second position template;
acquiring a depth value corresponding to each reference light spot in the historical depth reference image, and determining a target second position template corresponding to each reference light spot in the position template set based on the depth value;
determining a second target light spot position corresponding to each reference light spot in a target second position template corresponding to each reference light spot;
and determining a third search area corresponding to the position of the second target light spot on the current light intensity image.
7. The method of claim 6, wherein the determining a third search area corresponding to the second target spot position on the current intensity image comprises:
acquiring a preset offset function, and calculating a third offset corresponding to the position of the second target light spot according to the preset offset function and the depth value corresponding to each reference light spot;
and shifting the positions of the second target light spots according to the third shift amount, and determining a third search area corresponding to the shifted positions of the second target light spots on the current light intensity image.
8. The method of claim 1, wherein determining a search area on the current intensity image based on the historical depth reference image comprises:
determining the position of a reference light spot corresponding to each reference light spot according to the historical depth reference image;
and determining a fourth search area corresponding to the position of the reference light spot on the current light intensity image.
9. The method of claim 8, wherein after performing the spot search on the fourth search area, the method further comprises:
acquiring a third position template; wherein the third position template comprises at least one third spot position acquired at a third acquisition distance;
and performing supplementary search on the current light intensity image according to the positions of the reference light spots and the third light spots.
10. The method according to claim 9, wherein the performing supplementary search on the current light intensity image according to the positions of the reference light spots and the third light spots comprises:
determining a third target light spot position corresponding to each reference light spot in the third position template; acquiring a third residual light spot position except for the third target light spot position in the third position template;
and determining a supplementary search area corresponding to the third remaining light spot position on the current light intensity image, and searching light spots in the supplementary search area.
11. The method according to claim 1, wherein the obtaining of the historical depth reference image corresponding to the current light intensity image collected by the speckle TOF camera comprises:
acquiring at least one frame of historical depth image acquired by the speckle TOF camera; the historical acquisition time of the historical light intensity image corresponding to the historical depth image is before the current acquisition time of the current light intensity image;
and generating a corresponding historical depth reference image based on the at least one frame of historical depth image.
12. A light spot searching device is applied to a terminal device, wherein the terminal device comprises a speckle TOF camera, and the light spot searching device comprises:
the reference acquisition module is used for acquiring a historical depth reference image corresponding to the current light intensity image acquired by the speckle TOF camera;
the light spot searching module is used for determining a search area on the current light intensity image based on the historical depth reference image and searching light spots in the search area; wherein the number of pixels included in the search area is smaller than the number of pixels included in the current light intensity image.
13. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 11.
14. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 11 via execution of the executable instructions.
CN202210600750.7A 2022-05-30 2022-05-30 Light spot searching method and device, computer readable medium and electronic equipment Active CN114862828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210600750.7A CN114862828B (en) Light spot searching method and device, computer readable medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN114862828A true CN114862828A (en) 2022-08-05
CN114862828B CN114862828B (en) 2024-08-23

Family

ID=82640409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210600750.7A Active CN114862828B (en) Light spot searching method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114862828B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408985A (en) * 2008-09-22 2009-04-15 北京航空航天大学 Method and apparatus for extracting circular luminous spot second-pixel center
CN102622742A (en) * 2011-12-21 2012-08-01 中国科学院光电技术研究所 Light spot and aperture searching method and device of Hartmann wavefront detector
CN103679180A (en) * 2012-09-19 2014-03-26 武汉元宝创意科技有限公司 Sight tracking method based on single light source of single camera
CN104155006A (en) * 2014-08-27 2014-11-19 湖北久之洋红外系统股份有限公司 Handheld thermal infrared imager and method for same to carry out quick locking and ranging on small target
WO2016192436A1 (en) * 2015-06-05 2016-12-08 深圳奥比中光科技有限公司 Method and system for acquiring target three-dimensional image


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965653A (en) * 2022-12-14 2023-04-14 北京字跳网络技术有限公司 Light spot tracking method and device, electronic equipment and storage medium
CN115965653B (en) * 2022-12-14 2023-11-07 北京字跳网络技术有限公司 Light spot tracking method and device, electronic equipment and storage medium
CN116299497A (en) * 2023-05-12 2023-06-23 深圳深浦电气有限公司 Method, apparatus and computer readable storage medium for optical detection
CN116299497B (en) * 2023-05-12 2023-08-11 深圳深浦电气有限公司 Method, apparatus and computer readable storage medium for optical detection
CN117156113A (en) * 2023-10-30 2023-12-01 南昌虚拟现实研究院股份有限公司 Deep learning speckle camera-based image correction method and device
CN117156113B (en) * 2023-10-30 2024-02-23 南昌虚拟现实研究院股份有限公司 Deep learning speckle camera-based image correction method and device

Also Published As

Publication number Publication date
CN114862828B (en) 2024-08-23

Similar Documents

Publication Publication Date Title
CN114862828B (en) Light spot searching method and device, computer readable medium and electronic equipment
CN112270754B (en) Local grid map construction method and device, readable medium and electronic equipment
US11902577B2 (en) Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
US8224069B2 (en) Image processing apparatus, image matching method, and computer-readable recording medium
US20190147606A1 (en) Apparatus and method of five dimensional (5d) video stabilization with camera and gyroscope fusion
KR102524982B1 (en) Apparatus and method for applying noise pattern to image processed bokeh
CN109756763B (en) Electronic device for processing image based on priority and operating method thereof
CN110033423B (en) Method and apparatus for processing image
CN110544273B (en) Motion capture method, device and system
CN112927281B (en) Depth detection method, depth detection device, storage medium and electronic equipment
CN111882655A (en) Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction
US11238563B2 (en) Noise processing method and apparatus
CN115908720A (en) Three-dimensional reconstruction method, device, equipment and storage medium
CN111160233B (en) Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN110726971B (en) Visible light positioning method, device, terminal and storage medium
CN110706288A (en) Target detection method, device, equipment and readable storage medium
CN113052884A (en) Information processing method, information processing apparatus, storage medium, and electronic device
US11283970B2 (en) Image processing method, image processing apparatus, electronic device, and computer readable storage medium
US20230215018A1 (en) Electronic device including camera and method for generating video recording of a moving object
CN110087002B (en) Shooting method and terminal equipment
WO2022227916A1 (en) Image processing method, image processor, electronic device, and storage medium
CN107403146A (en) detection method and related product
CN116309850B (en) Virtual touch identification method, device and storage medium
CN117351135A (en) Voxel map generation method, device, equipment and storage medium
CN114120415A (en) Face detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant