CN110672036B - Method and device for determining projection area


Info

Publication number
CN110672036B
CN110672036B (application CN201810716581.7A)
Authority
CN
China
Prior art keywords
position point
determining
coefficient
waveform
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810716581.7A
Other languages
Chinese (zh)
Other versions
CN110672036A
Inventor
杨少鹏
孙元栋
李林橙
Current Assignee
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikrobot Technology Co Ltd filed Critical Hangzhou Hikrobot Technology Co Ltd
Priority to CN201810716581.7A
Publication of CN110672036A
Application granted
Publication of CN110672036B
Legal status: Active


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254Projection of a pattern, viewing through a pattern, e.g. moiré

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a method and a device for determining a projection area, belonging to the technical field of three-dimensional measurement. The method comprises the following steps: shooting a group of structured light in a shooting area to obtain a group of shot images, the group of structured light comprising at least three structured lights; determining a waveform significance coefficient for each position point in the shooting area according to the group of shot images, the waveform significance coefficient of a position point indicating the imaging quality that the structured light forms at that point in the group of shot images; and determining a projection area from the shooting area according to the waveform significance coefficient of each position point. Because the waveform significance coefficient is independent of the absolute brightness values of the shot images, misjudgment caused by structured light from the projection area being reflected into the non-projection area is avoided, and the accuracy of determining the projection area is improved.

Description

Method and device for determining projection area
Technical Field
The invention relates to the technical field of three-dimensional measurement, in particular to a method and a device for determining a projection area.
Background
Three-dimensional measurement is a technology integrating optical, mechanical, electrical, and computational (mathematics and computer) techniques, mainly used to scan the spatial appearance and structure of an object to be measured so as to obtain the spatial coordinates of its surface. A three-dimensional measurement system comprises a projector, a camera, and a computer. The projector projects structured light, which forms a projection grating when it illuminates the object to be measured. The camera photographs the projection grating to obtain a projection grating image. The computer acquires the spatial coordinates of the object to be measured based on the projection grating image. Since the projection area of the projector and the imaging area of the camera generally do not completely overlap (the shooting area of the camera is usually larger than the projection area of the projector), and since, even where they do overlap, some regions of the object may not image effectively because of specular reflection, internal cavities, light-absorbing materials, and the like, the projection grating image collected by the camera includes both a projection area that can be imaged and a non-projection area that cannot. The computer therefore needs to determine the imageable projection area from the projection grating image before acquiring the spatial coordinates of the object to be measured.
In the prior art, the projection area may be determined as follows. The projector projects a first monochromatic structured light with a higher brightness value and a second monochromatic structured light with a lower brightness value. The camera photographs the first projection grating formed by the first monochromatic structured light reflected by the object to be measured and the second projection grating formed by the second monochromatic structured light reflected by the object, obtaining a first projection grating image and a second projection grating image. Because the projection area is illuminated by both the first structured light and the second structured light, its brightness varies between the two images, whereas the non-projection area is not illuminated by the structured light and its brightness is generally stable. The computer selects the position points whose brightness difference between the first and second projection grating images is greater than a brightness change threshold, and composes the selected position points into the projection area.
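A minimal sketch of this prior-art brightness-difference test (our reconstruction with made-up numbers, not code from the patent):

```python
import numpy as np

def brightness_difference_mask(img_bright, img_dark, threshold):
    # A pixel is treated as projection area when the brightness difference
    # between the bright and dark captures exceeds a fixed threshold.
    diff = np.abs(img_bright.astype(np.int32) - img_dark.astype(np.int32))
    return diff > threshold

# 2x2 toy "images": top-left is genuinely projected, top-right only receives
# light reflected from the projection area, bottom row is unlit background.
bright = np.array([[200, 80], [40, 30]])
dark = np.array([[100, 40], [35, 28]])
mask = brightness_difference_mask(bright, dark, 30)
```

The pixel at index [0, 1] models a non-projection point brightened only by light reflected from the projection area; its difference of 40 still exceeds the fixed threshold of 30, so it is misclassified as projection area, which is exactly the weakness described in the next paragraph.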
Since the structured light (the first and/or the second) in the projection area may be reflected into the non-projection area, the brightness difference of some pixel points in the non-projection area also exceeds the brightness change threshold, so part of the non-projection area is incorrectly determined to be the projection area, and the accuracy of this method is poor. Moreover, because the reflectivity of the material of the object to be measured varies and the projection distance from the projector to the object varies, it is difficult to find a brightness change threshold suitable for all projection areas; dividing the projection area by a fixed brightness change threshold further degrades accuracy.
Disclosure of Invention
The invention provides a method and a device for determining a projection area, which can improve the accuracy of determining the projection area. The technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for determining a projection region, where the method includes:
shooting a group of structured light in a shooting area to obtain a group of shooting images, wherein the group of structured light comprises at least three structured lights, and the group of shooting images comprises at least three shooting images;
determining a waveform significance coefficient of each position point in the shooting area according to the group of shot images, wherein the waveform significance coefficient of any position point is used to indicate the imaging quality that the structured light forms at that position point in the group of shot images;
and determining a projection area from the shooting area according to the waveform significance coefficient of each position point in the shooting area.
In one possible implementation manner, the capturing a set of structured light in the capturing area to obtain a set of captured images includes:
projecting a set of structured light in a shooting area, wherein the set of structured light is reflected to obtain a set of projection gratings;
and shooting the group of projection gratings to obtain the group of shot images.
In another possible implementation manner, the determining the waveform significance coefficient of each position point in the shooting area according to the group of shot images includes:
for any position point in the shooting area, determining the average brightness value and the structured-light waveform amplitude of the position point in the group of shot images;
and determining the imaging quality coefficient of any position point according to the average brightness value and the amplitude of any position point, and determining the imaging quality coefficient of any position point as the waveform significance coefficient of any position point.
In another possible implementation manner, the determining an imaging quality coefficient of any one of the position points according to the average brightness value and the amplitude of the any one of the position points includes:
for each of the set of captured images, determining a phase value of the any one position point and a phase shift amount of a luminance distribution of the any one position point in the set of captured images;
determining a brightness distribution value of any position point in the shot image according to the average brightness value, the amplitude, the phase value of any position point and the phase offset;
and determining the imaging quality coefficient of any position point according to the brightness distribution value of any position point in the group of shot images.
In another possible implementation manner, the determining an imaging quality coefficient of any position point according to a brightness distribution value of the position point in the group of captured images includes:
determining the sum of the brightness distribution values and the square of the sum of the brightness distribution values according to the brightness distribution values of any position point in the group of shot images;
and determining the imaging quality coefficient of any position point according to the sum of the brightness distribution values and the square of the sum of the brightness distribution values, wherein the imaging quality coefficient of any position point is inversely proportional to the sum of the brightness distribution values and is directly proportional to the square of the sum of the brightness distribution values.
In another possible implementation, the imaging quality coefficient of any one of the position points is proportional to the average brightness value of the position point and inversely proportional to the amplitude.
In another possible implementation manner, the determining a projection area from the shooting area according to the waveform significance coefficient of each position point in the shooting area includes:
selecting a plurality of position points with waveform significance coefficients larger than a coefficient threshold value from the shooting area according to the waveform significance coefficient of each position point in the shooting area;
and combining the plurality of position points into the projection area.
In another possible implementation, the method further includes:
receiving an input coefficient threshold value; or,
and determining the coefficient threshold according to the waveform significance coefficient of each position point in the shooting area.
In another possible implementation manner, the determining the coefficient threshold according to the waveform significance coefficient of each position point in the shooting area includes:
according to the waveform significance coefficient of each position point in the shooting area, counting the statistical histogram data of the waveform significance coefficient;
determining the maximum between-class variance according to the statistical histogram data of the waveform significance coefficient;
and determining the coefficient threshold according to the maximum between-class variance.
In another possible implementation manner, the determining a maximum between-class variance according to the waveform significance coefficient of each position point in the shooting region includes:
determining the number of position points for each waveform significance coefficient according to the statistical histogram data of the waveform significance coefficients;
and determining the maximum between-class variance by the maximum between-class variance method according to the number of position points for each waveform significance coefficient and the number of structured lights included in the group of structured light.
In a second aspect, an embodiment of the present invention provides an apparatus for determining a projection region, where the apparatus includes:
the shooting module is used for shooting a group of structured light in a shooting area to obtain a group of shot images, wherein the group of structured light comprises at least three pieces of structured light, and the group of shot images comprises at least three shot images;
a first determination module, configured to determine, according to the group of shot images, a waveform significance coefficient of each position point in the shooting area, where the waveform significance coefficient of any position point is used to indicate the imaging quality that the structured light forms at that position point in the group of shot images;
and the second determining module is used for determining a projection area from the shooting area according to the waveform significance coefficient of each position point in the shooting area.
In one possible implementation manner, the shooting module is further configured to project a set of structured light in a shooting area, where the set of structured light is reflected to obtain a set of projection gratings; and to shoot the group of projection gratings to obtain the group of shot images.
In another possible implementation manner, the first determining module is further configured to determine, for any position point in the captured region, an average brightness value and an amplitude of a structured-light waveform distribution of the position point in the group of captured images; and determining the imaging quality coefficient of any position point according to the average brightness value and the amplitude of any position point, and determining the imaging quality coefficient of any position point as the waveform significance coefficient of any position point.
In another possible implementation manner, the first determining module is further configured to determine, for each captured image in the set of captured images, a phase value of the any position point and a phase shift amount of a luminance distribution of the any position point in the set of captured images; determining a brightness distribution value of any position point in the shot image according to the average brightness value, the amplitude, the phase value of any position point and the phase offset; and determining the imaging quality coefficient of any position point according to the brightness distribution value of any position point in the group of shot images.
In another possible implementation manner, the first determining module is further configured to determine a sum of the luminance distribution values and a square of the sum of the luminance distribution values according to the luminance distribution values of the any position point in the group of captured images; and determining the imaging quality coefficient of any position point according to the sum of the brightness distribution values and the square of the sum of the brightness distribution values, wherein the imaging quality coefficient of any position point is inversely proportional to the sum of the brightness distribution values and is directly proportional to the square of the sum of the brightness distribution values.
In another possible implementation, the imaging quality coefficient of any one of the position points is proportional to the average brightness value of the position point and inversely proportional to the amplitude.
In another possible implementation manner, the second determining module is further configured to select, according to the waveform significance coefficient of each position point in the shooting area, a plurality of position points from the shooting area, where the waveform significance coefficient is greater than a coefficient threshold; and combining the plurality of position points into the projection area.
In another possible implementation manner, the apparatus further includes:
a receiving module for receiving an input coefficient threshold value; or,
and the third determining module is used for determining the coefficient threshold according to the waveform significance coefficient of each position point in the shooting area.
In another possible implementation manner, the third determining module is further configured to count statistical histogram data of the waveform significance coefficients according to the waveform significance coefficient of each position point in the shooting area; determine the maximum between-class variance according to the statistical histogram data; and determine the coefficient threshold according to the maximum between-class variance.
In another possible implementation manner, the third determining module is further configured to determine, according to the statistical histogram data of the waveform significance coefficients, the number of position points for each waveform significance coefficient; and determine the maximum between-class variance by the maximum between-class variance method according to the number of position points for each waveform significance coefficient and the number of structured lights included in the group of structured light.
In a third aspect, an embodiment of the present invention provides a computing device, where the computing device includes:
at least one processor; and
at least one memory;
the at least one memory stores one or more programs configured to be executed by the at least one processor, the one or more programs including instructions for performing the method as set forth in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium for storing a computer program, which is loaded by a processor to execute the instructions of the method according to the first aspect or any possible implementation manner of the first aspect.
The technical solutions provided by the embodiments of the invention have at least the following beneficial effect: waveform significance coefficients are used to distinguish projection areas from non-projection areas. Because the waveform significance coefficient is independent of the absolute brightness values of the shot images, misjudgment caused by structured light from the projection area being reflected into the non-projection area is avoided, and no brightness change threshold applicable to all projection areas needs to be found, so the accuracy of determining the projection area is improved.
Drawings
FIG. 1 is a schematic diagram of a three-dimensional measurement system provided by an embodiment of the invention;
FIG. 2 is a flowchart of a method for determining a projection area according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for determining a projection area according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an apparatus for determining a projection area according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computing device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
An embodiment of the present invention provides a three-dimensional measurement system, and referring to fig. 1, the three-dimensional measurement system includes: a projection apparatus 101, an image capture apparatus 102, and a computing apparatus 103. The computing device 103 is connected to the projection device 101 and the image capturing device 102 via a wired or wireless network, respectively, thereby realizing control of the projection device 101 and the image capturing device 102.
When three-dimensional measurement of an object to be measured is required, the computing device 103 sends a projection instruction to the projection device 101, instructing it to project a group of structured light with a specified waveform. The projection instruction carries an identification of the specified waveform and the number of structured lights included in the group. The identification may be the shape name of the specified waveform. The group comprises at least three structured lights. The specified waveform can be set and changed as needed and is not specifically limited in the embodiments of the invention; for example, it may be a sine waveform or a cosine waveform. Structured light refers to light with a particular pattern, which may be a plane, a grid, or a more complex shape.
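As a sketch of what such a group of sinusoidal structured lights can look like, the following generates phase-shifted fringe patterns. The equal 2*pi/N phase steps, the function name, and all parameters are our assumptions for illustration, not details from the patent:

```python
import numpy as np

def make_phase_shifted_patterns(width, height, periods=8, n_steps=3):
    # Pattern i is the same sinusoidal fringe shifted by 2*pi*i/n_steps.
    x = np.arange(width)
    base_phase = 2 * np.pi * periods * x / width
    patterns = []
    for i in range(n_steps):
        shift = 2 * np.pi * i / n_steps
        row = 0.5 + 0.5 * np.cos(base_phase + shift)  # intensities in [0, 1]
        patterns.append(np.tile(row, (height, 1)))
    return patterns

patterns = make_phase_shifted_patterns(640, 480)
```

Each returned array is one pattern the projector would display; with `n_steps=3` the group satisfies the "at least three structured lights" requirement.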
The projection device 101 is configured to receive the projection instruction and to project a group of structured light with the specified waveform according to it. The group of structured light irradiates the object to be measured to form a group of projection gratings, where each structured light corresponds to one projection grating. The projection device 101 may be any device capable of projecting structured light; for example, it may be a projector.
And the computing device 103 is further configured to send a capture instruction to the image capturing device 102, where the capture instruction is used to instruct the image capturing device 102 to capture a set of projection gratings and return a set of captured images.
The camera device 102 is configured to receive the acquisition instruction and to acquire the group of structured light according to it, obtaining a group of shot images. The group of structured light may be the group of structured light projected by the projection device, or a group of projection gratings with structured-light characteristics obtained by reflection of the projected structured light.
The image capture device 102 is also used to transmit the group of shot images to the computing device 103. The image capture device 102 may be any device capable of photographing; for example, it may be a video camera, a still camera, or the like.
The computing device 103 is further configured to receive a set of captured images and determine a projection area based on the set of captured images. Wherein the computing device 103 may be any device with computing capabilities; for example, the computing device 103 may be a computer or a terminal, etc. The projection grating refers to a projection shape formed by projecting the structured light onto an object.
It should be noted that the projection device 101, the image capturing device 102, and the computing device 103 may be independent devices, or at least two of them may be integrated in the same device; for example, the image capturing device 102 and the computing device 103 may be integrated in an electronic device having both image capture and computing functions.
An embodiment of the present invention provides a method for determining a projection area, and referring to fig. 2, the method includes:
step 201: shooting a group of structured light in the shooting area to obtain a group of shooting images, wherein the group of structured light comprises at least three structured lights, and the group of shooting images comprises at least three shooting images.
Step 202: and determining a waveform significance coefficient of each position point in the shooting area according to the group of shot images, wherein the waveform significance coefficient of any position point is used to indicate the imaging quality that the structured light forms at that position point in the group of shot images.
Step 203: and determining a projection area from the shooting area according to the waveform significance coefficient of each position point in the shooting area.
In one possible implementation manner, the capturing a set of structured light in the capturing area to obtain a set of captured images includes:
projecting a set of structured light in a shooting area, wherein the set of structured light is reflected to obtain a set of projection gratings;
and shooting the group of projection gratings to obtain the group of shot images.
In another possible implementation manner, the determining the waveform significance coefficient of each position point in the shooting area according to the group of shot images includes:
for any position point in the shooting area, determining the average brightness value of the position point in the group of shooting images and the amplitude of the distribution of the structured light waveform;
and determining the imaging quality coefficient of any position point according to the average brightness value and the amplitude of any position point, and determining the imaging quality coefficient of any position point as the waveform significance coefficient of any position point.
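A sketch of recovering the average brightness value and the amplitude per position point, assuming the standard N-step phase-shift model I_i = A + B*cos(phi + 2*pi*i/N) with equally spaced phase offsets. The patent does not commit to these exact formulas in this chunk, so treat this as one conventional realization:

```python
import numpy as np

def mean_and_amplitude(images):
    # Per-pixel average brightness A and modulation amplitude B, assuming the
    # N-step phase-shift model I_i = A + B*cos(phi + 2*pi*i/N).
    imgs = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    n = imgs.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    a = imgs.mean(axis=0)
    # Contract the sin/cos weights against the image stack's first axis.
    s = np.tensordot(np.sin(deltas), imgs, axes=1)
    c = np.tensordot(np.cos(deltas), imgs, axes=1)
    b = (2.0 / n) * np.sqrt(s ** 2 + c ** 2)
    return a, b

# Synthetic check: three 2x2 images of an ideal sinusoid with A=100, B=50.
phi = 0.7
images = [np.full((2, 2), 100 + 50 * np.cos(phi + 2 * np.pi * i / 3))
          for i in range(3)]
a, b = mean_and_amplitude(images)
```

With three equally spaced phase steps the cosine terms cancel in the mean, so `a` recovers A exactly, and the quadrature sums recover B regardless of the unknown phase `phi`.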
In another possible implementation manner, the determining an imaging quality coefficient of any one of the position points according to the average brightness value and the amplitude of the any one of the position points includes:
for each of the set of captured images, determining a phase value of the any one position point and a phase shift amount of a luminance distribution of the any one position point in the set of captured images;
determining a brightness distribution value of any position point in the shot image according to the average brightness value, the amplitude, the phase value of any position point and the phase offset;
and determining the imaging quality coefficient of any position point according to the brightness distribution value of any position point in the group of shot images.
In another possible implementation manner, the determining an imaging quality coefficient of any position point according to a brightness distribution value of the position point in the group of captured images includes:
determining the sum of the brightness distribution values and the square of the sum of the brightness distribution values according to the brightness distribution values of any position point in the group of shot images;
and determining the imaging quality coefficient of any position point according to the sum of the brightness distribution values and the square of the sum of the brightness distribution values, wherein the imaging quality coefficient of any position point is inversely proportional to the sum of the brightness distribution values and is directly proportional to the square of the sum of the brightness distribution values.
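The chunk does not spell out the exact combination of the sum of the brightness distribution values and its square (and the stated proportionalities elsewhere may differ), so the following is only one plausible scale-invariant coefficient built from those two statistics, shown to illustrate why such a coefficient can be independent of absolute brightness:

```python
import numpy as np

def waveform_saliency(images):
    # Built from the per-pixel sum and sum of squares of brightness values.
    # For an ideal sinusoid I_i = A + B*cos(phi + 2*pi*i/N) this equals
    # B^2 / (2*A^2): it depends only on relative modulation, not on the
    # absolute brightness of the shot images. Illustrative choice, not the
    # patent's exact formula.
    imgs = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    n = imgs.shape[0]
    total = imgs.sum(axis=0)
    total_sq = (imgs ** 2).sum(axis=0)
    return n * total_sq / np.maximum(total ** 2, 1e-12) - 1.0

# Sinusoidal pixel (A=100, B=50) versus a flat pixel at the same mean level.
phi = 0.7
sine_imgs = [np.full((1, 1), 100 + 50 * np.cos(phi + 2 * np.pi * i / 3))
             for i in range(3)]
flat_imgs = [np.full((1, 1), 100.0) for _ in range(3)]
sine_coef = waveform_saliency(sine_imgs)
flat_coef = waveform_saliency(flat_imgs)
```

The modulated pixel yields 50^2 / (2 * 100^2) = 0.125, the unmodulated one yields 0, and doubling every brightness value would leave both results unchanged.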
In another possible implementation, the imaging quality coefficient of any one of the position points is proportional to the average brightness value of the position point and inversely proportional to the amplitude.
In another possible implementation manner, the determining a projection area from the shooting area according to the waveform significance coefficient of each position point in the shooting area includes:
selecting a plurality of position points with waveform significance coefficients larger than a coefficient threshold value from the shooting area according to the waveform significance coefficient of each position point in the shooting area;
and combining the plurality of position points into the projection area.
In another possible implementation, the method further includes:
receiving an input coefficient threshold value; or,
and determining the coefficient threshold according to the waveform significance coefficient of each position point in the shooting area.
In another possible implementation manner, the determining the coefficient threshold according to the waveform significance coefficient of each position point in the shooting area includes:
according to the waveform significance coefficient of each position point in the shooting area, counting the statistical histogram data of the waveform significance coefficient;
determining the maximum between-class variance according to the statistical histogram data of the waveform significance coefficient;
and determining the coefficient threshold according to the maximum between-class variance.
In another possible implementation manner, the determining a maximum between-class variance according to the waveform significance coefficient of each position point in the shooting region includes:
determining the number of position points for each waveform significance coefficient according to the statistical histogram data of the waveform significance coefficients;
and determining the maximum between-class variance by the maximum between-class variance method according to the number of position points for each waveform significance coefficient and the number of structured lights included in the group of structured light.
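The maximum between-class variance criterion is the statistic behind Otsu's method. Below is a standard Otsu sketch over a linearly binned histogram of the coefficients; note the patent's variant additionally involves the number of structured lights in the group, a detail omitted in this illustration:

```python
import numpy as np

def otsu_coefficient_threshold(coefficients, bins=256):
    # Histogram the coefficients, then pick the split maximizing the
    # between-class variance w0 * w1 * (mu0 - mu1)^2.
    vals = np.asarray(coefficients, dtype=np.float64).ravel()
    hist, edges = np.histogram(vals, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    best_threshold, best_variance = centers[0], -1.0
    for k in range(1, bins):
        w0 = hist[:k].sum()          # points in the lower class
        w1 = total - w0              # points in the upper class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:k] * centers[:k]).sum() / w0
        mu1 = (hist[k:] * centers[k:]).sum() / w1
        variance = w0 * w1 * (mu0 - mu1) ** 2
        if variance > best_variance:
            best_variance, best_threshold = variance, centers[k - 1]
    return best_threshold

# Bimodal toy data: many near-zero coefficients (non-projection) and a
# cluster of clearly higher coefficients (projection).
coeffs = np.concatenate([np.full(200, 0.001), np.full(100, 0.12)])
t = otsu_coefficient_threshold(coeffs)
```

Because the threshold is derived from the coefficient distribution itself, no fixed value valid for all scenes is needed, which is the point of this implementation.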
In an embodiment of the present invention, waveform significance coefficients are used to distinguish projection areas from non-projection areas. Because the waveform significance coefficient is independent of the absolute brightness values of the shot images, misjudgment caused by structured light from the projection area being reflected into the non-projection area is avoided, and no brightness change threshold applicable to all projection areas needs to be found, so the accuracy of determining the projection area is improved.
An embodiment of the present invention provides a method for determining a projection area, where an execution subject of the method may be a computing device, and referring to fig. 3, the method includes:
step 301: the computing device photographs a set of structured light in a photographing region, resulting in a set of photographed images.
The set of structured light may be a set of structured light projected by the projection device, or a set of projection gratings with structured-light characteristics obtained by reflecting the set of structured light projected by the projection device. When the set of structured light is a set of structured light projected by the projection device, this step may be: the computing device controls the projection device to project a set of structured light in the shooting area, and controls the image pickup device to shoot the set of structured light to obtain a set of shot images. When the set of structured light is a set of projection gratings obtained by reflection, this step may be: the computing device controls the projection device to project a set of structured light in the shooting area, and the set of structured light is reflected to obtain a set of projection gratings; the computing device then controls the camera device to shoot the set of projection gratings to obtain a set of shot images.
It should be noted that the set of structured light includes at least three structured light patterns; for example, the set may include three, four, or five structured light patterns. The more structured light patterns the set includes, the higher the accuracy of determining the projection area from the shooting area. In the embodiment of the present invention, to simplify the calculation, the case where the set includes three structured light patterns is taken as an example for description.
For example, the computing device photographs three pieces of structured light in a photographing region, resulting in three photographed images.
Step 302: the computing device determines a waveform saliency coefficient of each position point in the shooting area from the set of shot images, the waveform saliency coefficient of any position point being used for indicating the imaging quality of the structured light of the any position point in the set of shot images.
This step can be realized by the following steps (1) to (2), including:
(1): for any position point in the captured region, the computing device determines an average brightness value and an amplitude of the structured-light waveform distribution of the any position point in the set of captured images.
The computing device determines a luminance value of the arbitrary position point in each captured image, and determines an average luminance value based on the luminance value of the arbitrary position point in each captured image. The computing device determines an amplitude of the structured light waveform distribution based on the structured light waveform distribution of the arbitrary location point in the set of captured images.
(2): the computing device determines the imaging quality coefficient of any position point according to the average brightness value and the amplitude of the any position point, and determines the imaging quality coefficient of any position point as the waveform significance coefficient of any position point.
The imaging quality coefficient of any position point is in direct proportion to the amplitude of the position point and in inverse proportion to its average brightness value (following the convention of formula two below, where I′ denotes the average brightness value and I″ the amplitude). In one possible implementation, the computing device may determine the imaging quality coefficient of any position point according to its average brightness value and amplitude by any operation method that is proportional to the amplitude and inversely proportional to the average brightness value. For example, the operation method may be a ratio method. When the operation method is a ratio method, this step may be: the computing device determines the ratio of the amplitude to the average brightness value of any position point as the imaging quality coefficient of that position point.
For example, the computing device determines the imaging quality coefficient of any position point according to the average brightness value and amplitude of the position point by the following formula one.
The formula I is as follows:

γ(x, y) = I″(x, y) / I′(x, y)

wherein (x, y) are the coordinates of the position point, γ(x, y) is the imaging quality coefficient of the position point, I′(x, y) is its average brightness value, and I″(x, y) is the amplitude of its brightness distribution waveform.
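As a hedged illustration of formula one, the sketch below computes the ratio of amplitude to average brightness for a single position point. The numeric sample values and the epsilon guard against division by zero are illustrative assumptions, not part of the patent.

```python
# Sketch of formula one: imaging quality (waveform saliency) coefficient
# as the ratio of the waveform amplitude to the average brightness.

def imaging_quality(avg_brightness: float, amplitude: float, eps: float = 1e-9) -> float:
    """gamma(x, y) = I''(x, y) / I'(x, y), guarded against a zero average."""
    return amplitude / (avg_brightness + eps)

# A strongly modulated point scores high; a point lit only by stray
# reflection (high background, little modulation) scores low.
strong = imaging_quality(avg_brightness=100.0, amplitude=80.0)  # ~0.8
weak = imaging_quality(avg_brightness=100.0, amplitude=5.0)     # ~0.05
```

This matches the later observation that the coefficient is a floating-point number between 0 and 1 for typical fringe images.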
In another possible implementation manner, the step (2) can be implemented by the following steps (2-1) to (2-3), including:
(2-1): for each of the set of captured images, the computing device determines a phase value of the any one position point and a phase shift amount of a luminance distribution of the any one position point in the set of captured images.
When the phase offsets are evenly spaced,

δ_k = 2πk/N, k = 1, 2, 3, …, N

wherein N is the number of structured light patterns included in the set of structured light, and k is the serial number of the captured image.
(2-2): and the computing equipment determines the brightness distribution value of any position point in the shot image according to the average brightness value, the amplitude, the phase value of any position point and the phase offset.
The brightness distribution value increases with an increase in the average brightness value and the amplitude. The computing device may determine the luminance distribution value of the arbitrary position point in the captured image according to any operation method in which the luminance distribution value increases with an increase in the average luminance value or the amplitude.
For example, the calculation device determines a brightness distribution value of the any position point in the captured image by the following formula two according to the average brightness value, the amplitude, the phase value of the any position point, and the phase shift amount.
The formula II is as follows: I_k(x, y) = I′(x, y) + I″(x, y)·cos(Φ(x, y) + δ_k)

wherein k is the serial number of the shot image, (x, y) is the position information of the position point, I_k(x, y) is the brightness distribution value of the position point in the k-th shot image, I′(x, y) is the average brightness value of the position point, I″(x, y) is the amplitude of the brightness distribution waveform of the position point, Φ(x, y) is the phase value of the position point, and δ_k is the phase offset value.
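Formula two can be read as a forward model. The sketch below synthesizes the brightness distribution values of one position point across N = 3 captured images with evenly spaced phase offsets; the sample values (average 100, amplitude 50, phase 0.7 rad) are arbitrary assumptions for illustration.

```python
import math

def brightness_value(avg: float, amp: float, phase: float, delta_k: float) -> float:
    """Formula two: I_k(x, y) = I'(x, y) + I''(x, y) * cos(Phi(x, y) + delta_k)."""
    return avg + amp * math.cos(phase + delta_k)

N = 3
deltas = [2 * math.pi * k / N for k in range(1, N + 1)]  # evenly spaced phase offsets
intensities = [brightness_value(100.0, 50.0, 0.7, d) for d in deltas]
```

Because the cosine terms at evenly spaced phases cancel, the three values sum to N times the average brightness (here 300), which is what the later formulas exploit.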
(2-3): the computing device determines the imaging quality coefficient of any position point according to the brightness distribution value of the position point in the group of shot images.
The computing device determines, from the brightness distribution values of the position point in the group of shot images, the plain sum of the brightness distribution values and the sine- and cosine-weighted sums of the brightness distribution values; the imaging quality coefficient of the position point is proportional to the square root of the sum of the squares of the weighted sums, and inversely proportional to the plain sum of the brightness distribution values.

The computing device may determine the imaging quality coefficient of the position point by any operation method that embodies this relationship. For example, the operation method may be a ratio method. Accordingly, this step may be:

the computing device determines the imaging quality coefficient of the position point from its brightness distribution values through the following formula three.
The formula III is as follows:

γ(x, y) = 2·sqrt( S_sin² + S_cos² ) / S

wherein S_sin = Σ_{k=1..N} I_k(x, y)·sin(δ_k) and S_cos = Σ_{k=1..N} I_k(x, y)·cos(δ_k) are the sine- and cosine-weighted sums, S = Σ_{k=1..N} I_k(x, y) is the sum of the brightness distribution values, (x, y) are the coordinates of the position point, γ(x, y) is its imaging quality coefficient, I_k(x, y) is its brightness distribution value in the k-th shot image, δ_k is the phase offset value, and N is the number of shot images in the group.
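A minimal sketch of formula three, assuming evenly spaced phase offsets δ_k = 2πk/N as described above. The synthetic fringe values (average 100, amplitude 50, phase 0.7 rad) are assumptions for the check; the expected result is the modulation ratio I″/I′ = 0.5.

```python
import math

def waveform_saliency(intensities):
    """gamma = 2*sqrt(S_sin^2 + S_cos^2) / S for evenly spaced phase offsets."""
    n = len(intensities)
    deltas = [2 * math.pi * (k + 1) / n for k in range(n)]
    s_sin = sum(i * math.sin(d) for i, d in zip(intensities, deltas))
    s_cos = sum(i * math.cos(d) for i, d in zip(intensities, deltas))
    total = sum(intensities)  # plain sum of brightness distribution values
    return 2.0 * math.hypot(s_sin, s_cos) / total

# Synthetic three-step fringes with I' = 100, I'' = 50 -> gamma = 0.5.
vals = [100 + 50 * math.cos(0.7 + 2 * math.pi * k / 3) for k in (1, 2, 3)]
gamma = waveform_saliency(vals)  # ~0.5
```

A constant sequence (no modulation, e.g. pure reflected background) yields a coefficient near zero, which is the property the projection-area test relies on.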
For example, when the set of structured light includes three structured light patterns, the set of captured images includes three captured images: a first, a second, and a third captured image. Correspondingly, the phase offset value of the brightness distribution of the k-th structured light pattern is δ_k = 2πk/3 (k = 1, 2, 3), and the imaging quality coefficient of the position point (x, y) can be simplified as shown in formula four:
the formula four is as follows:

γ(x, y) = sqrt( 3·(I₁ − I₂)² + (2·I₃ − I₁ − I₂)² ) / (I₁ + I₂ + I₃)

wherein I₁ is the brightness distribution value of the pixel point in the first shot image, I₂ is the brightness distribution value of the pixel point in the second shot image, and I₃ is the brightness distribution value of the pixel point in the third shot image.
For another example, when the set of structured light includes four structured light patterns (δ_k = 2πk/4, k = 1, 2, 3, 4), the imaging quality coefficient of the position point (x, y) can be simplified as shown in formula five:

the formula five is as follows:

γ(x, y) = 2·sqrt( (I₁ − I₃)² + (I₄ − I₂)² ) / (I₁ + I₂ + I₃ + I₄)
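The simplified three-step and four-step forms, as reconstructed here, can be checked against synthetic fringes whose true modulation ratio I″/I′ is 0.5. The fringe parameters below are illustrative assumptions.

```python
import math

def gamma3(i1, i2, i3):
    """Three-step simplification (formula four), delta_k = 2*pi*k/3."""
    return math.sqrt(3 * (i1 - i2) ** 2 + (2 * i3 - i1 - i2) ** 2) / (i1 + i2 + i3)

def gamma4(i1, i2, i3, i4):
    """Four-step simplification (formula five), delta_k = 2*pi*k/4."""
    return 2 * math.hypot(i1 - i3, i4 - i2) / (i1 + i2 + i3 + i4)

# Synthetic fringes: average 100, amplitude 50, phase 0.7 rad.
f3 = [100 + 50 * math.cos(0.7 + 2 * math.pi * k / 3) for k in (1, 2, 3)]
f4 = [100 + 50 * math.cos(0.7 + 2 * math.pi * k / 4) for k in (1, 2, 3, 4)]
g3 = gamma3(*f3)  # ~0.5
g4 = gamma4(*f4)  # ~0.5
```

Both simplifications agree with the general N-step expression, confirming that adding images changes only the algebra, not the recovered modulation ratio.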
step 303: the calculation device selects a plurality of position points from the shooting area, from which the waveform saliency coefficient is greater than a coefficient threshold, according to the waveform saliency coefficient of each position point in the shooting area.
For each location point in the shooting region, the computing device determines whether the waveform saliency coefficient of the location point is greater than the coefficient threshold; when it is greater, the location point is selected; when it is not greater, the location point is discarded.
In the embodiment of the invention, the brightness of a position point in the projection area changes regularly across the group of shot images, whereas the brightness of the non-projection area is accumulated from reflections and shows no regular change across the group of shot images, so its waveform significance is weak. The reflection-lit parts of the non-projection area can therefore be distinguished based on the waveform significance coefficient.
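Step 303 can be sketched as a simple filter. The point coordinates, coefficient values, and threshold below are illustrative assumptions; a real implementation would iterate over every pixel of the shot images.

```python
# Hypothetical waveform saliency coefficients for four position points.
coefficients = {
    (0, 0): 0.82,  # well-modulated projected point
    (0, 1): 0.74,
    (1, 0): 0.12,  # lit only by accumulated reflection, weak modulation
    (1, 1): 0.05,
}
threshold = 0.5  # coefficient threshold (set by a user or computed by Otsu's method)

projection_region = [pt for pt, c in coefficients.items() if c > threshold]
non_projection_region = [pt for pt, c in coefficients.items() if c <= threshold]
```

The comparison uses only the relative modulation at each point, so a bright reflection with no fringe pattern is still discarded.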
It should be noted that the computing device may determine the coefficient threshold in this step and then select the position points based on the waveform saliency coefficient of each position point and the coefficient threshold. Alternatively, the computing device may determine and store the coefficient threshold before executing this step, directly retrieve the stored coefficient threshold in this step, and then select the position points, thereby improving the efficiency of determining the projection area.
In one possible implementation, the coefficient threshold may be set by a user based on experience and input to the computing device. Accordingly, the step of the computing device determining the coefficient threshold may be: the computing device receives the coefficient threshold that is input. The coefficient threshold is obtained by the user based on experience; in addition, the coefficient threshold may be set and changed as needed, and in the embodiment of the present invention, the coefficient threshold is not specifically limited; for example, the coefficient threshold may be 0.5, 0.55, 0.6, or the like.
In another possible implementation, the computing device calculates the coefficient threshold; accordingly, the step of the computing device determining the coefficient threshold may be: the calculation device determines the coefficient threshold value according to the waveform significance coefficient of each position point in the shooting area. Wherein, the computing device determines the coefficient threshold value according to the waveform significance coefficient of each position point in the shooting area through the following steps (1) to (3), including:
(1): the calculation device counts statistical histogram data of the waveform significance coefficient according to the waveform significance coefficient of each position point in the shooting area.
In one possible implementation, the computing device may perform histogram statistics based directly on the waveform saliency coefficients for each location point. In another possible implementation manner, since the waveform significance coefficient of each position point is a floating point number between 0 and 1, in order to facilitate statistical histogram analysis, the computing device amplifies the waveform significance coefficient of each position point, and then performs histogram statistics. Accordingly, this step can be realized by the following steps (1-1) to (1-3), including:
(1-1): and the computing equipment amplifies the waveform significance coefficient of each position point by a specified multiple to obtain the amplified waveform significance coefficient of each position point.
The designated multiple can be set and changed as required, and the designated multiple is not specifically limited in the embodiment of the invention; for example, the specified multiple may be 255, and so on.
(1-2): and the computing equipment rounds the amplified waveform significance coefficient of each position point to obtain an integer value of the waveform significance coefficient of each position point.
It should be noted that the computing device may round the amplified waveform significance coefficient of each position point either up or down.
(1-3): the computing device counts statistical histogram data of the waveform significance coefficients according to the integer values of the waveform significance coefficients of each position point.
In the embodiment of the invention, the computing equipment amplifies the waveform significance coefficient of each position point by a specified multiple and rounds the amplified waveform significance coefficient, namely, the floating point number is converted into an integer, so that the analysis of a statistical histogram in the later step is facilitated, the statistical accuracy of the histogram is improved, and the accuracy of determining a detection area is improved.
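Steps (1-1) through (1-3) can be sketched as follows: scale the floating-point coefficients by a specified multiple (255 is used here, as the text suggests), round to integers, and count them into a histogram. The sample coefficient values are illustrative assumptions.

```python
SCALE = 255  # specified multiple (an example value from the text)

def saliency_histogram(coefficients):
    """Amplify each coefficient in [0, 1] by SCALE, round, and histogram."""
    hist = [0] * (SCALE + 1)
    for c in coefficients:
        hist[round(c * SCALE)] += 1  # amplify, then round to an integer bin
    return hist

hist = saliency_histogram([0.05, 0.12, 0.74, 0.82])
```

Converting the floats to integer bins makes the later histogram analysis (and Otsu thresholding) straightforward.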
(2): the computing device determines a maximum between-class variance based on statistical histogram data of the waveform saliency coefficients.
This step can be realized by the following steps (2-1) and (2-2), including:
(2-1): the computing device determines the number of location points of each waveform significance coefficient according to the statistical histogram data of the waveform significance coefficients.
(2-2): the computing device determines the maximum inter-class variance by a maximum inter-class method according to the number of location points of each waveform saliency coefficient and the number of structured lights included in the set of structured lights.
In one possible implementation, when the waveform significance coefficient of the location point is not amplified in step (1), this step may determine the maximum between-class variance through the following formula seven.
The formula seven:

σ²(k*) = max over 0 ≤ k < L of [ μ_T·ω(k) − μ(k) ]² / ( ω(k)·[1 − ω(k)] )

wherein σ²(k*) is the maximum between-class variance and k* is the candidate threshold k at which the maximum is attained; ω(k) = Σ_{i=0..k} p_i and μ(k) = Σ_{i=0..k} i·p_i, where p_i = n_i/n is the proportion of position points whose waveform significance coefficient falls in bin i, n is the total number of position points, μ_T = μ(L − 1) is the total mean, and L is the number of histogram bins.
In another possible implementation manner, when the waveform significance coefficient of each position point is amplified in step (1), this step may be: the computing device determines the maximum between-class variance by the maximum between-class variance method according to the number of position points of each integer waveform significance coefficient, the number of structured light patterns included in the set of structured light, and the specified multiple, through the following formula eight.
The formula eight:

σ²(k*) = max over 0 ≤ k ≤ L of [ μ_T·ω(k) − μ(k) ]² / ( ω(k)·[1 − ω(k)] )

wherein σ²(k*) is the maximum between-class variance and k* is the candidate threshold k at which the maximum is attained; ω(k) = Σ_{i=0..k} p_i and μ(k) = Σ_{i=0..k} i·p_i, where p_i = n_i/n is the proportion of position points whose integer waveform significance coefficient equals i, n is the total number of position points, μ_T = μ(L) is the total mean, and L is the specified multiple (the largest integer coefficient value).
(2-2): the computing device determines the maximum inter-class variance by a maximum inter-class method according to the number of location points of each waveform saliency coefficient and the number of structured lights included in the set of structured lights.
(3): the computing device determines the coefficient threshold based on the maximum between-class variance.
In one possible implementation manner, when the waveform significance coefficient of each position point is not amplified in step (1), in this step the computing device directly uses the threshold k* at which the between-class variance is maximized as the coefficient threshold.

In another possible implementation manner, when the waveform significance coefficient of each position point is amplified and rounded in step (1), in this step the computing device uses the ratio of k* to the specified multiple as the coefficient threshold.
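Steps (2) and (3) together can be sketched with a standard Otsu scan over the integer-coefficient histogram, then dividing the resulting threshold by the specified multiple L to map it back to the [0, 1] coefficient range. The bimodal toy histogram (bins 0..9, so L = 9 here) is an illustrative assumption.

```python
def otsu_threshold(hist):
    """Return the bin k* that maximizes the between-class variance."""
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    best_k, best_var = 0, -1.0
    w0 = cum = 0
    for k, h in enumerate(hist):
        w0 += h          # weight of the lower class (bins 0..k)
        cum += k * h     # weighted sum of the lower class
        if w0 == 0 or w0 == total:
            continue     # one class is empty; skip this split
        mu0 = cum / w0
        mu1 = (grand_sum - cum) / (total - w0)
        var = w0 * (total - w0) * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_k, best_var = k, var
    return best_k

hist = [5, 8, 6, 1, 0, 0, 1, 6, 8, 5]   # bimodal toy histogram over bins 0..9
k_star = otsu_threshold(hist)            # splits the two modes
coefficient_threshold = k_star / 9       # map back to the [0, 1] range
```

The split lands between the two modes of the histogram, which is exactly the boundary between reflection-lit points (low coefficients) and projected points (high coefficients).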
Step 304: the computing device composes a plurality of location points into a projection region.
In one possible implementation, the computing device groups the position points that were not selected into the non-projection region.
In an embodiment of the present invention, waveform saliency coefficients are used to distinguish between projected regions and non-projected regions. The waveform saliency coefficient is independent of the absolute luminance value of the captured image. Therefore, misjudgment caused by structured light from the projection area being reflected into the non-projection area is avoided, and a single brightness change threshold applicable to all projection areas does not need to be found, so the accuracy of determining the projection area is improved.
In addition, different projection distances make the overall brightness brighter or darker, but do not affect the relative brightness change between the shot images. Therefore, the embodiment of the invention can avoid the influence of the projection distance on the detection of the projection area, further improving the accuracy of determining the projection area.
In addition, although the light emitting characteristics of different materials are different, the material of a certain position point of the shooting area is fixed, so that the brightness change of a group of shooting images at a certain position point is consistent. Therefore, the embodiment of the invention can avoid the influence of the object to be measured on the detection of the projection area, and further improve the accuracy of determining the projection area.
The embodiment of the invention provides a device for determining a projection area, which can be applied to a computing device and is used for executing the steps executed by the computing device for determining the projection area. Referring to fig. 4, the apparatus includes:
a shooting module 401, configured to shoot a group of structured light in a shooting area to obtain a group of shot images, where the group of structured light includes at least three structured lights, and the group of shot images includes at least three shot images;
a first determining module 402, configured to determine a waveform saliency coefficient of each position point in the captured region according to the group of captured images, where the waveform saliency coefficient of any position point is used to indicate an imaging quality of the structured light of the any position point in the group of captured images;
a second determining module 403, configured to determine a projection area from the shooting area according to the waveform saliency coefficient of each position point in the shooting area.
In a possible implementation manner, the shooting module 401 is further configured to project a set of structured light in the shooting area, where the set of structured light is reflected to obtain a set of projection gratings; and to shoot the set of projection gratings to obtain the set of shot images.
In another possible implementation manner, the first determining module 402 is further configured to determine, for any position point in the captured region, an average brightness value and an amplitude of a structured-light waveform distribution of the position point in the set of captured images; and determining the imaging quality coefficient of any position point according to the average brightness value and the amplitude of any position point, and determining the imaging quality coefficient of any position point as the waveform significance coefficient of any position point.
In another possible implementation manner, the first determining module 402 is further configured to determine, for each captured image in the set of captured images, a phase value of the any position point and a phase offset amount of a luminance distribution of the any position point in the set of captured images; determining a brightness distribution value of any position point in the shot image according to the average brightness value, the amplitude, the phase value of any position point and the phase offset; and determining the imaging quality coefficient of any position point according to the brightness distribution value of any position point in the group of shot images.
In another possible implementation manner, the first determining module 402 is further configured to determine a sum of the luminance distribution values and a square of the sum of the luminance distribution values according to the luminance distribution values of the any position point in the group of captured images; and determining the imaging quality coefficient of any position point according to the sum of the brightness distribution values and the square of the sum of the brightness distribution values, wherein the imaging quality coefficient of any position point is inversely proportional to the sum of the brightness distribution values and is directly proportional to the square of the sum of the brightness distribution values.
In another possible implementation, the imaging quality coefficient of any position point is proportional to the amplitude of the position point and inversely proportional to its average brightness value.
In another possible implementation manner, the second determining module 403 is further configured to select, according to the waveform significance coefficient of each location point in the shooting area, a plurality of location points from the shooting area whose waveform significance coefficients are greater than a coefficient threshold; and combining the plurality of position points into the projection area.
In another possible implementation manner, the apparatus further includes:
a receiving module for receiving the coefficient threshold value which is input; or,
and the third determining module is used for determining the coefficient threshold according to the waveform significance coefficient of each position point in the shooting area.
In another possible implementation manner, the third determining module is further configured to count statistical histogram data of the waveform significance coefficient according to the waveform significance coefficient of each position point in the shooting area; determining the maximum inter-class variance according to the statistical histogram data of the waveform significance coefficient; and determining the coefficient threshold according to the maximum between-class variance.
In another possible implementation manner, the third determining module is further configured to determine, according to the statistical histogram data of the waveform significance coefficients, the number of position points of each waveform significance coefficient; and to determine the maximum between-class variance by the maximum between-class variance method (Otsu's method) according to the number of position points of each waveform significance coefficient and the number of structured light patterns included in the group of structured light.
In an embodiment of the present invention, waveform saliency coefficients are used to distinguish between projected regions and non-projected regions. Because the waveform saliency coefficient is independent of the absolute brightness value of the shot image, misjudgment caused by structured light from the projection area being reflected into the non-projection area is avoided, and there is no need to find a single brightness change threshold applicable to all projection areas, so the accuracy of determining the projection area is improved.
It should be noted that: in the device for determining a projection area according to the above embodiment, when determining a projection area, only the division of the functional modules is illustrated, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for determining a projection area and the method for determining a projection area provided in the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 5 shows a block diagram of a computing device 500 provided in an exemplary embodiment of the invention. The computing device 500 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III, motion video Experts compression standard Audio Layer 3), an MP4 player (Moving Picture Experts Group Audio Layer IV, motion video Experts compression standard Audio Layer 4), a notebook computer, or a desktop computer. Computing device 500 may also be referred to by other names such as user device, portable computing device, laptop computing device, desktop computing device, and so forth.
In general, computing device 500 includes: a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the method of determining a projected area provided by method embodiments herein.
In some embodiments, the computing device 500 may also optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502 and peripheral interface 503 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 504, touch screen display 505, camera 506, audio circuitry 507, positioning components 508, and power supply 509.
The peripheral interface 503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 504 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 504 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 504 may communicate with other computing devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 504 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, the display screen 505 also has the ability to capture touch signals on or over the surface of the display screen 505. The touch signal may be input to the processor 501 as a control signal for processing. At this point, the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 505 may be one, providing the front panel of the computing device 500; in other embodiments, the display screens 505 may be at least two, each disposed on a different surface of the computing device 500 or in a folded design; in still other embodiments, the display 505 may be a flexible display disposed on a curved surface or on a folded surface of the computing device 500. Even more, the display screen 505 can be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 505 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the computing device and the rear camera is disposed on the back of the computing device. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuitry 507 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 501 for processing, or to the radio frequency circuit 504 to realize voice communication. For stereo capture or noise reduction purposes, there may be multiple microphones placed at different locations on the computing device 500. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 507 may also include a headphone jack.
The positioning component 508 is used to locate the current geographic location of the computing device 500 for navigation or LBS (Location Based Service). The positioning component 508 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 509 is used to power the various components in the computing device 500. The power supply 509 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When the power supply 509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charge technology.
In some embodiments, computing device 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: acceleration sensor 511, gyro sensor 512, pressure sensor 513, fingerprint sensor 514, optical sensor 515, and proximity sensor 516.
The acceleration sensor 511 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the computing device 500. For example, the acceleration sensor 511 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 501 may control the touch display screen 505 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 may also be used to collect motion data for games or users.
The gyro sensor 512 may detect a body direction and a rotation angle of the computing device 500, and the gyro sensor 512 may cooperate with the acceleration sensor 511 to acquire a 3D motion of the user on the computing device 500. The processor 501 may implement the following functions according to the data collected by the gyro sensor 512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 513 may be disposed on a side bezel of the computing device 500 and/or underneath the touch display screen 505. When the pressure sensor 513 is disposed on a side bezel of the computing device 500, it can detect the user's grip signal on the computing device 500, and the processor 501 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed underneath the touch display screen 505, the processor 501 controls operability controls on the UI according to the user's pressure operation on the touch display screen 505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 514 is used to collect the user's fingerprint, and the processor 501 identifies the user according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 itself identifies the user according to the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 514 may be disposed on the front, back, or side of the computing device 500. When a physical button or vendor logo is provided on the computing device 500, the fingerprint sensor 514 may be integrated with the physical button or vendor logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch display screen 505 based on the ambient light intensity collected by the optical sensor 515: when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 505 is decreased. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 based on the ambient light intensity collected by the optical sensor 515.
The proximity sensor 516, also known as a distance sensor, is typically disposed on the front panel of the computing device 500. The proximity sensor 516 is used to capture the distance between the user and the front of the computing device 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front of the computing device 500 gradually decreases, the processor 501 controls the touch display screen 505 to switch from the bright-screen state to the screen-off state; when the proximity sensor 516 detects that the distance between the user and the front of the computing device 500 gradually increases, the processor 501 controls the touch display screen 505 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the architecture shown in FIG. 5 is not intended to be limiting of computing device 500, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing a computer program, which is loaded by a processor to execute the instructions of the above method for determining a projection area.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (20)

1. A method of determining a projection region, the method comprising:
shooting a group of structured light in a shooting area to obtain a group of captured images, wherein the group of structured light comprises at least three structured lights, and the group of captured images comprises at least three captured images;
for any position point in the shooting area, determining an average brightness value of the position point across the group of captured images and an amplitude of the structured-light waveform distribution at the position point;
determining an imaging quality coefficient of the position point according to the average brightness value and the amplitude of the position point, and taking the imaging quality coefficient of the position point as the waveform significance coefficient of the position point;
and determining a projection area from the shooting area according to the waveform significance coefficient of each position point in the shooting area.
2. The method of claim 1, wherein capturing the set of structured light in the capture area to obtain a set of captured images comprises:
projecting a group of structured light onto the shooting area, wherein the group of structured light is projected to form a group of projection gratings;
and shooting the group of projection gratings to obtain the group of shot images.
3. The method according to claim 1, wherein the determining an imaging quality coefficient of any position point according to the average brightness value and the amplitude of the position point comprises:
for each captured image in the group of captured images, determining a phase value of the position point and a phase shift amount of the luminance distribution of the position point in that captured image;
determining a brightness distribution value of the position point in the captured image according to the average brightness value, the amplitude, the phase value of the position point, and the phase shift amount;
and determining the imaging quality coefficient of the position point according to the brightness distribution values of the position point in the group of captured images.
4. The method according to claim 3, wherein the determining the imaging quality coefficient of the position point according to the brightness distribution values of the position point in the group of captured images comprises:
determining a sum of the brightness distribution values and a square of the sum of the brightness distribution values according to the brightness distribution values of the position point in the group of captured images;
and determining the imaging quality coefficient of the position point according to the sum of the brightness distribution values and the square of the sum of the brightness distribution values, wherein the imaging quality coefficient of the position point is inversely proportional to the sum of the brightness distribution values and directly proportional to the square of the sum of the brightness distribution values.
5. The method according to any one of claims 1-4, wherein the imaging quality coefficient of any position point is directly proportional to the average brightness value of the position point and inversely proportional to the amplitude.
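Claims 1-5 describe, per position point, recovering an average brightness value and a waveform amplitude from at least three structured-light captures and combining them into a quality coefficient. The translated claims do not pin down the exact formula, so the sketch below is only an illustration: it assumes the standard N-step phase-shifting intensity model I_k = A + B·cos(φ + 2πk/N) with equal phase offsets, recovers A (average brightness) and B (waveform amplitude) per pixel, and reports the conventional modulation ratio B/A from phase-shifting profilometry as a stand-in for the waveform significance coefficient, not the patent's definitive combination.

```python
import numpy as np

def waveform_significance(images):
    # images: (N, H, W) array of the same scene under N structured-light
    # patterns, assumed phase-shifted by equal steps of 2*pi/N (N >= 3).
    imgs = np.asarray(images, dtype=np.float64)
    n = imgs.shape[0]
    deltas = 2.0 * np.pi * np.arange(n) / n          # assumed phase offsets
    a = imgs.mean(axis=0)                            # average brightness A
    s = np.tensordot(np.sin(deltas), imgs, axes=1)   # sum_k I_k * sin(delta_k)
    c = np.tensordot(np.cos(deltas), imgs, axes=1)   # sum_k I_k * cos(delta_k)
    b = (2.0 / n) * np.hypot(s, c)                   # waveform amplitude B
    # Illustrative stand-in coefficient: modulation ratio B / A.
    return b / np.maximum(a, 1e-12)

# Synthetic check: modulated pixels versus one flat (no-waveform) pixel.
n = 4
imgs = np.stack([100.0 + 50.0 * np.cos(2.0 * np.pi * k / n) * np.ones((2, 2))
                 for k in range(n)])
imgs[:, 1, 1] = 100.0        # flat pixel: structured light not visible here
q = waveform_significance(imgs)
```

A pixel that actually sees the projected waveform (A = 100, B = 50) gets a coefficient of 0.5, while the flat pixel collapses to roughly zero, which is the separation the thresholding in claim 6 relies on.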
6. The method according to claim 1, wherein the determining a projection area from the shooting area according to the waveform significance coefficient of each position point in the shooting area comprises:
selecting a plurality of position points with waveform significance coefficients larger than a coefficient threshold value from the shooting area according to the waveform significance coefficient of each position point in the shooting area;
and combining the plurality of position points into the projection area.
7. The method of claim 6, further comprising:
receiving the coefficient threshold as an input; or,
and determining the coefficient threshold according to the waveform significance coefficient of each position point in the shooting area.
8. The method according to claim 7, wherein the determining the coefficient threshold value according to the waveform significance coefficient of each position point in the shooting area comprises:
computing statistical histogram data of the waveform significance coefficients according to the waveform significance coefficient of each position point in the shooting area;
determining a maximum between-class variance according to the statistical histogram data of the waveform significance coefficients;
and determining the coefficient threshold according to the maximum between-class variance.
9. The method of claim 8, wherein determining a maximum between-class variance from the statistical histogram data of the waveform significance coefficients comprises:
determining the number of position points for each waveform significance coefficient according to the statistical histogram data of the waveform significance coefficients;
and determining the maximum between-class variance by the maximum between-class variance method according to the number of position points for each waveform significance coefficient and the number of structured lights included in the group of structured light.
10. An apparatus for determining a projection region, the apparatus comprising:
the shooting module is used for shooting a group of structured light in a shooting area to obtain a group of shot images, wherein the group of structured light comprises at least three pieces of structured light, and the group of shot images comprises at least three shot images;
a first determination module, configured to determine, for any position point in the captured region, an average brightness value and an amplitude of a structured-light waveform distribution of the position point in the captured group of images; determining an imaging quality coefficient of any position point according to the average brightness value and the amplitude of any position point, and determining the imaging quality coefficient of any position point as a waveform significance coefficient of any position point;
and the second determining module is used for determining a projection area from the shooting area according to the waveform significance coefficient of each position point in the shooting area.
11. The apparatus of claim 10, wherein the shooting module is further configured to project a group of structured light onto the shooting area, the group of structured light being projected to form a group of projection gratings, and to shoot the group of projection gratings to obtain the group of captured images.
12. The apparatus of claim 11, wherein the first determining module is further configured to determine, for each captured image in the set of captured images, a phase value of the any position point and a phase shift amount of a luminance distribution of the any position point in the set of captured images; determining a brightness distribution value of any position point in the shot image according to the average brightness value, the amplitude, the phase value of any position point and the phase offset; and determining the imaging quality coefficient of any position point according to the brightness distribution value of any position point in the group of shot images.
13. The apparatus according to claim 12, wherein the first determining module is further configured to determine a sum of luminance distribution values and a square of the sum of luminance distribution values according to luminance distribution values of the any position point in the group of captured images; and determining the imaging quality coefficient of any position point according to the sum of the brightness distribution values and the square of the sum of the brightness distribution values, wherein the imaging quality coefficient of any position point is inversely proportional to the sum of the brightness distribution values and is directly proportional to the square of the sum of the brightness distribution values.
14. The apparatus according to any one of claims 12-13, wherein the imaging quality factor for any one of the location points is proportional to the average brightness value of the any one of the location points and inversely proportional to the amplitude.
15. The apparatus of claim 10,
the second determining module is further configured to select, according to the waveform significance coefficient of each position point in the shooting area, a plurality of position points from the shooting area, where the waveform significance coefficient is greater than a coefficient threshold; and combining the plurality of position points into the projection area.
16. The apparatus of claim 15, further comprising:
a receiving module, configured to receive the coefficient threshold as an input; or,
and the third determining module is used for determining the coefficient threshold according to the waveform significance coefficient of each position point in the shooting area.
17. The apparatus of claim 16,
the third determining module is further configured to compute statistical histogram data of the waveform significance coefficients according to the waveform significance coefficient of each position point in the shooting area; determine a maximum between-class variance according to the statistical histogram data of the waveform significance coefficients; and determine the coefficient threshold according to the maximum between-class variance.
18. The apparatus of claim 17,
the third determining module is further configured to determine the number of position points for each waveform significance coefficient according to the statistical histogram data of the waveform significance coefficients, and to determine the maximum between-class variance by the maximum between-class variance method according to the number of position points for each waveform significance coefficient and the number of structured lights included in the group of structured light.
19. A computing device, wherein the computing device comprises:
at least one processor; and at least one memory;
the at least one memory stores one or more programs configured for execution by the at least one processor, the one or more programs including instructions for performing the method of any of claims 1-9.
20. A non-transitory computer-readable storage medium storing a computer program, the computer program being loaded by a processor to execute instructions of the method according to any one of claims 1-9.
CN201810716581.7A 2018-07-03 2018-07-03 Method and device for determining projection area Active CN110672036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810716581.7A CN110672036B (en) 2018-07-03 2018-07-03 Method and device for determining projection area


Publications (2)

Publication Number Publication Date
CN110672036A CN110672036A (en) 2020-01-10
CN110672036B (en) 2021-09-28






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 5th floor, building 1, building 2, no.700 Dongliu Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.
