CN114338998A - Point cloud data acquisition method and electronic equipment - Google Patents

Point cloud data acquisition method and electronic equipment

Info

Publication number
CN114338998A
CN114338998A
Authority
CN
China
Prior art keywords
interest
region
projection
projector
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111670496.XA
Other languages
Chinese (zh)
Inventor
郭凤阳
张洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202111670496.XA
Publication of CN114338998A

Landscapes

  • Projection Apparatus (AREA)

Abstract

The application provides a point cloud data acquisition method and an electronic device. The electronic device includes a first shooting device; the first shooting device includes a projector and a receiver, and the projector includes a projection light source, a mirror, and a deflector. The method includes: determining a region of interest and a non-region of interest in an identification region acquired by the first shooting device, wherein the region of interest contains a target object, the non-region of interest contains the background around the target object, and the region of interest and the non-region of interest constitute the identification region; determining a projection mode of the projector based on the region of interest and the non-region of interest, the projection mode characterizing that the light projection density for the region of interest is higher than that for the non-region of interest; and controlling the deflector in the projector to rotate based at least on the projection mode, so that when the light projected to the identification region is reflected back into the receiver, the receiver can obtain point cloud data corresponding to the target object based at least on the reflected light.

Description

Point cloud data acquisition method and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a point cloud data acquisition method and electronic equipment.
Background
The projection density of current 3D structured light devices is relatively limited. For example, the speckle structured light of Face ID and the explorer-edition coded structured light on the market project about 30,000 points, while the Find X projects about 15,000 points. Although this projection density is sufficient for the feature points used in face unlocking, it is not sufficient for fine scanning of small objects. To support fine scanning of small objects, dense projection light is required.
The current hardware scheme of the transmitter module uses a vertical-cavity surface-emitting laser plus a diffractive optical element (VCSEL + DOE),
such as the existing VCSEL + collimating mirror + DOE device. Here, the VCSEL is the light-emitting device, and the DOE (diffractive optical element) diffracts the light projection pattern of the VCSEL into a wider area with the field of view (FoV) as designed.
Scheme 1 for increasing the density: increasing the lattice arrangement of VCSEL light-emitting cavities.
However, the disadvantages of scheme 1 are: a. increasing the number of VCSEL lattice points directly increases the size and power consumption of the device; b. the VCSEL lattice pitch is about 10 microns, so increasing the number makes the overall manufacture difficult; c. the cost is high.
Scheme 2: carrying out a new DOE design.
However, the disadvantages of scheme 2 are: the DOE is a fine micro-scale structure, so the technical difficulty involved is great and the cost is high; moreover, the design must consider the performance problems that may be caused by center alignment and offset during module assembly of the DOE, as well as diffraction efficiency, diffraction stitching seams, and the central bright spot problem with respect to eye safety. This scheme is therefore difficult to realize.
Disclosure of Invention
The embodiments of the present application provide a simple and feasible point cloud data acquisition method for increasing projection density, and an electronic device applying the method.
In order to solve the above technical problem, an embodiment of the present application provides a point cloud data acquisition method applied to an electronic device, where the electronic device includes a first shooting device, the first shooting device includes a projector and a receiver, and the projector includes a projection light source, a mirror, and a deflector. The method includes:
determining a region of interest and a non-region of interest in an identification region acquired by the first shooting device, wherein the region of interest contains a target object, the non-region of interest contains the background around the target object, and the region of interest and the non-region of interest constitute the identification region;
determining a projection mode for the projector based on the region of interest and the non-region of interest, the projection mode characterizing a higher light projection density for the region of interest than for the non-region of interest;
and controlling the deflector in the projector to rotate based at least on the projection mode, so that when the light projected to the identification region is reflected back into the receiver, the receiver can obtain point cloud data corresponding to the target object based at least on the reflected light.
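The three determinations above can be sketched in code. The sketch below is purely illustrative and not from the patent: the `ProjectionMode` container, the helper name, and the 80/20 split of projection points between the two regions are all assumptions made to show the defining property of the projection mode.

```python
from dataclasses import dataclass

@dataclass
class ProjectionMode:
    """Hypothetical container for per-region projection parameters."""
    roi_density: float         # projection points per unit area, region of interest
    background_density: float  # projection points per unit area, non-region of interest

def determine_projection_mode(roi_area: float, background_area: float,
                              total_points: int) -> ProjectionMode:
    # Illustrative policy: give the region of interest a fixed 80% share of the
    # available projection points, so its point density exceeds the background's.
    roi_points = 0.8 * total_points
    bg_points = 0.2 * total_points
    return ProjectionMode(roi_density=roi_points / roi_area,
                          background_density=bg_points / background_area)

# A small region of interest (0.01 of some unit area) inside a larger background.
mode = determine_projection_mode(roi_area=0.01, background_area=0.09,
                                 total_points=30_000)
assert mode.roi_density > mode.background_density  # the defining property
```

The assertion at the end checks the only property the claim actually requires: whatever policy fills in the numbers, the density for the region of interest must exceed that for the non-region of interest.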
As an alternative embodiment, the electronic device further includes a second shooting device, and
determining the region of interest and the non-region of interest in the identification region includes:
acquiring an infrared image corresponding to the identification region based on the cooperation of the second shooting device and the receiver;
determining the target object based on the infrared image;
and determining the region of interest and the non-region of interest based on the infrared image and the target object.
As an alternative embodiment, the electronic device further includes a third shooting device, and
determining the region of interest and the non-region of interest in the identification region includes:
obtaining a visible light image corresponding to the identification region based on the third shooting device;
determining the target object based on the visible light image;
and determining the region of interest and the non-region of interest based on the visible light image and the target object.
As an alternative embodiment, the projection mode includes at least one of:
controlling the number of projections of the projector for the region of interest to be greater than the number of projections for the non-region of interest;
or, controlling the deflection speed of the projector for the region of interest to be smaller than the deflection speed for the non-region of interest;
or, controlling the deflection frequency of the projector for the region of interest to be smaller than the deflection frequency for the non-region of interest;
or, controlling each deflection angle of the projector for the region of interest to be smaller than the deflection angle for the non-region of interest.
As an alternative embodiment, the projection light source is a dot matrix light source, and the projection mode further includes configuring the number and/or positions of the lit sources in the dot matrix light source based at least on the region of interest.
As an alternative embodiment, controlling the deflector in the projector to rotate based at least on the projection mode includes:
determining a single exposure time of the receiver; determining the projection time for the projector to complete a single projection;
and determining, based on the single exposure time and the projection time, the number of projections and the deflection angle, deflection speed, and deflection frequency of the deflector in each projection, so that the projector can finish projecting to both the region of interest and the non-region of interest within the single exposure time.
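The relationship between the single exposure time, the single projection time, and the number of projections can be illustrated with a small calculation. The function name and the numbers below are illustrative assumptions, not values from the patent:

```python
def projections_per_exposure(exposure_time_us: int, projection_time_us: int) -> int:
    """How many single projections fit into one receiver exposure, so that the
    projector can cover both the region of interest and the non-region of
    interest within a single exposure of the receiver."""
    return exposure_time_us // projection_time_us

# Illustrative numbers: a 33,000 microsecond (33 ms) exposure and a
# 2 microsecond single projection (one deflector step per projection).
n = projections_per_exposure(33_000, 2)  # 16,500 projections fit in one frame
```

Once the total projection budget per exposure is known, it can be split between the two regions according to the chosen projection mode, with the deflection angle, speed, and frequency set so that the whole sweep completes inside the exposure.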
Another embodiment of the present application also provides an electronic device, including:
a first shooting device, including a projector and a receiver, wherein the projector includes a projection light source, a mirror, and a deflector; and
a processor, connected to the first shooting device and configured to: determine a region of interest and a non-region of interest in the identification region acquired by the first shooting device, the region of interest containing a target object; determine a projection mode of the projector based on the region of interest and the non-region of interest; and control the deflector in the projector to rotate based at least on the projection mode, so that when the light projected to the identification region is reflected back into the receiver, the receiver can obtain point cloud data corresponding to the target object based at least on the reflected light; wherein the non-region of interest contains the background surrounding the target object, the region of interest and the non-region of interest constitute the identification region, and the projection mode characterizes a higher light projection density for the region of interest than for the non-region of interest.
As an alternative embodiment, the electronic device further includes a second shooting device, and
determining the region of interest and the non-region of interest in the identification region includes:
acquiring an infrared image corresponding to the identification region based on the cooperation of the second shooting device and the receiver;
determining the target object based on the infrared image;
and determining the region of interest and the non-region of interest based on the infrared image and the target object.
As an alternative embodiment, the electronic device further includes a third shooting device, and
determining the region of interest and the non-region of interest in the identification region includes:
obtaining a visible light image corresponding to the identification region based on the third shooting device;
determining the target object based on the visible light image;
and determining the region of interest and the non-region of interest based on the visible light image and the target object.
As an alternative embodiment, the projection mode includes at least one of:
controlling the number of projections of the projector for the region of interest to be greater than the number of projections for the non-region of interest;
or, controlling the deflection speed of the projector for the region of interest to be smaller than the deflection speed for the non-region of interest;
or, controlling the deflection frequency of the projector for the region of interest to be smaller than the deflection frequency for the non-region of interest;
or, controlling each deflection angle of the projector for the region of interest to be smaller than the deflection angle for the non-region of interest.
Based on the disclosure of the above embodiments, the beneficial effects of the embodiments of the present application include that the density of the projected light can be effectively increased merely by controlling the rotation of the deflector, so that the receiver can accurately obtain the point cloud data of the target object based at least on the reflected light. The whole process needs no added complex structure; the scheme is simple, easy to implement, and low in cost.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiment(s) of the application and together with the description serve to explain the application and not limit the application. In the drawings:
Fig. 1 is an image of prior-art projection defects (including the central bright spot and stitching seams) formed after light is projected.
Fig. 2 is a flowchart of a point cloud data acquisition method in the embodiment of the present application.
Fig. 3 is a schematic view of the structure of a projector in an embodiment of the present application (1-projection light source, 2-mirror, 3-deflector).
Fig. 4 is a flowchart of a point cloud data obtaining method in another embodiment of the present application.
Fig. 5 is a diagram illustrating an application process of the point cloud data obtaining method according to another embodiment of the present application.
Fig. 6 is a block diagram of an electronic device in the embodiment of the present application.
Detailed Description
Specific embodiments of the present application will be described in detail below with reference to the accompanying drawings, but the present application is not limited thereto.
It will be understood that various modifications may be made to the embodiments disclosed herein. The following description is, therefore, not to be taken in a limiting sense, but is made merely as an exemplification of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of application, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures have not been described in detail so as not to obscure the present disclosure with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
Hereinafter, embodiments of the present application will be described in detail with reference to the accompanying drawings.
As shown in fig. 2, an embodiment of the present application provides a point cloud data acquisition method applied to an electronic device, where the electronic device includes a first shooting device, the first shooting device includes a projector and a receiver, and the projector includes a projection light source, a mirror, and a deflector. The method includes:
determining a region of interest and a non-region of interest in an identification region acquired by the first shooting device, wherein the region of interest contains a target object, the non-region of interest contains the background around the target object, and the region of interest and the non-region of interest constitute the identification region;
determining a projection mode of the projector based on the region of interest and the non-region of interest, the projection mode characterizing that the light projection density for the region of interest is higher than that for the non-region of interest;
and controlling the deflector in the projector to rotate based at least on the projection mode, so that when the light projected to the identification region is reflected back into the receiver, the receiver can obtain point cloud data corresponding to the target object based at least on the reflected light.
For example, the first shooting device in this embodiment includes a projector for projecting light toward the target object, and a receiver for receiving the reflected light returned by the target object. As shown in fig. 3, the projector includes a projection light source, a mirror for reflecting the light projected by the projection light source toward the target object, and a deflector connected to the mirror for adjusting the angle of the mirror and thereby the light projection position. The deflector is an electrically driven deflector, specifically one based on a micro-electro-mechanical system (MEMS); of course, another type of electrically driven deflector may also be used, or the deflector may be configured directly as a MEMS deflection mirror, that is, the deflector and the mirror form an integral piece. In practical application, two mirrors may be provided: one mirror is connected to the electric deflector, the other is located above the projection light source, and the projection light source is disposed horizontally with its light output vertically, so that the light is reflected by the upper mirror onto the mirror connected to the deflector and then projected to the target object, thereby reducing the installation space of the whole first shooting device. Of course, this specific arrangement is not exclusive, and other arrangements can be adopted.
When the electronic device is to acquire point cloud data, it acquires, through the first shooting device, the identification region within the device's viewing angle, i.e., the shooting region of the first shooting device, and then determines a region of interest and a non-region of interest that together constitute the identification region. The region of interest contains the main shooting object, i.e., the target object, while the non-region of interest is the part of the surrounding environment containing the target object's background. For example, if the identification region of the first shooting device is the area where a desk is located and a water cup on the desk is the target object, then the area where the cup is located is the region of interest, the desktop around the cup is the background, and the area where the background is located is the non-region of interest.
Once the device has determined the region of interest and the non-region of interest, it analyzes and determines the projection mode of the projector; when the projector projects in this mode, the projection density of light in the region of interest is higher than that in the non-region of interest. That is, the light projection points within the region of interest are dense, while those within the non-region of interest are relatively sparse. The device then controls the deflector to deflect based on the projection mode, so that high-density light is projected into the region of interest and relatively low-density light into the non-region of interest. After receiving the projected light, the target object in the region of interest and the background objects in the non-region of interest each reflect it back to the receiver. The receiver can then determine at least the positions of all light projection points from the large amount of received reflected light; for example, the projection points corresponding to the cup are dense, and their coverage at least approximately matches the cup's shape, so the receiver can clearly and accurately determine the point cloud data corresponding to the target object based on those projection points, alone or together with the projection points in the background region.
Certainly, the receiver may also determine point cloud data for the background based on the reflected light; that is, the receiver may determine point cloud data for the whole identification region. The choice is not unique, and which point cloud data to determine can be selected according to actual needs.
Based on the disclosure of the above embodiment, the beneficial effects of this embodiment include that the density of the projected light can be effectively increased merely by controlling the rotation of the deflector, without adding projection light sources, so that the receiver can accurately obtain the point cloud data of the target object based at least on the light reflected back by the target object and the background. The whole process needs no added complex structure; the scheme is simple, easy to implement, and low in cost. In addition, since this embodiment uses a mirror, i.e., specular-reflection projection, the efficiency is close to 100% and the light loss is small, so the power consumption of the projection light source can be reduced. Moreover, because no large number of projection light sources need to be added, the projected image has no central bright spot or stitching seams (compare the prior-art defects shown in fig. 1), which significantly improves the accuracy of the final point cloud data and the efficiency of forming it.
Further, the electronic device in this embodiment also includes a second shooting device, configured to cooperate with the first shooting device to capture images.
Specifically, determining the region of interest and the non-region of interest in the identification region includes:
acquiring an infrared image corresponding to the identification region based on the cooperation of the second shooting device and the receiver;
determining the target object based on the infrared image;
and determining the region of interest and the non-region of interest based on the infrared image and the target object.
For example, the second shooting device at least includes an infrared light source. It differs from the projection light source in the first shooting device in that the projection light source projects a point or line light source, whereas the second shooting device can project a surface light source, and can therefore cooperate with the receiver in the first shooting device to obtain an infrared image corresponding to the identification region. Of course, the second shooting device may also include its own receiver, so that it can generate an infrared image independently; this embodiment is illustrated only with the case where the second shooting device does not include a receiver and its infrared light source cooperates with the receiver in the first shooting device. Specifically, the second shooting device projects an infrared surface light source onto the environment where the identification region is located, and the receiver forms an infrared image of the identification region from the infrared light reflected by the target object and the background. The receiver may then transmit the infrared image to the processor of the electronic device, which determines depth from the infrared image and thereby separates the target object from the background. The device then determines, in the infrared image, the region of interest and the non-region of interest, i.e., the region where the target object is located and the region where the background is located, based on the target object or also on the background.
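As a toy illustration of the depth-based separation step, the sketch below thresholds a depth map so that near pixels form the region of interest and far pixels form the non-region of interest. The threshold value and the helper name are assumptions; a real device would derive depth from the structured-light system and use a more robust segmentation than a fixed threshold:

```python
import numpy as np

def split_regions(depth: np.ndarray, near_threshold: float):
    """Label pixels closer than the threshold as the region of interest
    (the target object) and the rest as the non-region of interest."""
    roi_mask = depth < near_threshold
    return roi_mask, ~roi_mask

# Toy depth map: a near object (0.5 m) on a far background (2.0 m).
depth = np.full((4, 4), 2.0)
depth[1:3, 1:3] = 0.5
roi, background = split_regions(depth, near_threshold=1.0)
# roi covers the 2x2 near patch; background covers the remaining 12 pixels.
```

The two boolean masks returned here correspond directly to the region of interest and the non-region of interest that the projection mode is then computed from.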
Alternatively, as shown in fig. 4, in another embodiment, the electronic device further includes a third shooting device, and
determining the region of interest and the non-region of interest in the identification region includes:
obtaining a visible light image corresponding to the identification region based on the third shooting device;
determining the target object based on the visible light image;
and determining the region of interest and the non-region of interest based on the visible light image and the target object.
For example, the third shooting device in this embodiment is a camera for capturing a visible light image under visible light. Specifically, the device may independently capture a visible light image of the identification region with the third shooting device and then perform image recognition on it to determine the target object. The device then determines the region of interest and the non-region of interest in the visible light image based on the target object, i.e., the region where the target object is located and the region where the background is located.
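As a toy stand-in for the image-recognition step, the sketch below locates a dark object on a bright background and returns its bounding box. The brightness threshold and function name are illustrative assumptions, since the patent does not specify a particular recognition algorithm:

```python
import numpy as np

def bounding_box_of_object(image: np.ndarray, background_level: int = 200):
    """Treat pixels darker than a bright background as the target object and
    return its bounding box (row_min, row_max, col_min, col_max)."""
    rows, cols = np.where(image < background_level)
    return rows.min(), rows.max(), cols.min(), cols.max()

# A dark 2x3 "cup" on a bright 6x6 "desktop".
img = np.full((6, 6), 255, dtype=np.uint8)
img[2:4, 1:4] = 30
box = bounding_box_of_object(img)  # bounding box of the dark patch
```

The bounding box (or, in practice, a segmentation mask from a proper detector) then delimits the region of interest, and everything outside it becomes the non-region of interest.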
Alternatively, in another embodiment, as shown in fig. 5, the electronic device may first have the first shooting device project to the identification region in a default projection mode, for example, a mode in which the projection density over the whole identification region is a fixed value. After obtaining the reflected light, the receiver can form an image and send it to the processor in the device, which performs recognition to determine the region of interest and the non-region of interest.
Further, the projection mode in this embodiment includes at least one of:
controlling the number of projections of the projector for the region of interest to be greater than the number of projections for the non-region of interest;
or, controlling the deflection speed of the projector for the region of interest to be greater than the deflection speed for the non-region of interest;
or, controlling the deflection frequency of the projector for the region of interest to be smaller than the deflection frequency for the non-region of interest;
or, controlling each deflection angle of the projector for the region of interest to be smaller than the deflection angle for the non-region of interest.
The projection modes in this embodiment all serve to make the light projection density in the region of interest greater than that in the non-region of interest. To achieve this effect, the number of projections by the projector may be controlled so that the number of projections for the region of interest is larger than that for the non-region of interest.
Or, the deflector may be controlled so that, when the projector projects light to the region of interest, the deflection angle of the deflector is smaller than that for the non-region of interest, so that the spacing between projection points is smaller and the points are denser.
Or, the deflector may be controlled so that its deflection speed when the projector projects light to the region of interest is greater than that for the non-region of interest, or its deflection frequency is greater than that for the non-region of interest, so that the projector projects light to the region of interest faster, more often, and at shorter intervals; for example, within the time of one projection to the non-region of interest, the projector can project to the region of interest several times, increasing the density of projection points. In practical applications, the device may invoke one projection mode according to the actual situation, or invoke several projection modes to cooperate simultaneously, for example, the modes concerning the projection frequency, projection speed, and deflection angle of the deflector; the deflector's rotation is then controlled cooperatively to set the projection spacing, speed, and frequency (including the number of projections) on the target object and on the background, thereby making the projection density on the target object (i.e., the region of interest) greater than that on the background (i.e., the non-region of interest).
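The smaller-deflection-angle mode can be illustrated numerically: spreading more projection points across a narrower angular width yields a smaller angular step per deflection, and hence a denser pattern. The function and the numbers below are illustrative assumptions, not values from the patent:

```python
def deflection_step_deg(region_width_deg: float, points_across: int) -> float:
    """Angular step the deflector makes between adjacent projection points.
    More points across the same angular width means a smaller step per
    deflection, i.e., the patent's smaller-deflection-angle mode."""
    return region_width_deg / points_across

# Region of interest: 200 points across a 10-degree span (small steps, dense).
roi_step = deflection_step_deg(region_width_deg=10.0, points_across=200)
# Non-region of interest: 100 points across a 30-degree span (large steps, sparse).
bg_step = deflection_step_deg(region_width_deg=30.0, points_across=100)
```

Here `roi_step` comes out six times smaller than `bg_step`, which is exactly the relation the mode requires: a smaller per-step deflection angle over the region of interest than over the non-region of interest.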
Optionally, in order to further improve projection efficiency, reduce energy consumption, and avoid unnecessary energy loss, the projection light source in this embodiment is a dot matrix light source. The dot matrix light source may be an infrared light source; in application, a vertical-cavity surface-emitting laser (VCSEL) may be used. The projection mode further includes configuring the number and/or positions of the lit sources in the dot matrix light source based at least on the region of interest.
For example, taking the identification area including the desktop and the cup, and taking the projection light source as the dot matrix light source as an example, the device may determine the number of light sources to be projected based on one or more of the parameters of the shape, size, point cloud accuracy of the target object, and the like, for example, the number of light sources required may be consistent or inconsistent between a large straight cup and a small straight cup, for example, the number of light sources for the large cup is large, and the number of light sources for the small cup is relatively reduced. Or according to different cup shapes, such as a hollow area comprising a carving or other decorative structures, the number of the light sources can be adjusted according to actual conditions, so that the deflector and the reflector can project light to a target position without rotating by a large angle, and the light does not exceed the outer contour of a target object. In addition, the position of the light source to be lightened can be determined so as to assist in limiting the projection position and the projection efficiency of the light. For example, since the cup is filled with a plurality of quadrangular three-dimensional reliefs having the same shape, in order to increase the projection efficiency, the positions of the lighted light sources may be arranged based on the shape and size of the quadrangular three-dimensional reliefs, so that the projected light exactly covers the quadrangular three-dimensional reliefs. Alternatively, the position of the light source to be turned on may be set according to the projection position or the position of the device relative to the projection position, so that the projected light can be projected to the target position without rotating the deflector by an excessive angle. 
Of course, in practical applications the projection mode may include both the number and the positions of the emitters to light; it is not limited to containing only the emitter-count configuration or only the emitter-position configuration. The projection mode may also be defined by the user, generated in real time by analyzing the image of the identification area captured by the device, or drawn from preset configuration templates adapted to different target objects, for example different templates for different structural and size features. The device may directly use a matched template, or further adjust the selected template into a final projection mode suited to the current scene; this is not specifically limited here.
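As an illustration of the emitter-selection idea above, the following sketch (not from the patent; the grid size, the linear emitter-to-pixel mapping, and all names are assumptions) lights only those dot-matrix emitters whose projected dot would land inside the region of interest:

```python
# Illustrative sketch: choose which emitters of a dot-matrix (VCSEL) source
# to light so the projected dots cover only the region of interest.
# A simple linear mapping from emitter index to image position is assumed.

def select_lit_sources(roi_bbox, grid_shape, frame_size):
    """Return (row, col) indices of emitters whose projected dot falls
    inside the region-of-interest bounding box.

    roi_bbox:   (x0, y0, x1, y1) in image pixels
    grid_shape: (rows, cols) of the emitter array
    frame_size: (width, height) of the identification area in pixels
    """
    x0, y0, x1, y1 = roi_bbox
    rows, cols = grid_shape
    width, height = frame_size
    lit = []
    for r in range(rows):
        for c in range(cols):
            # Center of the dot projected by emitter (r, c).
            px = (c + 0.5) * width / cols
            py = (r + 0.5) * height / rows
            if x0 <= px <= x1 and y0 <= py <= y1:
                lit.append((r, c))
    return lit

# A small ROI lights fewer emitters than a large one, mirroring the
# "large cup vs. small cup" discussion above.
few = select_lit_sources((10, 10, 30, 30), (8, 8), (80, 80))
many = select_lit_sources((0, 0, 70, 70), (8, 8), (80, 80))
assert len(few) < len(many)
```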
Further, the projection light source in this embodiment may include a plurality of point light sources, a linear light source, or only a single point or linear light source. During projection, all emitters may be lit, a subset (including a single emitter) may be configured as described above, or a specific emitter may be lit. Lighting multiple emitters enlarges the projection area; with only one emitter, the required coverage can still be achieved by increasing the number of projections (the deflector deflects once per projection). The deflector in this embodiment is a MEMS device that supports a deflection rate of up to 55 MHz, so even with a single emitter it can project 500,000 times in a very short interval to form 500,000 projected points, and the receiver can readily obtain fine point cloud data of the target object from that point density.
Specifically, when the deflector in the projector is controlled to rotate based on at least the projection mode, the method comprises the following steps:
determining a single exposure time of the receiver; determining the projection time for the projector to complete a single projection;
and determining, based on the single exposure time and the projection time, the number of projections together with the deflection angle, deflection speed, and deflection frequency of the deflector for each projection, so that the projector completes the projection of the region of interest and the non-interest region within the single exposure time.
For example, with a camera as the receiver, when the device controls the deflector it determines the single exposure time of the receiver and, at the same time, the time the projector needs to complete a single projection. The projection time includes the time to light the emitter (i.e., the time for the light source to emit in response to a command), the time for the deflector to complete one deflection to the target angle, and the time for the reflector to direct the light to the target position. With the single exposure time and single projection time known, the device determines the actual number of projections and one or more of the deflection angle, speed, and frequency of the deflector for each projection, based on these two times and, as needed, on the required point cloud accuracy and the object the generated point cloud must finally represent (the target object alone, or the target object together with the background). This ensures the projector completes the projection of the whole identification area within a single exposure of the receiver.
Specifically, assume the receiver runs at a frame rate of 60 fps with an exposure time of 10 ms. With the MEMS deflector used in this embodiment, the deflection frequency can reach 50 MHz, and a single projection can be guaranteed to take 20 ns. The projector can therefore complete 10 ms / 20 ns = 500,000 projections within a single exposure of the receiver; that is, the number of projections can reach 500,000 per exposure.
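The arithmetic above can be checked directly; a minimal sketch using the figures quoted in this paragraph (10 ms exposure, 20 ns per projection):

```python
# Worked numbers from the passage above: with a 10 ms receiver exposure and a
# 20 ns single-projection time, the projector fits 500,000 projections into
# one exposure.

exposure_s = 10e-3      # single exposure time of the receiver (10 ms)
projection_s = 20e-9    # time for the projector to complete one projection (20 ns)

# round() guards against floating-point error in the division.
projections_per_exposure = round(exposure_s / projection_s)
assert projections_per_exposure == 500_000
```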
Referring to fig. 5, the identification area shown there is a rectangular area in which the region where a portrait is located is the region of interest and the background behind the portrait is the non-interest region. In actual projection, one way is to control the rotation of the deflector so that the projector projects successively from a top corner of the identification area along the horizontal or vertical direction to form a row or column of projection points, then adjust the deflector so that projection continues from the start of the next row or column, until the whole identification area is covered. Alternatively, the device may control the projector to project the region of interest first and the non-interest region afterwards, or vice versa.
Further, when projecting in the first manner, the device determines the coordinate set of the portrait (the target object / region of interest) before projection, so the portrait's position within the identification area is known. It then builds a deflection strategy from this information combined with the projection mode described above, and finally controls the deflector according to that strategy. The deflection strategy includes each projection position of the projector and the deflector's deflection angle at each position; it also includes the deflection angle, deflection frequency, and deflection speed of the deflector for each projection aimed at the portrait area, for example when projecting over the interval of the current row or column that contains the portrait (an interval that can be determined by comparing the coordinate set of the current row or column with that of the portrait area). Likewise, it includes the deflection angle, deflection frequency, and deflection speed for each projection aimed at the background, i.e., over the interval of the current row or column that contains the background. All of these parameters (number of projections, deflection angle, deflection frequency, and deflection speed) are determined in combination with the projection mode.
When projecting toward the portrait area, the deflection angle, deflection frequency, and deflection speed of the deflector may follow a fixed first value set; likewise, when projecting toward the background area they may follow a fixed second value set different from the first. Of course, the first value set may itself vary: for example, the values used over the face region may differ from those used over the body or hair. That is, the region of interest may be further subdivided to obtain a high-interest region whose projection density is greater than that of the rest of the region of interest.
When controlling the deflector according to the deflection strategy described above, the device may monitor the current projection position and switch the deflector between operating on the second value set and operating on the first value set accordingly, until the whole projection is complete. Alternatively, the device may determine in advance, for each projection row or column, which value set the deflector should use, how long (or for how many deflections) it should run before switching to the other set, and how long (or how many deflections) before moving on to the next row or column; the deflector then runs directly on this strategy without the device monitoring the projection position.
When projecting in the second manner, the device may predetermine the relevant control parameters, such as the first value set for projection toward the portrait area and the second value set for projection toward the background area. It can then set the corresponding projection trajectories from the coordinate sets of the portrait area and the background area, combine the trajectories with the control parameters into a deflection strategy, and finally drive the deflector by that strategy to complete all projections of the identification area.
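The row-by-row strategy with two value sets can be sketched as follows; all class names, numeric values, and the reduction of a "value set" to an angle step plus a frequency are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch of the row-by-row deflection strategy: the deflector
# uses a "first value set" (small angle step -> dense dots) inside the
# portrait interval of each row and a "second value set" (large angle step
# -> sparse dots) over the background interval.

from dataclasses import dataclass

@dataclass
class ValueSet:
    angle_step: float   # deflection angle advanced per projection (degrees)
    frequency: float    # deflection frequency (Hz)

def build_row_schedule(row_width, roi_interval, roi_set, bg_set):
    """Return (position, value_set) pairs for one scan row.

    roi_interval: (start, end) positions of the region of interest in the row.
    """
    start, end = roi_interval
    schedule = []
    pos = 0.0
    while pos < row_width:
        vs = roi_set if start <= pos < end else bg_set
        schedule.append((pos, vs))
        pos += vs.angle_step   # dense steps inside ROI, coarse steps outside
    return schedule

roi_set = ValueSet(angle_step=0.5, frequency=50e6)  # dense: small steps
bg_set = ValueSet(angle_step=2.0, frequency=50e6)   # sparse: large steps

schedule = build_row_schedule(20.0, (8.0, 12.0), roi_set, bg_set)
roi_points = [p for p, vs in schedule if vs is roi_set]
bg_points = [p for p, vs in schedule if vs is bg_set]
# The region of interest receives more projections per unit length.
assert len(roi_points) / 4.0 > len(bg_points) / 16.0
```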
After the receiver receives the reflected light corresponding to the projected light, the reflections can be converted into point cloud data for the whole identification area, or into point cloud data for the portrait area only.
As shown in fig. 6, another embodiment of the present application also provides an electronic device, including:
the first shooting device comprises a projector and a receiver, wherein the projector comprises a projection light source, a reflector and a deflector; and
the processor is connected with the first shooting device and used for determining a concerned area and a non-concerned area in the identification area acquired by the first shooting device, and the concerned area is provided with a target object; determining a projection mode of the projector based on the region of interest and the region of no interest; controlling a deflector in a projector to rotate at least based on a projection mode, so that when the light projected to the identification area is reflected into a receiver by a reflector, the receiver can obtain point cloud data corresponding to a target object at least based on the reflected light; the non-attention area has a background around the target object, the attention area and the non-attention area form an identification area, and the projection mode represents that the light projection density of the attention area is higher than that of the non-attention area.
The electronic device in this embodiment may be, for example, a mobile phone, a notebook computer, a tablet computer, or a camera. It can quickly and accurately obtain point cloud data of the target object and assist the device in completing functions such as 3D imaging, simply and rapidly.
As an alternative embodiment, the electronic device further comprises a second camera,
determining a region of interest and a region of no interest in the identified region, comprising:
acquiring an infrared image corresponding to the identification area through cooperation of the second shooting device and the receiver;
determining a target object based on the infrared image;
and determining a region of interest and a non-region of interest based on the infrared image and the target object.
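The steps above (acquire an infrared image, find the target, split the regions) can be sketched as a simple intensity threshold; the patent does not specify a segmentation algorithm, so the thresholding approach and all names below are assumptions for illustration:

```python
# Illustrative sketch: split an identification area into a region of interest
# and a non-interest region from an infrared image. Pixels whose intensity
# exceeds a threshold are treated as the target object, and their bounding
# box becomes the region of interest; everything else is non-interest.

def split_regions(ir_image, threshold):
    """ir_image: 2-D list of intensities. Returns the bounding box
    (r0, c0, r1, c1) of above-threshold pixels, or None if none found."""
    hot = [(r, c)
           for r, row in enumerate(ir_image)
           for c, v in enumerate(row)
           if v > threshold]
    if not hot:
        return None
    rows = [r for r, _ in hot]
    cols = [c for _, c in hot]
    return (min(rows), min(cols), max(rows), max(cols))

# A bright 2x2 patch in a cool background yields that patch as the ROI.
ir = [
    [10, 10, 10, 10],
    [10, 90, 95, 10],
    [10, 88, 92, 10],
    [10, 10, 10, 10],
]
assert split_regions(ir, 50) == (1, 1, 2, 2)
```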
As an alternative embodiment, the electronic device further comprises a third photographing apparatus,
determining a region of interest and a region of no interest in the identified region, comprising:
obtaining a visible light image corresponding to the identification area based on the third shooting device;
determining a target object based on the visible light image;
and determining the attention area and the non-attention area based on the visible light image and the target object.
As an alternative embodiment, the projection mode includes at least one of:
controlling the number of projections of the projector for the region of interest to be greater than the number of projections for the non-interest region;
or, controlling the deflection speed of the projector for the region of interest to be greater than the deflection speed for the non-interest region;
or, controlling the deflection frequency of the projector for the region of interest to be greater than the deflection frequency for the non-interest region;
or, controlling the deflection angle of the projector for each projection toward the region of interest to be smaller than the deflection angle for the non-interest region.
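The four alternatives above amount to relations between the deflector parameters chosen for the two regions. A hedged sketch of such a check (field names and example values are assumed, not taken from the patent):

```python
# Illustrative check that a projection mode satisfies at least one of the
# four claimed relations between region-of-interest and non-interest
# parameters.

from dataclasses import dataclass

@dataclass
class RegionParams:
    projections: int        # number of projections for the region
    deflect_speed: float    # deflector speed used over the region
    deflect_freq: float     # deflector frequency used over the region
    deflect_angle: float    # per-projection deflection angle over the region

def is_valid_projection_mode(roi, bg):
    """True if at least one of the four claimed relations holds."""
    return (roi.projections > bg.projections
            or roi.deflect_speed > bg.deflect_speed
            or roi.deflect_freq > bg.deflect_freq
            or roi.deflect_angle < bg.deflect_angle)

roi = RegionParams(projections=500, deflect_speed=1.0,
                   deflect_freq=50e6, deflect_angle=0.5)
bg = RegionParams(projections=100, deflect_speed=1.0,
                  deflect_freq=50e6, deflect_angle=2.0)
assert is_valid_projection_mode(roi, bg)
```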
As an alternative embodiment, the projection light source is a dot matrix light source, and the projection mode further includes configuring the number and/or positions of the illuminated light sources in the dot matrix light source based on at least the region of interest.
As an alternative embodiment, controlling the rotation of the deflector in the projector based on at least the projection mode comprises:
determining a single exposure time of the receiver; determining the projection time for the projector to complete a single projection;
and determining, based on the single exposure time and the projection time, the number of projections together with the deflection angle, deflection speed, and deflection frequency of the deflector for each projection, so that the projector completes the projection of the region of interest and the non-interest region within the single exposure time.
An embodiment of the present application also provides a storage medium, on which a computer program is stored, which when executed by a processor of a terminal device, implements the method as described above. It should be understood that each solution in this embodiment has a corresponding technical effect in the foregoing method embodiments, and details are not described here.
Embodiments of the present application also provide a computer program product tangibly stored on a computer-readable medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform a processing method such as the embodiments described above. It should be understood that each solution in this embodiment has a corresponding technical effect in the foregoing method embodiments, and details are not described here.
It should be noted that the computer storage medium of the present application may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. The computer readable medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a propagated data signal with computer readable program code embodied therein, for example in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium, other than a computer readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, or any suitable combination of the foregoing.
It should be understood that although the present application is described in terms of various embodiments, not every embodiment contains only a single independent solution; the description is organized this way only for clarity, and those skilled in the art will recognize that the embodiments described herein may be combined as appropriate to form other embodiments.
The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present application and such modifications and equivalents should also be considered to be within the scope of the present application.

Claims (10)

1. A point cloud data acquisition method is applied to electronic equipment, the electronic equipment comprises a first shooting device, the first shooting device comprises a projector and a receiver, the projector comprises a projection light source, a reflecting mirror and a deflector, and the method comprises the following steps:
determining a region of interest and a region of no interest in an identification region acquired by the first photographing device, wherein the region of interest has a target object therein, the region of no interest has a background around the target object, and the region of interest and the region of no interest constitute the identification region;
determining a projection pattern for the projector based on the region of interest and a region of no interest, the projection pattern characterizing a higher ray casting density for the region of interest than for the region of no interest;
and controlling a deflector in the projector to rotate at least based on the projection mode, so that when the light projected to the identification area is reflected by the reflector into the receiver, the receiver can obtain point cloud data corresponding to the target object at least based on the reflected light.
2. The method of claim 1, wherein the electronic device further comprises a second camera,
the determining regions of interest and regions of non-interest in the identified region includes:
acquiring an infrared image corresponding to the identification range based on the cooperation of the second shooting device and the receiver;
determining the target object based on the infrared image;
determining the region of interest and a non-region of interest based on the infrared image and the target object.
3. The method of claim 1, wherein the electronic device further comprises a third camera,
the determining regions of interest and regions of non-interest in the identified region includes:
obtaining a visible light image corresponding to the identification area based on the third shooting device;
determining the target object based on the visible light image;
determining the region of interest and a non-region of interest based on the visible light image and the target object.
4. The method of claim 2 or 3, wherein the projection pattern comprises at least one of:
controlling the projection times of a projector to be greater than the projection times of a non-attention area for the attention area;
or, controlling the deflection speed of the projector to be greater than the deflection speed of the non-attention area for the attention area;
or, controlling the deflection frequency of the projector for the region of interest to be greater than the deflection frequency of the region of no interest;
or, controlling the deflection angle of the projector for the focus area each time to be smaller than the deflection angle of the non-focus area.
5. The method of claim 4, wherein the projection light sources are lattice light sources, the projection pattern further comprising configuring a number and/or locations of illuminated ones of the lattice light sources based at least on the region of interest.
6. The method of claim 5, wherein the controlling rotation of a deflector in the projector based at least on the projection pattern comprises:
determining a single exposure time of the receiver; determining a projection time for the projector to complete a single projection;
determining the projection times based on the single exposure time and the projection time, and the deflection angle, the deflection speed and the deflection frequency of the deflector in each projection, so that the projector can finish the projection of the attention area and the non-attention area in the single exposure time.
7. An electronic device, comprising:
the first shooting device comprises a projector and a receiver, wherein the projector comprises a projection light source, a reflector and a deflector; and
the processor is connected with the first shooting device and used for determining a region of interest and a region of no interest in the identification region acquired by the first shooting device, and the region of interest has a target object; determining a projection mode of the projector based on the regions of interest and regions of no interest; controlling a deflector in the projector to rotate at least based on the projection mode, so that when the light projected to the identification area is reflected by the reflector into the receiver, the receiver can obtain point cloud data corresponding to the target object at least based on the reflected light; wherein the non-region of interest has a background surrounding the target object, the region of interest and the non-region of interest constitute the identified region, and the projection pattern characterizes a higher ray casting density for the region of interest than for the non-region of interest.
8. The electronic device of claim 7, wherein the electronic device further comprises a second camera,
the determining regions of interest and regions of non-interest in the identified region includes:
acquiring an infrared image corresponding to the identification range based on the cooperation of the second shooting device and the receiver;
determining the target object based on the infrared image;
determining the region of interest and a non-region of interest based on the infrared image and the target object.
9. The electronic device according to claim 7, wherein the electronic device further comprises a third photographing apparatus,
the determining regions of interest and regions of non-interest in the identified region includes:
obtaining a visible light image corresponding to the identification area based on the third shooting device;
determining the target object based on the visible light image;
determining the region of interest and a non-region of interest based on the visible light image and the target object.
10. The electronic device of claim 8 or 9, wherein the projection pattern comprises at least one of:
controlling the projection times of a projector to be greater than the projection times of a non-attention area for the attention area;
or, controlling the deflection speed of the projector to be greater than the deflection speed of the non-attention area for the attention area;
or, controlling the deflection frequency of the projector for the region of interest to be greater than the deflection frequency of the region of no interest;
or, controlling the deflection angle of the projector for the focus area each time to be smaller than the deflection angle of the non-focus area.
CN202111670496.XA 2021-12-31 2021-12-31 Point cloud data acquisition method and electronic equipment Pending CN114338998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111670496.XA CN114338998A (en) 2021-12-31 2021-12-31 Point cloud data acquisition method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111670496.XA CN114338998A (en) 2021-12-31 2021-12-31 Point cloud data acquisition method and electronic equipment

Publications (1)

Publication Number Publication Date
CN114338998A true CN114338998A (en) 2022-04-12

Family

ID=81020406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111670496.XA Pending CN114338998A (en) 2021-12-31 2021-12-31 Point cloud data acquisition method and electronic equipment

Country Status (1)

Country Link
CN (1) CN114338998A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074044A (en) * 2011-01-27 2011-05-25 深圳泰山在线科技有限公司 System and method for reconstructing surface of object
CN108471525A (en) * 2018-03-27 2018-08-31 百度在线网络技术(北京)有限公司 Control method and device for projecting apparatus
CN109997057A (en) * 2016-09-20 2019-07-09 创新科技有限公司 Laser radar system and method
CN111982022A (en) * 2020-08-19 2020-11-24 长春理工大学 Spatial structure detection method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074044A (en) * 2011-01-27 2011-05-25 深圳泰山在线科技有限公司 System and method for reconstructing surface of object
CN109997057A (en) * 2016-09-20 2019-07-09 创新科技有限公司 Laser radar system and method
CN111796255A (en) * 2016-09-20 2020-10-20 创新科技有限公司 Laser radar system, method for detecting object by using laser radar system and vehicle
CN108471525A (en) * 2018-03-27 2018-08-31 百度在线网络技术(北京)有限公司 Control method and device for projecting apparatus
CN111982022A (en) * 2020-08-19 2020-11-24 长春理工大学 Spatial structure detection method and system

Similar Documents

Publication Publication Date Title
US10055855B2 (en) Time-of-flight camera system and method to improve measurement quality of weak field-of-view signal regions
US9979953B1 (en) Reflector-based depth mapping of a scene
US11675056B2 (en) Illumination for zoned time-of-flight imaging
US11255972B2 (en) Actuated optical element for light beam scanning device
US20210311171A1 (en) Improved 3d sensing
KR102595996B1 (en) Mixed-mode depth detection
CN106574761B (en) For controlling the method and system of the luminescent system based on laser
US9401144B1 (en) Voice gestures
US9472005B1 (en) Projection and camera system for augmented reality environment
US10514256B1 (en) Single source multi camera vision system
US9557630B1 (en) Projection system with refractive beam steering
US10007994B2 (en) Stereodepth camera using VCSEL projector with controlled projection lens
US9430187B2 (en) Remote control of projection and camera system
US9268520B1 (en) Altering content projection
CN111083453B (en) Projection device, method and computer readable storage medium
TW201632945A (en) Adjustable focal plane optical system
US9281727B1 (en) User device-based control of system functionality
CN108471525B (en) Control method and device for projector and projector for implementing method
WO2020192503A1 (en) Method for determining object depth information, electronic device, and circuit system
CN112433382B (en) Speckle projection device and method, electronic equipment and distance measurement system
CN114338998A (en) Point cloud data acquisition method and electronic equipment
CN112424673B (en) Infrared projector, imaging device and terminal device
US20190364226A1 (en) Dot projector and method for capturing image using dot projector
CN113589316A (en) N-line laser radar scanning system and method
CN112987022A (en) Distance measurement method and device, computer readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220412