CN114677454B - Image generation method and device - Google Patents

Image generation method and device

Info

Publication number
CN114677454B
CN114677454B (application CN202210305072.1A)
Authority
CN
China
Prior art keywords
preset; projection; image; dimensional point; dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210305072.1A
Other languages
Chinese (zh)
Other versions
CN114677454A (en)
Inventor
杜林鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ruiying Technology Co ltd
Original Assignee
Hangzhou Ruiying Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ruiying Technology Co ltd filed Critical Hangzhou Ruiying Technology Co ltd
Priority to CN202210305072.1A priority Critical patent/CN114677454B/en
Publication of CN114677454A publication Critical patent/CN114677454A/en
Application granted granted Critical
Publication of CN114677454B publication Critical patent/CN114677454B/en
Priority to PCT/CN2022/127355 priority patent/WO2023179011A1/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/001 - Texturing; Colouring; Generation of texture or colour
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/06 - Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T3/067 - Reshaping or unfolding 3D tree structures onto 2D planes

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the application provide an image generation method and device in the technical field of image processing. The method comprises the following steps: acquiring a three-dimensional point cloud picture; determining statistical characteristics corresponding to the amplitudes of the three-dimensional points in the neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture, and determining the image characteristics of the preset three-dimensional point according to the statistical characteristics, wherein the amplitude of a three-dimensional point represents the electromagnetic scattering property at the position corresponding to that point, and the neighborhood of the preset three-dimensional point is a preset point cloud area comprising the preset three-dimensional point; determining, according to the image characteristics of the preset three-dimensional points in a preset straight line direction perpendicular to a preset projection plane, the projection characteristics of those points on the preset projection plane; and determining a two-dimensional projection image of the three-dimensional point cloud picture on the preset projection plane according to the projection characteristics on the preset projection plane. In this way, a two-dimensional image corresponding to the three-dimensional point cloud picture can be generated, the generated two-dimensional image can adapt to different application scenes, and the application effect of the image can be improved.

Description

Image generation method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image generation method and apparatus.
Background
In the related art, information of a plurality of points in space, including coordinates and corresponding amplitudes, can be obtained through three-dimensional scanning equipment such as a laser radar, a millimeter wave radar, a microwave radar and the like, so as to obtain a three-dimensional point cloud picture. The amplitude of each three-dimensional point in the three-dimensional point cloud picture is determined according to the electromagnetic scattering property of the corresponding position of the three-dimensional point in space.
However, a three-dimensional point cloud picture contains a large amount of data, so directly performing business processing on it (for example, target detection, target identification, and the like) is computationally complex and costly. Therefore, a method is needed to generate a two-dimensional image corresponding to a three-dimensional point cloud picture, so that business processing can be performed on the two-dimensional image instead.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image generation method and an image generation device, which can generate a two-dimensional image corresponding to a three-dimensional point cloud image, and the generated two-dimensional image can adapt to different application scenes. The specific technical scheme is as follows:
in a first aspect, to achieve the above object, an embodiment of the present application discloses an image generation method, including:
acquiring a three-dimensional point cloud picture;
determining statistical characteristics corresponding to the amplitude values of three-dimensional points in the neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture, and determining the image characteristics of the preset three-dimensional point according to the statistical characteristics; the amplitude of the three-dimensional point is used for representing the electromagnetic scattering characteristic of a position corresponding to the three-dimensional point, and the neighborhood of the preset three-dimensional point refers to a preset point cloud area comprising the preset three-dimensional point;
determining the projection characteristics of the preset three-dimensional points in the preset linear direction on a preset projection plane according to the image characteristics of the preset three-dimensional points in the preset linear direction; the preset linear direction is perpendicular to the preset projection plane;
and determining a two-dimensional projection drawing of the three-dimensional point cloud drawing on the preset projection plane according to the projection characteristics on the preset projection plane.
In order to achieve the above object, an embodiment of the present application discloses an image generating apparatus, including:
the three-dimensional point cloud picture acquisition module is used for acquiring a three-dimensional point cloud picture;
the image feature acquisition module is used for determining statistical features corresponding to the amplitude values of three-dimensional points in the neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture and determining the image features of the preset three-dimensional point according to the statistical features; the amplitude of the three-dimensional point is used for representing the electromagnetic scattering property of the corresponding position of the three-dimensional point, and the neighborhood of the preset three-dimensional point refers to a preset point cloud area comprising the preset three-dimensional point;
the projection characteristic acquisition module is used for determining the projection characteristics of the preset three-dimensional points in the preset linear direction on a preset projection plane according to the image characteristics of the preset three-dimensional points in the preset linear direction; the preset linear direction is vertical to the preset projection plane;
and the two-dimensional projection image acquisition module is used for determining a two-dimensional projection image of the three-dimensional point cloud image on the preset projection plane according to the projection characteristics on the preset projection plane.
In another aspect of this application, in order to achieve the above object, an embodiment of this application further discloses an electronic device, where the electronic device includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the image generation method according to any one of the above aspects when executing the program stored in the memory.
In yet another aspect of the embodiments of this application, there is also provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements any of the image generation methods described above.
Embodiments of the present application further provide a computer program product containing instructions, which when run on a computer, cause the computer to perform any of the image generation methods described above.
The embodiment of the application has the following beneficial effects:
the image generation method provided by the embodiment of the application acquires a three-dimensional point cloud picture; determining statistical characteristics corresponding to the amplitude values of three-dimensional points in the neighborhood of the preset three-dimensional points in the three-dimensional point cloud picture, and determining the image characteristics of the preset three-dimensional points according to the statistical characteristics; the amplitude of the three-dimensional point is used for representing the electromagnetic scattering characteristic of the corresponding position of the three-dimensional point, and the neighborhood of the preset three-dimensional point refers to a preset point cloud area comprising the preset three-dimensional point; determining the projection characteristics of the preset three-dimensional points in the preset linear direction on a preset projection plane according to the image characteristics of the preset three-dimensional points in the preset linear direction; the preset linear direction is vertical to the preset projection plane; and determining a two-dimensional projection drawing of the three-dimensional point cloud drawing on the preset projection plane according to the projection characteristics on the preset projection plane.
Based on the processing, the statistical characteristics corresponding to the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional points in the three-dimensional point cloud picture can be determined, the image characteristics of the preset three-dimensional points are determined according to the determined statistical characteristics, and the two-dimensional projection picture of the three-dimensional point cloud picture on the preset projection plane is determined according to the image characteristics of the preset three-dimensional points in the preset straight line direction, namely the two-dimensional image corresponding to the three-dimensional point cloud picture can be generated. Moreover, different statistical characteristics can reflect different image characteristics, correspondingly, different statistical characteristics can be adopted based on actual requirements for different application scenes, and the generated two-dimensional image can reflect corresponding image characteristics to adapt to the current application scene, that is, the quality of the two-dimensional image obtained by projection can be improved, and further the application effect of the image can be improved.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an image generation method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a three-dimensional space provided by an embodiment of the present application;
FIG. 3 is a schematic view of another three-dimensional space provided by embodiments of the present application;
FIG. 4 is a flowchart of another image generation method provided in an embodiment of the present application;
FIG. 5 is a flow chart of another image generation method provided by the embodiments of the present application;
FIG. 6 is a flow chart of another image generation method provided by the embodiments of the present application;
FIG. 7a is a first projection sub-image provided in an embodiment of the present application;
FIG. 7b is an enlarged view of a partial image area of the first projection sub-image shown in FIG. 7a;
FIG. 8a is a second projection sub-image provided in an embodiment of the present application;
FIG. 8b is an enlarged view of a partial image area of the second projection sub-image shown in FIG. 8a;
FIG. 9a is a third projection sub-image provided in an embodiment of the present application;
FIG. 9b is an enlarged view of a partial image area of the third projection sub-image shown in FIG. 9a;
FIG. 10 is a flow chart of another image generation method provided by an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating an image generation method according to an embodiment of the present disclosure;
FIG. 12 is a flow chart of another image generation method provided by an embodiment of the present application;
FIG. 13 is a flow chart of another image generation method provided by an embodiment of the present application;
FIG. 14 is a comparison of a two-dimensional projection view provided by an embodiment of the present application;
FIG. 15 is a comparison of enlarged views of a partial image region corresponding to the same location in each of the two-dimensional projection views of FIG. 14;
FIG. 16 is a comparison of enlarged views of another partial image region corresponding to the same location in each of the two-dimensional projection views shown in FIG. 14;
FIG. 17 is a comparison of enlarged views of another partial image region corresponding to the same location in each of the two-dimensional projection views shown in FIG. 14;
FIG. 18 is a comparison of another two-dimensional projection provided by an embodiment of the present application;
FIG. 19 is a comparison of another two-dimensional projection view provided by an embodiment of the present application;
FIG. 20 is a comparison of another two-dimensional projection provided by an embodiment of the present application;
FIG. 21 is a comparison of another two-dimensional projection view provided by an embodiment of the present application;
FIG. 22 is a comparison of another two-dimensional projection provided by an embodiment of the present application;
fig. 23 is a structural diagram of an image generating apparatus according to an embodiment of the present application;
fig. 24 is a structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the description herein are intended to be within the scope of the present disclosure.
The embodiment of the application provides an image generation method that can be applied to an electronic device: based on the method provided by the embodiment of the application, the electronic device can process an acquired three-dimensional point cloud picture to obtain a corresponding two-dimensional image. After the two-dimensional image is obtained, it may be subjected to image recognition: for example, the category of an object (e.g., a person, an article, or the like) in the two-dimensional image may be recognized, the image region to which an object belongs may be identified, or the presence of a specified object may be detected, although the applications are not limited thereto.
Referring to fig. 1, fig. 1 is a flowchart of an image generation method provided in an embodiment of the present application, where the method may include the following steps:
s101: and acquiring a three-dimensional point cloud picture.
S102: and determining statistical characteristics corresponding to the amplitude values of the three-dimensional points in the neighborhood of the preset three-dimensional points in the three-dimensional point cloud picture, and determining the image characteristics of the preset three-dimensional points according to the statistical characteristics.
The amplitude of a three-dimensional point represents the electromagnetic scattering property of the position corresponding to that point, and the neighborhood of the preset three-dimensional point is a preset point cloud area comprising the preset three-dimensional point. The size and shape of the preset point cloud area can be determined according to actual requirements. The statistical characteristics corresponding to the amplitudes of the three-dimensional points in the neighborhood are characteristic quantities obtained by statistically processing those amplitudes according to a statistical rule. Compared with considering only the amplitude of the preset three-dimensional point itself, using statistical characteristics better represents the amplitude characteristics around the preset three-dimensional point as a whole, which lays a foundation for subsequently obtaining a high-quality two-dimensional projection image. The obtained statistical characteristics may be directly taken as the image characteristics of the preset three-dimensional point; alternatively, the statistical characteristics may first be processed, for example operated on or transformed according to a preset operation rule (determined according to the processing requirements), and the processing result taken as the image characteristics of the preset three-dimensional point.
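As an illustrative sketch of step S102: the function below (hypothetical; the name `neighborhood_stat`, the cubic neighborhood shape, and the use of NumPy are all assumptions, since the embodiment leaves both the statistical rule and the neighborhood shape open) computes a statistic of the amplitudes inside a cubic neighborhood of every point of a uniformly sampled amplitude matrix.

```python
import numpy as np

def neighborhood_stat(amplitudes, size=3, stat=np.mean):
    """Statistic of the amplitudes in a cubic neighborhood of each point.

    amplitudes: 3D amplitude matrix of a uniformly sampled point cloud.
    size: edge length of the cubic preset point cloud area.
    stat: the statistical rule (mean, max, standard deviation, ...).
    """
    r = size // 2
    # Replicate border amplitudes so edge points also get a full window.
    padded = np.pad(amplitudes, r, mode="edge")
    features = np.empty(amplitudes.shape, dtype=float)
    for idx in np.ndindex(amplitudes.shape):
        window = padded[tuple(slice(i, i + size) for i in idx)]
        features[idx] = stat(window)
    return features

# Toy cloud: a single scatterer of amplitude 6 in a 3x3x3 grid.
cloud = np.zeros((3, 3, 3))
cloud[1, 1, 1] = 6.0
max_features = neighborhood_stat(cloud, size=3, stat=np.max)
```

Different choices of `stat` yield different image characteristics, which is precisely the flexibility the embodiment exploits for different application scenes.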
S103: and determining the projection characteristics of the preset three-dimensional points in the preset linear direction on the preset projection plane according to the image characteristics of the preset three-dimensional points in the preset linear direction.
The preset straight line direction is perpendicular to the preset projection plane. If a plurality of preset three-dimensional points lie in one preset straight line direction, there are a corresponding plurality of image characteristics, and the projection characteristic of that preset straight line direction on the preset projection plane can be determined from these image characteristics. The projection characteristics are the direct basis for forming the two-dimensional projection image, so their selection affects the quality of the two-dimensional projection image. For example, the pixel value of the point in the three-dimensional point cloud picture corresponding to the projection characteristic may be used to determine the corresponding pixel value in the two-dimensional projection image.
S104: and determining a two-dimensional projection drawing of the three-dimensional point cloud drawing on the preset projection plane according to the projection characteristics on the preset projection plane.
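A minimal sketch of steps S103 and S104, assuming that the input is the per-point image-feature matrix of a uniformly sampled cloud, that the preset straight line direction coincides with one grid axis, and that the maximum along each line is used as the reduction rule (the embodiments do not fix a particular rule, and the function name is hypothetical):

```python
import numpy as np

def features_to_projection(features, axis=2):
    """Collapse per-point image features into a 2D projection image.

    features: 3D matrix of image characteristics (one per 3D point).
    axis: the grid axis assumed to be the preset straight line direction,
          perpendicular to the preset projection plane.
    """
    # S103: one projection feature per line (illustrative choice: maximum).
    proj = features.max(axis=axis)
    # S104: rescale projection features to 8-bit pixel values.
    lo, hi = float(proj.min()), float(proj.max())
    if hi > lo:
        proj = (proj - lo) / (hi - lo)
    else:
        proj = np.zeros_like(proj, dtype=float)
    return np.round(proj * 255).astype(np.uint8)

img = features_to_projection(np.arange(27, dtype=float).reshape(3, 3, 3))
```

Other reductions (mean, energy sum, top-k average) would give projection images emphasizing different properties of the scene.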
Based on the image generation method provided by the embodiment of the application, the statistical characteristics corresponding to the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional points in the three-dimensional point cloud picture can be determined, the image characteristics of the preset three-dimensional points are determined according to the determined statistical characteristics, the two-dimensional projection picture of the three-dimensional point cloud picture on the preset projection plane is determined according to the image characteristics of the preset three-dimensional points in the preset straight line direction, and the two-dimensional image corresponding to the three-dimensional point cloud picture can be generated. Moreover, different statistical characteristics can reflect different image characteristics, correspondingly, different statistical characteristics can be adopted based on actual requirements for different application scenes, and the generated two-dimensional image can reflect corresponding image characteristics to adapt to the current application scene, that is, the quality of the two-dimensional image obtained by projection can be improved, and further the application effect of the image can be improved.
For step S101, the electronic device may acquire a three-dimensional point cloud chart through the radar device. The three-dimensional point cloud in the embodiment of the present application may be a Synthetic Aperture Radar (SAR) image. The radar device may be a laser radar, or a millimeter wave radar, or a microwave radar, etc.
In one embodiment, step S101 may include the steps of: a three-dimensional point cloud image of a scanned object is acquired.
For example, in security inspection scenes such as airport security inspection and subway security inspection, a millimeter wave radar in an active millimeter wave security inspection apparatus may transmit a modulation signal (the modulation signal is a millimeter wave signal), and an echo signal reflected by the modulation signal at a scanned object (e.g., a person, an article, etc.) is processed according to an imaging algorithm, so as to obtain a three-dimensional point cloud picture of the scanned object. Then, the electronic device may generate a corresponding two-dimensional image based on the acquired three-dimensional point cloud image based on the method provided by the embodiment of the present application. Subsequently, image recognition can be performed on the two-dimensional image to detect, recognize and locate the dangerous goods.
When the three-dimensional point cloud picture is obtained through the active millimeter wave security check instrument, based on an imaging mechanism and actual conditions, echo signals reflected by millimeter wave signals at a scanning object are uniformly sampled, so that a corresponding three-dimensional point cloud picture is generated. Therefore, the acquired three-dimensional point cloud picture comprises a plurality of three-dimensional points which are uniformly distributed in a three-dimensional space, the interval between the three-dimensional points is determined based on the sampling interval when the three-dimensional point cloud picture is generated, and the amplitude of one three-dimensional point in the three-dimensional point cloud picture represents the electromagnetic scattering property of the corresponding position of the three-dimensional point in the space. The respective amplitudes of the three-dimensional points in the three-dimensional point cloud graph can be represented by a three-dimensional matrix.
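The three-dimensional matrix representation described above can be sketched as follows; the function name `to_amplitude_grid` and the sampling `interval` value are illustrative assumptions, and coordinates are assumed to already lie on the uniform sampling grid:

```python
import numpy as np

def to_amplitude_grid(coords, amplitudes, interval=0.01):
    """Pack uniformly sampled (x, y, z) samples into a 3D amplitude matrix.

    coords: N x 3 array of point coordinates on a uniform grid.
    amplitudes: N amplitudes (electromagnetic scattering at each point).
    interval: assumed sampling interval between neighbouring points.
    """
    pts = np.asarray(coords, dtype=float)
    origin = pts.min(axis=0)
    # Convert coordinates to integer grid indices.
    idx = np.rint((pts - origin) / interval).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1)
    grid[tuple(idx.T)] = np.asarray(amplitudes, dtype=float)
    return grid
```

The resulting matrix is the natural input for the neighborhood statistics and axis-aligned projection discussed in steps S102 to S104.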
In contrast, the three-dimensional points in a three-dimensional point cloud picture acquired by other means (for example, a laser radar) are sparsely distributed in three-dimensional space.
When a two-dimensional projection map is generated based on a three-dimensional point cloud map, a projection plane (i.e., a preset projection plane in the embodiment of the present application) where the two-dimensional projection map to be generated is located may be determined, and the three-dimensional point cloud map is projected onto the preset projection plane, so that a corresponding two-dimensional projection map may be obtained.
The preset projection plane may include: a projection plane parallel to the front and/or the back of the scanned object. In a security check scene where the scanned object is a person, the preset projection plane may be a projection plane parallel to the front and/or the back of the person, the front of the person being the plane toward which the person's face faces. Correspondingly, the two-dimensional projection image of the three-dimensional point cloud picture on the preset projection plane contains the complete image information of the person and of any articles the person carries, which facilitates security inspection of the person. For example, if the person carries a wrench, a two-dimensional projection image generated for a preset projection plane parallel to the front of the person contains the complete image information of the wrench, and the wrench can be detected, identified and located based on that two-dimensional projection image.
When the preset projection plane comprises the projection planes parallel to the front and the back of the person, a plurality of two-dimensional projection images in different directions can be obtained, so that the image information of the person can be enriched in different directions, and the image information in different directions is combined for identification, so that the accuracy of image identification can be improved.
In a security check scene, a two-dimensional image at a preset projection view angle may be generated based on a three-dimensional point cloud image, for example, a two-dimensional image of a scanned object at a main view angle (i.e., a main view of the scanned object) may be generated based on the three-dimensional point cloud image. A two-dimensional image of the scanned object at a side view perspective (i.e., a side view of the scanned object) may also be generated based on the three-dimensional point cloud map.
In one implementation, the electronic device may determine that a projection plane at a preset projection view angle is the preset projection plane, and determine whether an original three-dimensional point cloud image acquired by the radar device matches the preset projection view angle. If the original three-dimensional point cloud image matches the predetermined projection view angle, for example, the predetermined projection plane is parallel to a coordinate plane in the three-dimensional coordinate system of the original three-dimensional point cloud image, the electronic device may directly calculate the statistical characteristics corresponding to the amplitudes of the three-dimensional points in the neighborhood of the predetermined three-dimensional points in the original three-dimensional point cloud image.
Exemplarily, referring to fig. 2, fig. 2 is a schematic diagram of a three-dimensional space provided in an embodiment of the present application. The scanned object is a human figure, the three-dimensional range shown by the cuboid in fig. 2 represents an original three-dimensional point cloud picture of the human figure, and the coordinate plane of the three-dimensional coordinate system shown in fig. 2 comprises: the XOY plane, the XOZ plane, and the YOZ plane. When the main view of the figure shown in fig. 2 needs to be generated, the preset projection view angle is the main view angle, the preset projection plane at the main view angle is the projection plane parallel to the front face of the figure, and the preset projection plane is parallel to the XOY plane in the three-dimensional coordinate system, that is, the original three-dimensional point cloud image is matched with the preset projection view angle. Accordingly, after the electronic device acquires the original three-dimensional point cloud image shown in fig. 2, the statistical characteristics corresponding to the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional points in the three-dimensional point cloud image can be directly calculated.
In another implementation, step S101 may include the following steps: and rotating the original three-dimensional point cloud picture based on the preset projection view angle to obtain a three-dimensional point cloud picture matched with the preset projection view angle. The preset projection plane is a projection plane under a preset projection visual angle.
If the original three-dimensional point cloud picture acquired by the radar device is not matched with the preset projection visual angle, for example, all coordinate planes in a three-dimensional coordinate system of the preset projection plane and the three-dimensional point cloud picture are not parallel, in order to generate a two-dimensional projection image of the three-dimensional point cloud picture at the preset projection visual angle, after the original three-dimensional point cloud picture is acquired, the electronic device can rotate the original three-dimensional point cloud picture based on the position relation between the preset projection visual angle and the original three-dimensional point cloud picture to obtain the three-dimensional point cloud picture matched with the preset projection visual angle.
The electronic equipment can perform coordinate conversion on the original three-dimensional point cloud image based on a two-dimensional coordinate system of a preset projection plane and a coordinate mapping relation between the two-dimensional coordinate system of the preset projection plane and a three-dimensional coordinate system of the original three-dimensional point cloud image, so that after the coordinate conversion is performed, one coordinate plane in the three-dimensional coordinate system of the obtained three-dimensional point cloud image is parallel to the preset projection plane, and the three-dimensional point cloud image obtained through the coordinate conversion is a three-dimensional point cloud image matched with a preset projection visual angle. The electronic device may then calculate statistical features corresponding to the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional point in the three-dimensional point cloud map.
Exemplarily, referring to fig. 3, fig. 3 is a schematic diagram of another three-dimensional space provided in the embodiment of the present application. The scanned object is a human figure, the three-dimensional range shown by the cuboid in fig. 3 represents an original three-dimensional point cloud picture of the human figure, and the coordinate plane of the three-dimensional coordinate system shown in fig. 3 comprises: the XOY plane, the XOZ plane, and the YOZ plane. When the two-dimensional projection diagram of the figure shown in fig. 3 at the side viewing angle needs to be generated, the preset projection viewing angle is the side viewing angle, and the preset projection plane at the side viewing angle is X 1 O 1 Y 1 Plane, X 1 O 1 Y 1 The plane is not parallel to all coordinate planes in the three-dimensional coordinate system.
Accordingly, the electronic device may perform coordinate conversion on the original three-dimensional point cloud image based on the two-dimensional coordinate system of the X₁O₁Y₁ plane and the coordinate mapping relationship between that two-dimensional coordinate system and the three-dimensional coordinate system of the original three-dimensional point cloud image, so that the XOY plane in the three-dimensional coordinate system of the converted three-dimensional point cloud image is parallel to the X₁O₁Y₁ plane. The electronic device may then calculate the statistical features corresponding to the amplitudes of the three-dimensional points in the neighborhood of a preset three-dimensional point in the three-dimensional point cloud image.
For step S102, the preset three-dimensional point may be any one of three-dimensional points in a three-dimensional point cloud graph. The neighborhood of the preset three-dimensional point refers to a preset point cloud area including the preset three-dimensional point.
The preset point cloud area may be determined based on demand. It may be a one-dimensional area, for example the set of three-dimensional points in the three-dimensional point cloud image that lie on a straight line which is perpendicular to the preset projection plane and includes the preset three-dimensional point. Alternatively, it may be a two-dimensional area, for example the set of three-dimensional points that lie in a plane area which is parallel to the preset projection plane and includes the preset three-dimensional point; the shape of the plane area may be a rectangle, a circle, or the like. Alternatively, it may be a three-dimensional area, for example the set of three-dimensional points located in a cuboid or a sphere that includes the preset three-dimensional point; the embodiment is not specifically limited in this respect.
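The three neighborhood shapes can be illustrated with a short sketch, assuming (as a simplification not stated in the embodiment) that the point-cloud amplitudes are stored on a regular (X, Y, Z) voxel grid and that the preset projection plane is the XOY plane, so Z is the range direction:

```python
import numpy as np

# Hypothetical amplitude grid indexed as amps[x, y, z].
amps = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
x, y, z, r = 2, 2, 2, 1   # preset three-dimensional point and half-width r

# 1-D neighborhood: the line through (x, y) perpendicular to the XOY plane.
line_nbhd = amps[x, y, :]
# 2-D neighborhood: a rectangular patch parallel to the XOY plane.
plane_nbhd = amps[x - r:x + r + 1, y - r:y + r + 1, z]
# 3-D neighborhood: a cuboid surrounding the preset three-dimensional point.
cube_nbhd = amps[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1]
```

A spherical three-dimensional neighborhood would instead select the voxels whose distance to (x, y, z) is below a radius; the cuboid case is shown because it maps directly onto array slicing.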
Different image characteristics of the three-dimensional point can be determined based on different statistical characteristics, and different characteristic statistical modes can be selected based on different requirements when the image characteristics of the preset three-dimensional point are determined.
In one embodiment, the image features of the preset three-dimensional point comprise: a first image feature used to help represent image details, a second image feature used to help determine smooth areas of the image, and a third image feature used to help represent the background noise of the image.
The electronic device can generate the two-dimensional projection image based on the first, second, and third image features; the resulting image can embody image details and smooth image areas while reducing image background noise, which improves the quality of the generated two-dimensional projection image and, in turn, the accuracy of identifying the scanned object in the image. For example, when the two-dimensional projection image is applied to a security-check scene, it can embody the details of different scanned objects. In addition, because the surface of the human body is relatively smooth, the obtained two-dimensional projection image can show the image area where the person is located; performing the security check on this basis makes it possible to accurately distinguish the person in the two-dimensional projection image from the articles the person carries and determine whether those articles are dangerous, thereby improving the accuracy of dangerous-article identification.
In one embodiment, on the basis of fig. 1, referring to fig. 4, step S102 may include the following steps:
S1021: determining, through at least one feature statistical mode among variance calculation, gradient calculation, and the Laplacian operator, a first statistical feature corresponding to the amplitudes of the three-dimensional points located in the first neighborhood of the preset three-dimensional point in the three-dimensional point cloud image, as the first image feature of the preset three-dimensional point.
The plane where the first neighborhood is located is parallel to the preset projection plane.
S1022: determining, through at least one feature statistical mode among entropy calculation, integral sidelobe ratio calculation, and peak sidelobe ratio calculation, a second statistical feature corresponding to the amplitudes of the three-dimensional points located in the first neighborhood of the preset three-dimensional point in the three-dimensional point cloud image, as the second image feature of the preset three-dimensional point.
S1023: determining, through at least one feature statistical mode among variance calculation, gradient calculation, and the Laplacian operator, a third statistical feature corresponding to the amplitudes of the three-dimensional points located in the second neighborhood of the preset three-dimensional point in the three-dimensional point cloud image, as the third image feature of the preset three-dimensional point.
The plane where the second neighborhood is located is perpendicular to the preset projection plane.
Statistical features of the preset three-dimensional point in different dimensions can be obtained through different feature statistical modes, and image features of the preset three-dimensional point in different dimensions can be determined from those statistical features. Accordingly, in order to enable the generated two-dimensional projection image to embody image features of different dimensions and to facilitate subsequent image recognition of the two-dimensional projection image, the electronic device may adopt several feature statistical modes to calculate the statistical features corresponding to the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional point, and then determine the image features of the preset three-dimensional point from the obtained statistical features.
A statistical feature (namely, the first statistical feature) corresponding to the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional point may be determined by at least one feature statistical mode among variance calculation, gradient calculation, the Laplacian operator, and the like. For example, gradient calculation and the Laplacian operator can extract edge features of the scanned object, while the variance value obtained by variance calculation reflects the dispersion of the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional point; the dispersion represents the differences among those amplitudes, and the difference between the amplitude of a three-dimensional point at the edge of the scanned object and the amplitudes of other three-dimensional points is large. Therefore, the first statistical feature can embody the edge features of the scanned object corresponding to the three-dimensional point cloud image and the image details of its different areas. Using the first image feature, which helps represent image details, as an evaluation index when determining the two-dimensional projection image can avoid the loss of detail caused by background clutter covering the dark areas inside a strongly scattering target.
For a security-check scene, a three-dimensional point cloud image can be obtained through an active millimeter-wave security-check instrument. According to the focusing characteristics of the three-dimensional point cloud image and the electromagnetic scattering characteristics of the scanned object, a target area with strong electromagnetic scattering contains few three-dimensional points in the range direction, the range direction being the direction perpendicular to the preset projection plane; that is, only a small number of three-dimensional points along the range direction carry the focusing information of the target area. The other three-dimensional points along the range direction correspond to other areas, for example a background area or a transmission area, and their amplitudes in the three-dimensional point cloud image are lower than the amplitudes of the three-dimensional points in the target area. In other words, the amplitudes of the three-dimensional points corresponding to the strongly scattering target area are larger, so the dispersion of the three-dimensional points in the target area is also larger.
Therefore, in order to prevent the background area from covering the darker areas within the target area and thereby losing image details in the generated two-dimensional projection image, at least one feature statistical mode among variance calculation, gradient calculation, and the Laplacian operator may be adopted to calculate the first statistical feature corresponding to the amplitudes of the three-dimensional points located in the first neighborhood of the preset three-dimensional point, and this first statistical feature is used as the first image feature of the preset three-dimensional point. Because the first image feature helps represent image details, a two-dimensional projection image obtained based on it can embody the details of the scanned object; that is, the quality of the generated two-dimensional projection image can be improved, which facilitates subsequent image recognition.
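Step S1021 can be sketched with variance as the chosen statistic, again assuming a regular voxel grid with the preset projection plane as the XOY plane (the function name and grid layout are illustrative assumptions, not part of the embodiment):

```python
import numpy as np

def first_image_feature(amps, r=1):
    """amps: (X, Y, Z) amplitude grid.

    For each voxel, returns the variance of the amplitudes in an in-plane
    neighborhood of half-width r. The first neighborhood lies in a plane
    parallel to the preset projection plane, i.e. at constant z.
    """
    X, Y, Z = amps.shape
    feat = np.zeros_like(amps, dtype=float)
    for x in range(X):
        for y in range(Y):
            x0, x1 = max(0, x - r), min(X, x + r + 1)
            y0, y1 = max(0, y - r), min(Y, y + r + 1)
            # variance over the (x, y) patch, computed per z slice
            feat[x, y, :] = amps[x0:x1, y0:y1, :].var(axis=(0, 1))
    return feat
```

Gradient calculation or the Laplacian operator would replace the `.var(...)` reduction with a finite-difference stencil over the same in-plane patch; the variance form is shown because it is the simplest dispersion measure.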
A statistical feature (namely, the second statistical feature) corresponding to the amplitudes of the three-dimensional points in the first neighborhood of the preset three-dimensional point may be determined by at least one feature statistical mode among entropy calculation, peak sidelobe ratio calculation, integral sidelobe ratio calculation, and the like; it can embody the degree of order and the degree of focusing of those amplitudes. The degree of order represents the variation trend of the amplitudes in the neighborhood, such as a smooth increase or a smooth decrease; the degree of focusing represents how concentrated the amplitudes are. Because the amplitudes of the three-dimensional points corresponding to a smooth area of the scanned object are highly concentrated and vary smoothly, the second statistical feature can reflect the smoothness of the different areas of the scanned object corresponding to the three-dimensional point cloud image.
For a security-check scene, the scanned object may be a person, and since the surface of the human body is smooth, the degree of order of the amplitudes of the three-dimensional points corresponding to the human body surface in the three-dimensional point cloud image is high. Therefore, in order to enable the generated two-dimensional projection image to represent the smooth regions of the scanned object, at least one feature statistical mode among entropy calculation, integral sidelobe ratio calculation, and peak sidelobe ratio calculation may be adopted to calculate the second statistical feature corresponding to the amplitudes of the three-dimensional points in the first neighborhood of the preset three-dimensional point, and this second statistical feature is used as the second image feature of the preset three-dimensional point. Because the second image feature helps determine the smooth areas of the image, a two-dimensional projection image obtained based on it can reflect the smooth regions of the scanned object; that is, the quality of the generated two-dimensional projection image can be improved, which facilitates subsequent image recognition.
A statistical feature (namely, the third statistical feature) corresponding to the amplitudes of the three-dimensional points in the second neighborhood of the preset three-dimensional point may be determined by at least one feature statistical mode among variance calculation, gradient calculation, the Laplacian operator, and the like; it reflects the dispersion of those amplitudes, where the dispersion represents the differences among the amplitudes of the three-dimensional points in the neighborhood. For example, the amplitudes of the three-dimensional points in the background area of the three-dimensional point cloud image differ greatly from those in the target area containing the scanned object, so the third statistical feature can be used to distinguish the background area from the target area.
For a security-check scene, a target area with strong electromagnetic scattering in the three-dimensional point cloud image obtained by an active millimeter-wave security-check instrument contains few three-dimensional points in the range direction, and the amplitudes of those points vary greatly; in the background area, the amplitudes along the range direction vary little. Therefore, the difference between the amplitudes of the three-dimensional points in the target area and those in the background area is large.
Therefore, in order to enable the generated two-dimensional projection image to represent both the background area and the target area containing the scanned object, and to reduce the influence of the background area on the target area, at least one feature statistical mode among variance calculation, gradient calculation, and the Laplacian operator may be adopted to calculate the third statistical feature corresponding to the amplitudes of the three-dimensional points located in the second neighborhood of the preset three-dimensional point, as the third image feature of the preset three-dimensional point. Because the third image feature helps represent image background noise, a two-dimensional projection image obtained based on it can reduce background noise and distinguish the background area from the target area containing the scanned object; that is, the quality of the generated two-dimensional projection image can be improved, which facilitates image recognition.
When determining the statistical features corresponding to the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional point, besides the feature statistical modes enumerated above, a custom function may also be used, so as to retain image details, determine smooth image areas, determine image background noise, and so on. The specific form of the custom function may be determined according to the specific processing requirements. For example, if the custom function is a power function, the electronic device may calculate, based on it, the sum of powers of the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional point as the image feature of that point. The image feature obtained in this way represents the difference between the amplitudes in the neighborhood and the average amplitude of the image background noise, so image details are retained when the two-dimensional projection image is generated.
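The power-function example above can be sketched directly (the exponent p and function name are illustrative choices, not fixed by the embodiment):

```python
import numpy as np

def power_sum_feature(neighbourhood_amps, p=2):
    """Custom-function statistic: sum of the p-th powers of the amplitudes
    of the three-dimensional points in the neighborhood of a preset point.

    Larger exponents weight strong amplitudes more heavily, accentuating the
    gap between target voxels and the average background-noise level.
    """
    return float((np.abs(neighbourhood_amps) ** p).sum())
```

For instance, with p = 2 the statistic is the neighborhood's total amplitude energy.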
In one embodiment, on the basis of fig. 1, referring to fig. 5, step S103 may include the steps of:
S1031: determining the feature with the largest value among the first image features of the preset three-dimensional points in the preset straight-line direction as the first projection feature, on the preset projection plane, of the preset three-dimensional points in that direction.
S1032: determining the feature with the largest value among the second image features of the preset three-dimensional points in the preset straight-line direction as the second projection feature, on the preset projection plane, of the preset three-dimensional points in that direction.
S1033: determining the third image feature of the preset three-dimensional points in the preset straight-line direction as the third projection feature, on the preset projection plane, of the preset three-dimensional points in that direction.
The preset straight-line direction is perpendicular to the preset projection plane, and the plane containing the second neighborhood considered when determining the third image feature is also perpendicular to the preset projection plane. Consequently, when the second neighborhood is one-dimensional, it is, for any given preset three-dimensional point, parallel to or coincident with the preset straight line on which that point lies; in the case of complete coincidence (the preset three-dimensional points in the second neighborhood are exactly the preset three-dimensional points on the preset straight line), the calculated statistical feature, i.e. the third image feature, has the same value for every point on the line.
Accordingly, step S104 may include the steps of:
s1041: and determining a two-dimensional projection drawing of the three-dimensional point cloud drawing on the preset projection plane based on the first projection characteristic, the second projection characteristic and the third projection characteristic on the preset projection plane.
For example, the feature with the largest value among the first image features may be the feature corresponding to the largest dispersion of the amplitudes; the dispersion represents the differences among the amplitudes of the three-dimensional points, and this difference is largest at the edge of the scanned object relative to adjacent positions, so the preset three-dimensional point corresponding to the largest first image feature is a three-dimensional point on the edge of the scanned object. Based on the first image features, the edges of the scanned object in the three-dimensional point cloud image can therefore be determined effectively, enriching the image details of the generated two-dimensional projection image.
Therefore, in order to enable the generated two-dimensional projection image to embody the details of the scanned object, the electronic device may determine the feature with the largest value among the first image features of the preset three-dimensional points in the preset straight-line direction as the first projection feature, on the preset projection plane, of the preset three-dimensional points in that direction.
For example, the feature with the largest value among the second image features may be the feature corresponding to the highest degree of order of the amplitudes; the degree of order represents the smoothness of the different regions of the scanned object corresponding to the three-dimensional point cloud image, so the region containing the preset three-dimensional point corresponding to the most ordered image feature is the smoothest image region. In order to enable the generated two-dimensional projection image to represent the smooth regions of the scanned object, the electronic device may therefore determine the feature with the largest value among the second image features of the preset three-dimensional points in the preset straight-line direction as the second projection feature, on the preset projection plane, of the preset three-dimensional points in that direction.
In order to reduce the influence of image background noise on the target region containing the scanned object and enable the generated two-dimensional projection image to distinguish the background region from that target region, the electronic device may determine the third image feature of the preset three-dimensional points in the preset straight-line direction as the third projection feature, on the preset projection plane, of the preset three-dimensional points in that direction.
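The selection rules of steps S1031 to S1033 can be sketched over per-voxel feature grids, assuming the preset projection plane is the XOY plane so each preset straight line is a Z column (the function name is illustrative):

```python
import numpy as np

def project_features(feat1, feat2, feat3):
    """feat1/feat2/feat3: (X, Y, Z) grids of the three image features.

    Along each preset straight line (a Z column), keep the largest first and
    second image features; the third image feature is constant along the line
    when the second neighborhood coincides with it, so one sample suffices.
    """
    proj1 = feat1.max(axis=2)    # S1031: largest first image feature per line
    proj2 = feat2.max(axis=2)    # S1032: largest second image feature per line
    proj3 = feat3[:, :, 0]       # S1033: one third-feature value per line
    return proj1, proj2, proj3
```

Each returned array is (X, Y), i.e. one projection-feature value per pixel point of the preset projection plane.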
Furthermore, the electronic device may generate a two-dimensional projection diagram of the three-dimensional point cloud diagram on the preset projection plane based on the first projection feature, the second projection feature and the third projection feature on the preset projection plane in different manners.
In another implementation, on the basis of fig. 5, referring to fig. 6, step S1041 may include the following steps:
S10411: determining a first pixel value, in the three-dimensional point cloud image, of the preset three-dimensional point corresponding to the first projection feature on the preset projection plane, and generating a first projection subgraph based on the first pixel value.
S10412: and determining a second pixel value of the preset three-dimensional point corresponding to the second projection feature on the preset projection plane in the three-dimensional point cloud picture, and generating a second projection subgraph based on the second pixel value.
S10413: and taking a third projection feature on a preset projection plane as a third pixel value, and generating a third projection subgraph based on the third pixel value.
S10414: and performing fusion processing on the first projection subgraph, the second projection subgraph and the third projection subgraph to obtain a two-dimensional projection graph corresponding to the three-dimensional point cloud graph.
Each preset straight line corresponds to one pixel point in the preset projection plane; the preset three-dimensional point on the preset straight line is determined in reverse from the projection feature, and the pixel value corresponding to that preset three-dimensional point is the pixel value of the corresponding pixel point in the preset projection plane. Accordingly, once the pixel values corresponding to the preset three-dimensional points in every preset straight-line direction are determined, the pixel values of all pixel points in the preset projection plane are determined, and the corresponding projection subgraph can be obtained.
The pixel value corresponding to a preset three-dimensional point is the amplitude value of the preset three-dimensional point in the three-dimensional point cloud picture.
For the embodiment shown in fig. 2, the three-dimensional range shown by the rectangular parallelepiped in fig. 2 represents a three-dimensional point cloud image of a person, and the preset projection plane is an XOY plane. The predetermined linear direction is a direction perpendicular to the XOY plane. The plane in which the first neighbourhood of the predetermined three-dimensional point lies is parallel to the XOY plane.
For each preset three-dimensional point in the preset straight line direction, the position of the preset three-dimensional point in the three-dimensional coordinate system can be represented as (x, y, z). Since each three-dimensional point is located in a predetermined straight line direction perpendicular to the XOY plane, x and y of each three-dimensional point are the same.
Accordingly, the preset three-dimensional point corresponding to the first projection feature can be expressed as (x, y, z₁). The first pixel value of this preset three-dimensional point in the three-dimensional point cloud image is the amplitude of the point in the three-dimensional point cloud image, so the first pixel value σ₁ corresponding to the preset three-dimensional point corresponding to the first projection feature can be expressed as:

σ₁ = σ(x, y, z₁)   (1)

where σ(x, y, z₁) represents the amplitude, in the three-dimensional point cloud image, of the preset three-dimensional point corresponding to the first projection feature.
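The reverse lookup behind equation (1) can be sketched as follows, assuming the amplitudes and first image features are stored on matching (X, Y, Z) grids (the function name is illustrative):

```python
import numpy as np

def first_projection_subgraph(amps, feat1):
    """amps, feat1: (X, Y, Z) grids of amplitudes and first image features.

    For each preset straight line (Z column), locate the voxel whose first
    image feature is largest, then read that voxel's amplitude out of the
    point-cloud grid as the first pixel value sigma_1 = sigma(x, y, z_1).
    """
    z1 = feat1.argmax(axis=2)                       # (X, Y) index of z_1 per line
    return np.take_along_axis(amps, z1[:, :, None], axis=2)[:, :, 0]
```

The second projection subgraph is obtained the same way from the second feature grid; note the pixel value is the amplitude at the selected voxel, not the feature value itself.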
Referring to fig. 7a and 7b, fig. 7a is a first projection subgraph provided in an embodiment of the present application. The electronic device may generate a first projection subgraph as illustrated in fig. 7a based on the first projection feature.
The image on the left side in fig. 7b is an enlarged view of the background area (the area other than the image area where the person is located) in the first projected sub-image shown in fig. 7 a. The middle image in fig. 7b is an enlarged view of the area of the back of the person in the first projected subgraph shown in fig. 7 a. The image on the right side in fig. 7b is an enlarged view of the region of the leg of the person in the first projected subgraph shown in fig. 7 a.
As can be seen in conjunction with fig. 7a and 7b, both figures contain abundant image detail. For example, the middle image of fig. 7b can show the different positions of the person's back area, and the right image of fig. 7b can show the position of the wrench on the person's leg.
For the embodiment shown in fig. 2, the three-dimensional range shown by the rectangular parallelepiped in fig. 2 represents a three-dimensional point cloud image of a person, and the preset projection plane is an XOY plane. The predetermined linear direction is a direction perpendicular to the XOY plane. The plane in which the first neighbourhood of the predetermined three-dimensional point lies is parallel to the XOY plane.
For each preset three-dimensional point in the preset straight line direction, the position of the preset three-dimensional point in the three-dimensional coordinate system can be represented as (x, y, z). Since each three-dimensional point is located in a predetermined straight line direction perpendicular to the XOY plane, x and y of each three-dimensional point are the same.
Accordingly, the preset three-dimensional point corresponding to the second projection feature can be expressed as (x, y, z₂). The second pixel value of this preset three-dimensional point in the three-dimensional point cloud image is the amplitude of the point in the three-dimensional point cloud image, so the second pixel value σ₂ corresponding to the preset three-dimensional point corresponding to the second projection feature can be expressed as:

σ₂ = σ(x, y, z₂)   (2)

where σ(x, y, z₂) represents the amplitude, in the three-dimensional point cloud image, of the preset three-dimensional point corresponding to the second projection feature.
Referring to fig. 8a and 8b, fig. 8a is a second projection subgraph provided in the embodiment of the present application. The electronic device may generate a second projection subgraph as shown in fig. 8a based on the second projection feature.
The image on the left side in fig. 8b is an enlarged view of the background area in the second projected subgraph shown in fig. 8 a. The middle image in fig. 8b is an enlarged view of the area of the back of the person in the second sub-image of fig. 8 a. The image on the right side in fig. 8b is an enlarged view of the region of the leg of the person in the second projected sub-image shown in fig. 8 a.
As can be seen from fig. 8a and 8b, fig. 8a and 8b may both represent smooth areas of the image, i.e. may represent the smoothness of the scanned object. For example, since the smoothness of the surface of a human body is different from the smoothness of other objects (e.g., a wrench carried by a person), the brightness of the region including the scanning object in fig. 8a is different from that of the background region. The brightness difference is small for each position in the image in the middle of fig. 8 b. The area of the right image in fig. 8b where the leg of the person is located is different in brightness from the area where the wrench carried by the person is located.
For the embodiment shown in fig. 2, the three-dimensional range shown by the rectangular parallelepiped in fig. 2 represents a three-dimensional point cloud image of a person, and the preset projection plane is an XOY plane. The predetermined linear direction is a direction perpendicular to the XOY plane. The plane in which the second neighbourhood of the predetermined three-dimensional point lies is perpendicular to the XOY plane.
For each preset three-dimensional point in the preset straight-line direction, the position of the point in the three-dimensional coordinate system can be represented as (x, y, z). Accordingly, the third pixel value σ₃ corresponding to the preset three-dimensional point corresponding to the third projection feature can be expressed as:

σ₃ = f(σ(x, y, z))   (3)

where f( ) represents the objective function used to calculate the third image feature of the preset three-dimensional point, and σ(x, y, z) represents the amplitude of a three-dimensional point in the neighborhood of the preset three-dimensional point in the three-dimensional point cloud image.
Referring to fig. 9a and 9b, fig. 9a is a third projection subgraph provided in the embodiment of the present application. The electronic device may generate a third projection subgraph, shown in fig. 9a, based on the third projection feature.
The image on the left side in fig. 9b is an enlarged view of the background area in the third projected sub-image shown in fig. 9 a. The middle image in fig. 9b is an enlarged view of the area of the back of the person in the third projected sub-image shown in fig. 9 a. The image on the right side in fig. 9b is an enlarged view of the region of the leg of the human being in the third projected sub-image shown in fig. 9 a.
In fig. 9a and 9b, the boundary between the target region and the background region where the scanned object is located can be clearly displayed, and the complete contour of the scanned object can be displayed, and the noise in the background region is less.
Furthermore, after the first projection subgraph, the second projection subgraph and the third projection subgraph are obtained, the electronic device can perform fusion processing on the first projection subgraph, the second projection subgraph and the third projection subgraph to obtain a two-dimensional projection graph corresponding to the three-dimensional point cloud graph.
In an implementation manner, the data magnitudes of the first projection subgraph, the second projection subgraph and the third projection subgraph may be the same. For example, the electronic device determines the feature with the largest value among the first image features of the preset three-dimensional points in the preset straight line direction as the first projection feature, and uses the amplitude of the preset three-dimensional point corresponding to the first projection feature in the three-dimensional point cloud graph as the pixel value to obtain the first projection subgraph. The feature with the largest value among the second image features of the preset three-dimensional points in the preset linear direction is determined as the second projection feature, and the amplitude of the preset three-dimensional point corresponding to the second projection feature in the three-dimensional point cloud picture is taken as the pixel value to obtain the second projection subgraph. The feature with the largest value among the third image features of the preset three-dimensional points in the preset linear direction is determined as the third projection feature, and the amplitude of the preset three-dimensional point corresponding to the third projection feature in the three-dimensional point cloud picture is taken as the pixel value to obtain the third projection subgraph. Correspondingly, the first projection subgraph, the second projection subgraph and the third projection subgraph are all of the data magnitude corresponding to the amplitudes of three-dimensional points in the three-dimensional point cloud picture.
The electronic device can directly perform fusion processing on the first projection subgraph, the second projection subgraph and the third projection subgraph based on a preset fusion algorithm to obtain a two-dimensional projection graph corresponding to the three-dimensional point cloud graph.
For example, the electronic device may perform the fusion process on the projection subgraphs according to the following formula.
H(x, y) = g(h_1(x, y), h_2(x, y), h_3(x, y))    (4)
H(x, y) represents the pixel value of the pixel point with coordinates (x, y) in the two-dimensional projection view. g() represents a fusion function. h_1(x, y) represents the pixel value of the pixel point with coordinates (x, y) in the first projection subgraph. h_2(x, y) represents the pixel value of the pixel point with coordinates (x, y) in the second projection subgraph. h_3(x, y) represents the pixel value of the pixel point with coordinates (x, y) in the third projection subgraph.
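A minimal sketch of one possible fusion function g() in equation (4); the per-pixel mean is an assumed choice, since the embodiment leaves the concrete form of g() open, and it is only valid when the three subgraphs share the same data magnitude.

```python
import numpy as np

def fuse(h1, h2, h3):
    # g(): per-pixel mean of the three projection subgraphs,
    # applicable when all three share the same data magnitude.
    return np.mean(np.stack([h1, h2, h3]), axis=0)
```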
In one embodiment, the electronic device may perform fusion processing on two projection subgraphs with the same data magnitude in the first projection subgraph, the second projection subgraph and the third projection subgraph to obtain an intermediate image. In addition, the electronic device may perform preprocessing (e.g., normalization processing) on the other projection subgraph, and perform fusion processing on the intermediate image and a result of the preprocessing to obtain a corresponding two-dimensional projection graph.
For example, the electronic device determines a feature with a largest median value of first image features of preset three-dimensional points in a preset straight line direction as a first projection feature, and obtains a first projection subgraph by taking an amplitude value of the preset three-dimensional points corresponding to the first projection feature in a three-dimensional point cloud graph as a pixel value. And determining the feature with the largest median value of the second image features of the preset three-dimensional points in the preset linear direction as a second projection feature, and taking the amplitude value of the preset three-dimensional points corresponding to the second projection feature in the three-dimensional point cloud picture as a pixel value to obtain a second projection subgraph. The first projection subgraph and the second projection subgraph are both data magnitude corresponding to the amplitude of the three-dimensional points in the three-dimensional point cloud picture. And the electronic equipment determines a third image feature of a preset three-dimensional point in the preset linear direction as a third projection feature, and the third projection feature is used as a pixel value to obtain a third projection subgraph. And the third projection subgraph is the data magnitude corresponding to the statistical characteristics of the amplitude of the three-dimensional points in the three-dimensional point cloud picture.
Because the data magnitude of the first projection subgraph is the same as that of the second projection subgraph, the electronic equipment can perform fusion processing on the first projection subgraph and the second projection subgraph to obtain an intermediate image. In addition, the electronic device may perform preprocessing on the third projection subgraph, and perform fusion processing on the intermediate image and the result of the preprocessing to obtain a corresponding two-dimensional projection graph.
Accordingly, on the basis of fig. 6, referring to fig. 10, step S10414 may include the steps of:
s104141: and processing the first projection subgraph and the second projection subgraph by using a preset fusion algorithm to obtain a first intermediate image.
S104142: and carrying out normalization processing and gray level transformation processing on the third projection subgraph to obtain a second intermediate image.
Wherein the grey scale transformation process is used for adjusting the difference of pixel values between the background area and the target area on the third projection subgraph.
S104143: and determining the product of the pixel values at the corresponding same positions on the first intermediate image and the second intermediate image, and generating a two-dimensional projection graph corresponding to the three-dimensional point cloud graph based on the product.
The preset fusion algorithm may be any one of a weighted fusion algorithm, a pyramid fusion algorithm, and a maximum value mapping algorithm.
For example, the preset fusion algorithm is a weighted fusion algorithm, the pixel values at the same positions in the first projection subgraph and the second projection subgraph correspond to different weights, the electronic device may calculate a weighted sum of the pixel values at the same positions in the first projection subgraph and the second projection subgraph according to the corresponding weights, and the calculated weighted sum is used as the pixel value to obtain the first intermediate image. Or, the preset fusion algorithm is a maximum value mapping algorithm, and the electronic device may select a maximum pixel value from pixel values at the same corresponding positions in the first projection subgraph and the second projection subgraph, so as to obtain the first intermediate image.
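The two preset fusion algorithms described above (weighted fusion and maximum value mapping) can be sketched as follows; the weight values are illustrative assumptions.

```python
import numpy as np

def weighted_fusion(h1, h2, w1=0.6, w2=0.4):
    # Weighted fusion: per-pixel weighted sum of the two subgraphs.
    return w1 * h1 + w2 * h2

def max_fusion(h1, h2):
    # Maximum value mapping: keep the larger pixel at each position.
    return np.maximum(h1, h2)
```

Either function can produce the first intermediate image of step S104141.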
The electronic device may further perform normalization processing on the third projection subgraph, for example, the pixel value of the third projection subgraph may be normalized to [0,1] to adjust the data magnitude of the third projection subgraph, so that the data magnitude of the third projection subgraph is smaller than the data magnitudes of the first projection subgraph and the second projection subgraph. Then, the third projection sub-image after normalization is subjected to gray level transformation, for example, the electronic device may perform linear gray level transformation on the third projection sub-image after normalization, or perform nonlinear gray level transformation on the third projection sub-image after normalization based on a nonlinear transformation function (e.g., an exponential function, a logarithmic function, a power function, a gamma function, etc.) to adjust contrast of different regions in the third projection sub-image after normalization, thereby avoiding loss of image detail information in a subsequent image fusion process, or ensuring that a better noise reduction effect is achieved.
Specifically, through the gray-scale transformation process, the difference of pixel values between the background region and the target region on the third projection sub-image (i.e. reducing the contrast of different image regions) can be reduced, or the difference of pixel values between the background region and the target region on the third projection sub-image (i.e. improving the contrast of different image regions) can be improved, which is related to the image processing requirements and the functional form used in the gray-scale transformation process. For example, in order to achieve a better noise reduction effect in the image fusion process, an exponential function can be adopted in the gray scale conversion process to improve the contrast of different areas in the projection sub-image; alternatively, for example, in order to retain more image detail information in the image fusion process, a logarithmic function may be used to reduce the contrast of different regions in the projection sub-image.
Taking a millimeter wave security check scene as an example, the echo at the edge of a human body is weak due to the scattering angle; if the contrast of the projection sub-image is too high, part of the human body edge information may be lost in the subsequent image fusion process, so the image contrast can be reduced through gray scale transformation, keeping as much useful detail information as possible while reducing the background noise. If the contrast of the projection sub-image is low, the noise reduction effect in the image fusion process is limited, so the contrast can be improved through gray scale transformation to achieve a better noise reduction effect in the image fusion process.
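The contrast trade-off described above can be illustrated with two gray transforms applied to a [0, 1]-normalized sub-image; the gamma value and the logarithm base are assumptions chosen so that both transforms map [0, 1] onto [0, 1].

```python
import numpy as np

def gamma_transform(img, gamma=2.0):
    # Power-law transform: gamma > 1 raises contrast (stronger
    # background suppression); gamma < 1 lowers contrast (keeps
    # weak edge echoes such as those at the human body contour).
    return np.power(img, gamma)

def log_transform(img):
    # Logarithmic transform: compresses the dynamic range,
    # lowering the contrast between image regions.
    return np.log1p(img) / np.log(2.0)
```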
And carrying out gray level transformation processing on the normalized third projection sub-image, so that the difference of pixel values between a background area and a target area containing a scanning object on the normalized third projection sub-image can be adjusted, and the background area and the target area containing the scanning object can be obviously distinguished in the finally obtained two-dimensional projection image.
Furthermore, the electronic device may calculate a product of pixel values at the same positions on the first intermediate image and the second intermediate image, and generate a two-dimensional projection map corresponding to the three-dimensional point cloud map based on the calculated product. For example, the electronic device may perform normalization processing on the calculated product according to a preset normalization interval (e.g., 0 to 255), and use the normalization result as a pixel value, so as to obtain a two-dimensional projection diagram corresponding to the three-dimensional point cloud diagram.
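Steps S104141 to S104143 can be sketched end to end as below. Maximum value fusion for the first intermediate image, the gamma value of the gray transform, and the 0-255 output interval are assumed parameter choices, not the only ones permitted by the embodiment.

```python
import numpy as np

def fuse_subgraphs(sub1, sub2, sub3, gamma=2.0):
    # S104141: fuse the first and second subgraphs (max value mapping).
    inter1 = np.maximum(sub1, sub2)
    # S104142: normalize the third subgraph to [0, 1], then apply a
    # power-law gray transform to adjust region contrast.
    rng = sub3.max() - sub3.min()
    norm3 = (sub3 - sub3.min()) / rng if rng > 0 else np.zeros_like(sub3)
    inter2 = np.power(norm3, gamma)
    # S104143: per-pixel product, rescaled to the preset 0-255 interval.
    prod = inter1 * inter2
    prng = prod.max() - prod.min()
    if prng == 0:
        return np.zeros_like(prod)
    return 255.0 * (prod - prod.min()) / prng
```

Because the product multiplies the amplitude-based intermediate image by a [0, 1] weight derived from the noise-characterizing third subgraph, background pixels are driven toward zero while target pixels are preserved.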
Based on the processing, the influence of the image background noise on the target area containing the scanning object can be reduced, that is, noise reduction is realized through different image characteristics of preset three-dimensional points, reserving image details, determining the smooth area of the image and determining the image background noise. Correspondingly, the obtained two-dimensional projection image can embody the details and the smooth area of the scanned object, can distinguish the background area from the target area containing the scanned object, has low background noise, can improve the quality of the generated two-dimensional projection image, and improves the application effect of the image when the subsequent two-dimensional projection image is applied to different scenes.
Referring to fig. 11, fig. 11 is a schematic diagram illustrating an image generating method according to an embodiment of the present disclosure. The larger cube in fig. 11 represents a 3D point cloud, which is a three-dimensional point cloud map in the embodiments of the present application. The different statistical characteristics represent different image characteristics of preset three-dimensional points in the embodiment of the application. The three smaller cubes in fig. 11 represent respective neighborhoods of three predetermined three-dimensional points in the predetermined straight line direction. The predetermined projection plane is a front surface of the larger cube in fig. 11, and a connection line (i.e., a predetermined straight line) of the three predetermined three-dimensional points is perpendicular to the predetermined projection plane.
The electronic equipment can obtain the three-dimensional point cloud picture, respectively calculate the statistical characteristics corresponding to the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional points based on different characteristic statistical modes, and determine different image characteristics of the preset three-dimensional points based on the statistical characteristics obtained by calculation. Then, the electronic device may determine, based on each image feature of the preset three-dimensional points, a projection feature of the preset three-dimensional point on a preset projection plane corresponding to the image feature. Furthermore, the electronic device can obtain a projection subgraph corresponding to the projection feature according to each projection feature of the preset three-dimensional point on the preset projection plane. Furthermore, the electronic device can perform fusion processing on each projection subgraph to obtain a two-dimensional projection graph of the three-dimensional point cloud graph on a preset projection plane.
Based on the processing, the influence of the image background noise on a target area containing a scanning object can be reduced by presetting different image characteristics of three-dimensional points, reserving image details, determining an image smooth area and determining the image background noise. Correspondingly, the obtained two-dimensional projection image can embody the details and the smooth area of the scanned object, can distinguish the background area from the target area containing the scanned object, has low background noise, can improve the quality of the generated two-dimensional projection image, and improves the application effect of the image when the subsequent two-dimensional projection image is applied to different scenes.
Referring to fig. 12, fig. 12 is a flowchart of an image generation method according to an embodiment of the present disclosure.
The electronic device can acquire a 3D point cloud, where the 3D point cloud is a three-dimensional point cloud image in the embodiment of the present application, and determine the plane under a preset projection view angle as the preset projection plane. If the three-dimensional point cloud image is matched with the preset projection view angle, for example, the preset projection plane is parallel to one coordinate plane in the three-dimensional coordinate system of the three-dimensional point cloud image, the electronic device can directly perform neighborhood statistical feature calculation, that is, respectively calculate different image features of the preset three-dimensional points according to different feature statistical modes. If the three-dimensional point cloud image is not matched with the preset projection view angle, for example, the preset projection plane is not parallel to any coordinate plane in the three-dimensional coordinate system of the three-dimensional point cloud image, the electronic device can perform three-dimensional rotation, that is, the electronic device performs coordinate conversion on the three-dimensional point cloud image based on the preset projection view angle, so that after the coordinate conversion, one coordinate plane in the three-dimensional coordinate system of the three-dimensional point cloud image is parallel to the preset projection plane, and the three-dimensional point cloud image matched with the preset projection view angle is obtained. Then, the electronic device can perform neighborhood statistical feature calculation, that is, respectively calculate different image features of the preset three-dimensional points according to different feature statistical modes.
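The three-dimensional rotation step can be sketched as a coordinate transform on the point list. Rotation about the z axis alone is an assumed simplification of the general view-angle alignment, and the rotation angle is treated as an input already derived from the preset projection view angle.

```python
import numpy as np

def rotate_cloud_z(points, angle_deg):
    # points: (N, 3) array of three-dimensional point coordinates.
    # Rotate about the z axis so that a coordinate plane of the
    # cloud becomes parallel to the preset projection plane.
    t = np.radians(angle_deg)
    rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0,        0.0,       1.0]])
    return points @ rz.T
```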
Furthermore, the electronic device may perform projection, that is, determine different projection features of the preset three-dimensional point on the preset projection plane based on different image features of the preset three-dimensional point, and generate a projection subgraph corresponding to the projection features based on the different projection features. Furthermore, the electronic device can perform fusion processing on the projection subgraphs to obtain a two-dimensional projection image of the three-dimensional point cloud image on a preset projection plane.
Based on the processing, the influence of the image background noise on a target area containing a scanning object can be reduced by presetting different image characteristics of three-dimensional points, reserving image details, determining an image smooth area and determining the image background noise. Correspondingly, the obtained two-dimensional projection image can embody the details and the smooth area of the scanned object, can distinguish the background area from the target area containing the scanned object, has low background noise, can improve the quality of the generated two-dimensional projection image, and improves the application effect of the image when the subsequent two-dimensional projection image is applied to different scenes.
Referring to fig. 13, fig. 13 is a flowchart of an image generation method according to an embodiment of the present application.
The electronic device may obtain a 3D point cloud, where the 3D point cloud is a three-dimensional point cloud map in the embodiment of the present application. Then, the electronic device can respectively calculate different image characteristics of preset three-dimensional points in the three-dimensional point cloud picture according to different characteristic statistical modes, wherein the image characteristic of one preset three-dimensional point is: a statistical characteristic, calculated according to one of the characteristic statistical modes, of the amplitudes of the three-dimensional points in the neighborhood taking the preset three-dimensional point as the central point. The statistical characteristics of the preset three-dimensional points comprise: statistical characteristic 1, statistical characteristic 2, …, and statistical characteristic n, wherein each statistical characteristic corresponds to a characteristic statistical mode, n represents the number of characteristic statistical modes, and n is larger than or equal to 1. The characteristic statistical modes may include: mode calculation, maximum calculation, variance calculation, signal-to-noise ratio calculation, entropy calculation, convolution operators, custom functions and the like.
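Three of the characteristic statistical modes listed above can be sketched as follows; the histogram bin count used for the entropy calculation is an assumption.

```python
import numpy as np

def variance_feature(amps):
    # Statistical mode: variance of the neighborhood amplitudes.
    return float(np.var(amps))

def max_feature(amps):
    # Statistical mode: maximum neighborhood amplitude.
    return float(np.max(amps))

def entropy_feature(amps, bins=16):
    # Statistical mode: Shannon entropy of the amplitude histogram.
    hist, _ = np.histogram(amps, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

Each function maps a neighborhood of amplitudes to one scalar, so each statistical mode yields one projection subgraph in the pipeline above.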
Then, for each image feature of the preset three-dimensional points, the electronic device determines a projection feature of the preset three-dimensional point corresponding to the image feature on a preset projection plane based on the image feature, and generates a projection subgraph corresponding to the projection feature based on each projection feature. The generated projection subgraphs comprise: projection subgraph 1, projection subgraph 2, …, and projection subgraph n. The electronic device can perform fusion processing on the projection subgraphs (namely, projection subgraph 1, projection subgraph 2, …, and projection subgraph n) to obtain a front view of the scanning object, wherein the front view of the scanning object is a two-dimensional projection graph of the three-dimensional point cloud graph in the main view direction.
Based on the processing, the influence of the image background noise on a target area containing a scanning object can be reduced by presetting different image characteristics of three-dimensional points, reserving image details, determining an image smooth area and determining the image background noise. Correspondingly, the obtained two-dimensional projection image can embody the details and the smooth area of the scanned object, can distinguish the background area from the target area containing the scanned object, has low background noise, can improve the quality of the generated two-dimensional projection image, and improves the application effect of the image when the subsequent two-dimensional projection image is applied to different scenes.
In order to more clearly illustrate the technical effects of the image generation method provided by the embodiment of the present application, the image generation method provided by the embodiment of the present application and the image generation method of the related art are compared with each other through the comparison diagrams of the two-dimensional projection diagrams shown in fig. 14 to fig. 22. The image generation method in the related art is a maximum value projection method. The maximum projection method determines the projection characteristics of the projection of the preset three-dimensional point in the preset straight line direction on the preset projection plane through the following formula:
σ_4 = max(σ(x, y, z))    (5)
σ_4 represents the projection feature, determined by the maximum value projection method, of the preset three-dimensional points in the preset straight line direction projected on the preset projection plane. σ(x, y, z) represents the amplitude of a preset three-dimensional point in the preset straight line direction in the three-dimensional point cloud picture. max() represents the maximum value function.
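For comparison, the related-art baseline of equation (5) reduces to taking a single maximum over each line perpendicular to the projection plane; treating that line as one array axis is an assumed convention.

```python
import numpy as np

def max_value_projection(cloud, axis=2):
    # Equation (5): keep only the largest amplitude along each preset
    # straight line (here, along the given array axis), discarding all
    # neighborhood statistics used by the embodiment's method.
    return cloud.max(axis=axis)
```

Because every other amplitude along the line is discarded, this baseline keeps bright noise spikes and loses weak edge echoes, which is the behavior the comparison figures illustrate.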
The left image in fig. 14 to 22 is a two-dimensional projection view obtained by an image generation method in the related art, and the right image is a two-dimensional projection view obtained by an image generation method provided in an embodiment of the present application. Fig. 15, 16 and 17 are enlarged views of sets of partial image regions, each set being an enlarged view of a corresponding same location of a partial image region of the two-dimensional projection views of fig. 14.
The two-dimensional projection view on the right in fig. 14 is more distinct with respect to the edges of the different regions of the two-dimensional projection view on the left, and the two-dimensional projection view on the right in fig. 14 contains more image detail with respect to the two-dimensional projection view on the left.
The image in fig. 15 is an enlarged view of the background area in the two-dimensional projection view. It can be seen that the background area in the image on the right side in fig. 15 is darker relative to the background area in the image on the left side, i.e. the background area in the image on the right side is lower in magnitude relative to the background area in the image on the left side.
The image in fig. 16 is an enlarged view of the area of the knee of the person in the two-dimensional projection. It can be seen that the edge of the knee pad is more distinct in the image on the right in fig. 16 than in the image on the left.
The scanning object in fig. 17 and 18 is a wrench. The edges of the wrench are clearer in the images on the right in fig. 17 and 18 than in the images on the left. The right image in fig. 17 also includes more image details than the left image; for example, the head details, the handle details, and the like of the wrench can be clearly observed in the right image in fig. 17.
The scanning objects in fig. 19 and 20 are a mobile phone and an earphone box. The edges of the mobile phone and the earphone box are clearer in the images on the right in fig. 19 and 20 than in the images on the left. In addition, the right image in fig. 20 contains more image details than the left image; for example, the scanned object including the mobile phone can be clearly observed in the right image in fig. 20.
The scanning object in fig. 21 is plasticine. The echo signals reflected by the millimeter wave signals at the plasticine are weaker than those reflected at the human body. In the image on the right side in fig. 21, the outline of the scanned object is clearer and the contrast between light and dark is more apparent than in the image on the left side.
The scanning object in fig. 22 is a tool. The edges of the tool are clearer in the image on the right in fig. 22 than in the image on the left, and the image on the right contains more image details than the image on the left.
Therefore, compared with a two-dimensional projection graph generated by an image generation method based on the related art, the edges of different areas in the two-dimensional projection graph generated by the image generation method based on the embodiment of the application are clearer, namely the outline of a scanned object is clearer, more image details are contained, different areas can be clearly distinguished, background noise is lower, the quality of the generated two-dimensional projection graph can be improved, and the application effect of an image is improved when a subsequent two-dimensional projection graph is applied to different scenes.
Corresponding to the embodiment of the method in fig. 1, referring to fig. 23, fig. 23 is a structural diagram of an image generating apparatus provided in the embodiment of the present application, and the contents not explained in detail in the following embodiments may refer to the description of the above embodiments. As shown in fig. 23, the image generation apparatus includes:
a three-dimensional point cloud picture acquisition module 2301 for acquiring a three-dimensional point cloud picture;
an image feature obtaining module 2302 for determining a statistical feature corresponding to the amplitude of a three-dimensional point in a neighborhood of a preset three-dimensional point in the three-dimensional point cloud image, and determining an image feature of the preset three-dimensional point according to the statistical feature; the amplitude of the three-dimensional point is used for representing the electromagnetic scattering property of the corresponding position of the three-dimensional point, and the neighborhood of the preset three-dimensional point refers to a preset point cloud area comprising the preset three-dimensional point;
a projection feature obtaining module 2303, configured to determine, according to an image feature of a preset three-dimensional point in a preset linear direction, a projection feature of the preset three-dimensional point in the preset linear direction on a preset projection plane; the preset linear direction is vertical to the preset projection plane;
a two-dimensional projection image obtaining module 2304, configured to determine, according to the projection feature on the preset projection plane, a two-dimensional projection image of the three-dimensional point cloud image on the preset projection plane.
Optionally, the image features of the preset three-dimensional point include: the image processing method comprises the steps of obtaining a first image characteristic used for representing image details, a second image characteristic used for determining an image smooth area and a third image characteristic used for representing image background noise;
the image feature obtaining module 2302 is specifically configured to:
determining a first statistical characteristic corresponding to the amplitude of a three-dimensional point in a first neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture as a first image characteristic of the preset three-dimensional point through at least one characteristic statistical mode of variance calculation, gradient calculation and Laplace operator; the plane where the first neighborhood is located is parallel to the preset projection plane;
determining a second statistical characteristic corresponding to the amplitude of a three-dimensional point in the first neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture as a second image characteristic of the preset three-dimensional point through at least one characteristic statistical mode of entropy calculation, integral sidelobe ratio calculation and peak sidelobe ratio calculation;
determining a third statistical characteristic corresponding to the amplitude of the three-dimensional point in the second neighborhood of the preset three-dimensional point in the three-dimensional point cloud picture as a third image characteristic of the preset three-dimensional point through at least one characteristic statistical mode of variance calculation, gradient calculation or Laplace operator; wherein the plane in which the second neighbourhood is located is perpendicular to the preset projection plane;
the projection feature obtaining module 2303 is specifically configured to:
determining the feature with the largest value among the first image features of the preset three-dimensional points in the preset linear direction as a first projection feature of the preset three-dimensional points in the preset linear direction on a preset projection plane;
determining the feature with the largest value among the second image features of the preset three-dimensional points in the preset linear direction as a second projection feature of the preset three-dimensional points in the preset linear direction on a preset projection plane;
determining a third image feature of a preset three-dimensional point in a preset linear direction as a third projection feature of the preset three-dimensional point in the preset linear direction on the preset projection plane;
the two-dimensional projection image obtaining module 2304 is specifically configured to determine a two-dimensional projection image of the three-dimensional point cloud image on the preset projection plane based on the first projection feature, the second projection feature, and the third projection feature on the preset projection plane;
the two-dimensional projection view acquisition module 2304 is specifically configured to:
determining a first pixel value of a preset three-dimensional point corresponding to a first projection feature on the preset projection plane in the three-dimensional point cloud picture, and generating a first projection subgraph based on the first pixel value;
determining a second pixel value of a preset three-dimensional point corresponding to a second projection feature on the preset projection plane in the three-dimensional point cloud picture, and generating a second projection subgraph based on the second pixel value;
taking a third projection feature on the preset projection plane as a third pixel value, and generating a third projection subgraph based on the third pixel value;
and performing fusion processing on the first projection subgraph, the second projection subgraph and the third projection subgraph to obtain a two-dimensional projection graph corresponding to the three-dimensional point cloud graph.
The two-dimensional projection view acquisition module 2304 is specifically configured to:
processing the first projection subgraph and the second projection subgraph by using a preset fusion algorithm to obtain a first intermediate image;
carrying out normalization processing and gray level transformation processing on the third projection subgraph to obtain a second intermediate image; wherein the gray scale transformation process is used for adjusting the difference of pixel values between a background region and a target region on the third projection subgraph;
determining a product of pixel values at corresponding same positions on the first intermediate image and the second intermediate image, and generating a two-dimensional projection graph corresponding to the three-dimensional point cloud graph based on the product;
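The fusion steps above can be sketched as follows. This is an editorial illustration in which the weighted-sum fusion, the gamma-style gray-level transformation, and all parameter values are assumptions — the text leaves the concrete "preset fusion algorithm" and transformation open:

```python
import numpy as np

def fuse(sub1, sub2, sub3, alpha=0.5, gamma=2.0):
    """Fuse the three projection subgraphs into one two-dimensional map.

    alpha and gamma are illustrative parameters, not from the patent."""
    # First intermediate image: a simple weighted sum stands in for the
    # "preset fusion algorithm".
    inter1 = alpha * sub1 + (1.0 - alpha) * sub2
    # Second intermediate image: normalize the third subgraph, then apply
    # a gray-level (gamma) transformation to widen the gap between the
    # background region and the target region.
    norm3 = (sub3 - sub3.min()) / (sub3.max() - sub3.min() + 1e-12)
    inter2 = norm3 ** gamma
    # Final map: product of pixel values at corresponding positions.
    return inter1 * inter2

proj = fuse(np.ones((2, 2)),
            np.zeros((2, 2)),
            np.array([[0.0, 1.0], [2.0, 3.0]]))
```

Because the third subgraph characterizes background noise, the multiplicative step suppresses background positions (where `inter2` is near zero) while leaving target positions largely unchanged.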
the three-dimensional point cloud image acquisition module 2301 is specifically configured to acquire a three-dimensional point cloud image of a scanned object;
the preset projection plane includes: a projection plane parallel to the front and/or back of the scanned object.
The three-dimensional point cloud image acquisition module 2301 is specifically configured to rotate an original three-dimensional point cloud image based on a preset projection viewing angle to obtain a three-dimensional point cloud image matched with the preset projection viewing angle;
the preset projection plane is a projection plane under the preset projection viewing angle.
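The viewing-angle rotation above amounts to applying a rotation matrix to every point of the original point cloud so that the desired viewing direction becomes perpendicular to a fixed projection plane. A minimal sketch, assuming rotation about the z-axis (the axis and angle convention are not specified in the patent):

```python
import numpy as np

def rotate_point_cloud(points, yaw_deg):
    """Rotate an (N, 3) point cloud about the z-axis by yaw_deg degrees."""
    t = np.deg2rad(yaw_deg)
    rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                    [np.sin(t),  np.cos(t), 0.0],
                    [0.0,        0.0,       1.0]])
    return points @ rot.T

pts = np.array([[1.0, 0.0, 0.0]])
rotated = rotate_point_cloud(pts, 90.0)  # the x-axis point moves onto the y-axis
```

In practice the amplitudes travel with the points, so the rotated cloud can then be projected onto the fixed plane exactly as in the earlier steps.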
With the image generation device provided by the embodiments of the present application, statistical features corresponding to the amplitudes of three-dimensional points in the neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture can be determined, the image features of the preset three-dimensional point can be determined from those statistical features, and a two-dimensional projection picture of the three-dimensional point cloud picture on the preset projection plane can be determined from the image features of the preset three-dimensional points in the preset linear direction, so that a two-dimensional image corresponding to the three-dimensional point cloud picture is generated. Moreover, because different statistical features reflect different image characteristics, different statistical features can be chosen for different application scenarios according to actual requirements, so that the generated two-dimensional image reflects the corresponding image characteristics and suits the current application scenario; that is, the quality of the projected two-dimensional image can be improved, and the application effect of the image improved with it.
An embodiment of the present application further provides an electronic device, as shown in fig. 24, including a processor 2401, a communication interface 2402, a memory 2403 and a communication bus 2404, where the processor 2401, the communication interface 2402 and the memory 2403 communicate with each other through the communication bus 2404;
a memory 2403 for storing a computer program;
the processor 2401, when executing the program stored in the memory 2403, implements the following steps:
acquiring a three-dimensional point cloud picture;
determining statistical characteristics corresponding to the amplitude values of three-dimensional points in the neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture, and determining the image characteristics of the preset three-dimensional point according to the statistical characteristics; the amplitude of the three-dimensional point is used for representing the electromagnetic scattering property of the corresponding position of the three-dimensional point, and the neighborhood of the preset three-dimensional point refers to a preset point cloud area comprising the preset three-dimensional point;
determining the projection characteristics of the preset three-dimensional points in the preset linear direction on a preset projection plane according to the image characteristics of the preset three-dimensional points in the preset linear direction; the preset linear direction is perpendicular to the preset projection plane;
and determining a two-dimensional projection drawing of the three-dimensional point cloud drawing on the preset projection plane according to the projection characteristics on the preset projection plane.
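As one concrete instance of the statistical features named in the steps above, a variance computed over a cubic neighborhood of each voxel's amplitude could look like the following; the neighborhood shape and radius are editorial assumptions, and variance is only one of the statistical modes (variance, gradient, Laplacian, entropy, sidelobe ratios) the embodiments list:

```python
import numpy as np

def neighborhood_variance(amplitude, radius=1):
    """Variance of amplitudes over a (2*radius+1)**3 cube around each voxel.

    Edge voxels use the clipped (smaller) neighborhood; radius=1 is an
    illustrative choice, not a value from the patent."""
    out = np.empty_like(amplitude, dtype=float)
    h, w, d = amplitude.shape
    for i in range(h):
        for j in range(w):
            for k in range(d):
                cube = amplitude[max(i - radius, 0):i + radius + 1,
                                 max(j - radius, 0):j + radius + 1,
                                 max(k - radius, 0):k + radius + 1]
                out[i, j, k] = cube.var()
    return out

vol = np.zeros((3, 3, 3))
vol[1, 1, 1] = 1.0                 # a single strong scatterer
feat = neighborhood_variance(vol)  # high variance flags local detail
```

A high neighborhood variance marks local amplitude detail, which is consistent with the role of the first image feature in characterizing image details.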
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment provided by the present application, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program implements the steps of any one of the image generation methods described above when executed by a processor.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the image generation methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the application are generated in whole or in part when the computer program instructions are loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, the electronic device, the computer-readable storage medium, and the computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (12)

1. An image generation method, characterized in that the method comprises:
acquiring a three-dimensional point cloud picture;
determining statistical characteristics corresponding to the amplitude values of three-dimensional points in the neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture, and determining the image characteristics of the preset three-dimensional point according to the statistical characteristics; the amplitude of the three-dimensional point is used for representing the electromagnetic scattering property of the corresponding position of the three-dimensional point, and the neighborhood of the preset three-dimensional point refers to a preset point cloud area comprising the preset three-dimensional point;
determining the projection characteristics of the preset three-dimensional points in the preset linear direction on a preset projection plane according to the image characteristics of the preset three-dimensional points in the preset linear direction; the preset linear direction is perpendicular to the preset projection plane;
and determining a two-dimensional projection drawing of the three-dimensional point cloud drawing on the preset projection plane according to the projection characteristics on the preset projection plane.
2. The method according to claim 1, wherein the image feature of the preset three-dimensional point comprises: the image processing system comprises a first image feature for participating in characterizing image details, a second image feature for participating in determining smooth regions of an image, and a third image feature for participating in characterizing background noise of the image.
3. The method according to claim 2, wherein the determining of the statistical features corresponding to the amplitudes of the three-dimensional points in the neighborhood of the preset three-dimensional point in the three-dimensional point cloud image and the determining of the image features of the preset three-dimensional point according to the statistical features comprises:
determining a first statistical characteristic corresponding to the amplitude of a three-dimensional point in a first neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture as a first image characteristic of the preset three-dimensional point through at least one characteristic statistical mode of variance calculation, gradient calculation and Laplacian operator; the plane where the first neighborhood is located is parallel to the preset projection plane;
determining a second statistical characteristic corresponding to the amplitude of a three-dimensional point in the three-dimensional point cloud picture in the first neighborhood of a preset three-dimensional point through at least one characteristic statistical mode of entropy calculation, integral side lobe ratio calculation and peak side lobe ratio calculation, and taking the second statistical characteristic as a second image characteristic of the preset three-dimensional point;
determining a third statistical characteristic corresponding to the amplitude of the three-dimensional point in the second neighborhood of the preset three-dimensional point in the three-dimensional point cloud picture as a third image characteristic of the preset three-dimensional point through at least one characteristic statistical mode of variance calculation, gradient calculation and Laplacian operator; and the plane where the second neighborhood is located is perpendicular to the preset projection plane.
4. The method according to claim 2, wherein the determining the projection feature of the preset three-dimensional point in the preset straight line direction on the preset projection plane according to the image feature of the preset three-dimensional point in the preset straight line direction comprises:
determining the feature with the largest value among the first image features of the preset three-dimensional points in the preset linear direction as a first projection feature of the preset three-dimensional points in the preset linear direction on a preset projection plane;
determining the feature with the largest value among the second image features of the preset three-dimensional points in the preset linear direction as a second projection feature of the preset three-dimensional points in the preset linear direction on a preset projection plane;
determining a third image characteristic of a preset three-dimensional point in a preset linear direction as a third projection characteristic of the preset three-dimensional point in the preset linear direction on the preset projection plane;
the determining the two-dimensional projection drawing of the three-dimensional point cloud drawing on the preset projection plane according to the projection characteristics on the preset projection plane comprises:
and determining a two-dimensional projection drawing of the three-dimensional point cloud drawing on the preset projection plane based on the first projection characteristic, the second projection characteristic and the third projection characteristic on the preset projection plane.
5. The method of claim 4, wherein determining the two-dimensional projection map of the three-dimensional point cloud map on the preset projection plane based on the first projection feature, the second projection feature and the third projection feature on the preset projection plane comprises:
determining a first pixel value of a preset three-dimensional point corresponding to a first projection feature on the preset projection plane in the three-dimensional point cloud picture, and generating a first projection subgraph based on the first pixel value;
determining a second pixel value of a preset three-dimensional point corresponding to a second projection feature on the preset projection plane in the three-dimensional point cloud picture, and generating a second projection subgraph based on the second pixel value;
taking a third projection feature on the preset projection plane as a third pixel value, and generating a third projection subgraph based on the third pixel value;
and performing fusion processing on the first projection subgraph, the second projection subgraph and the third projection subgraph to obtain a two-dimensional projection graph corresponding to the three-dimensional point cloud graph.
6. The method of claim 5, wherein performing fusion processing on the first projection subgraph, the second projection subgraph and the third projection subgraph to obtain a two-dimensional projection graph corresponding to the three-dimensional point cloud graph comprises:
processing the first projection subgraph and the second projection subgraph by using a preset fusion algorithm to obtain a first intermediate image;
carrying out normalization processing and gray level transformation processing on the third projection subgraph to obtain a second intermediate image; wherein the gray scale transformation process is used for adjusting the difference of pixel values between a background region and a target region on the third projection subgraph;
and determining the product of pixel values at the corresponding same positions on the first intermediate image and the second intermediate image, and generating a two-dimensional projection graph corresponding to the three-dimensional point cloud graph based on the product.
7. The method of claim 1, wherein the obtaining a three-dimensional point cloud picture comprises:
acquiring a three-dimensional point cloud picture of a scanned object;
the preset projection plane includes: a projection plane parallel to the front and/or back of the scanned object.
8. The method of claim 1, wherein the obtaining a three-dimensional point cloud picture comprises:
rotating the original three-dimensional point cloud picture based on a preset projection viewing angle to obtain a three-dimensional point cloud picture matched with the preset projection viewing angle;
the preset projection plane is a projection plane under the preset projection viewing angle.
9. An image generation apparatus, characterized in that the apparatus comprises:
the three-dimensional point cloud picture acquisition module is used for acquiring a three-dimensional point cloud picture;
the image feature acquisition module is used for determining statistical features corresponding to the amplitude values of three-dimensional points in the neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture and determining the image features of the preset three-dimensional point according to the statistical features; the amplitude of the three-dimensional point is used for representing the electromagnetic scattering property of the corresponding position of the three-dimensional point, and the neighborhood of the preset three-dimensional point refers to a preset point cloud area comprising the preset three-dimensional point;
the projection characteristic acquisition module is used for determining the projection characteristics of the preset three-dimensional points in the preset linear direction on a preset projection plane according to the image characteristics of the preset three-dimensional points in the preset linear direction; the preset linear direction is perpendicular to the preset projection plane;
and the two-dimensional projection image acquisition module is used for determining a two-dimensional projection image of the three-dimensional point cloud image on the preset projection plane according to the projection characteristics on the preset projection plane.
10. The apparatus of claim 9, wherein the image features of the preset three-dimensional points comprise: the image processing method comprises the steps of obtaining a first image characteristic used for representing image details, a second image characteristic used for determining an image smooth area and a third image characteristic used for representing image background noise;
the image feature acquisition module is specifically configured to:
determining a first statistical characteristic corresponding to the amplitude of a three-dimensional point in a first neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture as a first image characteristic of the preset three-dimensional point through at least one characteristic statistical mode of variance calculation, gradient calculation and Laplace operator; the plane where the first neighborhood is located is parallel to the preset projection plane;
determining a second statistical characteristic corresponding to the amplitude of a three-dimensional point in the first neighborhood of a preset three-dimensional point in the three-dimensional point cloud picture as a second image characteristic of the preset three-dimensional point through at least one characteristic statistical mode of entropy calculation, integral sidelobe ratio calculation and peak sidelobe ratio calculation;
determining a third statistical characteristic corresponding to the amplitude of the three-dimensional point in the second neighborhood of the preset three-dimensional point in the three-dimensional point cloud picture as a third image characteristic of the preset three-dimensional point through at least one characteristic statistical mode of variance calculation, gradient calculation or Laplace operator; wherein the plane where the second neighborhood is located is perpendicular to the preset projection plane;
the projection feature acquisition module is specifically configured to:
determining the feature with the largest value among the first image features of the preset three-dimensional points in the preset linear direction as a first projection feature of the preset three-dimensional points in the preset linear direction on a preset projection plane;
determining the feature with the largest value among the second image features of the preset three-dimensional points in the preset linear direction as a second projection feature of the preset three-dimensional points in the preset linear direction on a preset projection plane;
determining a third image characteristic of a preset three-dimensional point in a preset linear direction as a third projection characteristic of the preset three-dimensional point in the preset linear direction on the preset projection plane;
the two-dimensional projection image acquisition module is specifically used for determining a two-dimensional projection image of the three-dimensional point cloud image on the preset projection plane based on the first projection feature, the second projection feature and the third projection feature on the preset projection plane;
the two-dimensional projection image acquisition module is specifically configured to:
determining a first pixel value of a preset three-dimensional point corresponding to a first projection feature on the preset projection plane in the three-dimensional point cloud picture, and generating a first projection subgraph based on the first pixel value;
determining a second pixel value of a preset three-dimensional point corresponding to a second projection feature on the preset projection plane in the three-dimensional point cloud image, and generating a second projection subgraph based on the second pixel value;
taking a third projection feature on the preset projection plane as a third pixel value, and generating a third projection subgraph based on the third pixel value;
performing fusion processing on the first projection subgraph, the second projection subgraph and the third projection subgraph to obtain a two-dimensional projection graph corresponding to the three-dimensional point cloud graph;
the two-dimensional projection image acquisition module is specifically configured to:
processing the first projection subgraph and the second projection subgraph by using a preset fusion algorithm to obtain a first intermediate image;
carrying out normalization processing and gray level transformation processing on the third projection subgraph to obtain a second intermediate image; wherein the gray scale transformation process is used for adjusting the difference of pixel values between a background region and a target region on the third projection subgraph;
determining a product of pixel values at the corresponding same positions on the first intermediate image and the second intermediate image, and generating a two-dimensional projection graph corresponding to the three-dimensional point cloud graph based on the product;
the three-dimensional point cloud picture acquisition module is specifically used for acquiring a three-dimensional point cloud picture of a scanned object;
the preset projection plane includes: a projection plane parallel to the front and/or back of the scanned object;
the three-dimensional point cloud picture acquisition module is specifically used for rotating an original three-dimensional point cloud picture based on a preset projection viewing angle to obtain a three-dimensional point cloud picture matched with the preset projection viewing angle;
the preset projection plane is a projection plane under the preset projection viewing angle.
11. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 8 when executing a program stored in the memory.
12. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of the claims 1-8.
CN202210305072.1A 2022-03-25 2022-03-25 Image generation method and device Active CN114677454B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210305072.1A CN114677454B (en) 2022-03-25 2022-03-25 Image generation method and device
PCT/CN2022/127355 WO2023179011A1 (en) 2022-03-25 2022-10-25 Image generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210305072.1A CN114677454B (en) 2022-03-25 2022-03-25 Image generation method and device

Publications (2)

Publication Number Publication Date
CN114677454A CN114677454A (en) 2022-06-28
CN114677454B true CN114677454B (en) 2022-10-04

Family

ID=82076155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210305072.1A Active CN114677454B (en) 2022-03-25 2022-03-25 Image generation method and device

Country Status (2)

Country Link
CN (1) CN114677454B (en)
WO (1) WO2023179011A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677454B (en) * 2022-03-25 2022-10-04 杭州睿影科技有限公司 Image generation method and device
CN115660789B (en) * 2022-11-23 2023-08-04 广州锐竞信息科技有限责任公司 Product image management system based on intelligent electronic commerce platform

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316325A (en) * 2017-06-07 2017-11-03 华南理工大学 A kind of airborne laser point cloud based on image registration and Image registration fusion method
CN110517193A (en) * 2019-06-28 2019-11-29 西安理工大学 A kind of bottom mounted sonar Processing Method of Point-clouds
CN111602171A (en) * 2019-07-26 2020-08-28 深圳市大疆创新科技有限公司 Point cloud feature point extraction method, point cloud sensing system and movable platform
WO2021051346A1 (en) * 2019-09-19 2021-03-25 深圳市大疆创新科技有限公司 Three-dimensional vehicle lane line determination method, device, and electronic apparatus
CN114219894A (en) * 2021-12-15 2022-03-22 中国科学院空天信息创新研究院 Three-dimensional modeling method, device, equipment and medium based on chromatography SAR point cloud

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3467785A1 (en) * 2017-10-06 2019-04-10 Thomson Licensing A method and apparatus for encoding a point cloud representing three-dimensional objects
CN108573231B (en) * 2018-04-17 2021-08-31 中国民航大学 Human body behavior identification method of depth motion map generated based on motion history point cloud
CN114677454B (en) * 2022-03-25 2022-10-04 杭州睿影科技有限公司 Image generation method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Layer-by-layer triangular meshing algorithm for three-dimensional point clouds from structured-light vision; Qin Xujia et al.; Computer Science; 2016-11-15; Vol. 43, No. 11A; pp. 383-388 *

Also Published As

Publication number Publication date
CN114677454A (en) 2022-06-28
WO2023179011A1 (en) 2023-09-28

Similar Documents

Publication Publication Date Title
US9521391B2 (en) Settings of a digital camera for depth map refinement
CN114677454B (en) Image generation method and device
Wilson et al. A new metric for grey-scale image comparison
US10346998B1 (en) Method of merging point clouds that identifies and retains preferred points
JP7062878B2 (en) Information processing method and information processing equipment
JP2018055679A (en) Selection of balanced probe sites for 3d alignment algorithms
US20170316597A1 (en) Texturing a three-dimensional scanned model with localized patch colors
CN110176010B (en) Image detection method, device, equipment and storage medium
US20160245641A1 (en) Projection transformations for depth estimation
WO2019228471A1 (en) Fingerprint recognition method and device, and computer-readable storage medium
CN113837952A (en) Three-dimensional point cloud noise reduction method and device based on normal vector, computer readable storage medium and electronic equipment
WO2021056501A1 (en) Feature point extraction method, movable platform and storage medium
US11816857B2 (en) Methods and apparatus for generating point cloud histograms
CN113689337A (en) Ultrasonic image super-resolution reconstruction method and system based on generation countermeasure network
US11748908B1 (en) Systems and methods for generating point-accurate three-dimensional models with point-accurate color information from a non-cosited capture
CN112883920A (en) Point cloud deep learning-based three-dimensional face scanning feature point detection method and device
CN116342519A (en) Image processing method based on machine learning
CN114387353A (en) Camera calibration method, calibration device and computer readable storage medium
Casas et al. Image-based multi-view scene analysis using'conexels'
JP6741154B2 (en) Information processing apparatus, information processing method, and program
CN111383262A (en) Occlusion detection method, system, electronic terminal and storage medium
CN116386016B (en) Foreign matter treatment method and device, electronic equipment and storage medium
GB2533450B (en) Settings of a digital camera for depth map refinement
CN112512434B (en) Ultrasonic imaging method and related equipment
CN116188726A (en) Human body 3D grid model construction method and system based on millimeter wave and image fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant