CN115880206A - Image accuracy judging method, device, equipment, storage medium and program product - Google Patents

Image accuracy judging method, device, equipment, storage medium and program product

Info

Publication number
CN115880206A
Authority
CN
China
Prior art keywords
point cloud
point
image
picture
cloud picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111130794.XA
Other languages
Chinese (zh)
Inventor
洪哲鸣
王少鸣
王军
赵伟
彭旭康
姚炜鹏
郭润增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111130794.XA
Publication of CN115880206A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The application provides an image accuracy judging method, device, equipment, storage medium and program product, which relate to image processing technology; the embodiments of the application can be applied to scenes such as cloud technology, artificial intelligence, intelligent traffic and vehicle-mounted systems. The method comprises the following steps: acquiring a depth map to be judged and a reference point cloud image of the photographed object contained in the depth map to be judged, the reference point cloud image being a point cloud image obtained by scanning whose precision is not less than a precision threshold; performing point cloud conversion on the depth map to be judged to obtain a converted point cloud image, which records the three-dimensional coordinate information of each point of the photographed object reconstructed from the depth values recorded in the depth map to be judged; and determining the accuracy of the depth map to be judged based on coincidence degree detection between the converted point cloud image and the reference point cloud image, thereby realizing image accuracy judgment. Through the method and device, the precision of image accuracy judgment can be improved.

Description

Image accuracy judging method, device, equipment, storage medium and program product
Technical Field
The present application relates to image processing technologies, and in particular, to a method, an apparatus, a device, a storage medium, and a program product for determining image accuracy.
Background
A depth map is an image in which the distance from the image capture device to each point in the captured scene is stored as the pixel value, so it reflects the depth information of objects in the scene. Depth maps are generally acquired by depth cameras, which are widely used in face payment systems and automatic driving systems.
Before a depth camera is used, the accuracy of the depth maps it acquires needs to be verified, so that the depth maps produced in actual use can be trusted. In the related art, the stereoscopic accuracy of a depth map is judged poorly, so the final precision of image accuracy judgment is low.
Disclosure of Invention
The embodiments of the application provide an image accuracy judging method, device, equipment, storage medium and program product, which can improve the precision of image accuracy judgment.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an image accuracy judging method, which comprises the following steps:
acquiring a depth map to be judged and a reference point cloud map of a shooting object contained in the depth map to be judged; the reference point cloud picture is a point cloud picture with the precision not less than a precision threshold value obtained by scanning, and the reference point cloud picture records real three-dimensional coordinate information of each point of the shooting object;
performing point cloud image conversion on the depth image to be judged to obtain a conversion point cloud image; the converted point cloud image records three-dimensional coordinate information of each point of the shooting object, which is reconstructed based on the depth value recorded by the depth image to be judged;
and determining the accuracy of the depth map to be judged based on the coincidence degree detection of the conversion point cloud map and the reference point cloud map, so as to realize the image accuracy judgment.
The embodiment of the application provides an image accuracy judging device, which comprises:
an image acquisition module, configured to acquire a depth map to be judged and a reference point cloud image of a photographed object contained in the depth map to be judged; the reference point cloud image is a point cloud image obtained by scanning whose precision is not less than a precision threshold, and it records real three-dimensional coordinate information of each point of the photographed object;
the image conversion module is used for converting the point cloud image aiming at the depth image to be judged to obtain a conversion point cloud image; the converted point cloud image records three-dimensional coordinate information of each point of the shooting object, which is reconstructed based on the depth value recorded by the depth image to be judged;
and the accuracy judgment module is used for determining the accuracy of the depth map to be judged based on the coincidence degree detection of the conversion point cloud map and the reference point cloud map, so as to realize the image accuracy judgment.
In some embodiments of the present application, the accuracy determining module is further configured to generate a plurality of matching point pairs by registering the converted point cloud image with the reference point cloud image; and to perform coincidence degree calculation on the plurality of matching point pairs to obtain the accuracy of the depth map to be judged.
In some embodiments of the present application, the accuracy determining module is further configured to perform distance calculation on the at least two points included in each of the plurality of matching point pairs to obtain a plurality of point pair distances; to calculate the coincidence degree of the converted point cloud image and the reference point cloud image according to the plurality of point pair distances; and to convert the coincidence degree into the accuracy of the depth map to be judged.
In some embodiments of the present application, the accuracy determining module is further configured to screen out, from the plurality of point pair distances, the first distances corresponding to the N points in the converted point cloud image that are closest to the coordinate origin and the second distances corresponding to the N points that are farthest from the coordinate origin; and to determine the difference between the average of the N first distances and the average of the N second distances as the coincidence degree of the converted point cloud image and the reference point cloud image, where N is a positive integer.
In some embodiments of the present application, the accuracy determining module is further configured to screen out, from the plurality of point pair distances, the third distances corresponding to M points in a target area of the converted point cloud image; and to calculate the variance of the Gaussian distribution fitted to the M third distances to obtain the coincidence degree of the converted point cloud image and the reference point cloud image, where M is a positive integer.
In some embodiments of the present application, the image accuracy determination apparatus further includes: the parameter determining module is used for determining the reconstructed shape parameters of the shot object according to the conversion point cloud picture and determining the reference shape parameters of the shot object according to the reference point cloud picture;
the accuracy judging module is further configured to determine the accuracy of the depth map to be judged according to a difference between the reconstructed shape parameter and the reference shape parameter.
In some embodiments of the present application, the converted point cloud image comprises a plurality of point cloud images; the accuracy judging module is further configured to screen out, from the reference point cloud image, for each point of each point cloud image in the plurality of point cloud images, a candidate point with the minimum distance; to generate a transformed point cloud image corresponding to each point cloud image based on a rigid body transformation between each point in that point cloud image and the corresponding candidate point; to screen out, from the reference point cloud image, a matching point with the minimum distance for each point of the transformed point cloud image corresponding to each point cloud image, and generate a plurality of transformed point pairs from each point of the transformed point cloud image and the corresponding matching point; and to fuse the transformed point cloud images corresponding to the point cloud images to obtain a fused point cloud image, and determine the plurality of matching point pairs from the plurality of transformed point pairs for each point in the fused point cloud image.
In some embodiments of the present application, the transformation parameters include: a rotation parameter and a translation parameter; the accuracy judging module is further configured to determine the rotation parameter and the translation parameter by performing rigid body transformation with a minimum distance between each point of each point cloud picture and the corresponding candidate point; and carrying out transformation corresponding to the rotation parameters and the translation parameters on each point cloud picture to obtain the transformed point cloud picture corresponding to each point cloud picture.
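Illustratively, one common closed-form way to obtain such a rotation parameter and translation parameter from matched points is the SVD-based (Kabsch) least-squares solution sketched below in Python; the embodiments do not prescribe a particular solver, so this choice of method and the function name are assumptions made only for illustration.

import numpy as np

def estimate_rigid_transform(src, dst):
    # Least-squares rotation R and translation t such that R @ src_i + t ≈ dst_i.
    # src, dst: (N, 3) arrays of matched points (point cloud point and its
    # candidate point in the reference point cloud).
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t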
In some embodiments of the present application, the accuracy determining module is further configured to screen out a target transformation point cloud graph from the transformation point cloud graphs corresponding to each point cloud graph; calculating the normal difference and the space distance between each point in the target transformation point cloud picture and each point in other transformation point cloud pictures; the other transformation point cloud pictures refer to transformation point cloud pictures except the target transformation point cloud picture in the transformation point cloud picture corresponding to each point cloud picture; according to the normal difference and the space distance, aiming at each point of the target transformation point cloud picture, screening out points to be fused from other transformation point cloud pictures; and performing weighted fusion on each point of the target transformation point cloud picture and the corresponding point to be fused to obtain the fused point cloud picture.
In some embodiments of the present application, the accuracy determining module is further configured to determine, for each point of the target transformation point cloud picture, a target matching point from the reference point cloud picture, and determine, for the point to be fused, a fused matching point from the reference point cloud picture; determining a first fusion weight of each point of the target transformation point cloud picture by using the distance between each point in the target transformation point cloud picture and the target matching point; determining a second fusion weight of the point to be fused by using the distance between the point to be fused and the fusion matching point; and according to the first fusion weight and the second fusion weight, performing weighted fusion on each point of the target transformation point cloud picture and the point to be fused to obtain the fusion point cloud picture.
In some embodiments of the present application, the depth map to be determined includes: a plurality of depth maps in succession; the image conversion module is further configured to perform downsampling on the plurality of depth maps respectively to obtain a plurality of downsampled depth maps; performing image conversion on the plurality of downsampling depth maps and the plurality of depth maps to obtain a plurality of point cloud maps; and determining the plurality of point cloud pictures as the conversion point cloud picture.
The embodiment of the application provides an electronic device for image accuracy judgment, including:
a memory for storing executable instructions;
and the processor is used for realizing the image accuracy judging method provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the image accuracy judging method provided by the embodiments of the present application.
The embodiment of the present application provides a computer program product, which includes a computer program or an instruction, and the computer program or the instruction, when executed by a processor, implements the image accuracy determination method provided in the embodiment of the present application.
The embodiments of the application have the following beneficial effects: the electronic device can convert the depth map to be judged into a point cloud image, and by calculating the coincidence degree of the converted point cloud image and the reference point cloud image, it determines the difference between the three-dimensional coordinate information of each point of the photographed object reconstructed from the depth values to be judged and the real three-dimensional coordinate information of each point of the photographed object. In this way it can judge whether the spatial coordinates of the object determined from the depth map agree with the actual spatial coordinates of the object, which improves the judgment of the stereoscopic accuracy of the depth map and thus the precision of image accuracy judgment.
Drawings
FIG. 1 is a schematic diagram of a speckle structured light imaging system;
FIG. 2 is a schematic illustration of a point cloud;
FIG. 3 is a schematic diagram illustrating an architecture of an image accuracy determination system according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of the terminal in fig. 3 provided in an embodiment of the present application;
fig. 5 is a first flowchart illustrating an image accuracy determining method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a reference point cloud image provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a camera coordinate system converted into a pixel coordinate system according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a transformation relationship provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of camera external referencing provided by embodiments of the present application;
fig. 10 is a second flowchart of the image accuracy determining method according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of reference points used for coincidence degree determination according to an embodiment of the present application;
fig. 12 is a third schematic flowchart of an image accuracy determining method according to an embodiment of the present application;
fig. 13 is a fourth schematic flowchart of an image accuracy determining method according to an embodiment of the present application;
fig. 14 is a schematic diagram illustrating the principle of determining the accuracy of a depth map captured by a 3D camera in a face payment system according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second" and "third" are used only to distinguish similar objects and do not denote a particular order or importance; where permissible, their order may be interchanged, so that the embodiments of the present application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) The depth camera is a camera capable of capturing a depth distance of a shooting space. Compared with a common color camera, the depth camera can record the distance between an object and the camera, so that the three-dimensional coordinates of each point in an image can be obtained, a real scene can be restored through the three-dimensional coordinates, and applications such as scene modeling are realized.
2) A color map is obtained by a color sensor collecting natural light and imaging it. The color image is represented by two-dimensional matrices of the three primary colors (RGB); each primary-color value lies between 0 and 255, where 0 means the corresponding primary color is absent at the pixel and 255 means it reaches its maximum at the pixel.
3) A depth map is an image in which the distance from the image capture device to each point in the captured scene is stored as the pixel value. It can be obtained, for example, by using an infrared sensor to collect speckle-structured infrared light and then analyzing the speckles. A depth map is an image or image channel that contains the depth information from the surface of objects to a viewpoint; each pixel of the depth map represents the perpendicular distance between the depth camera plane and the plane of the photographed object. Although the depth map contains depth information of an object, its x-coordinate and y-coordinate are pixel coordinates.
4) The infrared image is an infrared image presented by collecting the infrared light by the infrared sensor.
5) The speckle structured light is lattice light projected by an infrared speckle projector and arranged according to a certain structural rule.
6) Speckle structured light imaging is a common imaging mode for depth cameras. In speckle structure light imaging, speckle structure light is generally projected to the surface of an object by an infrared laser projector and then collected by an infrared sensor, so that three-dimensional (3D) coordinate information of the surface of the object can be restored according to a triangulation principle, and a depth map is obtained.
Illustratively, FIG. 1 is a schematic diagram of a speckle structured light imaging system. The speckle structure light imaging system 1-1 scans the face 1-2 to obtain the depth map of the face. The speckle structured light imaging system 1-1 comprises an infrared speckle projector 1-11 and an infrared sensor 1-12, wherein the infrared speckle projector 1-11 is used for projecting a structured light pattern to a face 1-2, and the infrared sensor 1-12 is used for collecting the structured light pattern reflected by the face surface, so that the spatial information of the face surface is calculated through the deformation of the structured light pattern, and a depth map is obtained.
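Illustratively, a minimal Python sketch of the triangulation principle mentioned above is given below; it assumes a rectified projector-sensor pair with known focal length and baseline, and the function name and parameters are illustrative assumptions rather than part of the original disclosure.

import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Triangulation principle: Z = f * b / d, where d is the per-pixel shift of
    # the observed speckle pattern relative to the reference pattern, f is the
    # infrared sensor focal length in pixels, and b is the projector-to-sensor
    # baseline in metres.
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth  # depth map in metres; invalid (zero-disparity) pixels stay 0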
7) A point cloud image records the three-dimensional (3D) coordinate information of an object in the real world. The point cloud image can be computed from a depth map and the camera parameters, and can be displayed in a 3D rendering engine to reflect the 3D positional relationship of different points.
Illustratively, FIG. 2 is a schematic diagram of a point cloud in which 3D coordinate information of a hand in the real world is represented by a point cloud 2-1.
8) The camera intrinsic parameters are parameters describing the conversion relationship between the 3D coordinates of an object in the real world imaged on the camera sensor and the imaged pixel coordinates. The depth map and the point cloud image can be converted into each other through the camera intrinsic parameters.
9) The camera coordinate system is a coordinate system that takes the optical center of the camera as the origin of coordinates and the optical axis as the z-axis, with the x-axis and y-axis parallel to the x-axis and y-axis of the pixel plane imaged by the camera. It should be noted that the 3D coordinates obtained from the depth map and the camera intrinsic parameters are coordinates in the camera coordinate system.
10) Camera extrinsic parameters are parameters used to describe the transformation relationship between another 3D coordinate system and the camera coordinate system. With multiple cameras, when the coordinates of an object point in one camera's coordinate system are converted into another camera's coordinate system through a rotation matrix and a translation matrix, that rotation matrix and translation matrix are the extrinsic parameters between the two cameras; the camera extrinsic parameters therefore describe the conversion relationship between the coordinates of different cameras.
11) Liveness detection is the process of determining whether the object detected by a depth camera is a live body. For example, in face payment it is generally necessary to determine whether the person brushing the face is a real person, a photo or a silicone model; typically, whether it is a photo is judged from the depth map, and whether it is a silicone model is judged from the brightness of the infrared map.
12) Contrast recognition is the process of identifying which user the person brushing the face is during face payment. It generally extracts features from the color image and compares feature similarity, and then additionally compares three-dimensional feature similarity through the depth map to obtain the recognition result.
The depth map is an image in which the depth from the image pickup to each point in the captured scene is a pixel value, and can reflect depth information of an object in the captured scene. The depth map is generally acquired by a depth camera, and the depth camera is widely applied to living body detection and contrast recognition in a face payment system and an automatic driving system.
Before the depth camera is used, accuracy verification needs to be performed on a depth map acquired by the depth camera, so that the credibility of the depth map acquired by the depth camera in the actual use process is guaranteed.
In the related art, the depth of a horizontal plane reflected by a depth map is usually compared with the actual, measured depth of that plane, and the accuracy of the depth map is determined from the difference between the two. However, this method only evaluates the accuracy of the distance from the object to the camera plane, i.e. the planar accuracy of the depth map; it is difficult to judge whether the spatial coordinates of the object determined from the depth map match the object's actual spatial coordinates. The method is therefore not suited to judging the stereoscopic accuracy of the depth map, its judgment effect on stereoscopic accuracy is poor, and the final precision of image accuracy judgment is low.
The embodiment of the application provides a method, a device and equipment for judging the accuracy of an image, a storage medium and a program product, which can improve the judgment precision of the accuracy of the image. An exemplary application of the electronic device for performing image accuracy determination according to the embodiment of the present application is described below, and the electronic device according to the embodiment of the present application may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, and a mobile device, and may also be implemented as a server. In the following, an exemplary application when the electronic device is implemented as a terminal will be explained.
Referring to fig. 3, fig. 3 is a schematic diagram of an architecture of an image accuracy determining system according to an embodiment of the present disclosure. In order to support an image accuracy determination application, in the image accuracy determination system 100 shown in fig. 3, the terminal 400 is connected to the depth camera 200 and the high-precision scanner 500 through the network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two.
The terminal 400 is configured to obtain the depth map to be judged from the depth camera 200 and to obtain, from the high-precision scanner 500, a reference point cloud image of the photographed object contained in the depth map to be judged, where the reference point cloud image is a point cloud image obtained by scanning with the high-precision scanner 500 whose precision is not less than a precision threshold and which records the real three-dimensional coordinate information of each point of the photographed object; to perform point cloud conversion on the depth map to be judged to obtain a converted point cloud image, where the converted point cloud image records the three-dimensional coordinate information of the photographed object reconstructed based on the depth values recorded in the depth map to be judged; and to determine the accuracy of the depth map to be judged based on coincidence degree detection between the converted point cloud image and the reference point cloud image, thereby realizing image accuracy judgment.
In some embodiments, the terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, a smart home appliance, a vehicle-mounted terminal, and the like. The terminal 400, the depth camera 200 and the high-precision scanner 500 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present invention is not limited thereto.
Referring to fig. 4, fig. 4 is a schematic structural diagram of the terminal in fig. 3 according to an embodiment of the present application, where the terminal 400 shown in fig. 4 includes: at least one processor 410, memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable communications among the components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 440 in fig. 4.
The processor 410 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 may include volatile memory, nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 450 described in the embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for communicating with other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: Bluetooth, wireless fidelity (Wi-Fi), Universal Serial Bus (USB), etc.;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided in the embodiments of the present application may be implemented in software, and fig. 4 illustrates an image accuracy determining apparatus 455 stored in the memory 450, which may be software in the form of programs and plug-ins, and includes the following software modules: an image acquisition module 4551, an image conversion module 4552, an accuracy determination module 4553 and a parameter determination module 4554, which are logical and thus may be arbitrarily combined or further separated depending on the functions implemented. The functions of the respective modules will be explained below.
In other embodiments, the image accuracy judging apparatus provided in the embodiments of the present application may be implemented in hardware. For example, it may be a processor in the form of a hardware decoding processor programmed to execute the image accuracy judging method provided in the embodiments of the present application; the processor in the form of a hardware decoding processor may be one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
By way of example, an embodiment of the present application provides an electronic device for image accuracy determination, including:
a memory for storing executable instructions;
and the processor is used for realizing the image accuracy judging method provided by the embodiment of the application when the executable instructions stored in the memory are executed.
In some embodiments, the electronic device may implement the image accuracy determination method provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; may be a Native Application (APP), i.e. a program that needs to be installed in an operating system to run; or may be an applet, i.e. a program that can be run only by downloading it to the browser environment; but also an applet that can be embedded into any APP. In general, the computer programs described above may be any form of application, module or plug-in.
The image accuracy judging method provided by the embodiment of the application can be applied to various scenes such as cloud technology, artificial intelligence, intelligent traffic, vehicle-mounted and the like. In the following, the image accuracy determination method provided by the embodiment of the present application will be described in conjunction with an exemplary application and implementation of the electronic device for image accuracy determination provided by the embodiment of the present application.
Referring to fig. 5, fig. 5 is a first flowchart of an image accuracy determining method provided in the embodiment of the present application, and will be described with reference to the steps shown in fig. 5.
S101, obtaining a depth map to be judged and a reference point cloud picture of a shooting object contained in the depth map to be judged.
The embodiments of the application are implemented in scenes where the accuracy of a depth map is judged. For example, accuracy judgment may be performed on depth maps captured by the depth camera of a face payment system or of an automatic driving system, and whether these depth cameras are accurate is determined from the accuracy of their depth maps; or accuracy judgment may be performed on any depth map to determine whether it needs to be re-collected. The electronic device may obtain the depth map to be judged and the reference point cloud image from its own storage space, may download them from a network, or may directly call a depth camera connected to the electronic device and a high-precision scanner used for scanning point cloud images, photographing the photographed object with the depth camera to obtain the depth map to be judged and scanning the photographed object with the high-precision scanner to obtain the reference point cloud image.
It should be noted that, in order to accurately reflect the three-dimensional coordinate information of each point of the photographed object, the reference point cloud image is required to be a point cloud image obtained by scanning whose precision is not less than the precision threshold; that is, the reference point cloud image is regarded as an accurate point cloud image of the photographed object and therefore records the real three-dimensional coordinate information of each point of the photographed object. It can be understood that the depth map to be judged and the reference point cloud image share the same photographed object, so that the accuracy of the depth map to be judged can be determined accurately.
It can be understood that the shooting object may be a dedicated silicone head mold, or may also be a human face, a common article, and the like, and the present application is not limited herein.
In some embodiments, the depth map to be determined may include a plurality of depth maps, the plurality of depth maps may be obtained by continuously shooting the shooting object by a depth camera, and the plurality of depth maps may be obtained by shooting the shooting object from the same shooting angle or from different shooting angles. Similarly, the reference point cloud image may also include a plurality of reference images, and the plurality of reference images may be obtained by scanning the object at the same angle or obtained by scanning the object at different angles, which is not limited herein.
For example, fig. 6 is a schematic diagram of a reference point cloud graph provided in an embodiment of the present application. The image 6-1 shows the shot object 6-11, and the electronic device can scan the shot object 6-11 at two angles of the side surface and the front surface respectively by calling a high-precision scanner to obtain a side reference image 6-2 and a front reference image 6-3. The side reference fig. 6-2 and the front reference fig. 6-3 are reference point clouds.
S102, performing point cloud conversion on the depth map to be judged to obtain a converted point cloud image.
The electronic device acquires the camera parameters corresponding to the depth map to be judged, and then converts the pixel points in the depth map to be judged from the pixel coordinate system into the three-dimensional camera coordinate system according to the camera parameters; that is, for each pixel point of the depth map, the physical coordinates of that point in the real world are determined, so that a converted point cloud image is obtained.
That is to say, the converted point cloud image records the three-dimensional coordinate information of each point of the photographed object reconstructed based on the depth values recorded in the depth map to be judged, and thus directly reflects whether the depth map to be judged is accurate. Therefore, by jointly analyzing the three-dimensional coordinate information reflected by the converted point cloud image and that reflected by the reference point cloud image, whether the depth map to be judged is accurate can be determined.
It can be understood that the camera parameters corresponding to the depth map to be determined refer to camera parameters of the depth camera that captures the depth map to be determined. The process of the electronic device converting the points of the camera coordinate system into the points of the pixel coordinate system using the camera intrinsic parameters can be explained by fig. 7.
Fig. 7 is a schematic diagram of converting the camera coordinate system into the pixel coordinate system according to an embodiment of the present application. In fig. 7, Oc-XcYcZc is the camera coordinate system with the optical center Oc as its origin (unit: m); o-xy is the image coordinate system whose origin is the image midpoint (unit: m); uv is the pixel coordinate system whose origin is the upper-left corner of the image (unit: pixel). P(Xc, Yc, Zc) is a point in the camera coordinate system, p is the imaged point of P in the image, with coordinates (x, y) in the image coordinate system (o-x-y) and (u, v) in the pixel coordinate system (o-u-v), and f is the focal length. The conversion from the camera coordinate system to the pixel coordinate system can then be expressed as formula (1):

$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x & 0 & u_0\\ 0 & f_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}X_c\\ Y_c\\ Z_c\end{bmatrix}\tag{1}$$

where $(u_0, v_0)$ are the pixel coordinates of the origin of the image coordinate system (the principal point), $f_x = f/dx$ and $f_y = f/dy$.
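Illustratively, a minimal Python sketch of inverting formula (1), i.e. back-projecting a depth map into camera-coordinate 3D points to obtain a converted point cloud, is given below; the function name and the assumption that invalid pixels carry a depth value of zero are illustrative only.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, u0, v0):
    # Invert formula (1): Xc = (u - u0) * Zc / fx, Yc = (v - v0) * Zc / fy,
    # with Zc taken from the depth map at pixel (u, v).
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - u0) * z / fx
    y = (v - v0) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # keep only pixels with a valid depth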
It should be noted that the camera intrinsic parameters reflect the conversion relationship between the three-dimensional coordinate information during imaging of the depth camera and the pixel coordinates after imaging. Illustratively, fig. 8 is a schematic diagram of this conversion relationship provided in an embodiment of the present application. Referring to fig. 8, the perpendicular distance from the optical center 8-11 in the camera plane 8-1 (coordinate system O-x-y-z) to the imaging plane 8-2 (O'-x'-y'-z) is the focal length 8-3. The projection of any point P in three-dimensional space onto the imaging plane 8-2 through the optical center 8-11 is P'; accordingly, similar triangles 8-4 are obtained, such as the triangle formed by O-A-P and the triangle formed by O-B-P', and a conversion relationship exists between x' and f in the similar triangles 8-4 and the X and Z of the real point P.
Further, the depth camera may be a binocular camera, and when the depth map is generated by using the binocular camera, images respectively captured by the binocular camera need to be converted into the same camera coordinate system, that is, which two points of the left and right cameras are corresponding to each other is determined. The camera external reference describes the conversion relationship between two camera coordinate systems.
For example, fig. 9 is a schematic diagram of camera extrinsic parameters provided in an embodiment of the present application. The center of the left camera of the binocular camera is c0 and the center of the right camera is c1. P is any point in space, and the projections of P onto the image planes corresponding to c0 and c1 are x0 and x1 respectively. The intersections e0 and e1 of the line connecting c0 and c1 with image plane 9-1 and image plane 9-2 are the epipoles, l0 and l1 are the epipolar lines, and c0, c1 and P form the epipolar plane 9-3. It can be seen that the camera coordinate system of the left camera and that of the right camera can be converted into each other through a rotation matrix R and a translation matrix t, i.e. (R, t), which are the camera extrinsic parameters.
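Illustratively, applying the extrinsic parameters (R, t) to map points from the left camera coordinate system into the right camera coordinate system can be sketched as follows in Python; the function name is an assumption for illustration.

import numpy as np

def left_to_right(points_left, R, t):
    # X_right = R @ X_left + t, applied to every (x, y, z) row of the array.
    return points_left @ np.asarray(R).T + np.asarray(t)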
It should be noted that, when the depth map to be determined includes a plurality of depth maps, the electronic device converts all of the depth maps into the point cloud map, so that the converted point cloud map includes a plurality of point cloud maps.
S103, determining the accuracy of the depth map to be judged based on coincidence degree detection of the conversion point cloud map and the reference point cloud map, and realizing image accuracy judgment.
The coincidence degree between the converted point cloud image and the reference point cloud image indicates the difference between the three-dimensional coordinate information of the photographed object reconstructed from the depth map to be judged and the real three-dimensional coordinate information of the photographed object, and therefore reflects the stereoscopic accuracy of the depth map to be judged. Accordingly, in the embodiment of the application, the electronic device calculates the coincidence degree of the converted point cloud image and the reference point cloud image and then determines the accuracy of the depth map to be judged from this coincidence degree. In this way the stereoscopic accuracy of the depth map to be judged can be judged accurately, and the accuracy of the depth map can be judged more precisely.
It should be noted that, the electronic device may directly determine the coincidence degree of the conversion point cloud image and the reference point cloud image as the accuracy of the depth map to be determined, and also compare the coincidence degree of the conversion point cloud image and the reference point cloud image with thresholds corresponding to multiple coincidence levels, and determine the preset accuracy of the coincidence level hit by the coincidence degree as the accuracy of the depth map to be determined, which is not limited herein.
It can be understood that, when the conversion point cloud image includes a plurality of point cloud images, the electronic device may fuse the plurality of point cloud images, and then perform coincidence detection with a fusion result of a plurality of reference images included in the reference point cloud image, or may perform coincidence detection on each point cloud image and a reference image in the plurality of reference images at the same angle as the point cloud image to obtain a plurality of coincidence degrees, and then perform weighted fusion on the plurality of coincidence degrees to obtain the accuracy of the depth image to be determined.
In the embodiment of the application, compared with the related-art approach of determining image accuracy by comparing the horizontal-plane depth reflected by a depth map with the measured actual depth of that plane, the depth map to be judged can be converted into a point cloud image, and by calculating the coincidence degree between the converted point cloud image and the reference point cloud image, the difference between the three-dimensional coordinate information of each point of the photographed object reconstructed from the depth values to be judged and the real three-dimensional coordinate information of each point of the photographed object is determined. It can thus be judged whether the spatial coordinates of the object determined from the depth map agree with the actual spatial coordinates of the object, which improves the judgment of the stereoscopic accuracy of the depth map and ultimately the precision of image accuracy judgment. Furthermore, since converting a depth map into a point cloud image only requires the camera parameters, calculating the coincidence degree between the converted point cloud image and the reference point cloud image can also assist in judging whether the camera parameters of the depth camera are accurate.
Based on fig. 5, referring to fig. 10, fig. 10 is a schematic flowchart of a second image accuracy determination method provided in the embodiment of the present application. In some embodiments of the present application, determining the accuracy of the depth map to be determined based on detecting the coincidence of the conversion point cloud map and the reference point cloud map, that is, the specific implementation process of S103 may include: S1031-S1032 are as follows:
and S1031, registering the conversion point cloud image and the reference point cloud image to generate a plurality of matching point pairs.
The electronic device registers the converted point cloud image with the reference point cloud image, so that for each point in the converted point cloud image a corresponding matching point is found in the reference point cloud image, and each point is paired with its matching point to generate a plurality of matching point pairs. The two points of a matching point pair respectively represent the reconstructed three-dimensional coordinate information of a real-world point (the point in the converted point cloud image) and the real three-dimensional coordinate information of that point (the point in the reference point cloud image).
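Illustratively, one way to register the converted point cloud image with the reference point cloud image into matching point pairs is a nearest-neighbour search, for example with a k-d tree, as sketched below in Python; the embodiments do not mandate this particular matcher, so the use of scipy and the function name are assumptions for illustration.

import numpy as np
from scipy.spatial import cKDTree

def build_matching_point_pairs(converted_pts, reference_pts):
    # Pair every point of the converted point cloud with its nearest
    # neighbour in the reference point cloud.
    tree = cKDTree(reference_pts)
    distances, idx = tree.query(converted_pts, k=1)
    # pairs: (N, 2, 3) holding each converted point and its matched reference
    # point; distances: the (N,) point pair distances used later for the
    # coincidence degree.
    pairs = np.stack([converted_pts, reference_pts[idx]], axis=1)
    return pairs, distances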
And S1032, respectively calculating the coincidence degree of the plurality of matching point pairs to obtain the accuracy of the depth map to be judged.
The electronic device calculates the coincidence degree of the two points in each matching point pair to obtain the coincidence degree of each matching point pair, and then analyzes the coincidence degree of each matching point pair to determine the difference between the converted point cloud image and the reference point cloud image clearly, so that the accuracy of the depth map to be judged can be analyzed accurately.
It can be understood that the electronic device may determine the coincidence degree of each matching point pair by calculating the spatial distance between the points in the pair, by calculating the projected positions of the two points in the pair, or in other feasible ways, which is not limited herein.
In the embodiment of the application, the electronic equipment registers the conversion point cloud picture with the reference point cloud picture to obtain a plurality of matching point pairs, and then calculates the degree of coincidence in the dimension of the matching point pairs, namely, the accuracy of the point dimension is judged for the depth map to be judged, so that the judgment dimension of the accuracy is more detailed, and the accuracy is more accurate.
In some embodiments of the present application, obtaining the accuracy of the depth map to be judged by performing coincidence degree calculation on the plurality of matching point pairs, that is, the specific implementation process of S1032, may include S1032a-S1032c, as follows:
s1032a, performing distance calculation on at least two points included in each of the plurality of matching point pairs to obtain a plurality of point pair distances.
In the embodiment of the application, each matching point pair comprises at least two points. The electronic equipment calculates the distance of at least two points contained in each matching point pair, and takes the calculated distance as the point pair distance corresponding to each matching point pair. When the electronic device completes distance calculation for a plurality of matching point pairs, a plurality of point pair distances are obtained.
It is to be understood that when each matching point pair contains only two points, the electronic device determines the spatial distance between the two points as the point pair distance. When each matching point pair contains more than two points, the electronic device finds the distance for each pair of the points, and then takes the average value of the distances as the final point-to-point distance.
S1032b, determining the coincidence ratio of the conversion point cloud picture and the reference point cloud picture according to the plurality of point pair distances.
The electronic device analyzes the plurality of point pair distances to determine, for each real-world point, the distance between its reconstructed three-dimensional coordinate information and its real three-dimensional coordinate information; from these distances the difference between the converted point cloud image and the reference point cloud image can be judged, giving the coincidence degree.
In some embodiments, the electronic device may directly determine the minimum or maximum of the point pair distances as the coincidence degree, may average the point pair distances and use the mean as the coincidence degree, or may screen out, from the point pair distances, those corresponding to points in a specific area of the point cloud image and analyze them to obtain the coincidence degree, and so on, which is not limited herein.
And S1032c, converting the contact ratio into the accuracy of the depth map to be judged.
The electronic device may directly determine the degree of coincidence as the accuracy of the depth map to be determined, or may perform conversion processing such as rounding, halving, and percentage calculation on the degree of coincidence to obtain the accuracy of the depth map to be determined.
For example, when the coincidence degree is 0.823, the electronic device may determine the accuracy of the depth map to be determined to be 0.8, etc. by rounding, which is not limited herein.
In the embodiment of the application, the electronic device calculates the point pair distance of the matching point pair, and the point pair distance is analyzed to obtain the accuracy of the depth map to be judged, so that the accuracy of the depth map to be judged can be measured in the dimension of the spatial distance, and the obtained accuracy is more accurate.
In some embodiments of the present application, calculating the coincidence degree of the converted point cloud image and the reference point cloud image according to the plurality of point pair distances, that is, the specific implementation process of S1032b, may include S201-S202, as follows:
s201, respectively screening out a first distance corresponding to the N points which are closest to the origin of coordinates in the converted cloud picture and a second distance corresponding to the N points which are farthest from the origin of coordinates from the plurality of point pair distances.
The electronic device selects, from the points contained in the converted point cloud image, the N points closest to the coordinate origin of the camera coordinate system and the N points farthest from it as reference points for coincidence degree judgment. Each point in the converted point cloud image belongs to a matching point pair, and the matching point pairs correspond to the point pair distances, so the electronic device can screen out, from the plurality of point pair distances, the first distances corresponding to the N points nearest to the coordinate origin to obtain N first distances, and the second distances corresponding to the N points farthest from the coordinate origin to obtain N second distances, and subsequently use the N first distances and the N second distances to analyze the error distribution of point pairs in different areas of the converted point cloud image.
It is understood that N is a positive integer. The value of N may be set, for example, 100, 200, etc., or may be calculated according to the total number of points included in the converted point cloud image and the point-taking ratio automatically generated by the electronic device. The point extraction ratio is a ratio of points to be extracted to a total number of points included in the cloud image of the conversion points, for example, 30%,50%, and the like, and the point extraction ratio may be randomly generated by the electronic device, or may be automatically selected by the electronic device according to time, a total number of matching point pairs, and the like.
For example, fig. 11 is a schematic diagram of a reference point for performing coincidence degree determination according to an embodiment of the present application. In fig. 11, each point 11-1 in the cloud image of the transformed points is represented by a black solid point, and the electronic device screens 30% of the points 11-3 (including 2 points) closest to the origin of coordinates 11-2 and 30% of the points 11-4 (including 3 points) farthest from the origin of coordinates 11-2 as reference points for coincidence degree determination, and extracts point-to-point distances corresponding to the points.
S202, determining the difference between the average value of the N first distances and the average value of the N second distances as the coincidence degree of the conversion point cloud picture and the reference point cloud picture.
The electronic device averages the N first distances to obtain their mean value, and averages the N second distances to obtain their mean value. Then the electronic device calculates the difference between the mean of the N first distances and the mean of the N second distances, and the calculated difference is the coincidence degree of the converted point cloud image and the reference point cloud image.
It is understood that the electronic device may obtain the difference result by subtracting the average value of the N first distances from the average value of the N second distances, or may obtain the difference result by comparing the average value of the N first distances with the average value of the N second distances, which is not limited herein.
In the embodiment of the application, the electronic equipment can clearly determine the contact ratio of the conversion point cloud picture and the reference point cloud picture by comparing the error conditions of points in different areas of the conversion point cloud picture, so that the ways of calculating the contact ratio from point pair distances become more diverse, and because the errors in different areas are compared against each other, the resulting contact ratio is more accurate.
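By way of illustration only, the following is a minimal Python sketch of this near/far-region calculation. It assumes that points holds the three-dimensional coordinates of the converted point cloud picture in the camera coordinate system, that pair_dists[i] is the point pair distance of the matching point pair to which points[i] belongs, and that the 30% sampling ratio is merely an example value; none of these names or values is fixed by the present application.

```python
import numpy as np

def near_far_contact_ratio(points: np.ndarray, pair_dists: np.ndarray, ratio: float = 0.3) -> float:
    """Difference between the mean point pair distance of the N points nearest the
    coordinate origin and that of the N points farthest from it."""
    radii = np.linalg.norm(points, axis=1)        # distance of each point to the origin
    n = max(1, int(len(points) * ratio))          # N, here taken as 30% of all points
    order = np.argsort(radii)
    first_mean = pair_dists[order[:n]].mean()     # mean of the N first distances
    second_mean = pair_dists[order[-n:]].mean()   # mean of the N second distances
    return abs(first_mean - second_mean)          # difference used as the contact ratio
```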
In some embodiments of the present application, calculating a coincidence ratio between the conversion point cloud image and the reference point cloud image according to the plurality of point-to-point distances, that is, a specific implementation process of S1032b may include: S203-S204, as follows:
S203, screening out a third distance corresponding to the M points in the target area of the conversion point cloud picture from the plurality of point pair distances.
The electronic equipment screens out M points in a target area of the conversion point cloud picture as reference points for judging the contact ratio, then obtains, from the plurality of point pair distances, the point pair distance corresponding to the matching point pair to which each of the M points belongs, and records it as a third distance, so that the electronic equipment obtains M third distances.
It is understood that the target area may be any area in the conversion point cloud image, for example, an area closest to the coordinate origin of the camera coordinate system by 30%, or an area where the points in the conversion point cloud image are the most dense, and the present application is not limited thereto.
And S204, calculating the Gaussian distribution variance of the M third distances to obtain the coincidence ratio of the conversion point cloud picture and the reference point cloud picture.
The electronic equipment fits a Gaussian distribution to the M third distances, calculates the variance of the fitted Gaussian distribution, and determines the calculated variance as the contact ratio, wherein M is a positive integer.
In the embodiment of the application, the electronic equipment can calculate the contact ratio based on a statistical analysis of the point pair distances of points in the same region, so that the ways of calculating the contact ratio are more diverse and the contact ratio is more accurate.
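By way of illustration only, a minimal Python sketch of this same-region calculation is given below. The choice of the target area (the 50% of points nearest the coordinate origin) and the parameter names are assumptions made for this sketch, not values fixed by the embodiment.

```python
import numpy as np

def region_variance_contact_ratio(points: np.ndarray, pair_dists: np.ndarray, ratio: float = 0.5) -> float:
    """Variance of a Gaussian fitted to the point pair distances of the M points
    lying in the target area (here: the points nearest the coordinate origin)."""
    radii = np.linalg.norm(points, axis=1)
    m = max(1, int(len(points) * ratio))          # M, here taken as 50% of all points
    third = pair_dists[np.argsort(radii)[:m]]     # the M third distances
    mu = third.mean()                             # maximum-likelihood Gaussian mean
    return float(((third - mu) ** 2).mean())      # Gaussian variance used as the contact ratio
```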
Based on fig. 5, referring to fig. 12, fig. 12 is a third schematic flowchart of the image accuracy determination method provided in the embodiment of the present application. In some embodiments of the present application, after the point cloud picture conversion is performed on the depth map to be determined and the converted point cloud picture is obtained, that is, after S102, the method may further include S104-S105, as follows:
and S104, determining the reconstructed shape parameters of the shot object according to the conversion point cloud picture, and determining the reference shape parameters of the shot object according to the reference point cloud picture.
The electronic equipment measures shape-related parameters of the shot object, such as the radius, the side length and the height, according to the conversion point cloud picture, and records them as the reconstructed shape parameters. Meanwhile, the electronic equipment also measures the radius, the side length, the height and other parameters of the shot object according to the reference point cloud picture; because the reference point cloud picture is accurate, the reference shape parameters obtained from it are accurate as well.
And S105, determining the accuracy of the depth map to be judged through the difference between the reconstruction shape parameters and the reference shape parameters.
The electronic device compares the reconstructed shape parameters with the reference shape parameters and determines the differences between them, such as the difference in radius, the difference in height, and the like. Then, the electronic device further processes the obtained differences, for example taking the reciprocal of the radius difference or of the height difference as the accuracy, or determining the difference between 1 and the percentage difference as the accuracy, and the application is not limited herein.
Further, the photographic subject may be required to be an object of a standard shape, such as a cube, a cylinder, a sphere, etc., so that the accuracy of the depth map to be judged can be accurately obtained.
In the embodiment of the application, the electronic equipment can also determine the accuracy of the depth map to be judged based on the shape measurement of the shot object, so that the judgment mode of the accuracy is more diversified.
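By way of illustration only, a minimal Python sketch of this shape-based check is given below for the case of a spherical photographic subject. The least-squares radius fit and the mapping of the difference to an accuracy value (one minus the relative difference) are assumptions of this description rather than limitations of the present application.

```python
import numpy as np

def fit_sphere_radius(points: np.ndarray) -> float:
    """Least-squares sphere fit: x^2+y^2+z^2 = 2a*x + 2b*y + 2c*z + (r^2 - a^2 - b^2 - c^2)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    return float(np.sqrt(d + center @ center))

def shape_accuracy(converted: np.ndarray, reference: np.ndarray) -> float:
    r_rec = fit_sphere_radius(converted)          # reconstructed shape parameter
    r_ref = fit_sphere_radius(reference)          # reference shape parameter
    return 1.0 - abs(r_rec - r_ref) / r_ref       # one minus the relative difference
```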
Based on fig. 10, referring to fig. 13, fig. 13 is a fourth schematic flowchart of the image accuracy determination method provided in an embodiment of the present application. In some embodiments of the present application, the conversion point cloud picture includes a plurality of point cloud pictures; at this time, the specific implementation process of registering the converted point cloud picture with the reference point cloud picture to generate a plurality of matching point pairs, that is, S1031, may include S1031a to S1031d, as follows:
and S1031a, aiming at each point of each point cloud picture in the plurality of point cloud pictures, screening out a candidate point with the minimum distance from the reference point cloud picture.
The electronic equipment calculates the distance between each point in each point cloud picture and each point in the reference point cloud picture, then screens out the point with the minimum distance from each point in each point cloud picture from the reference point cloud picture, and determines the point as the candidate point corresponding to each point.
That is, for each point in each point cloud picture, the electronic device indexes a candidate point in the reference point cloud picture, with the minimum distance as the indexing target.
It can be understood that the electronic device may calculate the spatial distance between the three-dimensional coordinate information of each point in each point cloud picture and the three-dimensional coordinate information of each point in the reference point cloud picture, so as to obtain the distance between each point and each point in the reference point cloud picture. The spatial distance is given by formula (2):

E = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}    (2)

where (x_1, y_1, z_1) represents the three-dimensional coordinate information of a point in each point cloud picture, (x_2, y_2, z_2) represents the three-dimensional coordinate information of a point in the reference point cloud picture, and E is the calculated distance.
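By way of illustration only, a minimal Python sketch of this candidate-point search is given below. The use of scipy's cKDTree is an implementation convenience assumed here; a brute-force evaluation of formula (2) over all point pairs would yield the same candidates.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_candidates(cloud: np.ndarray, reference: np.ndarray):
    """For every point of a point cloud picture, return the distance E of formula (2)
    to its closest point in the reference point cloud picture and that candidate point."""
    tree = cKDTree(reference)                     # index the reference point cloud picture
    dists, idx = tree.query(cloud)                # minimum E and index of the candidate point
    return dists, reference[idx]
```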
And S1031b, generating a transformed cloud picture corresponding to each point cloud picture based on rigid body transformation between each point in each point cloud picture and the corresponding candidate point.
The electronic equipment solves for the transformation parameters of the rigid body transformation between each point in each point cloud picture and its corresponding candidate point, and then transforms each point cloud picture with these transformation parameters to obtain the transformed point cloud picture corresponding to each point cloud picture.
It should be noted that the transformed point cloud picture in the embodiment of the present application refers to a transformed point cloud picture whose average distance from the corresponding point cloud picture (i.e., the average value of the distances between corresponding points) is smaller than a preset threshold; before the transformed point cloud picture is obtained, the electronic device may iteratively transform each point cloud picture multiple times until the distance threshold requirement is met, and determine the point cloud picture obtained by the last iterative transformation as the transformed point cloud picture.
That is to say, the electronic device first transforms each point cloud picture with the transformation parameters to obtain a new point cloud picture, then calculates the average distance between the new point cloud picture and the original point cloud picture, and judges whether this average distance is smaller than the distance threshold; when it is smaller than the distance threshold, the new point cloud picture is taken as the transformed point cloud picture, and when it is not, the new point cloud picture continues to be transformed until a transformed point cloud picture meeting the distance threshold requirement is obtained.
And S1031c, screening out a matching point with the minimum distance from each point of the transformed point cloud picture corresponding to each point cloud picture from the reference point cloud pictures, and generating a plurality of transformed point pairs by using each point of the transformed point cloud picture and the corresponding matching point.
The electronic equipment searches for a point with the minimum distance from the reference point cloud picture aiming at each point of the transformed point cloud picture, and determines the searched point as a matching point. Then, the electronic device pairs each point in the transformed point cloud image with the corresponding matching point to generate a plurality of transformed point pairs.
And S1031d, fusing the transformed point cloud images corresponding to each point cloud image to obtain a fused point cloud image, and determining a plurality of matching point pairs from the plurality of transformed point pairs aiming at each point in the fused point cloud image.
The transformed point cloud pictures corresponding to the point cloud pictures may be point cloud pictures of the photographic subject at different angles, or a plurality of point cloud pictures of the photographic subject at the same angle. In order to obtain a point cloud picture of the photographic subject with better quality, the electronic equipment fuses the transformed point cloud pictures corresponding to the point cloud pictures, thereby obtaining either a global point cloud picture of the photographic subject or a better-quality point cloud picture of the photographic subject at the same angle; the obtained point cloud picture is the fused point cloud picture. During fusion, the electronic device may merge some different points into the same point, or discard some points, so each point in the fused point cloud picture comes from the points of the transformed point cloud pictures corresponding to all the point cloud pictures. For each point in the fused point cloud picture, the electronic device determines the transformed point pair to which the point belongs, thereby obtaining the plurality of matching point pairs.
It can be understood that, when the electronic device fuses different points into one point, corresponding fusion can be performed on the transformed point pairs to which the points belong, and the fused point pairs are matching point pairs. When the electronic device obtains the fused point cloud image by discarding some points, only the transformed point pairs to which the remaining points belong may be determined as matching point pairs.
In the embodiment of the application, the electronic equipment can screen out the candidate points for each point in each point cloud picture, then transform each point cloud picture, and then continue to screen the matching points, so that the matching points corresponding to each point can be determined more accurately, and finally the transformed point cloud pictures corresponding to each point cloud picture are fused to obtain the point cloud pictures with better quality.
In some embodiments of the present application, the transformation parameters include a rotation parameter and a translation parameter; at this time, the specific implementation process of generating the transformed point cloud picture corresponding to each point cloud picture based on the rigid body transformation between each point in each point cloud picture and the corresponding candidate point, that is, S1031b, may include S301-S302, as follows:
S301, performing the rigid body transformation with the minimum distance on each point of each point cloud picture and the corresponding candidate point to determine the rotation parameter and the translation parameter.
S302, carrying out transformation corresponding to the rotation parameters and the translation parameters aiming at each point cloud picture to obtain a transformation point cloud picture corresponding to each point cloud picture.
The electronic equipment solves the minimum-distance rigid body transformation between each point of each point cloud picture and its corresponding candidate point, obtaining the rotation parameter and the translation parameter that minimize the distance between each point and its candidate point under the rigid body transformation; it then rotates each point cloud picture according to the rotation parameter and translates it according to the translation parameter to obtain the corresponding transformed point cloud picture.
It will be appreciated that the rotation parameter characterizes the amount of rotation required for each point cloud picture, and the translation parameter characterizes the amount of translation required for each point cloud picture.
In the embodiment of the application, the electronic device may perform corresponding transformation on each point cloud picture according to the obtained rotation parameter and translation parameter, so that each point cloud picture becomes a transformed point cloud picture after performing rigid body transformation.
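By way of illustration only, one common way to solve the minimum-distance rigid body transformation of S301 is the closed-form SVD (Kabsch) solution sketched below in Python; the embodiment does not prescribe a particular solver, so this choice is an assumption of this description.

```python
import numpy as np

def solve_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Rotation R and translation t minimising the distance between src (points of a
    point cloud picture) and dst (their candidate points), via the SVD of the
    cross-covariance matrix."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - src_c).T @ (dst - dst_c))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def apply_transform(cloud: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Rotate and translate a point cloud picture with the solved parameters."""
    return cloud @ R.T + t
```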
In some embodiments of the present application, the fusing the transformed point cloud images corresponding to each point cloud image to obtain a fused point cloud image, that is, a specific implementation process of S1031d may include: S303-S306, as follows:
S303, screening out a target transformation point cloud picture from the transformation point cloud pictures corresponding to each point cloud picture.
The electronic device may select any one of the transformed point cloud pictures corresponding to the point cloud pictures as the target transformed point cloud picture, or may determine the transformed point cloud picture corresponding to the first point cloud picture as the target transformed point cloud picture, which is not limited herein.
And S304, calculating the normal difference and the space distance between each point in the target transformation point cloud picture and each point in other transformation point cloud pictures.
The other transformation point cloud pictures are transformation point cloud pictures except the target transformation point cloud picture in the transformation point cloud picture corresponding to each point cloud picture.
That is, the electronic device calculates the normal difference and the spatial distance between each point of the target transformed point cloud picture and each point of the transformed point cloud pictures other than the target transformed point cloud picture. The normal difference refers to the difference between normal vectors.
S305, according to the normal difference and the space distance, aiming at each point of the target transformation point cloud picture, screening out points to be fused from other transformation point cloud pictures.
The electronic equipment screens out, from the other transformed point cloud pictures, the points whose spatial distance to a point of the target transformed point cloud picture is smaller than the distance threshold and whose normal difference to that point is smaller than the difference threshold; the screened points are close enough to the points of the target transformed point cloud picture to be used for fusion, and are taken as the points to be fused.
It should be noted that a point in the other transformed point cloud pictures whose normal difference to a certain point of the target transformed point cloud picture is not less than the difference threshold, or whose spatial distance is not less than the distance threshold, is not taken as a point to be fused and does not participate in the fusion process; at this time, the electronic device may either directly determine that point as a point of the fused point cloud picture or discard it.
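By way of illustration only, a minimal Python sketch of the screening in S304-S305 is given below. Restricting the comparison to the nearest point of the other transformed point cloud picture, treating the normals as unit vectors, and the threshold values are all assumptions made for brevity rather than features of the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def fusion_candidates(target_pts, target_nrm, other_pts, other_nrm,
                      dist_thresh=0.005, angle_thresh_deg=20.0):
    """For each point of the target transformed point cloud picture, take its nearest
    point in another transformed point cloud picture as the fusion candidate and keep
    it only if both the spatial distance and the normal difference are small enough."""
    dists, idx = cKDTree(other_pts).query(target_pts)
    cos_sim = np.einsum('ij,ij->i', target_nrm, other_nrm[idx])   # unit normals assumed
    angles = np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))   # normal difference
    keep = (dists < dist_thresh) & (angles < angle_thresh_deg)
    return idx, keep                              # candidate index and whether it may be fused
```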
S306, conducting weighted fusion on each point of the target transformation point cloud picture and the corresponding point to be fused to obtain a fusion point cloud picture.
The electronic equipment calculates a fusion weight for each point in the target transformed point cloud picture, and then fuses each point of the target transformed point cloud picture with its corresponding point to be fused according to the weights; the obtained fusion result is the fused point cloud picture.
In some embodiments of the present application, performing weighted fusion on each point of the target transformation point cloud graph and the corresponding point to be fused to obtain a fused point cloud graph, that is, a specific implementation process of S306 may include: S3061-S3064, as follows:
S3061, determining a target matching point from the reference point cloud picture for each point of the target transformation point cloud picture, and determining a fusion matching point from the reference point cloud picture for the point to be fused.
In the foregoing steps, the electronic device determines a matching point from the reference point cloud picture for each point in the transformed point cloud picture, and the target transformed point cloud picture and other transformed point cloud pictures belong to the transformed point cloud pictures, so that each point of the target transformed point cloud picture and other transformed point cloud pictures have corresponding matching points. And the electronic equipment records the matching point corresponding to each point of the target transformation point cloud picture as a target matching point, and records the matching points corresponding to the points to be fused of other transformation point cloud pictures as fusion matching points.
S3062, determining a first fusion weight of each point of the target transformation point cloud picture by using the distance between each point in the target transformation point cloud picture and the target matching point.
S3063, determining second fusion weight of the point to be fused by using the distance between the point to be fused and the fusion matching point.
The electronic equipment calculates the distance between each point of the target transformed point cloud picture and its corresponding target matching point to obtain a first matching point distance, and calculates the distance between the point to be fused and the fusion matching point to obtain a second matching point distance. Next, the electronic device maps the absolute value of the first matching point distance through an inverse-proportional relation, for example by taking its reciprocal, to obtain the first fusion weight. Similarly, the electronic device determines the second fusion weight according to the distance between the point to be fused and the fusion matching point.
That is, the first fusion weight is inversely proportional to the first matching point distance, and the second fusion weight is inversely proportional to the second matching point distance; the smaller the distance, the larger the corresponding fusion weight.
It is understood that in some embodiments, the electronic device may perform S3063 and then perform S3062, or perform S3062 and S3063 simultaneously, which is not limited herein.
S3064, carrying out weighted fusion on each point of the target transformation point cloud picture and the point to be fused according to the first fusion weight and the second fusion weight to obtain a fusion point cloud picture.
The electronic equipment weights the three-dimensional coordinate information of each point of the target transformed point cloud picture with the first fusion weight, weights the three-dimensional coordinate information of the point to be fused with the second fusion weight, and then adds the two weighted pieces of three-dimensional coordinate information, thereby realizing the weighted fusion of each point of the target transformed point cloud picture with its corresponding point to be fused and obtaining the fused point cloud picture.
In the embodiment of the application, the electronic equipment can determine fusion weights inversely proportional to the distance between each point of the target transformed point cloud picture and its target matching point and to the distance between the point to be fused and its fusion matching point, and perform weighted fusion according to these fusion weights, so that points closer to the real points in the real world play a larger role in the fusion, which improves the accuracy of the fused point cloud picture.
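By way of illustration only, a minimal Python sketch of the inverse-proportional weighting and the weighted fusion is given below. Taking the reciprocal of the matching distance (with a small epsilon) and normalising the weighted sum are example choices assumed here, not the only admissible ones.

```python
import numpy as np

def fuse_point(p_target, d_target, p_to_fuse, d_to_fuse, eps=1e-6):
    """Weighted fusion of a point of the target transformed point cloud picture with its
    point to be fused; each weight is the reciprocal of the point's distance to its
    matching point in the reference point cloud picture."""
    w1 = 1.0 / (abs(d_target) + eps)              # first fusion weight
    w2 = 1.0 / (abs(d_to_fuse) + eps)             # second fusion weight
    return (w1 * np.asarray(p_target) + w2 * np.asarray(p_to_fuse)) / (w1 + w2)
```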
In some embodiments of the present application, the depth map to be determined includes a plurality of continuous depth maps; at this time, the specific implementation process of performing point cloud picture conversion on the depth map to be determined to obtain the converted point cloud picture, that is, S102, may include S1021-S1023, as follows:
and S1021, performing down-sampling on the depth maps respectively to obtain a plurality of down-sampled depth maps.
The electronic device may reduce the plurality of depth maps by half, once or several times, through mean down-sampling or maximum down-sampling, and determine the reduced depth maps as down-sampled depth maps, thereby obtaining a plurality of down-sampled depth maps.
It will be appreciated that the number of downsampled depth maps may be equal to the number of depth maps, i.e. the electronic device downsamples only once for each depth map, resulting in only one corresponding downsampled depth map for each depth map; the number of downsampled depth maps may also be larger than the number of depth maps, i.e. the electronic device downsamples several times for each depth map, resulting in several downsampled depth maps for one depth map.
And S1022, performing image conversion on the plurality of downsampling depth maps and the plurality of depth maps to obtain a plurality of point cloud maps.
And S1023, determining the plurality of point cloud pictures as conversion point cloud pictures.
Since the features of depth maps at different resolutions may differ, in order to analyze the accuracy of the depth map to be determined comprehensively, the electronic device performs point cloud picture conversion on the plurality of down-sampled depth maps and the plurality of depth maps to obtain a plurality of point cloud pictures, and finally uses the plurality of point cloud pictures to form the converted point cloud picture so as to determine the degree of coincidence with the reference point cloud picture.
In the embodiment of the application, the electronic equipment can obtain depth maps at different resolutions through down-sampling and convert all of them into point cloud pictures, so that the subsequent coincidence degree detection against the reference point cloud picture is carried out on point cloud pictures corresponding to depth maps of different resolutions, which improves the comprehensiveness of the coincidence degree detection and thus the precision of the image accuracy judgment.
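By way of illustration only, a minimal Python sketch of S1021 and S1022 is given below. It assumes 2x mean down-sampling and assumes that formula (1) is the usual pinhole back-projection, with fx, fy, cx and cy denoting the camera intrinsic parameters; these assumptions are made for this sketch and are not limitations of the present application.

```python
import numpy as np

def mean_downsample(depth: np.ndarray) -> np.ndarray:
    """2x mean down-sampling of a depth map (S1021); repeated calls give lower resolutions."""
    h, w = depth.shape
    d = depth[:h - h % 2, :w - w % 2]             # crop to even height and width
    return d.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map into a point cloud picture (S1022), assuming the usual
    pinhole relation between pixel coordinates, depth and camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([(u - cx) * depth / fx, (v - cy) * depth / fy, depth], axis=-1)
    pts = pts.reshape(-1, 3)
    return pts[pts[:, 2] > 0]                     # drop pixels without a valid depth value
```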
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
The embodiment of the application is described below in the scenario of judging the accuracy of a depth map shot by a 3D camera in a face payment system. The 3D camera here may use a speckle imaging system or a color binocular imaging system.
Fig. 14 is a schematic diagram of the principle of judging the accuracy of a depth map shot by a 3D camera in a face payment system according to the embodiment of the present application. A terminal (the electronic device) first converts a shot depth map 14-1 (the depth map to be judged) using camera parameters 14-2 to obtain a point cloud picture 14-3 (the converted point cloud picture), and generates a standard point cloud picture 14-4 (the reference point cloud picture) by scanning an object such as a face head model (the photographic subject) with a high-precision scanner. The terminal then detects the point cloud coincidence 14-5 between the standard point cloud picture and the point cloud picture 14-3 generated from the depth map 14-1 shot by the 3D camera (coincidence detection is performed on the converted point cloud picture and the reference point cloud picture), so as to determine the reconstruction accuracy of the 3D camera, that is, the accuracy 14-6 of the depth map.
Further, the specific steps of the process are as follows:
Step one: the depth map is converted into a point cloud picture (point cloud picture conversion is performed on the depth map to be judged to obtain the converted point cloud picture); the coordinate conversion relation between the depth map and the point cloud picture can be obtained from the camera intrinsic parameters, and the conversion formula can refer to formula (1).
Step two: mean down-sampling is carried out on the depth maps (down-sampling is carried out on the plurality of depth maps respectively), and the three-dimensional points and normal vectors are recalculated after the depth maps are reduced by half, so that the down-sampled depth maps are also converted into point cloud pictures (image conversion is carried out on the plurality of down-sampled depth maps and the plurality of depth maps to obtain a plurality of point cloud pictures).
Step three: and scanning objects such as a human head model and the like by using a high-precision scanner to generate a high-precision standard point cloud picture (the reference point cloud picture is the point cloud picture obtained by scanning and having the precision not less than a precision threshold value).
Step four: and determining a point cloud picture generated according to the depth picture shot by the 3D camera and a corresponding point pair of the high-precision standard point cloud picture, namely registering the point cloud picture with the standard point cloud picture (registering the converted point cloud picture with the standard point cloud picture to generate a plurality of matched point pairs). The process comprises the following steps:
The first step: assuming that the point cloud picture generated from the depth map shot by the 3D camera is X2 and the high-precision standard point cloud picture is X1, the closest point in X1 corresponding to each point in X2 is calculated (for each point of each point cloud picture in the plurality of point cloud pictures, the candidate point with the smallest distance is screened out from the reference point cloud picture); the calculation can be implemented by minimizing the distance of formula (2).
The second step: the rigid body transformation that minimizes the average distance of the point pairs obtained in the first step is solved (each point of each point cloud picture and the corresponding candidate point undergo the rigid body transformation with the minimum distance), so as to obtain the translation parameter and the rotation parameter.
The third step: and obtaining a new transformation point set by using the translation parameter and the rotation parameter obtained in the second step for X2.
The fourth step: if the average distance between the new transformed point set and X2 is smaller than the given threshold, the iteration is stopped; otherwise the new transformed point set is taken as the new X2 and the iteration continues until the given threshold requirement is met, obtaining the transformed X2 (the transformed point cloud picture).
The fifth step: the closest point in X1 corresponding to each point of the transformed X2 is calculated, i.e., the final point pairs are determined.
At this point, the registration process of the point cloud image and the standard point cloud image is completed.
Step five: the point clouds generated from the depth maps shot by the 3D camera are fused into a global model of the scene. For example, 25 consecutive frames of depth maps (a plurality of continuous depth maps) are taken, and the point cloud pictures corresponding to the frames are fused. Specifically, during fusion, the 25 frames of transformed X2 are fused (the transformed point cloud pictures corresponding to each point cloud picture are fused); points with a small spatial distance and a small normal vector difference are fused correspondingly (for each point of the target transformed point cloud picture, points to be fused are screened out from the other transformed point cloud pictures according to the normal difference and the spatial distance, and then fused). The fusion weight may refer to the distance value from the standard point cloud picture (the first fusion weight is determined by the distance between each point of the target transformed point cloud picture and the target matching point, and the second fusion weight is determined by the distance between the point to be fused and the fusion matching point); generally, the smaller the distance value, the larger the weight.
Step six: and (4) point cloud contact degree calculation, namely calculating contact degree by the terminal according to the fused point cloud image obtained in the step five and points corresponding to the standard point cloud image (aiming at each point in the fused point cloud image, determining a plurality of matching point pairs, and calculating the contact degree based on the plurality of matching point pairs respectively to obtain the accuracy of the depth image to be judged). The method for judging the contact ratio may include the following methods:
1) And solving the minimum error between the point pairs to obtain the contact ratio, namely the minimum distance between the point pairs.
2) And solving the maximum error between the point pairs to obtain the contact ratio, namely the maximum distance between the point pairs.
3) And solving the average error between the point pairs to obtain the contact ratio, namely the average distance of the point pairs.
4) The contact ratio is obtained from the difference of the error distributions of point pairs in different areas; for example, the 30% of points closest to the origin (the N points closest to the coordinate origin) and the 30% of points farthest from the origin (the N points farthest from the coordinate origin) are taken, and the difference of their average point pair distances is calculated (the difference between the average of the N first distances and the average of the N second distances is determined as the contact ratio).
5) The coincidence degree is determined by the point pair error fluctuation condition of the same area, for example, taking 50% points (M points of the target area of the converted point cloud image) nearest to the origin, fitting the gaussian distribution of the point pair distance, and calculating the variance of the gaussian distribution (calculating the variance of the gaussian distribution for M third distances), so as to obtain the coincidence degree.
In addition to the above contact ratio judgments, the terminal may also measure the point pair error with a standard shape, such as, but not limited to, a cube, a cylinder or a sphere: data such as the radius, side length and height (the reconstructed shape parameters) are measured from the point cloud picture generated from the depth map shot by the 3D camera, and compared with the corresponding data measured from the reference point cloud picture (the reference shape parameters), so as to directly obtain the accuracy of the depth map shot by the 3D camera.
With the above method, the stereoscopic accuracy of the depth map can be determined from the contact ratio between the point cloud picture generated from the depth map shot by the 3D camera, i.e., the actually measured point cloud picture, and the standard point cloud picture scanned by the high-precision scanner, which improves the precision of judging the accuracy of the depth map. Meanwhile, the method can also be used to assist in judging the accuracy of the intrinsic and extrinsic parameters of the camera.
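By way of illustration only, the exemplary flow above can be strung together as in the following Python sketch. The helper functions reused here (depth_to_point_cloud, nearest_candidates, solve_rigid_transform, apply_transform and near_far_contact_ratio) are the illustrative sketches given earlier in this description rather than an interface defined by the present application, and the iteration count and thresholds are example values.

```python
import numpy as np

def judge_depth_map_accuracy(depth, fx, fy, cx, cy, reference, iters=30, tol=1e-4):
    """End-to-end sketch: convert the shot depth map, register it against the standard
    point cloud picture, then take the near/far contact ratio as the accuracy."""
    cloud = depth_to_point_cloud(depth, fx, fy, cx, cy)       # step one
    for _ in range(iters):                                     # step four, iterative registration
        _, candidates = nearest_candidates(cloud, reference)   # first sub-step
        R, t = solve_rigid_transform(cloud, candidates)        # second sub-step
        new_cloud = apply_transform(cloud, R, t)               # third sub-step
        moved = np.linalg.norm(new_cloud - cloud, axis=1).mean()
        cloud = new_cloud
        if moved < tol:                                        # fourth sub-step: stop when small
            break
    pair_dists, _ = nearest_candidates(cloud, reference)       # fifth sub-step: final point pairs
    return near_far_contact_ratio(cloud, pair_dists)           # step six: contact ratio as accuracy
```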
Continuing with the exemplary structure of the image accuracy judging device 455 provided by the embodiments of the present application implemented as software modules, in some embodiments, as shown in fig. 4, the software modules of the image accuracy judging device 455 stored in the memory 450 may include:
an image obtaining module 4551, configured to obtain a depth map to be determined and a reference point cloud image of a shooting object included in the depth map to be determined; the reference point cloud picture is a point cloud picture with the precision not less than a precision threshold value obtained by scanning, and the reference point cloud picture records real three-dimensional coordinate information of each point of the shooting object;
the image conversion module 4552 is configured to perform point cloud image conversion on the depth map to be determined to obtain a conversion point cloud image; the conversion point cloud picture records three-dimensional coordinate information of each point of the shooting object reconstructed based on the depth value recorded by the depth map to be judged;
and the accuracy judgment module 4553 is configured to determine the accuracy of the depth map to be judged based on detection of coincidence of the conversion point cloud map and the reference point cloud map, so as to realize image accuracy judgment.
In some embodiments of the present application, the accuracy determining module 4553 is further configured to generate a plurality of matching point pairs by registering the conversion point cloud image with the reference point cloud image; and respectively carrying out contact ratio calculation on the plurality of matching point pairs to obtain the accuracy of the depth map to be judged.
In some embodiments of the present application, the accuracy determining module 4553 is further configured to perform distance calculation on at least two points included in each of the plurality of matching point pairs, so as to obtain a plurality of point-pair distances; calculating the contact ratio of the conversion point cloud picture and the reference point cloud picture according to the point pair distances; and converting the contact ratio into the accuracy of the depth map to be judged.
In some embodiments of the present application, the accuracy determining module 4553 is further configured to respectively screen out, from the plurality of point-to-point distances, a first distance corresponding to N points in the cloud image of the transformed points that are closest to the origin of coordinates, and a second distance corresponding to N points that are farthest from the origin of coordinates; determining the difference between the average value of the N first distances and the average value of the N second distances as the coincidence ratio of the conversion point cloud picture and the reference point cloud picture; wherein N is a positive integer.
In some embodiments of the present application, the accuracy determining module 4553 is further configured to screen out, from the plurality of point-to-point distances, a third distance corresponding to M points in a target area of the cloud map of the transformed points; calculating the Gaussian distribution variance of the M third distances to obtain the contact ratio of the conversion point cloud picture and the reference point cloud picture; wherein M is a positive integer.
In some embodiments of the present application, the image accuracy determining device 455 further includes: a parameter determining module 4554, configured to determine a reconstructed shape parameter of the photographic object according to the conversion point cloud image, and determine a reference shape parameter of the photographic object according to the reference point cloud image;
the accuracy determining module 4553 is further configured to determine the accuracy of the depth map to be determined according to a difference between the reconstructed shape parameter and the reference shape parameter.
In some embodiments of the present application, the conversion point cloud comprises: a plurality of point clouds; the accuracy determining module 4553 is further configured to screen, for each point of each point cloud image in the plurality of point cloud images, a candidate point with a minimum distance from the reference point cloud image; generating a transformed point cloud picture corresponding to each point cloud picture based on rigid transformation between each point in each point cloud picture and a corresponding candidate point; screening out a matching point with the minimum distance from each point of the transformation point cloud picture corresponding to each point cloud picture from the reference point cloud picture, and generating a plurality of transformation point pairs by using each point of the transformation point cloud picture and the corresponding matching point; and fusing the transformed point cloud images corresponding to each point cloud image to obtain a fused point cloud image, and determining the plurality of matching point pairs from the plurality of transformed point pairs aiming at each point in the fused point cloud image.
In some embodiments of the application, the transformation parameters include: a rotation parameter and a translation parameter; the accuracy determining module 4553 is further configured to determine the rotation parameter and the translation parameter by performing rigid body transformation with a minimum distance between each point of the cloud image and the corresponding candidate point; and carrying out transformation corresponding to the rotation parameters and the translation parameters on each point cloud picture to obtain the transformed point cloud picture corresponding to each point cloud picture.
In some embodiments of the present application, the accuracy determining module 4553 is further configured to screen out a target transform point cloud picture from the transform point cloud pictures corresponding to each point cloud picture; calculating the normal difference and the space distance between each point in the target transformation point cloud picture and each point in other transformation point cloud pictures; the other transformation point cloud pictures refer to transformation point cloud pictures except the target transformation point cloud picture in the transformation point cloud picture corresponding to each point cloud picture; according to the normal difference and the space distance, aiming at each point of the target transformation point cloud picture, screening out points to be fused from other transformation point cloud pictures; and performing weighted fusion on each point of the target transformation point cloud picture and the corresponding point to be fused to obtain the fused point cloud picture.
In some embodiments of the present application, the accuracy determining module 4553 is further configured to determine, for each point of the target transformation point cloud picture, a target matching point from the reference point cloud picture, and determine, for the point to be fused, a fused matching point from the reference point cloud picture; determining a first fusion weight of each point of the target transformation point cloud picture by using the distance between each point in the target transformation point cloud picture and the target matching point; determining a second fusion weight of the point to be fused by using the distance between the point to be fused and the fusion matching point; and according to the first fusion weight and the second fusion weight, performing weighted fusion on each point of the target transformation point cloud picture and the point to be fused to obtain the fusion point cloud picture.
In some embodiments of the present application, the depth map to be determined includes: a plurality of successive depth maps; the image conversion module 4552 is further configured to perform downsampling on the multiple depth maps respectively to obtain multiple downsampled depth maps; performing image conversion on the plurality of downsampling depth maps and the plurality of depth maps to obtain a plurality of point cloud maps; and determining the plurality of point cloud pictures as the conversion point cloud picture.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the electronic device executes the image accuracy determination method described in this embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, will cause the processor to execute an image accuracy determination method provided by embodiments of the present application, for example, an image accuracy determination method as shown in fig. 5.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiments of the present application, the depth map to be judged can be converted into a point cloud picture, and the coincidence degree between the converted point cloud picture and the reference point cloud picture is calculated, so as to clarify the difference between the three-dimensional coordinate information of each point of the photographic object reconstructed from the depth values recorded by the depth map to be judged and the real three-dimensional coordinate information of each point, thereby improving the judgment of the stereoscopic accuracy of the depth map and finally improving the precision of the image accuracy judgment. Furthermore, since the conversion of the depth map into the point cloud picture can be realized by means of the camera parameters, calculating the coincidence degree of the converted point cloud picture and the reference point cloud picture can also assist in judging the accuracy of the camera parameters of the depth camera.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. An image accuracy determination method, characterized by comprising:
acquiring a depth map to be judged and a reference point cloud map of a shooting object contained in the depth map to be judged; the reference point cloud picture is a point cloud picture with the precision not less than a precision threshold value obtained by scanning, and the reference point cloud picture records real three-dimensional coordinate information of each point of the shooting object;
performing point cloud image conversion on the depth image to be judged to obtain a converted point cloud image; the converted point cloud image records three-dimensional coordinate information of each point of the shooting object, which is reconstructed based on the depth value recorded by the depth image to be judged;
and determining the accuracy of the depth map to be judged based on the coincidence degree detection of the conversion point cloud map and the reference point cloud map, so as to realize the image accuracy judgment.
2. The method of claim 1, wherein the determining the accuracy of the depth map to be determined based on a coincidence detection of the conversion point cloud map and the reference point cloud map comprises:
generating a plurality of matching point pairs by registering the transformed point cloud with the reference point cloud;
and respectively carrying out contact ratio calculation on the plurality of matching point pairs to obtain the accuracy of the depth map to be judged.
3. The method according to claim 2, wherein the obtaining the accuracy of the depth map to be determined based on the respective contact ratio calculations performed on the plurality of matching point pairs comprises:
calculating the distance between at least two points contained in each matching point pair of the matching point pairs to obtain a plurality of point pair distances;
calculating the contact ratio of the conversion point cloud picture and the reference point cloud picture according to the point pair distances;
and converting the contact ratio into the accuracy of the depth map to be judged.
4. The method of claim 3, wherein the calculating the contact ratio of the conversion point cloud picture and the reference point cloud picture according to the point pair distances comprises:
respectively screening out a first distance corresponding to N points which are closest to the origin of coordinates in the conversion point cloud picture and a second distance corresponding to N points which are farthest from the origin of coordinates from the plurality of point pair distances;
determining the difference between the average value of the N first distances and the average value of the N second distances as the coincidence degree of the conversion point cloud picture and the reference point cloud picture; wherein N is a positive integer.
5. The method of claim 3, wherein the calculating the contact ratio of the conversion point cloud picture and the reference point cloud picture according to the point pair distances comprises:
screening out a third distance corresponding to M points in a target area of the conversion point cloud picture from the plurality of point pair distances;
calculating the Gaussian distribution variance of the M third distances to obtain the coincidence ratio of the conversion point cloud picture and the reference point cloud picture; wherein M is a positive integer.
6. The method according to any one of claims 1 to 5, wherein after the performing the point cloud image conversion on the depth map to be determined to obtain a converted point cloud image, the method further comprises:
determining the reconstructed shape parameters of the shot object according to the conversion point cloud picture, and determining the reference shape parameters of the shot object according to the reference point cloud picture;
and determining the accuracy of the depth map to be judged according to the difference between the reconstruction shape parameter and the reference shape parameter.
7. The method of claim 2, wherein the conversion point cloud image comprises: a plurality of point cloud images; the generating a plurality of matching point pairs by registering the converted point cloud image with the reference point cloud image comprises:
for each point of each point cloud image in the plurality of point cloud images, screening a candidate point with the minimum distance from the reference point cloud image;
generating a transformed point cloud picture corresponding to each point cloud picture based on rigid body transformation between each point in each point cloud picture and a corresponding candidate point;
screening out a matching point with the minimum distance from each point of the transformation point cloud picture corresponding to each point cloud picture from the reference point cloud picture, and generating a plurality of transformation point pairs by using each point of the transformation point cloud picture and the corresponding matching point;
and fusing the transformed point cloud images corresponding to each point cloud image to obtain a fused point cloud image, and determining the plurality of matching point pairs from the plurality of transformed point pairs aiming at each point in the fused point cloud image.
8. The method of claim 7, wherein the transformation parameters comprise: a rotation parameter and a translation parameter; generating a transformed point cloud image corresponding to each point cloud image based on a rigid body transformation between each point in each point cloud image and a corresponding candidate point, including:
determining the rotation parameter and the translation parameter by performing rigid body transformation with the minimum distance on each point of each point cloud picture and the corresponding candidate point;
and carrying out transformation corresponding to the rotation parameters and the translation parameters on each point cloud picture to obtain the transformed point cloud picture corresponding to each point cloud picture.
9. The method according to claim 7 or 8, wherein the fusing the transformed point cloud images corresponding to each point cloud image to obtain a fused point cloud image comprises:
screening out a target transformation point cloud picture from the transformation point cloud pictures corresponding to each point cloud picture;
calculating the normal difference and the space distance between each point in the target transformation point cloud picture and each point in other transformation point cloud pictures; the other transformation point cloud pictures refer to transformation point cloud pictures except the target transformation point cloud picture in the transformation point cloud picture corresponding to each point cloud picture;
according to the normal difference and the space distance, aiming at each point of the target transformation point cloud picture, screening out points to be fused from other transformation point cloud pictures;
and performing weighted fusion on each point of the target transformation point cloud picture and the corresponding point to be fused to obtain the fused point cloud picture.
10. The method according to claim 9, wherein the performing weighted fusion on each point of the target transformation point cloud picture and the corresponding point to be fused to obtain the fused point cloud picture comprises:
for each point of the target transformation point cloud picture, determining a target matching point from the reference point cloud picture, and for the point to be fused, determining a fusion matching point from the reference point cloud picture;
determining a first fusion weight of each point of the target transformation point cloud picture by using the distance between each point in the target transformation point cloud picture and the target matching point;
determining a second fusion weight of the point to be fused by using the distance between the point to be fused and the fusion matching point;
and performing weighted fusion on each point of the target transformation point cloud picture and the point to be fused according to the first fusion weight and the second fusion weight to obtain the fusion point cloud picture.
11. The method according to any one of claims 1 to 5, 7 or 8, wherein the depth map to be judged comprises: a plurality of depth maps in succession; the converting the point cloud image aiming at the depth image to be judged to obtain a converted point cloud image comprises the following steps:
respectively carrying out down-sampling on the plurality of depth maps to obtain a plurality of down-sampled depth maps;
performing image conversion on the plurality of downsampling depth maps and the plurality of depth maps to obtain a plurality of point cloud maps;
and determining the plurality of point cloud pictures as the conversion point cloud picture.
12. An image accuracy determination device characterized by comprising:
the device comprises an image acquisition module, a judgment module and a judgment module, wherein the image acquisition module is used for acquiring a depth map to be judged and a reference point cloud map of a shooting object contained in the depth map to be judged; the reference point cloud picture is a point cloud picture with the precision not less than a precision threshold value obtained by scanning, and the reference point cloud picture records real three-dimensional coordinate information of each point of the shooting object;
the image conversion module is used for converting the point cloud image aiming at the depth image to be judged to obtain a converted point cloud image; the converted point cloud image records three-dimensional coordinate information of each point of the shooting object, which is reconstructed based on the depth value recorded by the depth image to be judged;
and the accuracy judgment module is used for determining the accuracy of the depth map to be judged based on the contact ratio detection of the conversion point cloud map and the reference point cloud map so as to realize image accuracy judgment.
13. An electronic device for image accuracy determination, the electronic device comprising:
a memory for storing executable instructions;
a processor for implementing the image accuracy determination method of any one of claims 1 to 11 when executing the executable instructions stored in the memory.
14. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the image accuracy determination method of any one of claims 1 to 11.
15. A computer program product comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, implement the image accuracy determination method of any one of claims 1 to 11.
CN202111130794.XA 2021-09-26 2021-09-26 Image accuracy judging method, device, equipment, storage medium and program product Pending CN115880206A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111130794.XA CN115880206A (en) 2021-09-26 2021-09-26 Image accuracy judging method, device, equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111130794.XA CN115880206A (en) 2021-09-26 2021-09-26 Image accuracy judging method, device, equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN115880206A true CN115880206A (en) 2023-03-31

Family

ID=85762649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111130794.XA Pending CN115880206A (en) 2021-09-26 2021-09-26 Image accuracy judging method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN115880206A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576355A (en) * 2024-01-17 2024-02-20 南昌大藏科技有限公司 AR-based text-created product display method and display equipment
CN117576355B (en) * 2024-01-17 2024-04-19 南昌大藏科技有限公司 AR-based text-created product display method and display equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination