WO2021115298A1 - Glasses matching design device - Google Patents

Glasses matching design device

Info

Publication number
WO2021115298A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
glasses
head
image capture
adapting
Prior art date
Application number
PCT/CN2020/134758
Other languages
English (en)
French (fr)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 左忠斌
Publication of WO2021115298A1 publication Critical patent/WO2021115298A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/293 Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background

Definitions

  • the invention relates to the technical field of glasses design, and in particular to automatic glasses matching and design realized through 3D topography measurement technology.
  • in view of the problems in the prior art, the present invention is proposed to provide a glasses matching design device that overcomes the above problems or at least partially solves them.
  • One aspect of the present invention provides a method for fitting glasses, including
  • Step 1 Collect multiple images of the user's head, the images include at least facial images;
  • Step 2 Combine multiple images into a 3D model of the head
  • Step 3 Match the head 3D model with the glasses model, including:
  • 3-1 determine the coordinates of multiple points of the head 3D model; 3-2 determine the coordinates of multiple points of the glasses 3D model; 3-3 match the point coordinates of the head 3D model with those of the glasses 3D model through at least one of rotation, translation, and scaling; 3-4 show the matching effect to the user.
  • another aspect of the present invention provides a glasses device including a 3D acquisition device, a 3D synthesis device, a glasses adapting device, and a display device;
  • the 3D acquisition device is used to collect multiple images of the human head;
  • the 3D synthesis device is used to synthesize a head 3D model from the above multiple images;
  • the glasses adapting device is used to adapt the head 3D model to the glasses 3D model;
  • the display device is used to display the matching effect of the head 3D model and the glasses 3D model;
  • the 3D acquisition device includes an image acquisition device and a background board;
  • the background board and the image acquisition device remain arranged opposite each other during rotation, so that during acquisition the background board forms the background pattern of the images collected by the image acquisition device.
  • the third aspect of the present invention also provides a glasses device, including a 3D acquisition device, a 3D synthesis device, a glasses adapting device, and a display device;
  • the 3D acquisition device is used to collect multiple images of the human head;
  • the 3D synthesis device is used to synthesize a head 3D model from the above multiple images;
  • the glasses adapting device is used to adapt the head 3D model to the glasses 3D model;
  • the display device is used to display the matching effect of the head 3D model and the glasses 3D model;
  • the 3D acquisition device includes an image acquisition device;
  • when the image acquisition device acquires a target, two adjacent acquisition positions satisfy the condition L < δ × T × d / f, where L is the linear distance between the optical centers of the image acquisition device at the two positions, f is the focal length, d is the rectangular length of the photosensitive element (CCD), T is the distance from the photosensitive element to the target surface along the optical axis, and δ is an adjustment coefficient, δ < 0.603.
  • the projection is performed in the direction perpendicular to the photographed surface of the background board, and the horizontal length W1 and the vertical length W2 of the projected shape are determined by the conditions W1 ≥ A1 × d1 × T / f and W2 ≥ A2 × d2 × T / f, where:
  • d1 is the length of the imaging element in the horizontal direction;
  • d2 is the length of the imaging element in the vertical direction;
  • T is the vertical distance from the sensing element of the image capture device to the background board along the optical axis;
  • f is the focal length of the image capture device;
  • A1, A2 are empirical coefficients.
  • it also includes marking points.
  • the marking point is located on the seat.
  • the 3D synthesis device and the glasses adapting device are installed separately or implemented on the same platform.
  • the glasses adapting device is also used for glasses data modification.
  • adapting the head 3D model and the glasses 3D model includes first performing a rough alignment of the head 3D model and the glasses 3D model, and then performing the secondary alignment of the head 3D model and the glasses 3D model.
  • the 3D data of the glasses is sent to the processing equipment.
  • by rotating the background board together with the camera, 3D synthesis speed and synthesis accuracy can be improved at the same time, thereby improving the glasses matching effect, reducing waiting time, and making the glasses data usable for processing.
  • when optimizing the camera position, there is no need to measure angles or head size, so the device suits all kinds of people and is more convenient and adaptable.
  • FIG. 1 is a schematic structural diagram of a glasses matching design device provided by an embodiment of the present invention.
  • 1 background board, 2 image acquisition device, 3 rotating beam, 4 rotating device, 5 bracket, 6 seat, 7 base, 51 horizontal column, 52 vertical column.
  • the glasses matching design equipment includes a background board 1, an image acquisition device 2, a rotating beam 3, a rotating device 4, a bracket 5, a seat 6 and a base 7.
  • the support includes a horizontal column 51 and a vertical column 52.
  • the vertical column 52 is connected to the base 7 and the horizontal column 51 is connected to the rotating beam 3 through the rotating device 4, so that the rotating beam 3 can be rotated by 360° under the driving of the rotating device 4.
  • the background board 1 and the image acquisition device 2 are located at the two ends of the rotating beam 3, and are arranged oppositely. When the rotating beam 3 rotates, they rotate synchronously and always keep the relative arrangement.
  • the seat 6 is located between the background board 1 and the image capture device 2.
  • the head is located just near the axis of rotation and between the image capture device 2 and the background plate 1, and preferably the head of the person is located on the optical axis of the image capture device 2. Since everyone is different in height, the height of the human head is different. At this time, the position of the human head in the field of view of the image acquisition device 2 can be adjusted by adjusting the height of the seat 6.
  • the seat 6 can be adjusted by a manual adjustment device, for example, the seat 6 is connected to the base through a screw rod, and the height of the seat is adjusted by rotating the screw rod.
  • preferably, a lifting drive device is provided, data-connected with a controller; the controller controls the height of the lifting device so as to adjust the seat height.
  • the controller can be built directly into the glasses matching design device, for example placed near the armrest of the seat to facilitate user adjustment.
  • the controller can also be a mobile terminal, such as a mobile phone. In this way, by connecting the mobile terminal with the glasses matching design equipment, the height of the seat can be controlled by controlling the lifting driving device in the mobile terminal.
  • the mobile terminal can be operated by an operator or a user, which is more convenient and is not restricted by location.
  • the controller can also be borne by the host computer, or by the server or cluster server.
  • the controller can of course also be borne by a cloud platform through the network.
  • These host computers, servers, cluster servers, and cloud platforms can be shared with host computers, servers, cluster servers, and cloud platforms that perform 3D synthesis processing, that is, complete the dual functions of control and 3D synthesis.
  • the headrest is cleverly arranged on the seat 6, marking points are set on the headrest, and the absolute distance between the marking points is recorded.
  • when the image acquisition device 2 rotates to the back of the user, these marking points are captured, and the size of the head 3D model is finally calculated from the predetermined distances between the marking points.
  • setting a marker at this position does not affect the user's facial information collection. Therefore, this is also one of the invention points of the present invention, which can improve the user experience while obtaining the absolute distance of the head 3D information.
  • the marking point can also be set on the seat 6, as long as the image acquisition device 2 can collect the position.
  • the above-mentioned marking point may also be a standard gauge block, that is, a marker with a certain space size and a predictable absolute size.
  • the corresponding standard gauge block can also be set at other positions, as long as it is within the field of view of the camera and stationary relative to the human head. For example, the user can wear a hat or a hairpin containing known marking points.
  • the image acquisition device 2 is used to collect images of the target object; it can be a CCD, CMOS, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any device with an image capture function.
  • the image acquisition device includes a camera body with a photosensitive element and a lens.
  • the camera body can be an industrial camera, such as MER-2000-19U3M/C.
  • an industrial camera is smaller, omits functions unnecessary here compared with a consumer camera, and offers better performance.
  • the image capture device 2 may be connected to the processing unit, so as to transfer the captured images to the processing unit.
  • the above-mentioned connection methods include wired and wireless methods, for example transmission via data cable, network cable, optical fiber, or protocols such as 4G, 5G, and WiFi; combinations of these can of course also be used.
  • the device also includes a processor, also referred to as a processing unit, which synthesizes a 3D model of the target object from the multiple images collected by the image acquisition device according to a 3D synthesis algorithm, to obtain 3D information of the target object.
  • the processing unit obtains the 3D information of the target object according to the multiple images in the above-mentioned set of images (the specific algorithm is described in detail below).
  • the processing unit can be directly arranged in the housing where the image acquisition device is located, or can be connected to the image acquisition device 2 through a data line or wirelessly.
  • an independent computer, server, cluster server, etc. can be used as the processing unit, and the image data collected by the image acquisition device 2 is transmitted to it for 3D synthesis.
  • the data of the image acquisition device 2 can also be transmitted to a cloud platform, and the powerful computing capability of the cloud platform can be used for 3D synthesis.
  • the background board 1 is all solid colors, or most (main body) are solid colors. In particular, it can be a white board or a black board, and the specific color can be selected according to the main color of the target object.
  • the background board 1 is usually a flat panel, but preferably can also be a curved panel, such as a concave panel, a convex panel, or a spherical panel; in some application scenarios it can even be a background board 1 with a wavy surface; it can also be spliced from multiple shapes,
  • for example three flat sections spliced into an overall concave shape, or flat and curved sections spliced together.
  • the shape of its edge can also be selected as required. Normally, it is a straight type, which constitutes a rectangular plate. However, in some applications, the edges can be curved.
  • the background panel 1 is a curved panel, which can minimize the projection size of the background panel 1 when the maximum background range is obtained. This makes the background board 1 require less space when rotating, which is conducive to reducing the size of the device, reducing the weight of the device, and avoiding rotational inertia, thereby making it more conducive to controlling rotation.
  • whatever the surface shape and edge shape of the background board 1, the projection is performed in the direction perpendicular to the photographed surface,
  • and the horizontal length W1 and the vertical length W2 of the projected shape are determined by the conditions W1 ≥ A1 × d1 × T / f and W2 ≥ A2 × d2 × T / f, where:
  • d1 is the length of the imaging element in the horizontal direction;
  • d2 is the length of the imaging element in the vertical direction;
  • T is the vertical distance from the sensing element of the image capture device to the background board along the optical axis;
  • f is the focal length of the image capture device;
  • A1, A2 are empirical coefficients.
  • the edge of the background plate 1 is non-linear, which causes the edge of the projected graphic to be non-linear after projection.
  • W1 and W2 measured at different positions then differ, so W1 and W2 are not easy to determine in actual calculation. Therefore, 3-5 points can be taken on each of two opposite edges of the background plate 1, the straight-line distance between opposing points measured, and the average of these measurements taken as W1 and W2 in the above conditions.
  • if the background board 1 is too large, the cantilever becomes too long, which increases the volume of the device and places an additional burden on rotation, making the device more likely to be damaged. If the background plate 1 is too small, however, the background will not be uniform, which burdens the computation.
  • the rotating beam 3 is connected to the fixed beam through the rotating device 4, and the rotating device 4 drives the rotating beam 3 to rotate, thereby driving the background plate 1 and the image capture device 2 at both ends of the beam to rotate.
  • however the beam rotates, the image capture device 2 and the background plate 1 remain arranged opposite each other; in particular, the optical axis of the image acquisition device 2 passes through the center of the background plate 1.
  • the light source is arranged around the lens of the image acquisition device 2; it can be an LED light source or a smart light source that automatically adjusts its parameters according to the target and the ambient light conditions.
  • the light source is distributed in a dispersed manner around the lens of the image acquisition device 2, for example as a ring of LED lamps around the lens.
  • a soft light device such as a soft light housing, can be arranged on the light path of the light source.
  • alternatively, an LED surface light source can be used directly: the light is not only softer but also more uniform.
  • an OLED light source can be used, which is smaller in size, has softer light, and has flexible characteristics that can be attached to curved surfaces.
  • the light source can also be arranged on the rotating beam 3 and the housing that carries the image capture device 2.
  • according to numerous experiments, the acquisition interval preferably satisfies the following empirical condition:
  • during 3D acquisition, the positions of two adjacent image acquisition devices 2, or two adjacent acquisition positions of one image acquisition device 2, satisfy L < δ × T × d / f, where L is the linear distance between the optical centers at the two positions, f is the focal length of the image acquisition device, d is the rectangular length of its photosensitive element (CCD), T is the distance from the photosensitive element to the target surface along the optical axis, and δ is an adjustment coefficient, δ < 0.603;
  • with the image capture device at either of the two positions, the distance from the photosensitive element to the target surface along the optical axis is taken as T;
  • alternatively, L is the linear distance between the optical centers of two image acquisition devices An and An+1; with Tn-1, Tn, Tn+1, Tn+2 the distances from the photosensitive elements of the devices An-1, An, An+1, An+2 to the target surface along the optical axis, T = (Tn-1 + Tn + Tn+1 + Tn+2) / 4;
  • the calculation is not limited to 4 adjacent positions; more positions can also be used for the average.
  • L should be the linear distance between the optical centers of the two image capture devices, but because the position of the optical center is not easy to determine in some cases, the center of the photosensitive element, the geometric center of the image acquisition device 2, the axis of the connection between the image acquisition device 2 and the pan/tilt (or platform, bracket), or the center of the proximal or distal lens surface can be substituted in some cases; experiments show that the resulting error is within an acceptable range.
  • in the prior art, parameters such as object size and field of view are used to estimate the camera position, and the positional relationship between two cameras is also expressed by an angle. Since angles are not easy to measure, this is inconvenient in practice. Moreover, the object size changes with the measured object: for example, after collecting 3D information of an adult's head, the head size must be re-measured and the positions recalculated before collecting a child's head. Such inconvenient measurements and repeated re-measurements introduce measurement errors, leading to errors in the estimated camera position.
  • this solution gives the empirical conditions that the camera position needs to meet, which not only avoids measuring angles that are difficult to accurately measure, but also does not need to directly measure the size of the object.
  • d and f are the fixed parameters of the camera.
  • T is only a straight-line distance, which can easily be measured with traditional methods such as a ruler or a laser rangefinder. The empirical formula of the present invention therefore makes the preparation process convenient and quick while improving the accuracy of the camera placement, so that the camera can be set in an optimized position, taking 3D synthesis accuracy and speed into account at the same time. Specific experimental data are given below.
  • in some situations the camera lens needs to be replaced; with the method of the present invention, the camera position is obtained simply by recalculating with the new lens's standard parameter f. Similarly, when collecting different objects, measuring the object size is rather cumbersome because the sizes differ;
  • with the method of the present invention there is no need to measure the size of the object, and the camera position can be determined more conveniently.
  • the camera position determined by the present invention can take into account the synthesis time and the synthesis effect. Therefore, the above empirical condition is one of the invention points of the present invention.
  • the seat 6 can be placed between the image acquisition device 2 and the background plate 1.
  • when a person sits down, the head is located near the axis of rotation, between the image acquisition device 2 and the background plate 1. Since each person has a different height, the height of the area to be collected (for example, the human head) differs. The position of the human head in the field of view of the image acquisition device 2 can then be adjusted by adjusting the height of the seat 6.
  • the seat 6 can be replaced with a storage table.
  • the height of the image acquisition device 2 and the background board 1 in the vertical direction can also be adjusted to ensure that the center of the target is located in the center of the field of view of the image acquisition device 2.
  • the background board 1 can move up and down along the first mounting post, and the horizontal support carrying the image capture device 2 can move up and down along the second mounting post.
  • the movement of the background plate 1 and the image acquisition device 2 are synchronized to ensure that the optical axis of the image acquisition device passes through the center position of the background plate 1.
  • the image acquisition device 2 can be driven to move back and forth on the horizontal support to ensure that the target object occupies a proper proportion of the pictures collected by the image acquisition device 2.
  • it can also adapt to users with different head sizes by adjusting the focus. But usually the size of the human head is relatively fixed, so it can be achieved with a fixed focal length.
  • the image acquisition device 2 acquires a set of images of the target object by moving relative to the target object;
  • the processing unit obtains the 3D information of the target object according to the multiple images in the above-mentioned set of images.
  • the specific algorithm is as follows.
  • the processing unit can be directly arranged in the housing where the image acquisition device 2 is located, or it can be connected to the image acquisition device 2 through a data line or in a wireless manner.
  • an independent computer, server, cluster server, etc. can be used as the processing unit, and the image data collected by the image acquisition device is transmitted to it for 3D synthesis.
  • the data of the image acquisition device can also be transmitted to the cloud platform, and the powerful computing power of the cloud platform can be used for 3D synthesis.
  • the existing algorithm can be used to realize it, or the optimized algorithm proposed by the present invention can be used, which mainly includes the following steps:
  • Step 1 Perform image enhancement processing on all input photos.
  • the following Wallis filter is used to enhance the contrast of the original photo while suppressing noise: f(x, y) = [g(x, y) - m_g] × c × s_f / (c × s_g + (1 - c) × s_f) + b × m_f + (1 - b) × m_g, where:
  • g(x, y) is the gray value of the original image at (x, y);
  • f(x, y) is the gray value at (x, y) after enhancement by the Wallis filter;
  • m_g is the local gray mean of the original image;
  • s_g is the local gray standard deviation of the original image;
  • m_f is the target value for the local gray mean of the transformed image;
  • s_f is the target value for the local gray standard deviation of the transformed image;
  • c ∈ (0, 1) is the expansion constant of the image variance;
  • b ∈ (0, 1) is the image brightness coefficient constant.
  • the filter can greatly enhance the image texture patterns of different scales in the image, so the number and accuracy of feature points can be improved when extracting the point features of the image, and the reliability and accuracy of the matching result can be improved in the photo feature matching.
  • Step 2 Perform feature point extraction on all input photos, and perform feature point matching to obtain sparse feature points.
  • the SURF operator is used to extract and match the feature points of the photos.
  • the SURF feature matching method mainly includes three processes: feature point detection, feature point description, and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters instead of second-order Gaussian filtering, and uses integral images to accelerate convolution, which increases calculation speed and reduces the dimensionality of the local image feature descriptor, speeding up matching.
  • the main steps include: ① construct the Hessian matrix and generate all points of interest for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image; ② construct scale-space feature point localization: each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in two-dimensional image space and scale space to initially locate the key points; key points with weak energy and incorrectly located key points are then filtered out, leaving the final stable feature points; ③ determine the main direction of each feature point from the Haar wavelet features in its circular neighborhood: within a 60-degree sector, the horizontal and vertical Haar wavelet responses of all points are summed, the sector is rotated at intervals of 0.2 radians and the responses summed again, and the direction of the sector with the largest value is taken as the main direction of the feature point; ④ generate a 64-dimensional feature point description vector: take a 4×4 block of rectangular sub-regions around the feature point, oriented along the feature point's main direction; each sub-region accumulates the horizontal and vertical Haar wavelet responses of 25 pixels, where horizontal and vertical are relative to the main direction.
  • Step 3: Input the coordinates of the matched feature points; use bundle adjustment to solve for the sparse 3D point cloud of the face and the position and posture data of the camera, i.e. obtain the sparse face model point cloud and the model coordinate values of the camera positions;
  • taking the sparse feature points as initial values, dense matching of the multi-view photos is performed to obtain dense point cloud data.
  • the process has four main steps: stereo pair selection, depth map calculation, depth map optimization, and depth map fusion. For each image in the input data set, we select a reference image to form a stereo pair for calculating the depth map. Therefore, we can get rough depth maps of all images. These depth maps may contain noise and errors. We use its neighborhood depth map to check consistency to optimize the depth map of each image. Finally, depth map fusion is performed to obtain a three-dimensional point cloud of the entire scene.
  • Step 4 Use the dense point cloud to reconstruct the face surface. Including the process of defining the octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface.
  • the integral relationship between the sampling point and the indicator function is obtained from the gradient relationship, and the vector field of the point cloud is obtained according to the integral relationship, and the approximation of the gradient field of the indicator function is calculated to form the Poisson equation.
  • the approximate solution is obtained by matrix iteration, the moving cube algorithm is used to extract the isosurface, and the model of the measured object is reconstructed from the measured point cloud.
  • Step 5 Fully automatic texture mapping of the face model. After the surface model is built, texture mapping is performed.
  • the main process includes: ① texture data acquisition: obtain the surface triangle mesh of the target reconstructed from the images; ② visibility analysis of the triangles of the reconstructed model: use the calibration information of the images to calculate the visible image set and the optimal reference image of each triangle; ③ triangle clustering into texture patches: according to the visible image set of each triangle, its optimal reference image, and the neighborhood topology of the triangles, cluster the triangles into a number of reference-image texture patches; ④ automatic sorting of the texture patches to generate the texture image: sort the generated texture patches by size, generate the texture image with the smallest enclosing area, and obtain the texture mapping coordinates of each triangle.
  • the above-mentioned algorithm is an optimized algorithm of the present invention, and this algorithm cooperates with the image acquisition conditions, and the use of this algorithm takes into account the time and quality of synthesis, which is one of the invention points of the present invention.
  • the conventional 3D synthesis algorithm in the prior art can also be used, but the synthesis effect and speed will be affected to a certain extent.
  • Step 1 First collect the user's head picture information.
  • the user sits on the seat 6 of the collection device, and adjusts the height of the seat 6 according to the height of the user.
  • the height of the background board 1 and the camera can be adjusted so that the center of the user's head and the optical axis of the image collection device 2 are on the same horizontal plane.
  • Adjust the horizontal position of the image acquisition device 2 so that the user's head is centered in the image, fully captured, and occupies most of the frame.
  • the rotating device drives the rotating beam 3 to rotate 360°, so that the image capture device 2 rotates 360° around the user's head.
  • the image acquisition device 2 performs image acquisition at least once every distance L, so as to obtain multiple photos of the human head from different angles.
  • the information on the front and sides of the head is more important, and missing information on the back of the head does not hinder the matching and design of the glasses. Therefore, only part of the head information may be collected, i.e. the rotation range can be less than 360°.
  • Step 2 Combine multiple images into a 3D model of the head.
  • Use 3D synthesis software to synthesize a 3D model from multiple photos. After the 3D mesh model is obtained, texture information is added to form a head 3D model.
  • the specific method is as described in "3D Synthesis Method Flow".
  • of course, for better compatibility, other existing synthesis methods can also establish the 3D model, so common 3D image matching algorithms may likewise be used.
  • Step 3 Match the existing model glasses with the face.
  • This step can be implemented on the host computer, server, cluster server or cloud platform, can be set independently, or can be shared with the 3D synthesis step. Specifically:
  • 3-1 Import the 3D model of the head; the model is displayed with the face viewed from the front.
  • 3-2 Determine the coordinates of multiple points of the head 3D model, usually points associated with the glasses frame and the two temples, for example the ear roots and the two sides of the nose, as the basis for rough alignment, with coordinates P1(x1, y1, z1), P2(x2, y2, z2), P3(x3, y3, z3).
  • 3-3 Import the 3D model of the glasses, and determine on the glasses model the points Q1(Xg1, Yg1, Zg1), Q2(Xg2, Yg2, Zg2), Q3(Xg3, Yg3, Zg3) corresponding to P1(x1, y1, z1), P2(x2, y2, z2), P3(x3, y3, z3). Align P1 with Q1, P2 with Q2, and P3 with Q3 respectively, so that the glasses 3D model sits approximately in the appropriate position on the head 3D model and the glasses and head model are roughly integrated.
  • before matching, the sizes of the glasses and the head are normalized to the exact dimensions of the real objects, so that no large deviation occurs during correspondence; the head is oriented with the nose in front and the left and right ears behind, arranged on the left and right.
  • scaling in the Z-axis direction scales the glasses along the Z axis so that the temple-end z coordinate of 11 falls roughly at the 0 position, making the temples fit snugly against the face.
  • 3-4 Precise alignment.
  • the matching effect is shown to the user through the display.
  • the user or the operator observes the matching 3D model of the glasses and the head model in the display, and drags and moves the glasses model or the head model, so that the two are accurately aligned and meet the usual wearing requirements.
  • automatic fine-tuning can also be achieved.
  • the adjustment methods include translation, rotation, and zoom.
  • suppose there is a point A(12, 23, 34) in space
  • the coordinates of point A are scaled 2 times in the X direction, 0.8 times in the y direction, and 10 times in the z direction
  • the spatial coordinates of point A after scaling are (24, 18.4, 340)
  • Modify glasses data. In addition to matching, users can also modify the shape of the glasses on the matched glasses and head 3D model, starting from the predetermined glasses, for personalized customization, and the final design's wearing effect is shown to the user through the display. For example, round temples can be modified to square; 5 mm wide temples can be changed to 6 mm; a part of the glasses frame can be raised or recessed; and the frame's shape and size can be changed in other ways.
  • 3D data output The 3D data of steps 3-4 and 3-5 are output to a 3D printer or processing platform, so as to process and manufacture according to the selected 3D glasses model.
  • the rotational movement of the present invention means that, during acquisition, the acquisition plane at the previous position and that at the next position cross rather than being parallel, or the optical axis of the image acquisition device at the previous position crosses, rather than parallels, its optical axis at the next position. That is, whenever the acquisition area of the image acquisition device moves around or partly around the target object, the two can be regarded as rotating relative to each other.
  • although the examples of the present invention mostly enumerate track-based rotational motion, it can be understood that as long as non-parallel motion occurs between the acquisition area of the image acquisition device and the target, it falls within the category of rotation and the limitations of the present invention can be applied.
  • the protection scope of the present invention is not limited to the orbital rotation in the embodiment.
  • the adjacent acquisition positions in the present invention refer to two adjacent positions on the moving track where the acquisition action occurs when the image acquisition device moves relative to the target. This is usually easy to understand for the movement of the image capture device. However, when the target object moves to cause the two to move relative to each other, at this time, the movement of the target object should be converted into the target object's immobility according to the relativity of the movement, and the image acquisition device moves. At this time, measure the two adjacent positions of the image acquisition device where the acquisition action occurs in the transformed movement track.
  • the above-mentioned target object, target object, and object all represent objects for which three-dimensional information is pre-acquired. It can be a physical object, or it can be a combination of multiple objects. For example, it can be a head, a hand, and so on.
  • the three-dimensional information of the target includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional grid, a local three-dimensional feature, a three-dimensional size, and all parameters with a three-dimensional feature of the target.
  • the so-called three-dimensional in the present invention refers to information in the three directions XYZ, in particular depth information, which is essentially different from having only two-dimensional plane information. It is also essentially different from definitions called three-dimensional, panoramic, holographic, or stereoscopic that actually include only two-dimensional information and in particular lack depth information.
  • modules or units or components in the embodiments can be combined into one module or unit or component, and can furthermore be divided into multiple sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the device according to embodiments of the present invention.
  • the present invention can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.

Abstract

A glasses device, comprising a 3D acquisition device, a 3D synthesis device, a glasses adapting device, and a display device. The 3D acquisition device is used to collect multiple images of a human head; the 3D synthesis device is used to synthesize a head 3D model from the multiple images; the glasses adapting device is used to adapt the head 3D model to a glasses 3D model; the display device is used to display the adaptation effect of the head 3D model and the glasses 3D model. The 3D acquisition device comprises an image acquisition device (2) and a background board (1); the background board (1) and the image acquisition device (2) remain arranged opposite each other during rotation, so that during acquisition the background board (1) forms the background pattern of the images collected by the image acquisition device (2). It is proposed for the first time to improve 3D synthesis speed and synthesis accuracy simultaneously in a glasses matching design device by adding a background board (1) that rotates together with the camera, thereby improving the glasses matching effect and reducing waiting time.

Description

Glasses matching design device
Technical Field
The present invention relates to the technical field of glasses design, and in particular to the field of realizing automatic glasses matching and design through 3D topography measurement technology.
Background Art
At present, when choosing glasses, users usually go to a glasses store to select and try them on in person, which is time-consuming and laborious. To solve this problem, it has been proposed to photograph the user's face and then remotely provide the user with pictures of various glasses; matching the face picture with a glasses picture completes a virtual try-on. However, since both pictures are flat images, the matching result differs considerably from the real effect, and it is difficult for users to find truly satisfactory glasses. Moreover, such matching is based on existing picture libraries, making it hard to offer users personalized customization.
There is also some glasses design software that matches glasses using a model of the user's head. However, obtaining the user's head model takes a long time, forcing the user to wait and giving a poor experience. Some algorithms can reduce this time, but they make the head model inaccurate; matching glasses with an inaccurate model misleads the user, and the presented effect also differs from reality, degrading the user experience. Some software even uses a limited set of preset head models as the user's head model. In the prior art, improving synthesis speed and synthesis accuracy at the same time is usually attempted by optimizing the algorithm, and the field has long believed that the solution lies in the choice and updating of algorithms; to date, no method has been proposed that improves synthesis speed and accuracy simultaneously from any other angle. However, algorithm optimization has reached a bottleneck, and until a better theory emerges it is no longer possible to improve synthesis speed and accuracy at the same time.
In addition, most software can only match the user with glasses from a glasses library and cannot provide personalized customization. Even where software can design glasses, the inaccurate head model means the designed glasses can only serve as a demonstration of the effect, not as processing data.
Meanwhile, there is currently no glasses design or acquisition equipment capable of accurately collecting head data; such equipment can usually only be used for display and cannot produce accurate processing data.
Therefore, the following technical problems urgently need to be solved: ① simultaneously improving 3D synthesis speed and synthesis accuracy, improving the realism of glasses matching, and reducing waiting time; ② low cost, without adding too much equipment complexity; ③ providing customers with accurate glasses processing data, enabling personalized customization.
Summary of the Invention
In view of the above problems, the present invention is proposed to provide a glasses matching design device that overcomes the above problems or at least partially solves them.
One aspect of the present invention provides a glasses fitting method, including:
Step 1: collect multiple images of the user's head, the images including at least facial images;
Step 2: synthesize the multiple images into a head 3D model;
Step 3: match the head 3D model with the glasses model, specifically including:
3-1 determine the coordinates of multiple points of the head 3D model;
3-2 determine the coordinates of multiple points of the glasses 3D model;
3-3 match the point coordinates of the head 3D model with the point coordinates of the glasses 3D model through at least one of rotation, translation, and scaling;
3-4 display the matching effect to the user.
Another aspect of the present invention provides a glasses device, including a 3D acquisition device, a 3D synthesis device, a glasses adapting device, and a display device;
the 3D acquisition device is used to collect multiple images of the human head;
the 3D synthesis device is used to synthesize a head 3D model from the above multiple images;
the glasses adapting device is used to adapt the head 3D model to the glasses 3D model;
the display device is used to display the adaptation effect of the head 3D model and the glasses 3D model;
wherein the 3D acquisition device includes an image acquisition device and a background board; the background board and the image acquisition device remain arranged opposite each other during rotation, so that during acquisition the background board forms the background pattern of the images collected by the image acquisition device.
The third aspect of the present invention further provides a glasses device, including a 3D acquisition device, a 3D synthesis device, a glasses adapting device, and a display device;
the 3D acquisition device is used to collect multiple images of the human head;
the 3D synthesis device is used to synthesize a head 3D model from the above multiple images;
the glasses adapting device is used to adapt the head 3D model to the glasses 3D model;
the display device is used to display the adaptation effect of the head 3D model and the glasses 3D model;
wherein the 3D acquisition device includes an image acquisition device, and when the image acquisition device acquires the target, two adjacent acquisition positions satisfy the following condition:
L < δ × T × d / f
where L is the linear distance between the optical centers of the image acquisition device at the two positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; δ is an adjustment coefficient;
and δ < 0.603; preferably δ < 0.410, or δ < 0.356.
Optionally, projecting in the direction perpendicular to the photographed surface of the background board, the horizontal length W1 and the vertical length W2 of the projected shape are determined by the following conditions:
W1 ≥ A1 × d1 × T / f
W2 ≥ A2 × d2 × T / f
where d1 is the horizontal length of the imaging element, d2 is the vertical length of the imaging element, T is the vertical distance from the sensing element of the image acquisition device to the background board along the optical axis, f is the focal length of the image acquisition device, and A1, A2 are empirical coefficients;
where A1 > 1.04, A2 > 1.04; preferably A1 > 1.25, A2 > 1.25.
Optionally, marking points are also included.
Optionally, the marking points are located on the seat.
Optionally, the 3D synthesis device and the glasses adapting device are set up separately or implemented on the same platform.
Optionally, the glasses adapting device is also used for modifying glasses data.
Optionally, adapting the head 3D model and the glasses 3D model includes first performing a rough alignment of the head 3D model and the glasses 3D model, and then performing a second alignment of the two.
Optionally, the glasses 3D data are sent to processing equipment.
Invention Points and Technical Effects
1. It is proposed for the first time to improve 3D synthesis speed and synthesis accuracy simultaneously in a glasses matching design device by adding a background board that rotates together with the camera, thereby improving the glasses matching effect, reducing waiting time, and making the glasses data usable for processing.
2. By optimizing the size of the background board, the rotation burden is reduced while it is ensured that 3D synthesis speed and accuracy can be improved at the same time, thereby improving the glasses matching effect, reducing waiting time, and making the glasses data usable for processing.
3. By optimizing the camera position, 3D synthesis speed and accuracy can be improved at the same time, thereby improving the glasses matching effect, reducing waiting time, and making the glasses data usable for processing. When optimizing the position, there is no need to measure angles or head size, so the method suits all kinds of people and is more convenient and adaptable.
4. For the above reasons, accurate head data can be provided for the user, so that the data can be used for the customized processing of glasses.
Brief Description of the Drawings
By reading the detailed description of the preferred embodiments below, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference symbols denote the same components. In the drawings:
Figure 1 is a schematic structural diagram of the glasses matching design device provided by an embodiment of the present invention.
The correspondence between the reference numerals and the components is as follows:
1 background board, 2 image acquisition device, 3 rotating beam, 4 rotating device, 5 bracket, 6 seat, 7 base, 51 horizontal column, 52 vertical column.
Detailed Description of the Embodiments
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that the present disclosure can be understood more thoroughly and its scope conveyed completely to those skilled in the art.
The glasses matching design device includes a background board 1, an image acquisition device 2, a rotating beam 3, a rotating device 4, a bracket 5, a seat 6, and a base 7.
The bracket includes a horizontal column 51 and a vertical column 52; the vertical column 52 is connected to the base 7, and the horizontal column 51 is connected to the rotating beam 3 through the rotating device 4, so that the rotating beam 3 can rotate 360° driven by the rotating device 4. The background board 1 and the image acquisition device 2 are located at the two ends of the rotating beam 3 and arranged opposite each other; when the rotating beam 3 rotates, they rotate synchronously and always remain opposite.
The base carries a seat 6, located between the background board 1 and the image acquisition device 2. When a person sits down, the head is located near the axis of rotation, between the image acquisition device 2 and the background board 1, and preferably the head lies on the optical axis of the image acquisition device 2. Since everyone's height differs, the height of the head differs; the position of the head in the field of view of the image acquisition device 2 can be adjusted by adjusting the height of the seat 6.
The seat 6 can be adjusted by a manual adjustment device, for example by connecting the seat 6 to the base through a screw rod and adjusting the seat height by turning the rod. Preferably, a lifting drive device is provided, data-connected with a controller; the controller controls the height of the lifting device and thus adjusts the seat height. The controller can be built directly into the glasses matching design device, for example placed near the armrest of the seat to facilitate user adjustment. The controller can also be a mobile terminal, such as a mobile phone: by connecting the mobile terminal to the glasses matching design device, the seat height can be controlled from the mobile terminal via the lifting drive device. The mobile terminal can be operated by the operator or by the user, which is more convenient and not restricted by location. Of course, the controller can also be borne by a host computer, or by a server or cluster server, or indeed by a cloud platform through the network. These host computers, servers, cluster servers, and cloud platforms can be shared with those performing the 3D synthesis processing, i.e. they fulfill the dual functions of control and 3D synthesis.
If the 3D head portrait is only to be displayed, it is enough that the proportions of its parts are correct; the absolute size of each part is not needed. For matching and designing glasses, however, without the absolute size of the head 3D model, true matching and design of the glasses cannot be completed and no meaningful data can be provided for the final processing of the glasses. To obtain the absolute size of the head 3D information, the user's head must be calibrated. Following the current conventional method of sticking markers directly on the user's head would give a poor user experience, and other positions are hard to mark. The present invention therefore cleverly arranges a headrest on the seat 6, sets marking points on the headrest, and records the absolute distances between the marking points. When the image acquisition device 2 rotates behind the user, these marking points are captured, and the size of the head 3D model is finally calculated from their predetermined distances. At the same time, setting markers at this position does not interfere with collecting the user's facial information. This is therefore also one of the invention points of the present invention: it obtains the absolute scale of the head 3D information while improving the user experience. The marking points can also be set on the seat 6, as long as the image acquisition device 2 can capture their position. The above marking points can also be standard gauge blocks, i.e. markers of a certain spatial size whose absolute dimensions are known in advance. Of course, besides setting marking points on the headrest, corresponding standard gauge blocks can be set at other positions, as long as they are within the camera's field of view and stationary relative to the head; for example, the user can wear a hat or a hairpin containing known marking points.
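Since the scale-recovery step above is described only in prose, a minimal Python sketch of the idea follows. It assumes two marker points with a known, pre-recorded separation have been located in the reconstructed, dimensionless head model; the function name, the coordinates, and the 120 mm distance are illustrative assumptions, not values from the patent.

```python
import numpy as np

def scale_model_to_absolute_size(vertices, marker_a, marker_b, known_distance_mm):
    """Scale a dimensionless 3D model so two marker points match a known distance.

    vertices:           (N, 3) array of model vertices in arbitrary units
    marker_a, marker_b: coordinates of the two headrest markers in model units
    known_distance_mm:  pre-recorded absolute distance between the markers
    """
    measured = np.linalg.norm(np.asarray(marker_a) - np.asarray(marker_b))
    scale = known_distance_mm / measured      # millimetres per model unit
    return np.asarray(vertices) * scale

# Illustrative use: markers reconstructed 0.8 model units apart, known to be 120 mm apart.
verts = np.random.rand(1000, 3)               # stand-in for the head point cloud
scaled = scale_model_to_absolute_size(verts, [0.1, 0.0, 0.0], [0.9, 0.0, 0.0], 120.0)
```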
The image acquisition device 2 is used to collect images of the target object; it can be a CCD, CMOS, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any device with an image capture function. The image acquisition device includes a camera body with a photosensitive element, and a lens. Preferably, the camera body can be an industrial camera, for example the MER-2000-19U3M/C. An industrial camera is smaller, omits functions unnecessary here compared with a consumer camera, and performs better. The image acquisition device 2 can be connected to the processing unit so as to transfer the captured images to it. The connection can be wired or wireless, e.g. via data cable, network cable, optical fiber, or protocols such as 4G, 5G, and WiFi; combinations of these can of course also be used.
The device also includes a processor, also referred to as a processing unit, which synthesizes a 3D model of the target object from the multiple images collected by the image acquisition device according to a 3D synthesis algorithm, obtaining 3D information of the target object.
The processing unit obtains the 3D information of the target object from the multiple images in the above set of images (the specific algorithm is detailed below). The processing unit can be arranged directly in the housing of the image acquisition device, or connected to the image acquisition device 2 through a data line or wirelessly. For example, an independent computer, server, or cluster server can serve as the processing unit, with the image data collected by the image acquisition device 2 transmitted to it for 3D synthesis; the data of the image acquisition device 2 can also be transmitted to a cloud platform, using the cloud platform's powerful computing capability for 3D synthesis.
The background board 1 is entirely of a solid color, or mostly (in its main body) of a solid color; in particular it can be a white board or a black board, the specific color being chosen according to the main color of the target object. The background board 1 is usually a flat panel, but preferably can also be a curved panel, such as a concave panel, a convex panel, or a spherical panel; in some application scenarios it can even be a background board 1 with a wavy surface; it can also be spliced from multiple shapes, for example three flat sections spliced into an overall concave shape, or flat and curved sections spliced together. Besides the surface shape of the background board 1, its edge shape can also be chosen as required: normally it is straight, forming a rectangular board, but in some applications the edges can be curved.
Preferably, the background board 1 is a curved panel, which minimizes the projected size of the background board 1 while obtaining the maximum background range. The background board 1 then needs less space when rotating, which helps reduce the size and weight of the device and avoid rotational inertia, making the rotation easier to control.
Whatever the surface shape and edge shape of the background board 1, projecting in the direction perpendicular to its photographed surface, the horizontal length W1 and the vertical length W2 of the projected shape are determined by the following conditions:
W1 ≥ A1 × d1 × T / f
W2 ≥ A2 × d2 × T / f
where d1 is the horizontal length of the imaging element, d2 is the vertical length of the imaging element, T is the vertical distance from the sensing element of the image acquisition device to the background board along the optical axis, f is the focal length of the image acquisition device, and A1, A2 are empirical coefficients.
After extensive experiments, preferably A1 > 1.04 and A2 > 1.04; more preferably 2 > A1 > 1.1 and 2 > A2 > 1.1.
In some application scenarios the edge of the background board 1 is non-linear, so the edge of its projection is also non-linear. W1 and W2 measured at different positions then differ, so W1 and W2 are not easy to determine in actual calculation. In that case, 3-5 points can be taken on each of two opposite edges of the background board 1, the straight-line distance between opposing points measured, and the average of these measurements taken as W1 and W2 in the above conditions.
If the background board 1 is too large, the cantilever becomes too long, which increases the volume of the device and puts an additional burden on the rotation, making the device more likely to be damaged. If the background board 1 is too small, however, the background will not be uniform, which burdens the computation.
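As a worked illustration of the sizing conditions above, the sketch below evaluates the reconstructed inequalities W1 ≥ A1 × d1 × T / f and W2 ≥ A2 × d2 × T / f. The sensor dimensions, distance, and focal length are made-up example values, not the patent's experimental parameters.

```python
def min_background_board_size(d1_mm, d2_mm, T_mm, f_mm, A1=1.2, A2=1.2):
    """Smallest projected board size that still fills the camera's field of view.

    d1_mm, d2_mm: horizontal / vertical size of the imaging sensor
    T_mm:         distance from the sensor to the board along the optical axis
    f_mm:         lens focal length
    A1, A2:       empirical margin coefficients (patent: > 1.04, preferably 1.1-2)
    """
    W1 = A1 * d1_mm * T_mm / f_mm   # horizontal projection length
    W2 = A2 * d2_mm * T_mm / f_mm   # vertical projection length
    return W1, W2

# Example: a roughly 1-inch-class sensor (13.1 x 9.2 mm), 16 mm lens, board 1.5 m away.
print(min_background_board_size(13.1, 9.2, 1500.0, 16.0))  # ~ (1473.8 mm, 1035.0 mm)
```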
The table below shows comparative experimental results.
Experimental conditions:
Acquisition object: real human head
Camera: MER-2000-19U3M/C
Lens: OPT-C1616-10M

Empirical coefficients    Synthesis time    Synthesis accuracy
A1 = 1.2, A2 = 1.2        3.3 minutes
A1 = 1.4, A2 = 1.4        3.4 minutes
A1 = 0.9, A2 = 0.9        4.5 minutes       medium-high
                          7.8 minutes
The rotating beam 3 is connected to the fixed beam through the rotating device 4; the rotating device 4 drives the rotating beam 3 to rotate, thereby driving the background board 1 and the image acquisition device 2 at the two ends of the beam to rotate. However the beam rotates, the image acquisition device 2 and the background board 1 remain arranged opposite each other, and in particular the optical axis of the image acquisition device 2 passes through the center of the background board 1.
The light source is arranged around the lens of the image acquisition device 2.
It can be an LED light source or a smart light source that automatically adjusts its parameters according to the target and the ambient light. Normally, the light source is distributed in a dispersed manner around the lens of the image acquisition device 2, for example as a ring of LED lamps around the lens. Since the collected object is a human body, the light source intensity must be controlled to avoid discomfort. In particular, a soft light device, such as a soft light housing, can be arranged on the light path of the light source. Alternatively, an LED surface light source can be used directly: the light is not only softer but also more uniform. Better still, an OLED light source can be used, which is smaller, gives softer light, and is flexible enough to be attached to curved surfaces. The light source can also be arranged on the rotating beam 3 or on the housing that carries the image acquisition device 2.
Optimization of the 3D acquisition camera (image acquisition device) position
According to extensive experiments, the acquisition interval preferably satisfies the following empirical condition:
During 3D acquisition, the positions of two adjacent image acquisition devices 2, or two adjacent acquisition positions of one image acquisition device 2, satisfy the following condition:
L < δ × T × d / f
where L is the linear distance between the optical centers of the two image acquisition device positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element to the target surface along the optical axis; and δ is an adjustment coefficient, δ < 0.603.
When the image acquisition device 2 is at either of the two positions, the distance from the photosensitive element to the target surface along the optical axis is taken as T. Besides this method, in another case L is the linear distance between the optical centers of two image acquisition devices An and An+1; the distances from the photosensitive elements of the devices An-1, An, An+1, An+2 (the two devices adjacent to An and An+1, plus An and An+1 themselves) to the target surface along the optical axis are Tn-1, Tn, Tn+1, Tn+2, and T = (Tn-1 + Tn + Tn+1 + Tn+2) / 4. The calculation is of course not limited to 4 adjacent positions; more positions can be used for the average.
L should be the linear distance between the optical centers of the two image acquisition devices, but since the position of the optical center is not always easy to determine, in some cases the center of the photosensitive element, the geometric center of the image acquisition device 2, the axis of the connection between the image acquisition device 2 and the pan/tilt (or platform, bracket), or the center of the proximal or distal lens surface can be used instead; experiments show that the resulting error is within an acceptable range.
Normally, the prior art uses parameters such as object size and field of view to estimate the camera position, and the positional relationship between two cameras is also expressed by an angle. Since angles are not easy to measure in actual use, this is rather inconvenient; moreover, the object size changes with the measured object. For example, after collecting 3D information of an adult's head, the head size must be re-measured and the positions recalculated before collecting a child's head. Such inconvenient measurements and repeated re-measurements introduce measurement errors, leading to errors in the estimated camera position. This solution, based on a large amount of experimental data, gives the empirical condition the camera positions need to satisfy, which not only avoids measuring angles that are hard to measure accurately but also removes the need to measure the object size directly. In the empirical condition, d and f are fixed camera parameters given by the manufacturer when the camera and lens are purchased, requiring no measurement; T is only a straight-line distance, easily measured with traditional methods such as a ruler or a laser rangefinder. The empirical formula of the present invention therefore makes the preparation process convenient and quick while improving the accuracy of the camera placement, so that the camera can be set at an optimized position, taking 3D synthesis accuracy and speed into account at the same time; specific experimental data are given below.
Experiments were carried out with the device of the present invention, and the following experimental results were obtained.
[Table of experimental results: rendered as an image in the source and not reproduced]
The camera lens was replaced and the experiment repeated, with the following results.
[Table of experimental results: rendered as an image in the source and not reproduced]
The camera lens was replaced again and the experiment repeated, with the following results.
[Tables of experimental results: rendered as images in the source and not reproduced]
From the above experimental results and extensive experimental experience it can be concluded that the value of δ should satisfy δ < 0.603; partial 3D models can then already be synthesized, and although some parts cannot be synthesized automatically, this is acceptable where requirements are not high, and the unsynthesized parts can be compensated manually or by changing the algorithm. In particular, when δ < 0.410, the balance between synthesis effect and synthesis time is at its best; for a better synthesis effect, δ < 0.356 can be chosen, at the cost of longer synthesis time but with better quality; to improve the effect further, δ < 0.311 can be chosen. When δ is 0.681, synthesis is no longer possible. It should be noted, however, that these ranges are merely preferred embodiments and do not limit the scope of protection.
It can also be seen from the above experiments that, to determine the camera photographing positions, one only needs the camera parameters (focal length f, CCD size) and the distance T from the camera CCD to the object surface, and the positions follow from the above formula; this makes device design and debugging easy. Since the camera parameters (focal length f, CCD size) are fixed at purchase and stated in the product specification, they are easy to obtain, so the camera position can be calculated from the formula without tedious field-of-view or object-size measurements. In particular, in some situations the camera lens needs to be replaced; the method of the present invention then obtains the camera position simply by recalculating with the new lens's standard parameter f. Similarly, when collecting different objects, measuring the object size is rather cumbersome because the sizes differ; with the method of the present invention, no object-size measurement is needed and the camera position can be determined more conveniently. Moreover, the camera position determined by the present invention accommodates both synthesis time and synthesis effect. The above empirical condition is therefore one of the invention points of the present invention.
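To make the empirical condition concrete, here is a small sketch that derives the maximum spacing L of adjacent acquisition positions, and the resulting number of shots on a full circular track, from the reconstructed inequality L < δ × T × d / f. All numeric inputs are illustrative assumptions rather than the patent's experimental values, and the orbit radius is used as an approximation of T.

```python
import math

def max_adjacent_spacing(d_mm, f_mm, T_mm, delta=0.410):
    """Upper bound on the spacing of adjacent capture positions: L < delta * T * d / f."""
    return delta * T_mm * d_mm / f_mm

def shots_for_full_circle(radius_mm, d_mm, f_mm, delta=0.410):
    """Number of evenly spaced captures needed on a 360-degree track of a given radius."""
    L_max = max_adjacent_spacing(d_mm, f_mm, T_mm=radius_mm, delta=delta)
    circumference = 2.0 * math.pi * radius_mm
    return math.ceil(circumference / L_max)

# Example: a 13.1 mm sensor length, a 16 mm lens, camera orbiting 800 mm from the head.
print(max_adjacent_spacing(13.1, 16.0, 800.0))   # ~268.6 mm between positions
print(shots_for_full_circle(800.0, 13.1, 16.0))  # ~19 captures for a full circle
```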
The above data were obtained merely from experiments verifying the conditions of the formula and do not limit the invention. Even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust the device parameters and step details as needed to perform experiments, and other data obtained will also satisfy the formula's conditions.
3D information acquisition method flow
The target object is placed between the image acquisition device 2 and the background board 1, preferably on the extension line of the rotating shaft of the rotating device 4, i.e. at the center of the circle around which the image acquisition device 2 rotates. This keeps the distance from the image acquisition device 2 to the target essentially constant during rotation, preventing blurred acquisition caused by drastic changes in object distance and avoiding excessive depth-of-field requirements on the camera (which would increase cost).
When the target is a human head, a seat 6 can be placed between the image acquisition device 2 and the background board 1; when the person sits down, the head is located near the axis of rotation, between the image acquisition device 2 and the background board 1. Since each person's height differs, the height of the area to be collected (e.g. the head) differs; the position of the head in the field of view of the image acquisition device 2 can then be adjusted by adjusting the height of the seat 6. When collecting objects, the seat 6 can be replaced with a storage table.
Besides adjusting the height of the seat 6, the heights of the image acquisition device 2 and the background board 1 in the vertical direction can also be adjusted to ensure that the center of the target lies at the center of the field of view of the image acquisition device 2. For example, the background board 1 can move up and down along a first mounting column, and the horizontal support carrying the image acquisition device 2 can move up and down along a second mounting column. Usually, the movements of the background board 1 and the image acquisition device 2 are synchronized, ensuring that the optical axis of the image acquisition device passes through the center of the background board 1.
The sizes of the collected targets vary considerably from one acquisition to the next. If the image acquisition device 2 always collected from the same position, the proportion of the target in the image would vary enormously: if target A is well sized in the image, a smaller target B would occupy a very small proportion, which would greatly harm subsequent 3D synthesis speed and accuracy. Therefore, the image acquisition device 2 can be driven to move back and forth on the horizontal support to ensure that the target occupies a suitable proportion of the captured pictures. Users with different head sizes can also be accommodated by adjusting the focal length, though the human head is relatively fixed in size, so a fixed focal length usually suffices.
3D synthesis method flow
According to the above acquisition method, the image acquisition device 2 collects a set of images of the target object by moving relative to it;
the processing unit obtains the 3D information of the target object from the multiple images in this set. The specific algorithm follows. The processing unit can of course be arranged directly in the housing of the image acquisition device 2, or connected to it through a data line or wirelessly. For example, an independent computer, server, or cluster server can serve as the processing unit, with the image data transmitted to it for 3D synthesis; the data can also be transmitted to a cloud platform, using its powerful computing capability for 3D synthesis.
3D synthesis from the collected pictures can be implemented with existing algorithms or with the optimized algorithm proposed by the present invention, which mainly includes the following steps:
Step 1: Perform image enhancement on all input photos. The following filter is used to enhance the contrast of the original photos while suppressing noise:
f(x, y) = [g(x, y) - m_g] × c × s_f / (c × s_g + (1 - c) × s_f) + b × m_f + (1 - b) × m_g
where g(x, y) is the gray value of the original image at (x, y); f(x, y) is the gray value at that point after Wallis filter enhancement; m_g is the local gray mean of the original image; s_g is the local gray standard deviation of the original image; m_f is the target value for the local gray mean of the transformed image; s_f is the target value for the local gray standard deviation of the transformed image; c ∈ (0, 1) is the expansion constant of the image variance; and b ∈ (0, 1) is the image brightness coefficient constant.
This filter greatly enhances image texture patterns at different scales, so the number and accuracy of feature points can be increased when extracting point features, and the reliability and accuracy of matching results are improved in photo feature matching.
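A minimal NumPy sketch of a Wallis filter of the form reconstructed above follows. The window size and the target mean and standard deviation are assumptions chosen for illustration; a production implementation would typically compute the local statistics more efficiently (e.g. with integral images).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(img, win=31, m_f=127.0, s_f=60.0, c=0.8, b=0.9):
    """Wallis filter: f = (g - m_g) * c*s_f / (c*s_g + (1-c)*s_f) + b*m_f + (1-b)*m_g.

    img:      2D grayscale image
    win:      side length of the local window used for m_g and s_g
    m_f, s_f: target local mean / standard deviation
    c, b:     variance expansion constant and brightness coefficient, both in (0, 1)
    """
    g = img.astype(np.float64)
    m_g = uniform_filter(g, size=win)                  # local mean
    m_g2 = uniform_filter(g * g, size=win)             # local mean of squares
    s_g = np.sqrt(np.maximum(m_g2 - m_g ** 2, 1e-6))   # local standard deviation
    r1 = c * s_f / (c * s_g + (1.0 - c) * s_f)         # multiplicative gain term
    f = (g - m_g) * r1 + b * m_f + (1.0 - b) * m_g     # enhanced image
    return np.clip(f, 0, 255).astype(np.uint8)
```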
Step 2: Extract feature points from all input photos and match them to obtain sparse feature points. The SURF operator is used for feature point extraction and matching. The SURF feature matching method mainly comprises three processes: feature point detection, feature point description, and feature point matching. It uses the Hessian matrix to detect feature points, box filters instead of second-order Gaussian filtering, and integral images to accelerate convolution, improving calculation speed and reducing the dimensionality of the local image feature descriptor to speed up matching. The main steps are: ① construct the Hessian matrix and generate all interest points for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image; ② construct scale-space feature point localization: each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in two-dimensional image space and scale space to initially locate the key points; key points with weak energy and wrongly located key points are then filtered out, leaving the final stable feature points; ③ determine the main direction of each feature point from the Haar wavelet features in its circular neighborhood: within a 60-degree sector, the horizontal and vertical Haar wavelet responses of all points are summed; the sector is rotated at intervals of 0.2 radians and the responses summed again, and the direction of the sector with the largest value is taken as the main direction of the feature point; ④ generate a 64-dimensional feature point description vector: take a 4×4 block of rectangular sub-regions around the feature point, oriented along the feature point's main direction; each sub-region accumulates the horizontal and vertical Haar wavelet responses of 25 pixels, horizontal and vertical being relative to the main direction; the four values (the sums of the horizontal values, the vertical values, the horizontal absolute values, and the vertical absolute values) form the feature vector of each sub-block, giving 4×4×4 = 64 dimensions for the SURF descriptor; ⑤ feature point matching: the degree of matching is determined by computing the Euclidean distance between two feature points; the shorter the Euclidean distance, the better the match.
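Step 2 maps directly onto the SURF implementation in OpenCV's contrib modules; the following hedged sketch extracts and matches SURF features between two photos using Euclidean distance, as in sub-step ⑤. Note that cv2.xfeatures2d is only available in opencv-contrib builds with the non-free modules enabled; where it is not, a free detector such as ORB would have to stand in.

```python
import cv2

def surf_match(img_path_1, img_path_2, hessian_threshold=400):
    """SURF keypoint extraction and matching between two photos (with a ratio test)."""
    img1 = cv2.imread(img_path_1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path_2, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, des1 = surf.detectAndCompute(img1, None)   # 64-dimensional descriptors by default
    kp2, des2 = surf.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)            # Euclidean distance, as in step 5
    pairs = matcher.knnMatch(des1, des2, k=2)
    # Lowe-style ratio test keeps only distinctive matches.
    good = [m for m, n in pairs if m.distance < 0.7 * n.distance]
    return kp1, kp2, good
```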
Step 3: Input the coordinates of the matched feature points and use bundle adjustment to solve for the sparse 3D point cloud of the face and the position and posture data of the camera, i.e. obtain the sparse face model point cloud and the model coordinate values of the camera positions. Taking the sparse feature points as initial values, dense multi-view photo matching is performed to obtain dense point cloud data. The process has four main steps: stereo pair selection, depth map computation, depth map optimization, and depth map fusion. For each image in the input data set, a reference image is selected to form a stereo pair for computing the depth map. Rough depth maps of all images are thus obtained; these may contain noise and errors, so the neighborhood depth maps are used for consistency checks to optimize the depth map of each image. Finally, depth map fusion is performed to obtain the three-dimensional point cloud of the whole scene.
Step 4: Reconstruct the face surface from the dense point cloud. This includes defining an octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface. The integral relationship between the sampling points and the indicator function is obtained from the gradient relationship; the vector field of the point cloud is obtained from the integral relationship; and the approximation of the gradient field of the indicator function is computed to form the Poisson equation. An approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with the marching cubes algorithm, and the model of the measured object is reconstructed from the measured point cloud.
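Step 4 follows the standard Poisson surface reconstruction pipeline, for which off-the-shelf implementations exist. As an illustrative sketch (not the patent's own code), Open3D's Poisson reconstruction, which uses the same octree/indicator-function formulation, turns a dense point cloud with normals into a triangle mesh; the file path and parameter values are assumptions.

```python
import open3d as o3d

def poisson_mesh_from_points(ply_path, depth=9):
    """Reconstruct a surface mesh from a dense point cloud via Poisson reconstruction."""
    pcd = o3d.io.read_point_cloud(ply_path)
    # Normals are required by the Poisson solver: they define the vector field.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)   # depth controls the octree resolution
    return mesh
```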
Step 5: Fully automatic texture mapping of the face model. After the surface model is built, texture mapping is performed. The main process includes: ① texture data acquisition: obtain the surface triangle mesh of the target reconstructed from the images; ② visibility analysis of the triangles of the reconstructed model: use the calibration information of the images to calculate the visible image set and the optimal reference image of each triangle; ③ triangle clustering into texture patches: according to the visible image set of each triangle, its optimal reference image, and the neighborhood topology of the triangles, cluster the triangles into a number of reference-image texture patches; ④ automatic sorting of the texture patches to generate the texture image: sort the generated texture patches by size, generate the texture image with the smallest enclosing area, and obtain the texture mapping coordinates of each triangle.
It should be noted that the above algorithm is an optimized algorithm of the present invention; it cooperates with the image acquisition conditions, and its use accommodates both synthesis time and quality, which is one of the invention points of the present invention. Of course, conventional 3D synthesis algorithms from the prior art can also be used, though the synthesis effect and speed will be affected to some extent.
Glasses matching and production
Step 1: First, collect picture information of the user's head. The user sits on the seat 6 of the acquisition device, and the height of the seat 6 is adjusted to the user's height; the heights of the background board 1 and the camera can also be adjusted so that the center of the user's head and the optical axis of the image acquisition device 2 lie in the same horizontal plane. The horizontal position of the image acquisition device 2 is adjusted so that the user's head is centered in the image, fully captured, and occupies most of the area. The rotating device drives the rotating beam 3 through 360°, so that the image acquisition device 2 rotates 360° around the user's head. During the rotation, the image acquisition device 2 captures an image at least once every distance L, obtaining multiple photos of the head from different angles. Of course, for glasses matching, the information on the front and sides of the head matters most, and missing information on the back of the head does not hinder the matching and design of the glasses; partial head information can therefore also be collected, i.e. the rotation range can be less than 360°.
Step 2: Synthesize the multiple images into a head 3D model. The photos are synthesized into a 3D model using 3D synthesis software; after the 3D mesh model is obtained, texture information is added to form the head 3D model. The specific method is as described in the "3D synthesis method flow". Of course, for better compatibility, other existing synthesis methods can also establish the 3D model, so common 3D image matching algorithms may be used.
Step 3: Match existing glasses models with the face. This step can be implemented on a host computer, server, cluster server, or cloud platform; it can be set up independently or shared with the 3D synthesis step. Specifically:
3-1: Import the head 3D model; the model is displayed with the face viewed from the front.
3-2: Determine the coordinates of multiple points of the head 3D model. The points are usually chosen to be associated with the glasses frame and the two temples; for example, the parts where the head contacts the glasses, such as the ear roots and the two sides of the nose, can be selected as the basis for rough alignment. Determine the coordinates of the selected points P1(x1, y1, z1), P2(x2, y2, z2), P3(x3, y3, z3). Of course, points that do not contact the glasses, such as the auricle or the tip of the nose, can also be chosen, performing only a less precise rough alignment followed by fine adjustment.
3-3: Import the glasses 3D model, and determine on it the points Q1(Xg1, Yg1, Zg1), Q2(Xg2, Yg2, Zg2), Q3(Xg3, Yg3, Zg3) corresponding to P1(x1, y1, z1), P2(x2, y2, z2), P3(x3, y3, z3). Align P1 with Q1, P2 with Q2, and P3 with Q3 respectively, so that the glasses 3D model sits roughly at the appropriate position on the head 3D model, and the glasses and the head model are approximately combined.
For example, before matching, the sizes of the glasses and the head are normalized according to the exact dimensions of the real objects, so that no large deviation occurs during correspondence; the head is oriented with the nose in front and the left and right ears behind, arranged on the left and right.
Select spatial coordinates on the left ear, right ear, and nose: (0, 90, 0), (100, 89, 0), (52, 89, 56). Suppose the corresponding spatial coordinates of the nose, left ear, and right ear on the glasses model are (0, 0, 0), (-50, 0, -45), (50, 0, -45).
First, align the noses.
The translation applied to all points of the glasses is:
x delta = 52 - 0 = 52
y delta = 89 - 0 = 89
z delta = 56 - 0 = 56
Next, translate the spatial points of the glasses model:
nose corresponding point: (0 + 52 = 52, 0 + 89 = 89, 0 + 56 = 56)
left ear corresponding point: (-50 + 52 = 2, 0 + 89 = 89, -45 + 56 = 11)
right ear corresponding point: (50 + 52 = 102, 0 + 89 = 89, -45 + 56 = 11)
Optionally, fine-tune manually:
scaling in the Z-axis direction scales the glasses along the Z axis so that the temple-end z coordinate of 11 falls roughly at the 0 position, making the temples fit snugly against the face.
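The rough alignment of 3-3 reduces, in the worked example above, to a translation computed from the nose correspondence. The sketch below simply reproduces that arithmetic in Python; the variable names are illustrative, and the coordinates are those of the example.

```python
import numpy as np

# Head model anchor point (nose) from the worked example.
head_nose = np.array([52.0, 89.0, 56.0])
# Corresponding points on the glasses model: nose bridge and the two temple ends.
glasses_nose = np.array([0.0, 0.0, 0.0])
glasses_left = np.array([-50.0, 0.0, -45.0])
glasses_right = np.array([50.0, 0.0, -45.0])

# "Nose alignment": the translation that carries the glasses nose onto the head nose.
delta = head_nose - glasses_nose   # (52, 89, 56), matching the deltas in the text

# Apply the same translation to every glasses point.
for name, p in [("nose", glasses_nose), ("left ear", glasses_left), ("right ear", glasses_right)]:
    print(name, p + delta)
# nose -> (52, 89, 56); left ear -> (2, 89, 11); right ear -> (102, 89, 11)
```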
3-4: Precise alignment. The matching effect is shown to the user on the display. The user or the operator observes the matched 3D model of the glasses and the head model on the display and drags or moves the glasses model or the head model so that the two align precisely and meet the usual wearing requirements. Besides manual fine-tuning, automatic fine-tuning can also be implemented; the adjustment methods include translation, rotation, and scaling.
For example, suppose there is a point A(12, 23, 34) in space.
1. Translate A by (5, 6, 7).
The new coordinates of A after translation:
X' = 12 + 5 = 17
Y' = 23 + 6 = 29
Z' = 34 + 7 = 41
The spatial coordinates of A after translation are (17, 29, 41).
2. Scale the coordinates of A by 2× in the X direction, 0.8× in the Y direction, and 10× in the Z direction.
The new coordinates of A after scaling:
X' = 12 × 2 = 24
Y' = 23 × 0.8 = 18.4
Z' = 34 × 10 = 340
The spatial coordinates of A after scaling are (24, 18.4, 340).
3. Rotate A by 30 degrees about the X axis.
The coordinates of A after the 30-degree rotation:
X' = 12
Y' = 23 × cos30° - 34 × sin30° = 23 × 0.866 - 34 × 0.5 = 2.918
Z' = 23 × sin30° + 34 × cos30° = 23 × 0.5 + 34 × 0.866 = 40.944
The spatial coordinates of A after a 30-degree rotation about the X axis are (12, 2.918, 40.944).
The same principle applies to rotation about the Y and Z axes.
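The fine-tuning transforms of 3-4 are ordinary affine operations. A compact sketch applying the translation, scaling, and X-axis rotation of the worked examples to point A follows; the rotation uses the standard right-handed rotation matrix about X, which is also how the arithmetic above is stated.

```python
import numpy as np

def translate(p, t):
    return np.asarray(p, float) + np.asarray(t, float)

def scale(p, s):
    return np.asarray(p, float) * np.asarray(s, float)

def rotate_x(p, degrees):
    """Rotate about the X axis: y' = y cos(t) - z sin(t), z' = y sin(t) + z cos(t)."""
    t = np.radians(degrees)
    R = np.array([[1, 0, 0],
                  [0, np.cos(t), -np.sin(t)],
                  [0, np.sin(t),  np.cos(t)]])
    return R @ np.asarray(p, float)

A = [12.0, 23.0, 34.0]
print(translate(A, [5, 6, 7]))   # [17. 29. 41.]
print(scale(A, [2, 0.8, 10]))    # [ 24.  18.4 340.]
print(rotate_x(A, 30))           # ~[12.  2.92 40.94]
```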
3-5: Modify the glasses data. Besides matching, the user can also modify the shape of the glasses on the matched glasses and head 3D model, starting from the predetermined glasses, for personalized customization, and the final design's wearing effect is shown to the user on the display. For example, round temples can be changed to square; 5 mm wide temples can be changed to 6 mm; a part of the frame can be raised or recessed; and the frame's shape and size can be changed in other ways.
3-6: 3D data output. The 3D data of steps 3-4 and 3-5 are output to a 3D printer or processing platform, so that manufacturing proceeds according to the selected 3D glasses model.
The rotational movement of the present invention means that, during acquisition, the acquisition plane at the previous position and that at the next position cross rather than being parallel, or the optical axis of the image acquisition device at the previous position crosses, rather than parallels, its optical axis at the next position. That is, whenever the acquisition area of the image acquisition device moves around or partly around the target object, the two can be regarded as rotating relative to each other. Although the embodiments of the present invention mostly enumerate track-based rotational motion, it can be understood that as long as non-parallel motion occurs between the acquisition area of the image acquisition device and the target, it falls within the category of rotation and the limitations of the present invention can be applied. The scope of protection of the present invention is not limited to the track-based rotation of the embodiments.
The adjacent acquisition positions of the present invention refer to two adjacent positions on the movement track at which acquisition occurs when the image acquisition device moves relative to the target. This is usually easy to understand for movement of the image acquisition device. When the target moves and so causes relative movement of the two, however, the relativity of motion means the target's movement should be converted into a stationary target with a moving image acquisition device; the two adjacent positions where acquisition occurs are then measured on the converted movement track.
The above target object and object both denote objects whose three-dimensional information is to be acquired. They can be a single physical object or a combination of multiple objects, for example a head or a hand. The three-dimensional information of the target includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and all parameters bearing three-dimensional features of the target. Three-dimensional in the present invention means having information in the three directions XYZ, in particular depth information, which is essentially different from having only two-dimensional plane information; it is also essentially different from definitions called three-dimensional, panoramic, holographic, or stereoscopic that actually include only two-dimensional information and in particular no depth information.
In the specification provided here, numerous specific details are described. It will be understood, however, that embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments the features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment can be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components in the embodiments can be combined into one module or unit or component, and can furthermore be divided into multiple sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
In addition, those skilled in the art will understand that although some embodiments herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.
The component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination of them. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the device according to embodiments of the present invention. The present invention can also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
By now, those skilled in the art will recognize that, although multiple exemplary embodiments of the present invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the present invention can still be directly determined or derived from the disclosure without departing from its spirit and scope. Therefore, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (52)

  1. A glasses fitting method, characterized in that it comprises:
    Step 1: collecting multiple images of a user's head, the images including at least facial images;
    Step 2: synthesizing the multiple images into a head 3D model;
    Step 3: matching the head 3D model with a glasses model, specifically comprising:
    3-1 determining the coordinates of multiple points of the head 3D model;
    3-2 determining the coordinates of multiple points of the glasses 3D model;
    3-3 matching the point coordinates of the head 3D model with the point coordinates of the glasses 3D model through at least one of rotation, translation, and scaling;
    3-4 displaying the matching effect to the user;
    wherein a 3D acquisition device is used to collect the multiple images of the user's head; the 3D acquisition device comprises an image acquisition device and a background board, and the background board and the image acquisition device remain arranged opposite each other during rotation, so that during acquisition the background board forms the background pattern of the images collected by the image acquisition device;
    the background board is entirely of a solid color.
  2. The method according to claim 1, characterized in that: projecting in the direction perpendicular to the photographed surface of the background board, the horizontal length W1 and the vertical length W2 of the projected shape are determined by the following conditions:
    W1 ≥ A1 × d1 × T / f
    W2 ≥ A2 × d2 × T / f
    where d1 is the horizontal length of the imaging element, d2 is the vertical length of the imaging element, T is the vertical distance from the sensing element of the image acquisition device to the background board along the optical axis, f is the focal length of the image acquisition device, and A1, A2 are empirical coefficients;
    where A1 > 1.04 and A2 > 1.04.
  3. The method according to claim 2, characterized in that: A1 > 1.25, A2 > 1.25.
  4. The method according to claim 1, characterized in that: adapting the head 3D model and the glasses 3D model comprises first performing a rough alignment of the head 3D model and the glasses 3D model, and then performing a second alignment of the head 3D model and the glasses 3D model.
  5. The method according to claim 1, characterized in that: the glasses 3D data are sent to processing equipment.
  6. A glasses fitting method, characterized in that it comprises:
    Step 1: collecting multiple images of a user's head, the images including at least facial images;
    Step 2: synthesizing the multiple images into a head 3D model;
    Step 3: matching the head 3D model with a glasses model, specifically comprising:
    3-1 determining the coordinates of multiple points of the head 3D model;
    3-2 determining the coordinates of multiple points of the glasses 3D model;
    3-3 matching the point coordinates of the head 3D model with the point coordinates of the glasses 3D model through at least one of rotation, translation, and scaling;
    3-4 displaying the matching effect to the user;
    wherein a 3D acquisition device is used to collect the multiple images of the user's head; the 3D acquisition device comprises an image acquisition device and a background board, and the background board and the image acquisition device remain arranged opposite each other during rotation, so that during acquisition the background board forms the background pattern of the images collected by the image acquisition device;
    the movement of the background board and the image acquisition device is synchronized; the background board and the image acquisition device are located at the two ends of a rotating beam, arranged opposite each other, rotate synchronously when the rotating beam rotates, and always remain arranged opposite each other.
  7. The method according to claim 6, characterized in that: projecting in the direction perpendicular to the photographed surface of the background board, the horizontal length W1 and the vertical length W2 of the projected shape are determined by the following conditions:
    W1 ≥ A1 × d1 × T / f
    W2 ≥ A2 × d2 × T / f
    where d1 is the horizontal length of the imaging element, d2 is the vertical length of the imaging element, T is the vertical distance from the sensing element of the image acquisition device to the background board along the optical axis, f is the focal length of the image acquisition device, and A1, A2 are empirical coefficients;
    where A1 > 1.04 and A2 > 1.04.
  8. The method according to claim 7, characterized in that: A1 > 1.25, A2 > 1.25.
  9. The method according to claim 6, characterized in that: adapting the head 3D model and the glasses 3D model comprises first performing a rough alignment of the head 3D model and the glasses 3D model, and then performing a second alignment of the head 3D model and the glasses 3D model.
  10. The method according to claim 6, characterized in that: the glasses 3D data are sent to processing equipment.
  11. A glasses fitting method, characterized in that it comprises:
    Step 1: collecting multiple images of a user's head, the images including at least facial images;
    Step 2: synthesizing the multiple images into a head 3D model;
    Step 3: matching the head 3D model with a glasses model, specifically comprising:
    3-1 determining the coordinates of multiple points of the head 3D model;
    3-2 determining the coordinates of multiple points of the glasses 3D model;
    3-3 matching the point coordinates of the head 3D model with the point coordinates of the glasses 3D model through at least one of rotation, translation, and scaling;
    3-4 displaying the matching effect to the user;
    wherein a 3D acquisition device is used to collect the multiple images of the user's head; the 3D acquisition device comprises an image acquisition device, and when the image acquisition device acquires the target, two adjacent acquisition positions satisfy the following condition:
    L < δ × T × d / f
    where L is the linear distance between the optical centers of the image acquisition device at the two positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; δ is an adjustment coefficient;
    and δ < 0.603.
  12. The method according to claim 11, characterized in that: δ < 0.410.
  13. The method according to claim 11, characterized in that: δ < 0.356.
  14. The method according to claim 11, characterized in that: δ < 0.311.
  15. The method according to claim 11, characterized in that: δ < 0.284.
  16. The method according to claim 11, characterized in that: δ < 0.261.
  17. The method according to claim 11, characterized in that: δ < 0.241.
  18. The method according to claim 11, characterized in that: δ < 0.107.
  19. The method according to claim 11, characterized in that: adapting the head 3D model and the glasses 3D model comprises first performing a rough alignment of the head 3D model and the glasses 3D model, and then performing a second alignment of the head 3D model and the glasses 3D model.
  20. The method according to claim 11, characterized in that: the glasses 3D data are sent to processing equipment.
  21. A glasses device, characterized in that it comprises a 3D acquisition device, a 3D synthesis device, a glasses adapting device, and a display device;
    the 3D acquisition device is used to collect multiple images of a human head;
    the 3D synthesis device is used to synthesize a head 3D model from the above multiple images;
    the glasses adapting device is used to adapt the head 3D model to a glasses 3D model;
    the display device is used to display the adaptation effect of the head 3D model and the glasses 3D model;
    wherein the 3D acquisition device comprises an image acquisition device and a background board, and the background board and the image acquisition device remain arranged opposite each other during rotation, so that during acquisition the background board forms the background pattern of the images collected by the image acquisition device;
    the background board is entirely of a solid color.
  22. The device according to claim 21, characterized in that: projecting in the direction perpendicular to the photographed surface of the background board, the horizontal length W1 and the vertical length W2 of the projected shape are determined by the following conditions:
    W1 ≥ A1 × d1 × T / f
    W2 ≥ A2 × d2 × T / f
    where d1 is the horizontal length of the imaging element, d2 is the vertical length of the imaging element, T is the vertical distance from the sensing element of the image acquisition device to the background board along the optical axis, f is the focal length of the image acquisition device, and A1, A2 are empirical coefficients;
    where A1 > 1.04 and A2 > 1.04.
  23. The device according to claim 22, characterized in that: A1 > 1.25, A2 > 1.25.
  24. The device according to claim 21, characterized in that: it further comprises marking points.
  25. The device according to claim 24, characterized in that: the marking points are located on the seat.
  26. The device according to claim 21, characterized in that: the 3D synthesis device and the glasses adapting device are set up separately or implemented on the same platform.
  27. The device according to claim 21, characterized in that: the glasses adapting device is also used for modifying glasses data.
  28. The device according to claim 21, characterized in that: adapting the head 3D model and the glasses 3D model comprises first performing a rough alignment of the head 3D model and the glasses 3D model, and then performing a second alignment of the head 3D model and the glasses 3D model.
  29. The device according to claim 21, characterized in that: the glasses 3D data are sent to processing equipment.
  30. A glasses device, characterized in that it comprises a 3D acquisition device, a 3D synthesis device, a glasses adapting device, and a display device;
    the 3D acquisition device is used to collect multiple images of a human head;
    the 3D synthesis device is used to synthesize a head 3D model from the above multiple images;
    the glasses adapting device is used to adapt the head 3D model to a glasses 3D model;
    the display device is used to display the adaptation effect of the head 3D model and the glasses 3D model;
    wherein the 3D acquisition device comprises an image acquisition device and a background board, and the background board and the image acquisition device remain arranged opposite each other during rotation, so that during acquisition the background board forms the background pattern of the images collected by the image acquisition device;
    the movement of the background board and the image acquisition device is synchronized; the background board and the image acquisition device are located at the two ends of a rotating beam, arranged opposite each other, rotate synchronously when the rotating beam rotates, and always remain arranged opposite each other.
  31. The device according to claim 30, characterized in that: projecting in the direction perpendicular to the photographed surface of the background board, the horizontal length W1 and the vertical length W2 of the projected shape are determined by the following conditions:
    W1 ≥ A1 × d1 × T / f
    W2 ≥ A2 × d2 × T / f
    where d1 is the horizontal length of the imaging element, d2 is the vertical length of the imaging element, T is the vertical distance from the sensing element of the image acquisition device to the background board along the optical axis, f is the focal length of the image acquisition device, and A1, A2 are empirical coefficients;
    where A1 > 1.04 and A2 > 1.04.
  32. The device according to claim 31, characterized in that: A1 > 1.25, A2 > 1.25.
  33. The device according to claim 30, characterized in that: it further comprises marking points.
  34. The device according to claim 33, characterized in that: the marking points are located on the seat.
  35. The device according to claim 30, characterized in that: the 3D synthesis device and the glasses adapting device are set up separately or implemented on the same platform.
  36. The device according to claim 30, characterized in that: the glasses adapting device is also used for modifying glasses data.
  37. The device according to claim 30, characterized in that: adapting the head 3D model and the glasses 3D model comprises first performing a rough alignment of the head 3D model and the glasses 3D model, and then performing a second alignment of the head 3D model and the glasses 3D model.
  38. The device according to claim 30, characterized in that: the glasses 3D data are sent to processing equipment.
  39. A glasses device, characterized in that it comprises a 3D acquisition device, a 3D synthesis device, a glasses adapting device, and a display device;
    the 3D acquisition device is used to collect multiple images of a human head;
    the 3D synthesis device is used to synthesize a head 3D model from the above multiple images;
    the glasses adapting device is used to adapt the head 3D model to a glasses 3D model;
    the display device is used to display the adaptation effect of the head 3D model and the glasses 3D model;
    wherein the 3D acquisition device comprises an image acquisition device, and when the image acquisition device acquires the target, two adjacent acquisition positions satisfy the following condition:
    L < δ × T × d / f
    where L is the linear distance between the optical centers of the image acquisition device at the two positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; δ is an adjustment coefficient;
    and δ < 0.603.
  40. The device according to claim 39, characterized in that: δ < 0.410.
  41. The device according to claim 39, characterized in that: δ < 0.356.
  42. The device according to claim 39, characterized in that: δ < 0.311.
  43. The device according to claim 39, characterized in that: δ < 0.284.
  44. The device according to claim 39, characterized in that: δ < 0.261.
  45. The device according to claim 39, characterized in that: δ < 0.241.
  46. The device according to claim 39, characterized in that: δ < 0.107.
  47. The device according to claim 39, characterized in that: it further comprises marking points.
  48. The device according to claim 47, characterized in that: the marking points are located on the seat.
  49. The device according to claim 39, characterized in that: the 3D synthesis device and the glasses adapting device are set up separately or implemented on the same platform.
  50. The device according to claim 39, characterized in that: the glasses adapting device is also used for modifying glasses data.
  51. The device according to claim 39, characterized in that: adapting the head 3D model and the glasses 3D model comprises first performing a rough alignment of the head 3D model and the glasses 3D model, and then performing a second alignment of the head 3D model and the glasses 3D model.
  52. The device according to claim 39, characterized in that: the glasses 3D data are sent to processing equipment.
PCT/CN2020/134758 2019-12-12 2020-12-09 Glasses matching design device WO2021115298A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911276019.8A CN111161400B (zh) 2019-12-12 2019-12-12 Glasses matching design device
CN201911276019.8 2019-12-12

Publications (1)

Publication Number Publication Date
WO2021115298A1 true WO2021115298A1 (zh) 2021-06-17

Family

ID=70557030

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134758 WO2021115298A1 (zh) 2019-12-12 2020-12-09 Glasses matching design device

Country Status (2)

Country Link
CN (2) CN113115024B (zh)
WO (1) WO2021115298A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115024B (zh) * 2019-12-12 2023-01-31 天目爱视(北京)科技有限公司 一种3d信息采集设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103456008A (zh) * 2013-08-26 2013-12-18 刘晓英 Method for matching a face with glasses
US20140133658A1 (en) * 2012-10-30 2014-05-15 Bit Cauldron Corporation Method and apparatus for providing 3d audio
CN106570747A (zh) * 2016-11-03 2017-04-19 济南博图信息技术有限公司 Online glasses fitting method and system combined with gesture recognition
CN108490642A (zh) * 2018-02-14 2018-09-04 天目爱视(北京)科技有限公司 Automatic glasses design method based on 3D head data
CN109035379A (zh) * 2018-09-10 2018-12-18 天目爱视(北京)科技有限公司 360° 3D measurement and information acquisition device for a target object
CN109443235A (zh) * 2018-11-02 2019-03-08 滁州市云米工业设计有限公司 Product contour acquisition device for industrial design
CN111161400A (zh) * 2019-12-12 2020-05-15 天目爱视(北京)科技有限公司 Glasses matching design device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4094303B2 (ja) * 2002-02-08 2008-06-04 オリンパス株式会社 Three-dimensional information acquisition apparatus and three-dimensional information acquisition method
CA2884018C (en) * 2014-02-26 2022-06-21 Freespace Composites Inc. Manufacturing system using topology optimization design software, novel three-dimensional printing mechanisms and structural composite materials
CN207354436U (zh) * 2017-05-08 2018-05-11 唐志远 Multifunctional three-dimensional photographing and scanning all-in-one machine
US11119255B2 (en) * 2017-07-19 2021-09-14 President And Fellows Of Harvard College Highly efficient data representation of dense polygonal structures
CN108490641B (zh) * 2018-02-14 2019-04-26 天目爱视(北京)科技有限公司 Automatic glasses design system based on 3D head data
CN209671943U (zh) * 2019-02-14 2019-11-22 刘磊 Surround-type photography stage
CN110533775B (zh) * 2019-09-18 2023-04-18 广州智美科技有限公司 Glasses matching method, device and terminal based on a 3D face


Also Published As

Publication number Publication date
CN111161400A (zh) 2020-05-15
CN113115024A (zh) 2021-07-13
CN111161400B (zh) 2021-03-12
CN113115024B (zh) 2023-01-31

Similar Documents

Publication Publication Date Title
CN111060023B High-precision 3D information acquisition device and method
CN112304222B 3D information acquisition device with a synchronously rotating background board
CN111028341B Three-dimensional model generation method
US9265414B2 Methods and systems for measuring interpupillary distance
WO2021115301A1 3D acquisition device for close-range targets
CN111292364A Method for fast image matching during construction of a three-dimensional model
CN111292239B Three-dimensional model splicing device and method
CN111160136B Standardized 3D information acquisition and measurement method and system
WO2021115302A1 3D intelligent vision device
WO2021185216A1 Calibration method based on multiple laser ranging
CN109242898A Three-dimensional modeling method and system based on image sequences
CN111340959B Seamless texture mapping method for three-dimensional models based on histogram matching
CN110973763B Intelligent 3D foot information acquisition and measurement device
CN111780682A Servo-system-based 3D image acquisition control method
WO2021115298A1 Glasses matching design device
CN109084679B 3D measurement and acquisition device based on a spatial light modulator
CN111445570B Customized clothing design and production device and method
CN111208138A Intelligent wood identification device
WO2021115297A1 Device and method for 3D information acquisition
WO2021115296A1 Ultra-thin three-dimensional acquisition module for mobile terminals
CN211085115U Standardized biological three-dimensional information acquisition device
CN111325780B Fast 3D model construction method based on image screening
CN211086835U Glasses device with a background board
CN211672690U Three-dimensional acquisition device for human feet
CN111368700B Intelligent device based on identity association

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20899023

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20899023

Country of ref document: EP

Kind code of ref document: A1