WO2019232871A1 - Method, apparatus, computer device and storage medium for virtual wearing of glasses - Google Patents
Method, apparatus, computer device and storage medium for virtual wearing of glasses
- Publication number: WO2019232871A1 (PCT/CN2018/094391)
- Authority: WO — WIPO (PCT)
- Prior art keywords: glasses, reference point, image, point, face image
Classifications
- G06V40/161 — Detection; Localisation; Normalisation (G—Physics › G06—Computing › G06V—Image or video recognition or understanding › G06V40/00—Recognition of biometric, human-related or animal-related patterns › G06V40/10—Human or animal bodies › G06V40/16—Human faces)
- G06V40/171 — Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships (same hierarchy, via G06V40/168—Feature extraction; Face representation)
Definitions
- the present application relates to the field of image processing, and in particular, to a method, a device, a computer device, and a storage medium for virtual wearing of glasses.
- In the virtual try-on process, a face-fitting algorithm is usually used to merge and match the face image with the glasses image.
- However, current algorithms mostly adjust according to the distance between the pupils of the two eyes, i.e., based on the position information of only two points, and mostly adjust the face image itself.
- Because a face image carries a relatively large amount of data, the adjustment requires heavy computation, and the adjusted glasses image easily ends up uncoordinated with the face image.
- a virtual wearing method for glasses includes:
- Obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points that characterize the eye positions and the nose tip position;
- obtaining a glasses selection request, where the glasses selection request includes a glasses identifier; obtaining a glasses image based on the glasses identifier, where the glasses image includes target reference points; and merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points, so that the glasses image matches the face image.
- a virtual wearing device for glasses includes:
- a face image acquisition module configured to obtain a face image, obtain feature points in the face image from the face image based on a facial feature point detection algorithm, and construct, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position;
- a glasses selection request acquisition module configured to obtain a glasses selection request, where the glasses selection request includes a glasses identifier;
- a glasses image acquisition module configured to obtain a glasses image based on the glasses identifier, where the glasses image includes target reference points;
- an image merging and adjustment module configured to merge the face image and the glasses image, and adjust the glasses image on the face image based on the target datum points and the target reference points, so that the glasses image matches the face image.
- a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
- When the processor executes the computer-readable instructions, the following steps are implemented:
- Obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points that characterize the eye positions and the nose tip position;
- obtaining a glasses selection request, where the glasses selection request includes a glasses identifier; obtaining a glasses image based on the glasses identifier, where the glasses image includes target reference points; and merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points, so that the glasses image matches the face image.
- One or more non-volatile readable storage media storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the following steps:
- Obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points that characterize the eye positions and the nose tip position;
- obtaining a glasses selection request, where the glasses selection request includes a glasses identifier; obtaining a glasses image based on the glasses identifier, where the glasses image includes target reference points; and merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points, so that the glasses image matches the face image.
- FIG. 1 is a schematic diagram of an application environment of a virtual wearing method of glasses according to an embodiment of the present application
- FIG. 2 is a flowchart of a virtual wearing method of glasses according to an embodiment of the present application
- FIG. 3 is a flowchart of a virtual wearing method of glasses according to an embodiment of the present application.
- FIG. 4 is a schematic diagram of a face image feature point in a virtual wearing method of glasses according to an embodiment of the present application.
- FIG. 5 is a flowchart of a virtual wearing method of glasses according to an embodiment of the present application.
- FIG. 6 is a flowchart of a virtual wearing method of glasses according to an embodiment of the present application.
- FIG. 7 is a schematic block diagram of a virtual wearing device for glasses in an embodiment of the present application.
- FIG. 8 is a schematic diagram of a computer device according to an embodiment of the present application.
- The virtual wearing method of glasses provided in this application can be applied in the application environment shown in FIG. 1, in which a client (computer device) communicates with a server through a network. The client obtains a face image and a glasses selection request and sends them to the server. After receiving the face image and the glasses selection request, the server performs matching adjustment on the face image and the glasses image according to the target datum points and the target reference points.
- the client may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
- the server can be implemented by an independent server or a server cluster composed of multiple servers.
- Optionally, the virtual wearing method of glasses provided in the present application can also be applied on a single computer device: the device obtains the face image, stores the glasses images, and can match and adjust the face image and the glasses image according to the target datum points and the target reference points, achieving the effect of virtual wearing of glasses.
- A method for virtual wearing of glasses is provided.
- The method is described by taking its application to a single computer device as an example, and includes the following steps:
- S10: Obtain a face image, obtain feature points in the face image from the face image based on a facial feature point detection algorithm, and construct target datum points that characterize the eye positions and the nose tip position based on the feature points in the face image.
- The face image refers to the facial image of the user performing virtual glasses try-on.
- The face image may be obtained by photographing the user's face, for example captured by a camera and then transmitted; it may also be obtained by directly uploading facial image data.
- a facial feature point refers to a point that represents the contours of various parts of a person's face, such as the corners of the eyes, the tip of the nose, the center of the eyebrow, or the eyeball.
- the facial feature point detection algorithm refers to an algorithm that automatically locates facial feature points based on the input facial image.
- the following facial feature point detection algorithms may be adopted to obtain facial feature point information:
- (1) OpenCV's built-in Viola-Jones algorithm based on Haar features. OpenCV is a cross-platform computer vision library that runs on Linux, Windows, Android, and Mac OS. It consists of a series of C functions and a small number of C++ classes, and provides interfaces for languages such as Python, Ruby, and MATLAB; the Viola-Jones algorithm based on Haar features is one of its facial feature point detection algorithms.
- A Haar feature reflects gray-level changes in an image as differences between pixel blocks. Haar features fall into three categories: edge features, linear features, and center-diagonal features.
- The Viola-Jones algorithm performs face detection based on the Haar feature values of faces.
- (2) dlib based on HOG+SVM features. dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real-world problems. HOG stands for Histogram of Oriented Gradients; SVM (Support Vector Machine) refers to the support vector machine, a common discriminative method typically used for pattern recognition, classification, and regression analysis. HOG features combined with SVM classifiers are widely used in image recognition.
- (3) The three face detection methods of the doppia library (DPM, HeadHunter, and HeadHunter_baseline). DPM (Deformable Part Model) is an object detection algorithm that has become an important component of many classifiers and of segmentation, human pose, and behavior classification tasks.
- DPM can be seen as an extension of HOG: a histogram of gradient orientations is computed first, an SVM is trained to obtain the target gradient model, and classification is then performed so that the model matches the target.
- The HeadHunter and HeadHunter_baseline algorithms are methodologically the same as DPM; they differ only in the models used.
- The following uses algorithm (1) as an example to illustrate the process of obtaining facial feature points:
- First, sample images of input face images are obtained, preprocessed (normalized), and used for training to obtain a facial feature point model, namely the Haar-feature Viola-Jones detector. Then the input face image is obtained and given the same preprocessing, followed in turn by skin-color region segmentation, facial feature region segmentation, and facial feature region classification. Finally, matching computation between the Haar-feature Viola-Jones model and the classified facial feature regions yields the facial feature point information of the face image.
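- For illustration only: the patent contains no code, but a rough sketch of the detection stage using OpenCV's stock Viola-Jones (Haar cascade) models is given below. The cascade file names are OpenCV's bundled models, and the landmark step is approximated here by eye-region detection; a production system would add a dedicated landmark model.

```python
import cv2

# Stock Viola-Jones (Haar cascade) detectors bundled with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("face.jpg")                  # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                 # simple normalization, analogous to the preprocessing above

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face_roi = gray[y:y + h, x:x + w]
    # Eye detections inside the face region approximate the eye-position features.
    eyes = eye_cascade.detectMultiScale(face_roi)
    print("face:", (x, y, w, h), "eyes:", [tuple(e) for e in eyes])
```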
- A target datum point refers to a position point set on the face image and used as a reference for wearing glasses. For example, feature points in the face image that characterize the eye positions and the nose tip position are selected as target datum points.
- The face image of the try-on user is obtained by photographing the face or by directly uploading facial image data; the facial feature point detection algorithm is then used to obtain the facial feature point information in the face image, from which the feature points representing the eye positions and the nose tip position are selected as the target datum points.
- For example, a user performs virtual glasses try-on through a computer device.
- The computer device collects the user's face image through a camera and then uses OpenCV's built-in Haar-feature Viola-Jones algorithm to obtain the facial feature points.
- A rectangular coordinate system is established in the face image, for example with the left eye corner of the face as the origin; the coordinate data in the face image is then obtained, and the points representing the eye positions and the nose tip position are selected as the target datum points.
- In step S10, selecting target datum points that characterize the eye positions and the nose tip position facilitates the subsequent matching adjustment between the glasses image and the face image.
- S20: Obtain a glasses selection request, where the glasses selection request includes a glasses identifier.
- The glasses selection request refers to a request by the user performing virtual try-on to select from the glasses information provided by the computer device.
- the glasses selection request may be obtained according to a user's click, touch, or long press on the computer device.
- A glasses identifier is used to distinguish different glasses, such as the image or model information of the glasses.
- Specifically, the glasses selection request may be triggered by the user's click, touch, or long-press on the computer device, and includes a glasses identifier. For example, when the user clicks to select a glasses image carrying a glasses identifier provided by the computer device, the computer device obtains a glasses selection request that includes the glasses identifier.
- S30: Obtain a glasses image based on the glasses identifier, where the glasses image includes target reference points.
- the glasses image refers to an image corresponding to the glasses selected by the user.
- When the glasses selection request is received, the glasses image may be obtained on the computer device according to the glasses identifier.
- the target reference point refers to a preset point in the glasses image that is used as a reference point when the glasses image and the face image are adjusted.
- a rectangular coordinate system may be established according to the glasses image, so as to obtain position information of each part of the glasses in the glasses image, and then select a certain number of position points as target reference points.
- For example, a rectangular coordinate system can be established with the position of the nose pads of the glasses as the origin, and the coordinates of the target reference points are then obtained.
- Preferably, three target reference points are selected, one of which is not collinear with the remaining two, so that the three target reference points determine a plane, namely the plane of the glasses image.
- The corresponding glasses image may be obtained on the computer device according to the glasses identifier; the reference point coordinates of the glasses image may then be obtained by setting up a rectangular coordinate system in the glasses image.
- For example, after the computer device obtains the glasses identifier, such as a glasses model number, it obtains the corresponding glasses image according to the model; it then establishes a rectangular coordinate system based on the glasses image and selects three coordinate points as the target reference points.
- It should be understood that rectangular coordinate systems may be established separately in the face image and the glasses image and merged when the two images are merged; alternatively, the target reference points may be selected first, and when merging of the glasses image into the face image begins, their coordinates are obtained from the rectangular coordinate system of the face image.
- S40: Merge the face image and the glasses image, and adjust the glasses image on the face image based on the target datum points and the target reference points, so that the glasses image matches the face image.
- The two images, the face image and the glasses image, are combined, with the glasses image on top and the face image underneath.
- During merging, adjustments such as translation, rotation, or scaling are performed according to the target datum points of the face image and the target reference points of the glasses image, so that the two images can be matched.
- Optionally, the adjustment ends when the nose pads and the temples in the glasses image reach the preset positions on the face image.
- the preset position can be set according to actual needs, which is not specifically limited in this embodiment.
- To make the adjustment targeted, the target datum points and the target reference points may be set in correspondence.
- For example, if the eye corners and the nose tip of the face image are selected as datum points, the glasses image may correspondingly select the frame edge positions matching the eye corners, and a position a certain distance below the nose bridge matching the nose tip, as its target reference points.
- In this way, when the glasses image is matched and adjusted against the face image, translation, rotation, or scaling can be performed from the relative positions of the target reference points and the target datum points, which keeps the adjustment simple while better coordinating the glasses image with the face image.
- In the embodiment corresponding to FIG. 2, feature points are obtained from the face image based on the facial feature point detection algorithm, and target datum points characterizing the eye positions and the nose tip position are constructed from them.
- The glasses selection request is then obtained, the glasses image is retrieved by the glasses identifier in the request, and finally the face image and the glasses image are merged according to the target reference points of the glasses image and the target datum points of the face image, realizing the effect of virtual wearing of glasses.
- Acquiring the face image and the glasses image together with the target datum points and the target reference points, and adjusting and merging on that basis, keeps the computation of the adjustment simple.
- Meanwhile, using the feature points that characterize the eye positions and the nose tip position as target datum points means the matching does not cause large deformation of the glasses image, so the adjusted glasses image and face image achieve a coordinated effect.
- In step S10, a face image is obtained and target datum points representing the relative positions of the eyes and the nose tip are obtained from it based on the facial feature point detection algorithm; this specifically includes the following steps:
- S11: Use the facial feature point detection algorithm to obtain the feature points in the face image; the feature points include the left eyebrow center point, the right eyebrow center point, the left-eye left corner point, the left-eye right corner point, the right-eye left corner point, the right-eye right corner point, and the nose tip point.
- The left eyebrow center point and the right eyebrow center point refer to the center points of the left and right eyebrows of the face, respectively.
- Optionally, the intersection of the eyebrow with the perpendicular bisector of the line joining the eyebrow's two ends may be used as the eyebrow center point.
- The left-eye left corner point and the left-eye right corner point refer to the outermost points of the left and right corners of the left eye, respectively.
- Likewise, the right-eye left corner point and the right-eye right corner point refer to the outermost points of the left and right corners of the right eye.
- S12: Form a first triangle based on the left eyebrow center point, the left-eye left corner point, and the left-eye right corner point, and obtain the centroid of the first triangle as the first datum point.
- FIG. 4 shows the datum points of a face image in an embodiment of the present application, where point A is the first datum point, point B is the second datum point, and point C is the third datum point.
- The centroid is the intersection of the three medians of the triangle; given vertex coordinates (x1, y1), (x2, y2), and (x3, y3), its coordinates are X = (x1 + x2 + x3)/3 and Y = (y1 + y2 + y3)/3.
- S13: Form a second triangle based on the right eyebrow center point, the right-eye left corner point, and the right-eye right corner point, and obtain the centroid of the second triangle as the second datum point.
- The right eyebrow center point, the right-eye left corner point, and the right-eye right corner point are connected, the three points serving as the vertices of the second triangle, and the centroid of the second triangle is taken as the second datum point (point B).
- S14: Take the nose tip point as the third datum point (point C); optionally, the point where the line joining the widest points on the two sides of the nose wings intersects the nose bridge line is taken as the nose tip point.
- S15: The target datum points are formed from the first, second, and third datum points, that is, the target datum points at which the face image characterizes the eye positions and the nose tip position.
- One of the first, second, and third datum points is not collinear with the other two, so the plane of the face image can be determined from the three datum points.
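- As a minimal sketch of steps S11–S15, assuming the seven feature points have already been detected and are available as (x, y) tuples (the dictionary keys below are hypothetical names, not the patent's):

```python
def centroid(p1, p2, p3):
    # Centroid of a triangle: the mean of its three vertices,
    # i.e. X = (x1 + x2 + x3) / 3, Y = (y1 + y2 + y3) / 3.
    return ((p1[0] + p2[0] + p3[0]) / 3.0,
            (p1[1] + p2[1] + p3[1]) / 3.0)

def target_datum_points(pts):
    # pts: dict of detected feature points.
    a = centroid(pts["left_brow_center"],
                 pts["left_eye_left_corner"], pts["left_eye_right_corner"])
    b = centroid(pts["right_brow_center"],
                 pts["right_eye_left_corner"], pts["right_eye_right_corner"])
    c = pts["nose_tip"]  # third datum point
    return a, b, c       # points A, B, C of FIG. 4
```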
- In the embodiment corresponding to FIG. 3, the feature points in the face image are obtained through the facial feature point detection algorithm; triangles are then formed from the left-eye and right-eye feature points of the face, whose centroids give the first and second datum points, and these together with the third datum point (the nose tip) constitute the target datum points.
- Because eyebrows and eyes differ across face images, datum points determined from the eyebrow centers and the eye corners greatly reduce the error caused by this variation.
- Using three datum points that are far apart from one another as the target datum points prevents large deformation of the glasses image during the subsequent matching adjustment and improves the coordination of the virtual wearing of the glasses.
- the target reference point includes a first reference point, a second reference point, and a third reference point.
- The first reference point is the center point of the upper edge of the left frame of the glasses image.
- The second reference point is the center point of the upper edge of the right frame of the glasses image.
- The third reference point is the position a predetermined distance below the midpoint of the line joining the first reference point and the second reference point.
- When the upper edge of a frame is a curve rather than a straight line, the highest point of the upper edge of the left or right frame may serve as its upper-edge center point.
- The predetermined distance can be determined according to the frame height of the glasses, and the frame height can be obtained based on the glasses identifier.
- Optionally, the predetermined distance is downward by about two thirds of the frame height of the glasses image.
- In this embodiment, the center of the upper edge of the left frame of the glasses image is used as the first reference point, the center of the upper edge of the right frame as the second reference point, and the position a predetermined distance below the midpoint of the line connecting them as the third reference point.
- The purpose is to make the positions of the target reference points correspond to the target datum points, so that the matching adjustment between the face image and the glasses image is carried out with the target reference points and the target datum points, keeping the two images coordinated.
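- A minimal sketch of this reference-point rule, assuming the two lens frames are available as axis-aligned bounding boxes (x, y, w, h) in image coordinates with y growing downward (the inputs are hypothetical; the patent only fixes the geometric rule):

```python
def glasses_reference_points(left_frame, right_frame, frame_height):
    lx, ly, lw, lh = left_frame
    rx, ry, rw, rh = right_frame
    p1 = (lx + lw / 2.0, ly)            # upper-edge center of the left frame
    p2 = (rx + rw / 2.0, ry)            # upper-edge center of the right frame
    mid = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    # Third reference point: a predetermined distance below the midpoint,
    # about two thirds of the frame height in the embodiment above.
    p3 = (mid[0], mid[1] + 2.0 * frame_height / 3.0)
    return p1, p2, p3
```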
- In step S40, the face image and the glasses image are merged and the glasses image is adjusted based on the target datum points and the target reference points; as shown in FIG. 5, this may specifically include the following steps:
- S41: Based on the third datum point on the face image, translate the glasses image on the face image so that the third reference point and the third datum point coincide.
- Specifically, the coordinates (u, v) of the third datum point of the face image and the coordinates (u′, v′) of the third reference point of the glasses image may be obtained; based on (u, v), the glasses image is translated so that (u′, v′) coincides with (u, v).
- From the positional relationship between (u′, v′) and (u, v), a translation matrix I is obtained, where t_x is the translation amount in the X direction and t_y is the translation amount in the Y direction.
- After the translation amounts t_x and t_y are computed, each coordinate point in the glasses image is transformed based on the translation matrix I, which translates the glasses image.
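- The matrix itself is rendered as a figure in the publication and is not reproduced in the text; a standard homogeneous-coordinate sketch consistent with the definitions above (an assumed reconstruction, not the patent's verbatim formula) is:

```python
import numpy as np

def translation_matrix(datum3, ref3):
    # Move the glasses so that its third reference point (u', v')
    # lands on the face's third datum point (u, v): t = datum - reference.
    tx = datum3[0] - ref3[0]
    ty = datum3[1] - ref3[1]
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def transform(matrix, points):
    # points: (N, 2) array of glasses-image coordinates.
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ matrix.T)[:, :2]
```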
- S42: Obtain the line connecting the first datum point and the second datum point as the datum line, and the line connecting the first reference point and the second reference point as the reference line.
- Specifically, the straight line connecting the first and second datum points serves as the datum line; likewise, the straight line connecting the first and second reference points serves as the reference line.
- S43: Rotate the glasses image on the face image about the third datum point so that the datum line and the reference line are parallel. The angle θ between the reference line and the datum line is obtained, a rotation matrix is constructed with the third datum point as the origin, and the glasses image coordinates obtained after the translation in step S41 are multiplied by the rotation matrix of angle θ, yielding a glasses image whose reference line is parallel to the datum line.
- Here, (x0, y0) is a coordinate point of the glasses image after translation, and (x0′, y0′) is the corresponding coordinate point after the glasses image is rotated.
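- Continuing the sketch above (again an assumed reconstruction of the figure-only formula): rotation by θ about the third datum point can be written as translate–rotate–translate, collapsed into one homogeneous matrix.

```python
import numpy as np

def rotation_matrix(theta, center):
    # Rotate by theta (radians) about `center`, the third datum point:
    # p' = R (p - center) + center, collapsed into one 3x3 matrix.
    cx, cy = center
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, cx - c * cx + s * cy],
                     [s,  c, cy - s * cx - c * cy],
                     [0.0, 0.0, 1.0]])
```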
- S44: Adjust the glasses image on the face image based on the first reference point and the first datum point, or based on the second reference point and the second datum point, so that the glasses image matches the face image.
- Specifically, the scaling matrix S is obtained from the positional relationship between the first datum point and the first reference point, or between the second datum point and the second reference point.
- Here, (m, n) are the coordinates of the first or second datum point, (m0′, n0′) are the coordinates of the corresponding first or second reference point, s_x is the scaling factor for the X coordinate, and s_y is the scaling factor for the Y coordinate.
- Each coordinate point of the glasses image is transformed based on the scaling matrix to scale the glasses image; the scaled, adjusted glasses image is thereby obtained.
- Preferably, a first scaling matrix may be obtained based on the first datum point and the first reference point, and a second scaling matrix based on the second datum point and the second reference point; averaging the two yields a third scaling matrix, with which the scaling adjustment of the glasses image is performed, improving the accuracy of the virtual wearing of the glasses.
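- A sketch of the scaling step under the natural assumption that the scale is taken about the third datum point (the coincident third points then stay coincident), with per-axis factors s_x and s_y measured in that frame:

```python
import numpy as np

def scaling_matrix(datum, ref, center):
    # datum: (m, n), ref: (m0', n0'), center: the shared third datum point.
    cx, cy = center
    sx = (datum[0] - cx) / (ref[0] - cx)
    sy = (datum[1] - cy) / (ref[1] - cy)
    # Scale about `center` so the third points remain coincident.
    return np.array([[sx, 0.0, cx * (1.0 - sx)],
                     [0.0, sy, cy * (1.0 - sy)],
                     [0.0, 0.0, 1.0]])

# Preferred embodiment: average the matrices from the two point pairs.
# S3 = (scaling_matrix(A, P1, C) + scaling_matrix(B, P2, C)) / 2.0
```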
- In the embodiment corresponding to FIG. 5, the glasses image is translated based on the third datum point of the face image, then rotated about the third datum point, and finally scaled according to the positional relationship between the first reference point and the first datum point, or between the second reference point and the second datum point, so that the glasses image is coordinated with the face image, achieving the effect of virtual wearing of glasses and improving its accuracy.
- the glasses selection request further includes a user ID, and the user ID refers to an identifier on the computer device used to distinguish different users.
- After step S40, that is, after the steps of merging the face image and the glasses image and adjusting the glasses image based on the datum points and the reference points, the method may specifically include the following steps, as shown in FIG. 6:
- S51: Obtain a custom matching request, and obtain glasses image adjustment information based on the custom matching request.
- The custom matching request refers to a matching request issued by the user, after the matching adjustment of the glasses image and the face image, according to the user's own needs or preferences. For example, user A is used to wearing glasses positioned slightly higher; to achieve user A's virtual wearing effect, the glasses image must be moved up a certain distance on top of the adjustments of the steps shown in the previous embodiment.
- a custom matching request may be obtained in the form of an open interface, and then adjustment information of the glasses image may be obtained based on the custom matching request.
- The open interface means that the user can click the corresponding adjustment control or input the corresponding adjustment parameters on the computer device, and the computer device adjusts the glasses image accordingly based on the user's clicks or input, meeting the user's personalized needs.
- The glasses image adjustment information may be vector information indicating movement by a preset distance up, down, left, or right; for example, the vector (1, 0) may indicate that the X coordinate is moved rightward by a preset distance of 1.
- For example, the user adjusts the glasses image through the up, down, left, and right adjustment buttons provided by the computer device, which moves the image by the preset distance per click; when the user finishes the adjustment, the computer device obtains the glasses image adjustment information.
- S52: Associate the user ID, the glasses identifier, and the glasses image adjustment information, and save them in a custom matching table.
- the custom matching table may be stored in a computer device and used for storing eyeglass image adjustment information generated by a user through a custom matching request.
- The glasses identifier, the glasses image, and the adjustment information are associated with the user ID, and this information is stored in the custom matching table corresponding to the user ID.
- the user can directly use the glasses image adjustment information saved in the custom matching table when logging in next time, and quickly view the effect of virtual wearing of the glasses.
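- A hypothetical shape for the custom matching table (the storage layout is not specified by the patent; a key-value mapping is one minimal realization):

```python
# One entry per (user ID, glasses identifier) pair, holding the adjustment vector.
custom_matching_table = {}

def save_adjustment(user_id, glasses_id, adjustment):
    # adjustment: e.g. (1, 0) = move right by one preset unit, as above.
    custom_matching_table[(user_id, glasses_id)] = adjustment

def load_adjustment(user_id, glasses_id):
    # Default to no adjustment for a first-time combination.
    return custom_matching_table.get((user_id, glasses_id), (0, 0))
```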
- a virtual wearing device for glasses is provided.
- The device corresponds one-to-one to the virtual wearing method of glasses in the above embodiments.
- The glasses virtual wearing device includes a face image acquisition module 10, a glasses selection request acquisition module 20, a glasses image acquisition module 30, and an image merging and adjustment module 40.
- the detailed description of each function module is as follows:
- The face image acquisition module 10 is configured to obtain a face image, obtain feature points in the face image from the face image based on the facial feature point detection algorithm, and construct, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position.
- The glasses selection request acquisition module 20 is configured to obtain a glasses selection request, where the glasses selection request includes a glasses identifier.
- The glasses image acquisition module 30 is configured to obtain a glasses image based on the glasses identifier, where the glasses image includes target reference points.
- The image merging and adjustment module 40 is configured to merge the face image and the glasses image, and adjust the glasses image on the face image based on the target datum points and the target reference points, so that the glasses image matches the face image.
- The face image acquisition module 10 includes a facial feature point acquisition unit 11, a first datum point acquisition unit 12, a second datum point acquisition unit 13, a third datum point acquisition unit 14, and a target datum point acquisition unit 15.
- The facial feature point acquisition unit 11 is configured to use the facial feature point detection algorithm to obtain the feature points in the face image; the feature points include the left eyebrow center point, the right eyebrow center point, the left-eye left corner point, the left-eye right corner point, the right-eye left corner point, the right-eye right corner point, and the nose tip point.
- The first datum point acquisition unit 12 is configured to form a first triangle based on the left eyebrow center point, the left-eye left corner point, and the left-eye right corner point, and obtain the centroid of the first triangle as the first datum point.
- The second datum point acquisition unit 13 is configured to form a second triangle based on the right eyebrow center point, the right-eye left corner point, and the right-eye right corner point, and obtain the centroid of the second triangle as the second datum point.
- The third datum point acquisition unit 14 is configured to take the nose tip point as the third datum point.
- The target datum point acquisition unit 15 is configured to form, based on the first, second, and third datum points, the target datum points characterizing the eye positions and the nose tip position.
- The target reference points acquired by the glasses image acquisition module 30 include a first reference point, a second reference point, and a third reference point, where the first reference point is the center point of the upper edge of the left frame of the glasses image, the second reference point is the center point of the upper edge of the right frame, and the third reference point is the position a predetermined distance below the midpoint of the line joining the first and second reference points, the predetermined distance being obtained based on the glasses identifier.
- The image merging and adjustment module 40 includes an image translation unit 41, a datum line and reference line acquisition unit 42, an image rotation unit 43, and an image adjustment unit 44.
- The image translation unit 41 is configured to translate the glasses image on the face image based on the third datum point on the face image, so that the third reference point and the third datum point coincide.
- The datum line and reference line acquisition unit 42 is configured to obtain the line connecting the first datum point and the second datum point as the datum line, and the line connecting the first reference point and the second reference point as the reference line.
- The image rotation unit 43 is configured to rotate the glasses image on the face image based on the third datum point, so that the datum line and the reference line are parallel.
- The image adjustment unit 44 is configured to adjust the glasses image on the face image based on the first reference point and the first datum point, or based on the second reference point and the second datum point, so that the glasses image matches the face image.
- the glasses selection request further includes a user ID;
- the glasses virtual wearing device further includes a custom matching module 50, wherein the custom matching module 50 includes a custom request obtaining unit 51 and a custom information association unit 52.
- the custom request obtaining unit 51 is configured to obtain a custom matching request, and obtain glasses image adjustment information based on the custom matching request.
- The custom information association unit 52 is configured to associate the user ID, the glasses identifier, and the glasses image adjustment information, and save them in a custom matching table.
- Each module in the above-mentioned glasses virtual wearing device may be implemented in whole or in part by software, hardware, and a combination thereof.
- The above modules may be embedded in hardware in, or independent of, the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
- a computer device is provided.
- the computer device may be a terminal, and the internal structure diagram may be as shown in FIG. 8.
- The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus.
- the processor of the computer device is used to provide computing and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system and computer-readable instructions.
- The internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium.
- The network interface of the computer device is used to communicate with an external server through a network connection. The computer-readable instructions, when executed by a processor, implement a method for virtual wearing of glasses.
- a computer device including a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor.
- When the processor executes the computer-readable instructions, the following steps are implemented:
- Obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points that characterize the eye positions and the nose tip position;
- obtaining a glasses selection request, where the glasses selection request includes a glasses identifier; and obtaining a glasses image based on the glasses identifier, where the glasses image includes target reference points;
- merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points, so that the glasses image matches the face image.
- One or more non-volatile readable storage media storing computer-readable instructions are provided; the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
- Obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points that characterize the eye positions and the nose tip position;
- obtaining a glasses selection request, where the glasses selection request includes a glasses identifier; and obtaining a glasses image based on the glasses identifier, where the glasses image includes target reference points;
- merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points, so that the glasses image matches the face image.
- Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM) or external cache memory.
- RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Abstract
This application discloses a method, an apparatus, a computer device, and a storage medium for virtual wearing of glasses. The method includes: obtaining a face image, and obtaining, from the face image based on a facial feature point detection algorithm, target datum points characterizing the eye positions and the nose tip position; obtaining a glasses selection request, the glasses selection request including a glasses identifier; obtaining a glasses image based on the glasses identifier, the glasses image including target reference points; and merging the face image and the glasses image, and adjusting the glasses image based on the target datum points and the target reference points. By adjusting the face image and the glasses image through the target datum points and the target reference points, the technical solution of this application keeps the glasses image from deforming significantly during matching and ensures that the glasses image stays coordinated with the face image.
Description
This application is based on, and claims priority to, Chinese invention patent application No. 201810585001.5, filed on June 8, 2018 and entitled "Method, apparatus, computer device and storage medium for virtual wearing of glasses".
This application relates to the field of image processing, and in particular to a method, an apparatus, a computer device, and a storage medium for virtual wearing of glasses.
With the development of technology, more and more e-commerce platforms and offline merchants are offering virtual try-on services for glasses, allowing consumers to simulate real glasses wearing and preview the wearing effect.
Virtual glasses try-on usually relies on a face-fitting algorithm to merge and match the face image with the glasses image. However, current face-fitting algorithms mostly adjust according to the distance between the pupils of the two eyes, that is, based on the position information of only two points, and mostly adjust the face image itself. Because a face image carries a relatively large amount of data, the adjustment is computationally heavy, and the adjusted glasses image easily ends up uncoordinated with the face image.
SUMMARY
In view of this, it is necessary to address the above technical problem by providing a method, an apparatus, a computer device, and a storage medium for virtual wearing of glasses that keep the adjusted glasses image coordinated with the face image.
A method for virtual wearing of glasses includes:
obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position;
obtaining a glasses selection request, the glasses selection request including a glasses identifier;
obtaining a glasses image based on the glasses identifier, the glasses image including target reference points; and
merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
An apparatus for virtual wearing of glasses includes:
a face image acquisition module configured to obtain a face image, obtain feature points in the face image from the face image based on a facial feature point detection algorithm, and construct, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position;
a glasses selection request acquisition module configured to obtain a glasses selection request, the glasses selection request including a glasses identifier;
a glasses image acquisition module configured to obtain a glasses image based on the glasses identifier, the glasses image including target reference points; and
an image merging and adjustment module configured to merge the face image and the glasses image, and adjust the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer-readable instructions:
obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position;
obtaining a glasses selection request, the glasses selection request including a glasses identifier;
obtaining a glasses image based on the glasses identifier, the glasses image including target reference points; and
merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
One or more non-volatile readable storage media storing computer-readable instructions are provided, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to perform the following steps:
obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position;
obtaining a glasses selection request, the glasses selection request including a glasses identifier;
obtaining a glasses image based on the glasses identifier, the glasses image including target reference points; and
merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the description below; other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment of a method for virtual wearing of glasses according to an embodiment of this application;
FIG. 2 is a flowchart of a method for virtual wearing of glasses according to an embodiment of this application;
FIG. 3 is a flowchart of a method for virtual wearing of glasses according to an embodiment of this application;
FIG. 4 is a schematic diagram of face image feature points in a method for virtual wearing of glasses according to an embodiment of this application;
FIG. 5 is a flowchart of a method for virtual wearing of glasses according to an embodiment of this application;
FIG. 6 is a flowchart of a method for virtual wearing of glasses according to an embodiment of this application;
FIG. 7 is a schematic block diagram of an apparatus for virtual wearing of glasses according to an embodiment of this application;
FIG. 8 is a schematic diagram of a computer device according to an embodiment of this application.
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The method for virtual wearing of glasses provided in this application can be applied in the application environment shown in FIG. 1, in which a client (computer device) communicates with a server through a network. The client obtains a face image and a glasses selection request and sends them to the server; after receiving them, the server performs matching adjustment on the face image and the glasses image according to the target datum points and the target reference points. The client may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device; the server may be implemented as an independent server or a server cluster composed of multiple servers.
Optionally, the method for virtual wearing of glasses provided in this application may also be applied on a single computer device: the computer device obtains the face image and stores the glasses images, and can perform the matching adjustment on the face image and the glasses image according to the target datum points and the target reference points, achieving the effect of virtual wearing of glasses.
In one embodiment, as shown in FIG. 2, a method for virtual wearing of glasses is provided. The method is described by taking its application to a single computer device as an example and includes the following steps:
S10: Obtain a face image, obtain feature points in the face image from the face image based on a facial feature point detection algorithm, and construct, based on the feature points, target datum points characterizing the eye positions and the nose tip position.
Here, the face image refers to the facial image of the user performing virtual try-on. Optionally, it may be obtained by photographing the user's face, for example shot with a camera and then transmitted, or obtained by directly uploading facial image data. Facial feature points are points representing the contours of the parts of a human face, such as the eye corners, the nose tip, the eyebrow centers, or the eyeballs. A facial feature point detection algorithm automatically locates the facial feature points in an input face image. Optionally, the following facial feature point detection algorithms may be used to obtain facial feature point information:
(1) OpenCV's built-in Viola-Jones algorithm based on Haar features.
OpenCV is a cross-platform computer vision library that runs on Linux, Windows, Android, and Mac OS. It consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby, and MATLAB, and implements many general-purpose algorithms in image processing and computer vision; the Viola-Jones algorithm based on Haar features is one of its facial feature point detection algorithms. A Haar feature reflects gray-level changes in an image as differences between pixel blocks; Haar features fall into three categories: edge features, linear features, and center-diagonal features. The Viola-Jones algorithm performs face detection based on the Haar feature values of faces.
(2) dlib based on HOG+SVM features.
dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real-world problems. HOG stands for Histogram of Oriented Gradients; SVM (Support Vector Machine) refers to the support vector machine, a common discriminative method typically used for pattern recognition, classification, and regression analysis. HOG features combined with SVM classifiers are widely used in image recognition.
(3) The three face detection methods of the doppia library (DPM, HeadHunter, and HeadHunter_baseline).
DPM (Deformable Part Model) is an object detection algorithm that has become an important component of many classifiers and of segmentation, human pose, and behavior classification tasks. DPM can be seen as an extension of HOG: a histogram of gradient orientations is computed first, an SVM is trained to obtain the target gradient model, and classification is then performed so that the model matches the target. The HeadHunter and HeadHunter_baseline algorithms are methodologically the same as DPM; they differ only in the models used.
The following uses algorithm (1) as an example to illustrate the process of obtaining facial feature points:
First, sample images of input face images are obtained, preprocessed (normalized), and used for training to obtain a facial feature point model, namely the Haar-feature Viola-Jones detector. Then the input face image is obtained and given the same preprocessing, followed in turn by skin-color region segmentation, facial feature region segmentation, and facial feature region classification. Finally, matching computation between the Haar-feature Viola-Jones model and the classified facial feature regions yields the facial feature point information of the face image.
In step S10, a target datum point is a position point set on the face image as a reference for wearing glasses; for example, feature points in the face image that characterize the eye positions and the nose tip position are selected as target datum points.
Specifically, the face image of the try-on user is obtained by photographing the face or by directly uploading facial image data; the facial feature point detection algorithm is then applied to obtain the facial feature point information in the face image, from which the feature points characterizing the eye positions and the nose tip position are selected as the target datum points.
For example, a user performs virtual try-on through a computer device. The device captures the user's face image with a camera and applies OpenCV's built-in Haar-feature Viola-Jones algorithm to obtain facial feature points. A rectangular coordinate system is established in the face image, for example with the left eye corner of the face as the origin; the coordinate data in the face image is then obtained, and the points characterizing the eye positions and the nose tip position are selected as the target datum points.
In step S10, selecting target datum points that characterize the eye positions and the nose tip position facilitates the subsequent matching adjustment between the glasses image and the face image.
S20: Obtain a glasses selection request, the glasses selection request including a glasses identifier.
A glasses selection request is a request by the try-on user to select from the glasses information offered by the computer device. Optionally, it may be obtained from the user's click, touch, or long-press on the computer device. A glasses identifier is used to distinguish different glasses, for example the image or model information of the glasses.
Specifically, the glasses selection request may be triggered by the user's click, touch, or long-press on the computer device, and includes a glasses identifier. For example, when the user clicks to select a glasses image carrying a glasses identifier provided by the computer device, the device obtains a glasses selection request that includes that identifier.
S30: Obtain a glasses image based on the glasses identifier, the glasses image including target reference points.
The glasses image is the image corresponding to the glasses selected by the user. Optionally, when the glasses selection request is received, the glasses image may be retrieved on the computer device according to the glasses identifier. A target reference point is a position point preset in the glasses image to serve as a reference when the glasses image is matched and adjusted against the face image. Optionally, a rectangular coordinate system may be established on the glasses image to obtain the positions of the parts of the glasses, from which a certain number of position points are selected as target reference points; for example, the coordinate system may take the nose pad position as its origin, and the coordinates of the target reference points are then obtained. Preferably, three target reference points are selected, one of which is not collinear with the other two, so that the three points determine a plane, namely the plane of the glasses image.
Specifically, after the glasses identifier is obtained, the corresponding glasses image may be retrieved on the computer device according to the identifier; the reference point coordinates of the glasses image may then be obtained by setting up a rectangular coordinate system in the glasses image.
For example, after the computer device obtains the glasses identifier, such as a glasses model number, it retrieves the corresponding glasses image, establishes a rectangular coordinate system on it, and selects three coordinate points as the target reference points.
It should be understood that rectangular coordinate systems may be established separately in the face image and the glasses image and merged when the two images are merged; alternatively, the target reference points may be selected first, and their coordinates obtained from the rectangular coordinate system of the face image when merging of the glasses image into the face image begins.
S40: Merge the face image and the glasses image, and adjust the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
Specifically, the two images are merged with the glasses image on top and the face image underneath. During merging, translation, rotation, or scaling adjustments are performed according to the target datum points of the face image and the target reference points of the glasses image so that the two images match. Optionally, the adjustment ends when the nose pads and the temples in the glasses image reach preset positions on the face image; the preset positions may be set as needed and are not specifically limited in this embodiment. Optionally, to make the adjustment targeted, the target datum points and the target reference points may be set in correspondence: for example, if the eye corners and the nose tip of the face image are chosen as datum points, the glasses image may correspondingly choose the frame edges matching the eye corners and a point a certain distance below the nose bridge matching the nose tip as its reference points. In this way, the matching adjustment can perform translation, rotation, or scaling from the relative positions of the target reference points and the target datum points, keeping the process simple while better coordinating the glasses image with the face image.
In the embodiment corresponding to FIG. 2, a face image is obtained; feature points are extracted from it with the facial feature point detection algorithm; target datum points characterizing the eye positions and the nose tip position are constructed from those feature points; a glasses selection request is then obtained and the glasses image is retrieved by the glasses identifier in the request; finally, the face image and the glasses image are merged according to the target reference points of the glasses image and the target datum points of the face image, achieving the effect of virtual wearing of glasses. Acquiring the two images together with the target datum points and target reference points, and adjusting and merging on that basis, keeps the computation of the adjustment simple; meanwhile, using feature points that characterize the eye positions and the nose tip position as datum points avoids large deformation of the glasses image during matching, so the adjusted glasses image and face image achieve a coordinated effect.
In one embodiment, as shown in FIG. 3, step S10, namely obtaining the face image and obtaining from it, based on the facial feature point detection algorithm, the target datum points representing the relative positions of the eyes and the nose tip, specifically includes the following steps:
S11: Use the facial feature point detection algorithm to obtain the feature points in the face image, the feature points including the left eyebrow center point, the right eyebrow center point, the left-eye left corner point, the left-eye right corner point, the right-eye left corner point, the right-eye right corner point, and the nose tip point.
The left and right eyebrow center points are the center points of the left and right eyebrows of the face, respectively. Optionally, the intersection of the eyebrow with the perpendicular bisector of the line joining the eyebrow's two ends may be used as the eyebrow center point. The left-eye left corner point and the left-eye right corner point are the outermost points of the left and right corners of the left eye of the face; likewise, the right-eye left corner point and the right-eye right corner point are the outermost points of the left and right corners of the right eye.
S12: Form a first triangle based on the left eyebrow center point, the left-eye left corner point, and the left-eye right corner point, and obtain the centroid of the first triangle as the first datum point.
Referring to FIG. 4, which shows the datum points of a face image in an embodiment of this application, point A is the first datum point, point B is the second datum point, and point C is the third datum point.
Specifically, the left eyebrow center point, the left-eye left corner point, and the left-eye right corner point are connected, the three points serving as the vertices of the first triangle, and the centroid of the first triangle is taken as the first datum point (point A). The centroid is the intersection of the three medians of a triangle; given the three vertex coordinates, say (x1, y1), (x2, y2), and (x3, y3), the centroid's X coordinate is (x1 + x2 + x3)/3 and its Y coordinate is (y1 + y2 + y3)/3.
S13: Form a second triangle based on the right eyebrow center point, the right-eye left corner point, and the right-eye right corner point, and obtain the centroid of the second triangle as the second datum point.
Specifically, the right eyebrow center point, the right-eye left corner point, and the right-eye right corner point are connected, the three points serving as the vertices of the second triangle, and the centroid of the second triangle is taken as the second datum point (point B). The centroid of the second triangle is determined in the same way as in step S12 and is not repeated here.
S14: Take the nose tip point as the third datum point.
Specifically, the point where the line joining the widest points on the two sides of the nose wings intersects the nose bridge line may be taken as the nose tip point, which serves as the third datum point (point C).
S15: Form, based on the first, second, and third datum points, the target datum points characterizing the eye positions and the nose tip position.
The first, second, and third datum points constitute the target datum points, that is, the target datum points at which the face image characterizes the eye positions and the nose tip position. One of the three datum points is not collinear with the other two, so the three datum points determine the plane of the face image.
In the embodiment corresponding to FIG. 3, the feature points in the face image are obtained through the facial feature point detection algorithm; triangles are then formed from the left-eye and right-eye feature points of the face, whose centroids give the first and second datum points, and these together with the third datum point (the nose tip) constitute the target datum points. Because eyebrows and eyes differ across face images, datum points determined from the eyebrow centers and the eye corners greatly reduce the error caused by this variation. Using three datum points that are far apart from one another as the target datum points prevents large deformation of the glasses image in the subsequent matching adjustment and improves the coordination of the virtual wearing.
In one embodiment, the target reference points include a first reference point, a second reference point, and a third reference point.
Optionally, the first reference point is the center point of the upper edge of the left frame of the glasses image, the second reference point is the center point of the upper edge of the right frame, and the third reference point is the position a predetermined distance below the midpoint of the line joining the first and second reference points. When the upper edge of a frame is a curve rather than a straight line, the highest point of the upper edge of the left or right frame may serve as its upper-edge center point. The predetermined distance may be determined from the frame height of the glasses, which can be obtained from the glasses identifier. Optionally, the predetermined distance is downward by about two thirds of the frame height of the glasses image.
In this embodiment, taking the upper-edge center of the left frame as the first reference point, the upper-edge center of the right frame as the second reference point, and the position a predetermined distance below the midpoint of their connecting line as the third reference point makes the positions of the target reference points correspond to the target datum points, so that the matching adjustment between the face image and the glasses image, carried out with the target reference points and the target datum points, keeps the two images coordinated.
In one embodiment, step S40, namely merging the face image and the glasses image and adjusting the glasses image based on the target datum points and the target reference points, may specifically include the following steps, as shown in FIG. 5:
S41: Based on the third datum point on the face image, translate the glasses image on the face image so that the third reference point and the third datum point coincide.
Specifically, the coordinates (u, v) of the third datum point of the face image and the coordinates (u′, v′) of the third reference point of the glasses image may be obtained; based on (u, v), the glasses image is translated so that (u′, v′) coincides with (u, v). From the positional relationship between (u′, v′) and (u, v), a translation matrix I is obtained, where t_x is the translation amount in the X direction and t_y is the translation amount in the Y direction. After the translation amounts t_x and t_y are computed by the formula, each coordinate point in the glasses image is transformed based on the translation matrix I, which translates the glasses image.
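The matrices themselves appear only as figures in the publication; a standard homogeneous-coordinate reconstruction consistent with the surrounding definitions (an assumption, not the verbatim formula) is:

$$t_x = u - u', \qquad t_y = v - v', \qquad I = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = I \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$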
S42: Obtain the line joining the first datum point and the second datum point as the datum line, and obtain the line joining the first reference point and the second reference point as the reference line.
Specifically, the straight line connecting the first and second datum points is used as the datum line; likewise, the straight line connecting the first and second reference points is used as the reference line.
S43: Rotate the glasses image on the face image based on the third datum point so that the datum line and the reference line are parallel.
Specifically, the angle θ between the reference line and the datum line is obtained, a rotation matrix is constructed with the third datum point as the origin, and the glasses image coordinates obtained after the translation in step S41 are multiplied by the rotation matrix of angle θ, yielding a glasses image whose reference line is parallel to the datum line. Here, (x0, y0) is a coordinate point of the glasses image after translation, and (x0′, y0′) is the corresponding coordinate point after the glasses image is rotated.
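Under the same caveat (the rotation matrix is rendered only as a figure), the rotation by angle θ, with the third datum point taken as the origin, can be written as:

$$\begin{bmatrix} x_0' \\ y_0' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_0 \\ y_0 \end{bmatrix}$$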
S44: Adjust the glasses image on the face image based on the first reference point and the first datum point, or based on the second reference point and the second datum point, so that the glasses image matches the face image.
Here, (m, n) are the coordinates of the first or second datum point, (m0′, n0′) are the coordinates of the corresponding first or second reference point, s_x is the scaling factor for the X coordinate, and s_y is the scaling factor for the Y coordinate.
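Again assuming the third datum point is taken as the origin of the working frame (the figure-only formula is not reproduced in the text), the scaling factors and the scaling matrix S can be written as:

$$s_x = \frac{m}{m_0'}, \qquad s_y = \frac{n}{n_0'}, \qquad S = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}$$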
Each coordinate point of the glasses image is transformed based on the scaling matrix to scale the glasses image; the scaled, adjusted glasses image is thereby obtained.
Preferably, a first scaling matrix may be obtained based on the first datum point and the first reference point, and a second scaling matrix based on the second datum point and the second reference point; averaging the two yields a third scaling matrix, with which the scaling adjustment of the glasses image is performed, improving the accuracy of the virtual wearing of the glasses.
In the embodiment corresponding to FIG. 5, the glasses image is translated based on the third datum point of the face image, then rotated based on the third datum point, and finally scaled according to the positional relationship between the first reference point and the first datum point, or between the second reference point and the second datum point, so that the glasses image is coordinated with the face image, achieving the effect of virtual wearing of glasses and improving its accuracy.
In one embodiment, the glasses selection request further includes a user ID, which is an identifier on the computer device used to distinguish different users. In this embodiment, after step S40, that is, after the steps of merging the face image and the glasses image and adjusting the glasses image based on the datum points and the reference points, the method may specifically include the following steps, as shown in FIG. 6:
S51: Obtain a custom matching request, and obtain glasses image adjustment information based on the custom matching request.
A custom matching request is a matching request issued by the user, after the matching adjustment of the glasses image and the face image, according to the user's own needs or preferences. For example, user A is used to wearing glasses positioned slightly higher; to achieve user A's virtual wearing effect, the glasses image must be moved up a certain distance on top of the adjustments of the steps shown in the previous embodiment.
Specifically, the custom matching request may be obtained through an open interface, and the adjustment information for the glasses image is then derived from it. The open interface means that the user can click a corresponding adjustment control or enter corresponding adjustment parameters on the computer device, and the device adjusts the glasses image accordingly based on the user's clicks or input, satisfying the user's personalized needs. Optionally, the adjustment information may be vector information indicating movement by a preset distance up, down, left, or right; for example, the vector (1, 0) may indicate that the X coordinate is moved rightward by a preset distance of 1.
For example, the user adjusts the glasses image through the up, down, left, and right adjustment buttons provided by the computer device, which moves the image by the preset distance per click; when the user finishes the adjustment, the device obtains the glasses image adjustment information.
S52: Associate the user ID, the glasses identifier, and the glasses image adjustment information, and save them in a custom matching table.
The custom matching table may be stored on the computer device and holds the glasses image adjustment information generated by users through custom matching requests.
Specifically, the glasses identifier, the glasses image, and the adjustment information are associated with the user ID and saved in the custom matching table corresponding to the user ID. The user can then directly reuse the saved adjustment information at the next login to quickly preview the virtual wearing effect.
In the embodiment corresponding to FIG. 6, by obtaining the custom matching request, deriving the glasses image adjustment information from it, and saving the glasses identifier and the adjustment information associated with the user ID in the custom matching table, users can adjust the glasses image to their own needs and preferences; at the next try-on they can also adjust newly selected glasses according to the previous adjustment information, improving the convenience of virtual wearing.
It should be understood that the numbering of the steps in the above embodiments does not imply an execution order; the execution order of each process should be determined by its function and internal logic and does not limit the implementation of the embodiments of this application.
In one embodiment, an apparatus for virtual wearing of glasses is provided; the apparatus corresponds one-to-one to the method for virtual wearing of glasses in the above embodiments. As shown in FIG. 7, the apparatus includes a face image acquisition module 10, a glasses selection request acquisition module 20, a glasses image acquisition module 30, and an image merging and adjustment module 40. Each functional module is described in detail as follows:
The face image acquisition module 10 is configured to obtain a face image, obtain feature points in the face image from the face image based on a facial feature point detection algorithm, and construct, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position.
The glasses selection request acquisition module 20 is configured to obtain a glasses selection request, the glasses selection request including a glasses identifier.
The glasses image acquisition module 30 is configured to obtain a glasses image based on the glasses identifier, the glasses image including target reference points.
The image merging and adjustment module 40 is configured to merge the face image and the glasses image, and adjust the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
Preferably, the face image acquisition module 10 includes a facial feature point acquisition unit 11, a first datum point acquisition unit 12, a second datum point acquisition unit 13, a third datum point acquisition unit 14, and a target datum point acquisition unit 15.
The facial feature point acquisition unit 11 is configured to use the facial feature point detection algorithm to obtain the feature points in the face image, the feature points including the left eyebrow center point, the right eyebrow center point, the left-eye left corner point, the left-eye right corner point, the right-eye left corner point, the right-eye right corner point, and the nose tip point.
The first datum point acquisition unit 12 is configured to form a first triangle based on the left eyebrow center point, the left-eye left corner point, and the left-eye right corner point, and obtain the centroid of the first triangle as the first datum point.
The second datum point acquisition unit 13 is configured to form a second triangle based on the right eyebrow center point, the right-eye left corner point, and the right-eye right corner point, and obtain the centroid of the second triangle as the second datum point.
The third datum point acquisition unit 14 is configured to take the nose tip point as the third datum point.
The target datum point acquisition unit 15 is configured to form, based on the first, second, and third datum points, the target datum points characterizing the eye positions and the nose tip position.
Preferably, the target reference points acquired by the glasses image acquisition module 30 include a first reference point, a second reference point, and a third reference point, where the first reference point is the center point of the upper edge of the left frame of the glasses image, the second reference point is the center point of the upper edge of the right frame of the glasses image, and the third reference point is the position a predetermined distance below the midpoint of the line joining the first and second reference points, the predetermined distance being obtained based on the glasses identifier.
Preferably, the image merging and adjustment module 40 includes an image translation unit 41, a datum line and reference line acquisition unit 42, an image rotation unit 43, and an image adjustment unit 44.
The image translation unit 41 is configured to translate the glasses image on the face image based on the third datum point on the face image so that the third reference point and the third datum point coincide.
The datum line and reference line acquisition unit 42 is configured to obtain the line joining the first datum point and the second datum point as the datum line, and the line joining the first reference point and the second reference point as the reference line.
The image rotation unit 43 is configured to rotate the glasses image on the face image based on the third datum point so that the datum line and the reference line are parallel.
The image adjustment unit 44 is configured to adjust the glasses image on the face image based on the first reference point and the first datum point, or based on the second reference point and the second datum point, so that the glasses image matches the face image.
Further, the glasses selection request further includes a user ID; the apparatus further includes a custom matching module 50, which includes a custom request acquisition unit 51 and a custom information association unit 52.
The custom request acquisition unit 51 is configured to obtain a custom matching request and obtain glasses image adjustment information based on the custom matching request.
The custom information association unit 52 is configured to associate the user ID, the glasses identifier, and the glasses image adjustment information, and save them in a custom matching table.
For specific limitations of the apparatus, reference may be made to the limitations of the method above, which are not repeated here. Each module of the apparatus may be implemented in whole or in part by software, hardware, or a combination thereof; the modules may be embedded in hardware in, or independent of, the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in FIG. 8. The computer device includes a processor, a memory, a network interface, a display screen, and an input apparatus connected through a system bus. The processor provides computing and control capabilities. The memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The network interface communicates with an external server through a network connection. The computer-readable instructions, when executed by the processor, implement a method for virtual wearing of glasses.
In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer-readable instructions:
obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position;
obtaining a glasses selection request, the glasses selection request including a glasses identifier;
obtaining a glasses image based on the glasses identifier, the glasses image including target reference points; and
merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
In one embodiment, one or more non-volatile readable storage media storing computer-readable instructions are provided, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to implement the following steps:
obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position;
obtaining a glasses selection request, the glasses selection request including a glasses identifier;
obtaining a glasses image based on the glasses identifier, the glasses image including target reference points; and
merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
A person of ordinary skill in the art can understand that all or part of the processes of the methods of the above embodiments can be accomplished by computer-readable instructions instructing the relevant hardware; the computer-readable instructions may be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example; in practical applications, the above functions may be allocated to different functional units or modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features equivalently replaced; such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of this application, and shall all fall within the protection scope of this application.
Claims (20)
- A method for virtual wearing of glasses, comprising: obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position; obtaining a glasses selection request, the glasses selection request comprising a glasses identifier; obtaining a glasses image based on the glasses identifier, the glasses image comprising target reference points; and merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
- The method for virtual wearing of glasses according to claim 1, wherein obtaining feature points in the face image from the face image based on the facial feature point detection algorithm and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position comprises the following steps: using the facial feature point detection algorithm to obtain the feature points in the face image, the feature points comprising a left eyebrow center point, a right eyebrow center point, a left-eye left corner point, a left-eye right corner point, a right-eye left corner point, a right-eye right corner point, and a nose tip point; forming a first triangle based on the left eyebrow center point, the left-eye left corner point, and the left-eye right corner point, and obtaining the centroid of the first triangle as a first datum point; forming a second triangle based on the right eyebrow center point, the right-eye left corner point, and the right-eye right corner point, and obtaining the centroid of the second triangle as a second datum point; taking the nose tip point as a third datum point; and forming, based on the first datum point, the second datum point, and the third datum point, the target datum points characterizing the eye positions and the nose tip position.
- The method for virtual wearing of glasses according to claim 2, wherein the target reference points comprise a first reference point, a second reference point, and a third reference point; the first reference point is the center point of the upper edge of the left frame of the glasses image; the second reference point is the center point of the upper edge of the right frame of the glasses image; and the third reference point is the position a predetermined distance below the midpoint of the line joining the first reference point and the second reference point, wherein the predetermined distance is obtained based on the glasses identifier.
- The method for virtual wearing of glasses according to claim 3, wherein merging the face image and the glasses image and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image comprises the following steps: translating the glasses image on the face image based on the third datum point on the face image so that the third reference point and the third datum point coincide; obtaining the line joining the first datum point and the second datum point as a datum line, and obtaining the line joining the first reference point and the second reference point as a reference line; rotating the glasses image on the face image based on the third datum point so that the datum line and the reference line are parallel; and adjusting the glasses image on the face image based on the first reference point and the first datum point, or based on the second reference point and the second datum point, so that the glasses image matches the face image.
- The method for virtual wearing of glasses according to claim 1, wherein the glasses selection request further comprises a user ID; and after the step of merging the face image and the glasses image and adjusting the glasses image based on the target datum points and the target reference points, the method further comprises the following steps: obtaining a custom matching request, and obtaining glasses image adjustment information based on the custom matching request; and associating the user ID, the glasses identifier, and the glasses image adjustment information, and saving them in a custom matching table.
- An apparatus for virtual wearing of glasses, comprising: a face image acquisition module configured to obtain a face image, obtain feature points in the face image from the face image based on a facial feature point detection algorithm, and construct, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position; a glasses selection request acquisition module configured to obtain a glasses selection request, the glasses selection request comprising a glasses identifier; a glasses image acquisition module configured to obtain a glasses image based on the glasses identifier, the glasses image comprising target reference points; and an image merging and adjustment module configured to merge the face image and the glasses image, and adjust the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
- The apparatus for virtual wearing of glasses according to claim 6, wherein the face image acquisition module comprises a facial feature point acquisition unit, a first datum point acquisition unit, a second datum point acquisition unit, a third datum point acquisition unit, and a target datum point acquisition unit; the facial feature point acquisition unit is configured to use the facial feature point detection algorithm to obtain the feature points in the face image, the feature points comprising a left eyebrow center point, a right eyebrow center point, a left-eye left corner point, a left-eye right corner point, a right-eye left corner point, a right-eye right corner point, and a nose tip point; the first datum point acquisition unit is configured to form a first triangle based on the left eyebrow center point, the left-eye left corner point, and the left-eye right corner point, and obtain the centroid of the first triangle as a first datum point; the second datum point acquisition unit is configured to form a second triangle based on the right eyebrow center point, the right-eye left corner point, and the right-eye right corner point, and obtain the centroid of the second triangle as a second datum point; the third datum point acquisition unit is configured to take the nose tip point as a third datum point; and the target datum point acquisition unit is configured to form, based on the first datum point, the second datum point, and the third datum point, the target datum points characterizing the eye positions and the nose tip position.
- The apparatus for virtual wearing of glasses according to claim 7, wherein the target reference points comprise a first reference point, a second reference point, and a third reference point; the first reference point is the center point of the upper edge of the left frame of the glasses image; the second reference point is the center point of the upper edge of the right frame of the glasses image; and the third reference point is the position a predetermined distance below the midpoint of the line joining the first reference point and the second reference point, wherein the predetermined distance is obtained based on the glasses identifier.
- The apparatus for virtual wearing of glasses according to claim 8, wherein the image merging and adjustment module comprises an image translation unit, a datum line and reference line acquisition unit, an image rotation unit, and an image adjustment unit; the image translation unit is configured to translate the glasses image on the face image based on the third datum point on the face image so that the third reference point and the third datum point coincide; the datum line and reference line acquisition unit is configured to obtain the line joining the first datum point and the second datum point as a datum line, and obtain the line joining the first reference point and the second reference point as a reference line; the image rotation unit is configured to rotate the glasses image on the face image based on the third datum point so that the datum line and the reference line are parallel; and the image adjustment unit is configured to adjust the glasses image on the face image based on the first reference point and the first datum point, or based on the second reference point and the second datum point, so that the glasses image matches the face image.
- The apparatus for virtual wearing of glasses according to claim 7, wherein the glasses selection request further comprises a user ID; the apparatus further comprises a custom matching module, the custom matching module comprising a custom request acquisition unit and a custom information association unit; the custom request acquisition unit is configured to obtain a custom matching request and obtain glasses image adjustment information based on the custom matching request; and the custom information association unit is configured to associate the user ID, the glasses identifier, and the glasses image adjustment information, and save them in a custom matching table.
- A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps: obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position; obtaining a glasses selection request, the glasses selection request comprising a glasses identifier; obtaining a glasses image based on the glasses identifier, the glasses image comprising target reference points; and merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
- The computer device according to claim 11, wherein obtaining feature points in the face image from the face image based on the facial feature point detection algorithm and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position comprises: using the facial feature point detection algorithm to obtain the feature points in the face image, the feature points comprising a left eyebrow center point, a right eyebrow center point, a left-eye left corner point, a left-eye right corner point, a right-eye left corner point, a right-eye right corner point, and a nose tip point; forming a first triangle based on the left eyebrow center point, the left-eye left corner point, and the left-eye right corner point, and obtaining the centroid of the first triangle as a first datum point; forming a second triangle based on the right eyebrow center point, the right-eye left corner point, and the right-eye right corner point, and obtaining the centroid of the second triangle as a second datum point; taking the nose tip point as a third datum point; and forming, based on the first datum point, the second datum point, and the third datum point, the target datum points characterizing the eye positions and the nose tip position.
- The computer device according to claim 12, wherein the target reference points comprise a first reference point, a second reference point, and a third reference point; the first reference point is the center point of the upper edge of the left frame of the glasses image; the second reference point is the center point of the upper edge of the right frame of the glasses image; and the third reference point is the position a predetermined distance below the midpoint of the line joining the first reference point and the second reference point, wherein the predetermined distance is obtained based on the glasses identifier.
- The computer device according to claim 13, wherein merging the face image and the glasses image and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image comprises: translating the glasses image on the face image based on the third datum point on the face image so that the third reference point and the third datum point coincide; obtaining the line joining the first datum point and the second datum point as a datum line, and obtaining the line joining the first reference point and the second reference point as a reference line; rotating the glasses image on the face image based on the third datum point so that the datum line and the reference line are parallel; and adjusting the glasses image on the face image based on the first reference point and the first datum point, or based on the second reference point and the second datum point, so that the glasses image matches the face image.
- The computer device according to claim 11, wherein the glasses selection request further comprises a user ID; and after the step of merging the face image and the glasses image and adjusting the glasses image based on the target datum points and the target reference points, the processor, when executing the computer-readable instructions, further implements the following steps: obtaining a custom matching request, and obtaining glasses image adjustment information based on the custom matching request; and associating the user ID, the glasses identifier, and the glasses image adjustment information, and saving them in a custom matching table.
- One or more non-volatile readable storage media storing computer-readable instructions, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps: obtaining a face image, obtaining feature points in the face image from the face image based on a facial feature point detection algorithm, and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position; obtaining a glasses selection request, the glasses selection request comprising a glasses identifier; obtaining a glasses image based on the glasses identifier, the glasses image comprising target reference points; and merging the face image and the glasses image, and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image.
- The non-volatile readable storage medium according to claim 16, wherein obtaining feature points in the face image from the face image based on the facial feature point detection algorithm and constructing, based on the feature points in the face image, target datum points characterizing the eye positions and the nose tip position comprises: using the facial feature point detection algorithm to obtain the feature points in the face image, the feature points comprising a left eyebrow center point, a right eyebrow center point, a left-eye left corner point, a left-eye right corner point, a right-eye left corner point, a right-eye right corner point, and a nose tip point; forming a first triangle based on the left eyebrow center point, the left-eye left corner point, and the left-eye right corner point, and obtaining the centroid of the first triangle as a first datum point; forming a second triangle based on the right eyebrow center point, the right-eye left corner point, and the right-eye right corner point, and obtaining the centroid of the second triangle as a second datum point; taking the nose tip point as a third datum point; and forming, based on the first datum point, the second datum point, and the third datum point, the target datum points characterizing the eye positions and the nose tip position.
- The non-volatile readable storage medium according to claim 17, wherein the target reference points comprise a first reference point, a second reference point, and a third reference point; the first reference point is the center point of the upper edge of the left frame of the glasses image; the second reference point is the center point of the upper edge of the right frame of the glasses image; and the third reference point is the position a predetermined distance below the midpoint of the line joining the first reference point and the second reference point, wherein the predetermined distance is obtained based on the glasses identifier.
- The non-volatile readable storage medium according to claim 18, wherein merging the face image and the glasses image and adjusting the glasses image on the face image based on the target datum points and the target reference points so that the glasses image matches the face image comprises: translating the glasses image on the face image based on the third datum point on the face image so that the third reference point and the third datum point coincide; obtaining the line joining the first datum point and the second datum point as a datum line, and obtaining the line joining the first reference point and the second reference point as a reference line; rotating the glasses image on the face image based on the third datum point so that the datum line and the reference line are parallel; and adjusting the glasses image on the face image based on the first reference point and the first datum point, or based on the second reference point and the second datum point, so that the glasses image matches the face image.
- The non-volatile readable storage medium according to claim 16, wherein the glasses selection request further comprises a user ID; and after the step of merging the face image and the glasses image and adjusting the glasses image based on the target datum points and the target reference points, the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps: obtaining a custom matching request, and obtaining glasses image adjustment information based on the custom matching request; and associating the user ID, the glasses identifier, and the glasses image adjustment information, and saving them in a custom matching table.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810585001.5 | 2018-06-08 | | |
| CN201810585001.5A (granted as CN109063539B) | 2018-06-08 | 2018-06-08 | Method, apparatus, computer device and storage medium for virtual wearing of glasses |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2019232871A1 | 2019-12-12 |
Family ID: 64820633
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/094391 (published as WO2019232871A1) | Method, apparatus, computer device and storage medium for virtual wearing of glasses | 2018-06-08 | 2018-07-04 |
Country Status (2)

| Country | Link |
|---|---|
| CN | CN109063539B |
| WO | WO2019232871A1 |
Families Citing this family (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110533775B | 2019-09-18 | 2023-04-18 | 广州智美科技有限公司 | Glasses matching method, apparatus and terminal based on a 3D face |
| CN110910512B | 2019-11-29 | 2024-04-30 | 北京达佳互联信息技术有限公司 | Adaptive adjustment method and apparatus for virtual objects, computer device and storage medium |
| CN110958463A | 2019-12-06 | 2020-04-03 | 广州华多网络科技有限公司 | Method, apparatus and device for detecting and compositing the display position of virtual gifts |
| CN111062328B | 2019-12-18 | 2023-10-03 | 中新智擎科技有限公司 | Image processing method and apparatus, and intelligent robot |
| CN112418138B | 2020-12-04 | 2022-08-19 | 兰州大学 | Glasses try-on system |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105809507A | 2016-02-29 | 2016-07-27 | 北京酷配科技有限公司 | Virtual try-on method and virtual try-on apparatus |
| CN107103513A | 2017-04-23 | 2017-08-29 | 广州帕克西软件开发有限公司 | Virtual glasses try-on method |
| US20170323374A1 | 2016-05-06 | 2017-11-09 | Seok Hyun Park | Augmented reality image analysis methods for the virtual fashion items worn |
| CN107408315A | 2015-02-23 | 2017-11-28 | Fittingbox | Process and method for real-time, physically accurate and realistic glasses try-on |
Family Cites Families (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104408764B | 2014-11-07 | 2017-05-24 | 成都好视界眼镜有限公司 | Virtual glasses try-on method, apparatus and system |
| CN105975920B | 2016-04-28 | 2019-11-26 | 上海交通大学 | Glasses try-on method and system |

Prosecution timeline (2018):
- 2018-06-08: CN application CN201810585001.5A filed (granted as CN109063539B, status: active)
- 2018-07-04: PCT application PCT/CN2018/094391 filed (published as WO2019232871A1, status: application filing)
Cited By (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111723754A | 2020-06-24 | 2020-09-29 | 深圳数联天下智能科技有限公司 | Left/right eye recognition method, recognition apparatus, terminal device and storage medium |
| CN111723754B | 2020-06-24 | 2024-05-31 | 深圳数联天下智能科技有限公司 | Left/right eye recognition method, recognition apparatus, terminal device and storage medium |
| CN112328084A | 2020-11-12 | 2021-02-05 | 北京态璞信息科技有限公司 | Positioning method and apparatus for three-dimensional virtual glasses, and electronic device |
| CN114267080A | 2021-12-30 | 2022-04-01 | 淮阴工学院 | Indiscriminate blink recognition method based on angle change |
Also Published As

| Publication number | Publication date |
|---|---|
| CN109063539A | 2018-12-21 |
| CN109063539B | 2023-04-18 |
Legal Events

| Code | Title | Description |
|---|---|---|
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 18921454; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11/03/2021) |
| 122 | EP: PCT application non-entry in European phase | Ref document number: 18921454; Country of ref document: EP; Kind code of ref document: A1 |