WO2019200719A1 - Method, apparatus, and electronic device for generating a three-dimensional face model


Info

Publication number
WO2019200719A1
WO2019200719A1 · PCT/CN2018/094072 · CN2018094072W
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
face image
map
standard
feature point
Prior art date
Application number
PCT/CN2018/094072
Other languages
English (en)
French (fr)
Inventor
李建亿
Original Assignee
太平洋未来科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 太平洋未来科技(深圳)有限公司
Publication of WO2019200719A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation

Definitions

  • the present invention relates to the technical field of image processing, and in particular, to a method, an apparatus, and an electronic device for generating a three-dimensional face model.
  • the inventor has found that methods for generating a three-dimensional face model in the prior art often require a large amount of storage space and processing resources, and are likely to cause the mobile terminal to freeze.
  • the method, device and electronic device for generating a three-dimensional face model according to embodiments of the present invention are used to solve at least the above problems in the related art.
  • An embodiment of the present invention provides a method for generating a three-dimensional face model, including:
  • Identifying a face image in a picture; acquiring, by using a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points have a correspondence with second feature points on a two-dimensional standard map, and the two-dimensional standard map is an expanded topology map of a standard three-dimensional model; mapping the feature information of the first feature points onto the two-dimensional standard map based on the correspondence to generate a two-dimensional face image map; and pasting the two-dimensional face image map onto the standard three-dimensional model to obtain a three-dimensional face image corresponding to the face image.
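The four claimed steps can be sketched end to end with synthetic data. This is only an illustrative toy: the array shapes, the 68-point count, and the identity correspondence (point i on the face corresponds to point i on the standard map) are assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-2: "first feature points" detected on the face image, each carrying
# a texture (grayscale) value; in the patent these come from a trained model.
face_img = rng.random((64, 64))
first_pts = rng.integers(0, 64, size=(68, 2))          # (x, y) on the face image
textures = face_img[first_pts[:, 1], first_pts[:, 0]]  # texture value per point

# "Second feature points" on the 2D standard map (the expanded topology map
# of the standard 3D model); point i corresponds to first_pts[i].
second_pts = rng.integers(0, 128, size=(68, 2))        # (u, v) on the standard map
std_map = np.zeros((128, 128))

# Step 3: map each first feature point's texture value onto its corresponding
# second feature point, yielding the two-dimensional face image map.
std_map[second_pts[:, 1], second_pts[:, 0]] = textures

# Step 4: paste the map onto the standard 3D model via the fixed mapping:
# each mesh vertex carries (u, v) coordinates into the map.
vertex_uv = rng.integers(0, 128, size=(500, 2))
vertex_tex = std_map[vertex_uv[:, 1], vertex_uv[:, 0]]  # one value per vertex
```

Because the standard map and the standard model are fixed, only step 3 depends on the input face; steps 1, 2, and 4 reuse precomputed structures, which is the source of the efficiency claim.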
  • the feature information includes a texture value
  • the mapping of the feature information of the first feature points onto the two-dimensional standard map based on the correspondence includes: performing deformation processing on the face image based on the correspondence; finding, on the two-dimensional standard map, the second feature point corresponding to each deformed first feature point; and mapping the texture value corresponding to the deformed first feature point onto that second feature point.
  • the pasting of the two-dimensional face image map onto the standard three-dimensional model to obtain a three-dimensional face image corresponding to the face image includes: mapping the two-dimensional face image map onto the standard three-dimensional model based on the mapping relationship between the two-dimensional standard map and the standard three-dimensional model.
  • the method further includes: preparing a plurality of training samples according to the second feature points on the two-dimensional standard map, and training the feature recognition model according to the training samples.
  • the recognizing of the face image in the picture includes: recognizing face key points in the picture by using a face recognition model to obtain the coordinate positions of the key points, and determining the face image according to the coordinate positions of the key points.
  • the picture in the three-dimensional face model generation method is acquired by a camera of the electronic device
  • the camera includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory alloy optical image stabilization device; the lens is fixed on the autofocus voice coil motor, and the image sensor converts the optical scene captured by the lens into image data; the autofocus voice coil motor is mounted on the micro memory alloy optical image stabilization device, and the processor of the electronic device drives the micro memory alloy optical image stabilization device according to lens shake data detected by a gyroscope, so as to achieve shake compensation for the lens;
  • the micro memory alloy optical image stabilization device includes a movable plate and a substrate; the autofocus voice coil motor is mounted on the movable plate, the substrate is larger in size than the movable plate, and the movable plate is mounted on the substrate
  • a plurality of movable supports is disposed between the movable plate and the substrate; four side walls are provided on the periphery of the substrate, a notch is formed in the middle of each side wall, and a micro switch is installed in each notch; the movable member of the micro switch can open or close the notch under the instruction of the processor, and the side of the movable member near the movable plate is provided with a strip-shaped electrical contact disposed along the width direction of the movable member
  • the substrate is provided with a temperature control circuit connected to the electrical contact
  • the processor controls opening and closing of the temperature control circuit according to a lens shake direction detected by the gyroscope
  • an elastic member is connected to the middle of each of the four sides of the movable plate
  • the elastic member is a spring.
  • the electronic device is a camera, and the camera is mounted on a bracket, the bracket includes a mounting seat, a support shaft, and three support frames hinged on the support shaft;
  • the mounting seat includes a first mounting plate and a second mounting plate that are perpendicular to each other, and both mounting plates are used for mounting the camera; the support shaft is vertically mounted on the bottom surface of the first mounting plate, and the end of the support shaft away from the mounting seat is provided with a circumferential surface whose radial dimension is larger than that of the support shaft; the three support frames are mounted on the support shaft from top to bottom, and the horizontal projections of each two deployed support frames form an angle; the support shaft is a telescopic rod member and includes a tube body connected to the mounting seat and a rod body partially retractable into the tube body, the portion of the rod body extending into the tube body comprising a first section, a second section, a third section, and a fourth section hinged in sequence, with the first section connected to the tube body;
  • the end of the first section adjacent to the second section is provided with a mounting groove, and a locking member is hinged in the mounting groove;
  • a mounting groove is likewise disposed at the end of the second section adjacent to the third section, and a locking member is hinged in that mounting groove; the end of the third section adjacent to the second section is provided with a locking hole detachably engaged with the locking member; the end of the third section adjacent to the fourth section is provided with a mounting groove in which a locking member is hinged; and the end of the fourth section adjacent to the third section is provided with a locking hole detachably engaged with the locking member.
  • each of the support frames is further connected with a distance adjusting device
  • the distance adjusting device comprises a bearing ring mounted at the bottom of the support frame, a rotating ring connected to the bearing ring, a tube body, a screw, a threaded sleeve, and a support rod; one end of the tube body is provided with a plug, part of the screw is installed in the tube body through the plug, and the plug is provided with an internal thread adapted to the screw; the other part of the screw is connected to the rotating ring; one end of the threaded sleeve is installed in the tube body and screwed onto the screw, and the other end of the threaded sleeve protrudes outside the tube body and is fixedly connected to the support rod; the inner wall of the tube body is provided with a protrusion, and the outer side wall of the threaded sleeve is provided with a slide rail adapted to the protrusion along its length direction, and the tube body includes adjacent
  • An identification module configured to identify a face image in a picture; an acquisition module configured to acquire, by using the feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points have a correspondence with second feature points on the two-dimensional standard map, and the two-dimensional standard map is an expanded topology map of the standard three-dimensional model; a mapping module configured to map the feature information of the first feature points onto the two-dimensional standard map based on the correspondence to generate a two-dimensional face image map; and a generating module configured to paste the two-dimensional face image map onto the standard three-dimensional model to obtain the three-dimensional face image corresponding to the face image.
  • the feature information includes a texture value
  • the mapping module includes: a deformation processing sub-module configured to perform deformation processing on the face image based on the correspondence; a search sub-module configured to find, on the two-dimensional standard map, the second feature point corresponding to each deformed first feature point based on the correspondence; and a mapping sub-module configured to map the texture value corresponding to the deformed first feature point onto that second feature point.
  • the generating module is specifically configured to map the two-dimensional face image map onto the standard three-dimensional model based on the mapping relationship between the two-dimensional standard map and the standard three-dimensional model, to obtain the three-dimensional face image corresponding to the face image.
  • the device further includes: a training module, configured to prepare a plurality of training samples according to the second feature points on the two-dimensional standard map, and obtain the feature recognition model according to the training samples.
  • the identification module is specifically configured to recognize face key points in the picture by using a face recognition model, obtain the coordinate positions of the key points, and determine the face image according to the coordinate positions of the key points.
  • a still further aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor;
  • the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the above-described three-dimensional face model generation methods of the embodiments of the present invention.
  • In the three-dimensional face model generation method, device, and electronic device provided by the embodiments of the present invention, because the two-dimensional standard map is a two-dimensional image, deforming the likewise two-dimensional face image based on the two-dimensional standard map is faster.
  • Because the two-dimensional standard map is the expanded topology map of the standard three-dimensional model, a predetermined mapping relationship exists between the two, and after the face image is mapped onto the two-dimensional standard map, the two-dimensional face image map is generated.
  • The two-dimensional face image map contains the feature information of the face image, and the mapping relationship can be reused directly in the subsequent step to paste the updated two-dimensional face image map onto the standard three-dimensional model; this avoids adjusting the standard three-dimensional model and recomputing the transformed mapping relationship, which improves generation efficiency and saves storage space and processing resources.
  • Because a common standard three-dimensional model is adopted, there is no need to find a corresponding three-dimensional model for each different face image, which further improves processing efficiency.
  • FIG. 1 is a flowchart of a method for generating a three-dimensional face model according to an embodiment of the present invention;
  • FIG. 2 is a specific flowchart of step S103 according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of a method for generating a three-dimensional face model according to an embodiment of the present invention;
  • FIG. 4 is a structural diagram of a three-dimensional face model generating apparatus according to an embodiment of the present invention;
  • FIG. 5 is a structural diagram of a three-dimensional face model generating apparatus according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of the hardware structure of an electronic device for performing the three-dimensional face model generation method provided by a method embodiment of the present invention;
  • FIG. 7 is a structural diagram of a camera provided by an embodiment of the present invention;
  • FIG. 8 is a structural diagram of a micro memory alloy optical image stabilization device according to an embodiment of the present invention;
  • FIG. 9 is a structural diagram showing an operating state of a micro memory alloy optical image stabilization device according to an embodiment of the present invention;
  • FIG. 10 is a structural diagram of a bracket according to an embodiment of the present invention;
  • FIG. 11 is a structural diagram of a support shaft according to an embodiment of the present invention;
  • FIG. 12 is a structural diagram of a distance adjusting device according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a method for generating a three-dimensional face model according to an embodiment of the present invention. As shown in FIG. 1 , a method for generating a three-dimensional face model according to an embodiment of the present invention includes:
  • the face image in the image needs to be recognized.
  • a face image can be recognized in a picture acquired by real-time shooting, or in a picture stored locally on the terminal.
  • the feature information of the face image includes, but is not limited to, the size of the face image and the rotation angle of the face image.
  • the range of the face image can be identified according to edge information and/or color information of the image.
  • alternatively, pre-defined key points can be detected, and the positions of the key points used to determine the size of the face image and the rotation angle of the face image.
  • the eyebrows, eyes, nose, face contour, mouth, and the like in the face image each have a plurality of key points; that is, the positions and contours of the eyebrows, eyes, nose, face, and mouth can be determined from the coordinate positions of the key points.
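As a concrete illustration of how key-point coordinates yield a size and rotation angle, the sketch below uses the two outer eye corners; the choice of these two particular points is an assumption for illustration, since the text only states that key points determine size and angle.

```python
import math

def size_and_roll(left_eye, right_eye):
    """Estimate (inter-ocular distance, in-plane roll angle in degrees)
    from two key points, assuming a Y-up coordinate system."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    distance = math.hypot(dx, dy)            # proxy for face image size
    roll = math.degrees(math.atan2(dy, dx))  # in-plane rotation angle
    return distance, roll

d, a = size_and_roll((100, 120), (160, 120))
print(d, a)  # 60.0 0.0: level eyes give zero roll
```

The same two points scale the face image to a canonical size before deformation, so the distance serves double duty as a normalization factor.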
  • positive and negative samples for face image key point recognition may be prepared in advance, and the face recognition model is trained according to the positive and negative samples.
  • the picture to be recognized is input into the face recognition model, and the key point coordinates of the face image in the picture to be recognized are output.
  • the coordinate system can take the lower-left corner of the picture as the origin, with the rightward direction as the positive X axis and the upward direction as the positive Y axis; coordinate values are measured in pixels.
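Since most image libraries place the origin at the top-left corner with Y pointing down, a small conversion helper (hypothetical, for illustration) bridges that convention and the lower-left-origin, Y-up system described here:

```python
def to_lower_left_origin(x_img, y_img, img_height):
    """Convert top-left-origin pixel coordinates (the common image
    convention) to the lower-left-origin, Y-up system used above."""
    return x_img, (img_height - 1) - y_img

print(to_lower_left_origin(5, 0, 100))   # (5, 99): top row becomes y = 99
print(to_lower_left_origin(5, 99, 100))  # (5, 0): bottom row becomes y = 0
```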
  • then the watershed image segmentation algorithm is used to obtain the coordinate information of the forehead and chin, and the coordinates of all the key points thus obtained are integrated to determine the complete face image.
  • the first feature point has a corresponding relationship with the second feature point on the two-dimensional standard map
  • the two-dimensional standard map is an expanded topology diagram of the standard three-dimensional model.
  • the standard 3D model is obtained by statistically modeling face data and extracting the features common to faces; it reflects the geometric structure of a typical face.
  • a plurality of feature points is determined in advance on the standard three-dimensional model; these feature points reflect the geometric structure and texture information of the facial organs (such as the eyes, nose, mouth, forehead, and face contour). For example, feature points may be marked at the corners of the eyes, at the corners of the eyebrows, and so on.
  • Each organ can be composed of a plurality of feature points through which a unique face object is identified.
  • the feature information in the embodiment of the present invention includes, but is not limited to, a texture value of a feature point.
  • the target positions that best reflect the geometric structure and texture information of each part of the face can be determined by pre-computation over a large amount of face data, and these target positions used as the feature points; feature points can also be defined according to the FAP (Facial Animation Parameters) and FDP (Facial Definition Parameters) of MPEG-4; or an ASM (active shape model) or AAM (active appearance model) can be used to determine the feature points. These techniques are well known to those skilled in the art and are not described here.
  • the standard three-dimensional model is unfolded to obtain the corresponding two-dimensional standard map and the second feature points on the two-dimensional standard map; the second feature points correspond to the feature points on the standard three-dimensional model.
  • the method further includes: preparing a plurality of training samples according to the second feature points on the two-dimensional standard map, and training the feature recognition model according to the training samples. Specifically, a plurality of different two-dimensional face maps is prepared in advance, corresponding feature points are marked on each two-dimensional face map according to the relative positions of the second feature points on the two-dimensional standard map, and the feature information of each marked feature point is extracted, thereby obtaining a plurality of training samples.
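Marking corresponding feature points "according to the relative positions of the second feature points" can be sketched by normalizing the standard-map coordinates and rescaling them to each training map's own size; the normalization scheme below is an illustrative assumption.

```python
import numpy as np

# Second feature points on a 128 x 128 standard map (synthetic values).
second_pts = np.array([[32, 48], [96, 48], [64, 100]], dtype=float)
std_w, std_h = 128, 128
relative = second_pts / [std_w, std_h]  # position as a fraction of map size

def mark_points(train_w, train_h):
    """Place the corresponding feature points on a train_w x train_h
    training face map, preserving the relative layout."""
    return np.round(relative * [train_w, train_h]).astype(int)

print(mark_points(256, 256))  # same layout at double resolution
```

Annotations produced this way stay index-aligned with the second feature points, which is what later gives the trained model's outputs (the first feature points) their built-in correspondence.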
  • a convolutional neural network and the training samples can be used for training to obtain the feature recognition model. After the face image obtained in step S101 is input into the feature recognition model, the first feature points on the face image and their feature information can be obtained.
  • Since the feature recognition model is trained on samples marked according to the second feature points, the first feature points have a correspondence with the second feature points on the two-dimensional standard map.
  • the first feature point indicates the location of the face feature, which can roughly reflect the outline of the face in the picture and the outline of the facial features.
  • the second feature point corresponding to each first feature point is found according to the correspondence, and the texture value of the first feature point is mapped onto its corresponding second feature point; when all the first feature points have been processed, the two-dimensional face image map is generated.
  • the face image needs to be deformed to match the two-dimensional standard map.
  • this step may include the following steps:
  • S1031: Perform deformation processing on the face image based on the correspondence relationship.
  • the face image is deformed based on the correspondence between the first feature point of the face image and the second feature point on the two-dimensional standard map.
  • the deformation process is performed based on a correspondence between the first feature point and the second feature point.
  • There are several ways to deform an image based on correspondences between feature points, for example RBF (radial basis function) interpolation, TPS (thin plate spline) interpolation, and MLS (moving least squares).
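As a concrete instance of the first option, here is a minimal RBF interpolation sketch: fit a smooth displacement field that carries the source points (first feature points) onto the target points (second feature points), then apply it to arbitrary coordinates. The Gaussian kernel and its width are illustrative choices; TPS and MLS follow the same fit-then-apply pattern with different basis functions.

```python
import numpy as np

def fit_rbf_warp(src, dst, sigma=50.0):
    """Return a function mapping (N, 2) coordinates through an RBF warp
    fitted so that src points land (approximately) on dst points."""
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))  # kernel matrix at control points
    # Solve for per-control-point weights; tiny ridge term for stability.
    W = np.linalg.solve(K + 1e-8 * np.eye(len(src)), dst - src)

    def warp(pts):
        d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return pts + np.exp(-d2 / (2 * sigma ** 2)) @ W
    return warp

src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
dst = np.array([[10.0, 5.0], [110.0, 0.0], [0.0, 90.0]])
warp = fit_rbf_warp(src, dst)
print(np.allclose(warp(src), dst, atol=1e-4))  # control points hit their targets
```

Points far from every control point are left nearly unchanged, which keeps the warp local; a real implementation would tune sigma to the face size.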
  • the two-dimensional face image map may be mapped onto the standard three-dimensional model based on the mapping relationship, and a three-dimensional face image corresponding to the face image is obtained.
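The pasting step amounts to a texture lookup: every vertex of the standard 3D model stores fixed (u, v) coordinates into the 2D standard map, so the generated face image map can be sampled directly. The bilinear sampling below is an illustrative refinement; the patent does not specify the sampling method.

```python
import numpy as np

def sample_bilinear(tex, u, v):
    """Sample a (H, W) texture at continuous pixel coordinates (u, v)."""
    h, w = tex.shape
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

# A 2 x 2 texture: left column 0, right column 1.
tex = np.array([[0.0, 1.0], [0.0, 1.0]])
print(sample_bilinear(tex, 0.5, 0.5))  # 0.5: halfway between the columns
```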
  • In addition, a bump mapping process may be performed: a further layer of texture is mapped onto the standard three-dimensional model bearing the two-dimensional face image, with the same content as the two-dimensional face image but slightly offset in position, so as to better represent surface details such as pores and wrinkles.
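The bump-mapping idea can be sketched as perturbing per-texel surface normals with the gradient of a height map, so that lighting reveals fine detail without changing the mesh. The random stand-in height map below is an assumption for illustration; in practice the height map would be derived from the face texture.

```python
import numpy as np

height = np.random.default_rng(1).random((16, 16))  # stand-in height map
gy, gx = np.gradient(height)                         # height-map slopes

# Perturb the flat normal (0, 0, 1) by the slopes, then renormalize.
normals = np.dstack([-gx, -gy, np.ones_like(height)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

light = np.array([0.0, 0.0, 1.0])  # head-on light direction
shading = normals @ light          # Lambertian term per texel
print(shading.shape)               # (16, 16)
```

Flat regions shade at full brightness while sloped texels darken, which is what makes pores and wrinkles visible under the offset texture described above.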
  • deforming the two-dimensional face image based on the two-dimensional standard map is faster.
  • Because the two-dimensional standard map is the expanded topology map of the standard three-dimensional model, a predetermined mapping relationship exists between the two, and after the face image is mapped onto the two-dimensional standard map, the two-dimensional face image map is generated.
  • The two-dimensional face image map contains the feature information of the face image, and the mapping relationship can be reused directly in the subsequent step to paste the updated two-dimensional face image map onto the standard three-dimensional model; this avoids adjusting the standard three-dimensional model and recomputing the transformed mapping relationship, which improves generation efficiency and saves storage space and processing resources.
  • Because a common standard three-dimensional model is adopted, there is no need to find a corresponding three-dimensional model for each different face image, which further improves processing efficiency.
  • FIG. 3 is a flowchart of a method for generating a three-dimensional face model according to an embodiment of the present invention. As shown in FIG. 3, this embodiment is a specific implementation of the embodiments shown in FIG. 1 and FIG. 2; therefore, the specific implementation methods and beneficial effects of the steps described in the embodiments shown in FIG. 1 and FIG. 2 are not repeated here.
  • the method for generating a three-dimensional face model provided by the embodiment includes:
  • the first feature point has a corresponding relationship with the second feature point on the two-dimensional standard map
  • the two-dimensional standard map is an expanded topology diagram of the standard three-dimensional model.
  • FIG. 4 is a structural diagram of a three-dimensional face model generating apparatus according to an embodiment of the present invention. As shown in FIG. 4, the apparatus specifically includes an identification module 100, an acquisition module 200, a mapping module 300, and a generation module 400. Among them:
  • the identification module 100 is configured to identify a face image in a picture;
  • the acquiring module 200 is configured to acquire, by using a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points have a correspondence with second feature points on the two-dimensional standard map, and the two-dimensional standard map is an expanded topology map of the standard three-dimensional model;
  • the mapping module 300 is configured to map the feature information of the first feature points onto the two-dimensional standard map based on the correspondence, to generate a two-dimensional face image map;
  • the generating module 400 is configured to paste the two-dimensional face image map onto the standard three-dimensional model to obtain a three-dimensional face image corresponding to the face image.
  • the identification module 100 is specifically configured to recognize face key points in the picture by using a face recognition model, obtain the coordinate positions of the key points, and determine the face image according to the coordinate positions of the key points.
  • the device further includes a training module, configured to prepare a plurality of training samples according to the second feature points on the two-dimensional standard map, and obtain the feature recognition model according to the training samples.
  • The three-dimensional face model generating apparatus provided by this embodiment of the present invention is specifically configured to perform the method provided by the embodiments shown in FIG. 1 and FIG. 2; its implementation principles, methods, and functional uses are similar to those of the embodiments shown in FIG. 1 and FIG. 2 and are not repeated here.
  • FIG. 5 is a structural diagram of a three-dimensional face model generating apparatus according to an embodiment of the present invention. As shown in FIG. 5, the apparatus specifically includes an identification module 100, an acquisition module 200, a mapping module 300, and a generation module 400. Among them:
  • the identification module 100 is configured to identify a face image in a picture;
  • the acquiring module 200 is configured to acquire, by using a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points have a correspondence with second feature points on the two-dimensional standard map, and the two-dimensional standard map is an expanded topology map of the standard three-dimensional model;
  • the mapping module 300 is configured to map the feature information of the first feature points onto the two-dimensional standard map based on the correspondence, to generate a two-dimensional face image map;
  • the generating module 400 is configured to paste the two-dimensional face image map onto the standard three-dimensional model to obtain a three-dimensional face image corresponding to the face image.
  • the mapping module 300 includes: a deformation processing sub-module 310, a lookup sub-module 320, and a mapping sub-module 330.
  • the deformation processing sub-module 310 is configured to perform deformation processing on the face image based on the correspondence; the search sub-module 320 is configured to find, on the two-dimensional standard map, the second feature point corresponding to each deformed first feature point based on the correspondence; the mapping sub-module 330 is configured to map the texture value corresponding to the deformed first feature point onto that second feature point.
  • the generating module 400 is specifically configured to map the two-dimensional face image map onto the standard three-dimensional model based on the mapping relationship between the two-dimensional standard map and the standard three-dimensional model, to obtain the three-dimensional face image corresponding to the face image.
  • The three-dimensional face model generating apparatus provided by this embodiment of the present invention is specifically configured to perform the method provided by the embodiment shown in FIG. 3; its implementation principles, methods, and functions are similar to those of the embodiment shown in FIG. 3 and are not repeated here.
  • The three-dimensional face model generating apparatus of the embodiments of the present invention may be installed in the electronic device as a separate software or hardware functional unit, or may be implemented as one of the functional modules integrated in the processor, to execute the three-dimensional face model generation method of the embodiments of the present invention.
  • FIG. 6 is a schematic diagram showing the hardware structure of an electronic device for performing a three-dimensional face model generation method provided by an embodiment of the method of the present invention.
  • the electronic device includes:
  • one or more processors 610 and a memory 620; one processor 610 is taken as an example in FIG. 6.
  • the electronic device performing the three-dimensional face model generation method may further include: an input device 630 and an output device 640.
  • the processor 610, the memory 620, the input device 630, and the output device 640 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 6.
  • The memory 620 is a non-volatile computer-readable storage medium and can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the three-dimensional face model generation method in the embodiments of the present invention. The processor 610 executes the various functional applications and data processing of the server by running the non-volatile software programs, instructions, and modules stored in the memory 620, that is, implements the three-dimensional face model generation method.
  • the memory 620 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created by the use of the three-dimensional face model generation apparatus according to the embodiments of the present invention, and the like.
  • the memory 620 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state memory device.
  • the memory 620 can optionally include a memory 620 remotely located relative to the processor 66, which can be connected to the three-dimensional face model generation device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 630 can receive input numeric or character information and generate key signal inputs related to the user settings and function control of the three-dimensional face model generation apparatus. The input device 630 may include devices such as a press module.

The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the three-dimensional face model generation method.
The electronic device of the embodiments of the present invention exists in various forms, including but not limited to:

(1) Mobile communication devices: these devices are characterized by mobile communication functions and are mainly aimed at providing voice and data communication. Such terminals include smart phones (such as the iPhone), multimedia phones, functional phones, and low-end phones.

(2) Ultra-mobile personal computer devices: these devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. Such terminals include PDA, MID, and UMPC devices, such as the iPad.

(3) Portable entertainment devices: these devices can display and play multimedia content. Such devices include digital cameras, audio and video players (such as the iPod), handheld game consoles, e-books, smart toys, and portable car navigation devices.

(4) Servers: devices providing computing services. A server consists of a processor 610, hard disk, memory, system bus, and so on. Its architecture is similar to that of a general-purpose computer, but because highly reliable services must be provided, the requirements on processing capability, stability, reliability, security, scalability, and manageability are high.

(5) Other electronic apparatuses with data interaction functions.
The apparatus embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
An embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an electronic device, cause the electronic device to perform the three-dimensional face model generation method in any of the above method embodiments.
An embodiment of the present invention provides a computer program product, wherein the computer program product comprises a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by an electronic device, cause the electronic device to perform the three-dimensional face model generation method in any of the above method embodiments.
In another embodiment, to facilitate the three-dimensional processing of pictures in the above embodiments, a camera for the electronic device with better anti-shake performance is provided. Pictures obtained by this camera are clearer than those of an ordinary camera and better meet the needs of beautification users. In particular, when a picture acquired by the camera of this embodiment is used in the three-dimensional face model generation method of the above embodiments, the effect is even better.
Specifically, the existing camera of an electronic device (the electronic device being a mobile phone, a video camera, or the like), including the lens 1, the autofocus voice coil motor 2, and the image sensor 3, is prior art well known to those skilled in the art and is therefore not described in detail here. A micro memory alloy optical image stabilizer is used because most existing stabilizers drive the lens with the Lorentz force generated by an energized coil in a magnetic field; to achieve optical image stabilization, the lens must be driven in at least two directions, which means multiple coils must be arranged. This poses a challenge to the miniaturization of the overall structure and makes it susceptible to interference from external magnetic fields, which affects the anti-shake effect. Some prior art achieves the stretching and shortening of a memory alloy wire through temperature changes, thereby pulling the autofocus voice coil motor to realize lens shake compensation: the control chip of the micro memory alloy optical anti-shake actuator varies the drive signal to change the temperature of the memory alloy wire, thus controlling its elongation and shortening, and the position and travel of the actuator are calculated from the resistance of the memory alloy wire. When the actuator has moved to the specified position, the resistance of the memory alloy wire is fed back, and the movement deviation of the actuator can be corrected by comparing this resistance value with the target value.

However, the applicant found that, because shake is random and uncertain, the above structure alone cannot accurately compensate the lens when shake occurs repeatedly, since the shape memory alloy needs a certain time both to heat up and to cool down. When shake occurs in a first direction, the above solution can compensate for it, but when a subsequent shake in a second direction occurs, the memory alloy wire cannot deform in an instant, so compensation is easily delayed. Accurate lens shake compensation for repeated shakes and for continuous shakes in different directions cannot be achieved, which results in poor quality of the acquired pictures; the camera structure therefore needs to be improved.
As shown in FIG. 7, the camera of this embodiment includes a lens 1, an autofocus voice coil motor 2, an image sensor 3, and a micro memory alloy optical image stabilizer 4. The lens 1 is fixed on the autofocus voice coil motor 2, the image sensor 3 transmits the image acquired by the lens 1 to the recognition module 100, and the autofocus voice coil motor 2 is mounted on the micro memory alloy optical image stabilizer 4. The processor inside the electronic device drives the micro memory alloy optical image stabilizer 4 according to the lens shake detected by a gyroscope (not shown) inside the electronic device, realizing shake compensation of the lens.

With reference to FIG. 8, the improvements to the micro memory alloy optical image stabilizer are described as follows:
The micro memory alloy optical image stabilizer includes a movable plate 5 and a substrate 6, both of which are rectangular plate-shaped members. The autofocus voice coil motor 2 is mounted on the movable plate 5. The size of the substrate 6 is larger than that of the movable plate 5, and the movable plate 5 is mounted on the substrate 6 with a plurality of movable supports 7 between them; the movable supports 7 are specifically balls arranged in grooves at the four corners of the substrate 6, facilitating the movement of the movable plate 5 on the substrate 6. The substrate 6 has four side walls around it, and a notch 8 is provided in the middle of each side wall. A micro switch 9 is mounted at each notch 8, and the movable member 10 of the micro switch 9 can open or close the notch under the instruction of the processing module. The side of the movable member 10 close to the movable plate 5 is provided with a strip-shaped electrical contact 11 arranged along the width direction of the movable member 10, and the substrate 6 is provided with a temperature control circuit (not shown) connected to the electrical contact 11; the processing module can control the opening and closing of the temperature control circuit according to the lens shake direction detected by the gyroscope. A shape memory alloy wire 12 is provided in the middle of each of the four sides of the movable plate 5; one end of each shape memory alloy wire 12 is fixedly connected to the movable plate 5, and the other end is in sliding fit with the electrical contact 11. An elastic member 13 for resetting, preferably a miniature spring in this embodiment, is provided between each inner side wall of the substrate 6 and the movable plate 5.
The working process of the micro memory alloy optical image stabilizer of this embodiment is described in detail below with reference to the above structure, taking two shakes of the lens in opposite directions as an example. When the lens shakes in a first direction, the gyroscope feeds the detected shake direction and distance back to the processor; the processor calculates the elongation of the shape memory alloy wire required to compensate for the shake and drives the corresponding temperature control circuit to heat that wire. The wire elongates and drives the movable plate in the direction that compensates for the first-direction shake; meanwhile, the symmetric shape memory alloy wire opposite it does not change, but the movable member connected to that opposite wire opens the corresponding notch, so that the opposite wire can protrude out of the notch as the movable plate moves. At this time, the elastic members near the two shape memory alloy wires are respectively stretched and compressed (as shown in FIG. 9). After the actuator has moved to the specified position, the resistance of the elongated wire is fed back, and by comparing this resistance value with the target value, the movement deviation of the actuator can be corrected. When the second shake occurs, the processor first closes the notch via the movable member abutting the other shape memory alloy wire and opens the movable member abutting the wire that is in the elongated state: the rotation of the former movable member pushes the other wire back into position, the opening of the latter lets the elongated wire protrude, and under the elastic action of the two elastic members the movable plate is quickly reset. The processor then again calculates the elongation of the shape memory alloy wire needed to compensate for the second shake and drives the corresponding temperature control circuit to heat the other wire, which elongates and drives the movable plate in the direction that compensates for the second-direction shake. Because the notch at the previously elongated wire is open, it does not hinder the other wire from driving the movable plate; and thanks to the opening speed of the movable members and the resetting action of the springs, the micro memory alloy optical image stabilizer of this embodiment can compensate accurately when multiple shakes occur, far outperforming the micro memory alloy optical image stabilizers of the prior art.
Of course, the above describes only two simple shakes. When multiple shakes occur, or when the shake direction is not a simple back-and-forth motion, two adjacent shape memory alloy wires can be driven to elongate to compensate for the shake; the basic working process is the same as described above and is not repeated here. In addition, the detection and feedback of the shape memory alloy resistance and of the gyroscope are prior art and are likewise not described here.
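As an illustrative aside (not part of the claimed apparatus), one iteration of the compensation loop described above — command a wire elongation that cancels the detected shake, then infer the actual travel from the wire's resistance and compute the residual deviation — can be sketched as follows. The function name and the linear ohms-per-millimetre constant are assumptions for illustration only; a real actuator would be calibrated per wire:

```python
def shake_compensation_step(shake_mm, measured_ohms, base_ohms, ohms_per_mm):
    """One control iteration of the SMA anti-shake loop (illustrative only).

    shake_mm:      shake distance reported by the gyroscope
    measured_ohms: wire resistance fed back after the move
    base_ohms:     wire resistance at zero elongation (assumed calibration)
    ohms_per_mm:   assumed linear resistance change per mm of elongation
    """
    commanded_mm = shake_mm                                  # elongation that cancels the shake
    actual_mm = (measured_ohms - base_ohms) / ohms_per_mm    # position inferred from resistance
    residual_mm = commanded_mm - actual_mm                   # deviation to correct next cycle
    return commanded_mm, residual_mm
```

Comparing the resistance-derived position with the commanded one is exactly the correction-by-deviation step the description attributes to the control chip.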
In another embodiment, the electronic device includes a video camera that can be mounted on a bracket. The applicant found during use, however, that the brackets of existing cameras have the following defects: 1. existing camera brackets are all supported by a tripod, but a tripod structure cannot keep the bracket's mounting seat level when erected on ground with large unevenness, and is prone to shaking or tilting, which easily affects shooting adversely; 2. an existing bracket cannot be used as a shoulder-mounted camera bracket — its structure and function are singular, and a separate shoulder-mounted bracket must be provided whenever shoulder shooting is required.
Therefore, the applicant improved the bracket structure. As shown in FIGS. 10 and 11, the bracket of this embodiment includes a mounting seat 14, a support shaft 15, and three support frames 16 hinged on the support shaft. The mounting seat 14 includes a first mounting plate 141 and a second mounting plate 142 perpendicular to each other, both of which can be used to mount the camera. The support shaft 15 is vertically mounted on the bottom surface of the first mounting plate 141, and the bottom end of the support shaft 15 away from the mounting seat 14 is provided with a circumferential surface 17 whose radial dimension is slightly larger than that of the support shaft. The three support frames 16 are mounted on the support shaft 15 from top to bottom, and the horizontal projections of every two unfolded support frames 16 form an included angle. When erecting the bracket with this structure, the circumferential surface 17 is first set on a small, relatively flat area of the uneven ground, and the bracket is then leveled by opening and adjusting the positions of the three telescopic support frames. Thus, even on uneven ground, the bracket can be erected level quickly, adapting to various terrains and ensuring that the mounting seat is horizontal.
More advantageously, the support shaft 15 of this embodiment is also a telescopic rod, including a tube body 151 connected to the mounting seat 14 and a rod body 152 partially retractable into the tube body 151. The portion of the rod body 152 extending into the tube body includes a first segment 1521, a second segment 1522, a third segment 1523, and a fourth segment 1524 hinged in sequence, the first segment 1521 being connected to the tube body 151. The end of the first segment 1521 close to the second segment 1522 is provided with a mounting slot 18 in which a locking member 19 is hinged, and the end of the second segment 1522 close to the first segment 1521 is provided with a locking hole 20 detachably engaging the locking member 19. Likewise, the end of the second segment 1522 close to the third segment 1523 is provided with a mounting slot 18 in which a locking member 19 is hinged, and the end of the third segment 1523 close to the second segment 1522 is provided with a locking hole 20 detachably engaging the locking member 19; the end of the third segment 1523 close to the fourth segment 1524 is provided with a mounting slot 18 in which a locking member 19 is hinged, and the end of the fourth segment 1524 close to the third segment 1523 is provided with a locking hole 20 detachably engaging the locking member 19. Each locking member can be hidden in its mounting slot; when it is needed, it is rotated out and fastened into the corresponding locking hole. Specifically, the locking member 19 may be a strip with a protrusion whose size matches the locking hole: pressing the protrusion into the locking hole fixes the positions of the two adjacent segments (for example, the first segment and the second segment) and prevents their relative rotation. Through the cooperation of the first segment 1521, the second segment 1522, the third segment 1523, and the fourth segment 1524, this portion can be formed into a bent structure whose segments are held in their relative positions by the locking members 19. A soft material may also be provided at the bottom of this structure; when the bracket is to be used as a shoulder-mounted camera bracket, this portion is placed on the user's shoulder, and by holding one of the three support frames as the handle of the shoulder bracket, the switch from a fixed bracket to a shoulder-mounted bracket can be achieved quickly and conveniently.
The applicant also found that most telescopic support frames are adjusted by pulling the telescopic portion out by hand, but the resulting travel is uncontrollable and largely random, so adjustment is often inconvenient, especially when the telescopic length needs fine adjustment. The applicant therefore also optimized the structure of the support frames 16. As shown in FIG. 12, the bottom end of each support frame 16 of this embodiment is further connected with a distance-adjusting device 21. The distance-adjusting device 21 includes a bearing ring 211 mounted on the bottom of the support frame 16, a rotating ring 212 connected to the bearing ring 211, a tube body 213, a screw 214, a threaded sleeve 215, and a support rod 216. One end of the tube body 213 is provided with a plug 217; part of the screw 214 is installed in the tube body 213 through the plug 217, the plug 217 being provided with an internal thread matching the screw 214, and the other part of the screw 214 is connected to the rotating ring 212. One end of the threaded sleeve 215 is mounted in the tube body 213 and threadedly connected to the screw 214, and the other end of the threaded sleeve 215 extends out of the tube body 213 and is fixedly connected to the support rod 216. The inner wall of the tube body 213 is provided with a protrusion 218, and the outer side wall of the threaded sleeve 215 is provided with a slideway 219 matching the protrusion along its length direction. The tube body 213 includes adjacent first and second portions 2131 and 2132; the inner diameter of the first portion 2131 is smaller than that of the second portion 2132, the plug 217 is arranged on the outer end of the second portion 2132, and the end of the threaded sleeve 215 close to the screw 214 is provided with a limiting end 2151 whose outer diameter is larger than the inner diameter of the first portion. Turning the rotating ring 212 rotates the screw 214 in the tube body 213 and transmits the rotational tendency to the threaded sleeve 215; because of the cooperation of the protrusion 218 and the slideway 219, the threaded sleeve 215 cannot rotate, so the rotational force is converted into an outward linear movement that drives the support rod 216, achieving fine adjustment of the length of the bottom end of the support frame. This makes it convenient for the user to level the bracket and its mounting seat, providing a good foundation for subsequent shooting.
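As an illustrative aside, the fine adjustment described above follows the usual lead-screw relation: since the protrusion and slideway keep the threaded sleeve from rotating, each full turn of the rotating ring advances the sleeve (and thus the support rod) by one thread pitch. A minimal sketch, with a hypothetical function name and an assumed pitch value:

```python
def screw_travel_mm(turns, pitch_mm):
    """Axial travel of the non-rotating threaded sleeve: because the
    protrusion/slideway pairing blocks rotation, every full turn of the
    rotating ring advances the sleeve by exactly one thread pitch."""
    return turns * pitch_mm
```

For example, with an assumed 0.8 mm pitch, two and a half turns of the ring extend the support rod by 2.0 mm — a repeatable, finely controllable adjustment, unlike pulling the telescopic section out by hand.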
Machine-readable media include read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments, or parts of those methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide a three-dimensional face model generation method, apparatus, and electronic device, including: recognizing a face image in a picture; obtaining, through a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points correspond to second feature points on a two-dimensional standard map, and the two-dimensional standard map is an unfolded topological map of a standard three-dimensional model; mapping the feature information of the first feature points onto the two-dimensional standard map based on the correspondence, to generate a two-dimensional face image map; and pasting the two-dimensional face image map onto the standard three-dimensional model to obtain a three-dimensional face image corresponding to the face image. The above method, apparatus, and electronic device improve the efficiency of generating a three-dimensional face model from a two-dimensional face image and save storage space and processing resources.

Description

Three-dimensional face model generation method, apparatus, and electronic device

Technical Field

The present invention relates to the technical field of image processing, and in particular to a three-dimensional face model generation method, apparatus, and electronic device.
Background Art

With the development of technology, mobile terminals play an increasingly broad role in people's lives. For example, people can use a mobile terminal to take photos. However, face photos taken by a mobile terminal are flat and cannot highlight key parts of the face (such as the nose and eye sockets). Therefore, the captured two-dimensional face image is generally converted into a three-dimensional face image for lighting processing, on the basis of which a three-dimensional display of the facial features is achieved.

However, in the process of implementing the present invention, the inventor found that the methods for generating a three-dimensional face model in the prior art often consume a large amount of storage space and processing resources, which easily causes the mobile terminal to stall.
Summary of the Invention

The three-dimensional face model generation method, apparatus, and electronic device provided by the embodiments of the present invention are intended to solve at least the above problems in the related art.

One aspect of the embodiments of the present invention provides a three-dimensional face model generation method, including:

recognizing a face image in a picture; obtaining, through a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points correspond to second feature points on a two-dimensional standard map, and the two-dimensional standard map is an unfolded topological map of a standard three-dimensional model; mapping the feature information of the first feature points onto the two-dimensional standard map based on the correspondence, to generate a two-dimensional face image map; and pasting the two-dimensional face image map onto the standard three-dimensional model to obtain a three-dimensional face image corresponding to the face image.
Further, the feature information includes texture values, and mapping the feature information of the first feature points onto the two-dimensional standard map based on the correspondence includes: deforming the face image based on the correspondence; finding, on the two-dimensional standard map and based on the correspondence, the second feature points corresponding to the deformed first feature points; and mapping the texture values corresponding to the deformed first feature points onto the second feature points.

Further, pasting the two-dimensional face image map onto the standard three-dimensional model to obtain the three-dimensional face image corresponding to the face image includes: mapping the two-dimensional face image map onto the standard three-dimensional model based on the mapping relationship between the two-dimensional standard map and the standard three-dimensional model, to obtain the three-dimensional face image corresponding to the face image.

Further, the method further includes: preparing a plurality of training samples according to the second feature points on the two-dimensional standard map, and training the feature recognition model based on the training samples.

Further, recognizing the face image in the picture includes: recognizing face key points in the picture using a face recognition model to obtain the coordinate positions of the key points; and determining the face image according to the coordinate positions of the key points.
Further, the picture in the three-dimensional face model generation method is obtained through a camera of the electronic device;

the camera includes a lens, an autofocus voice coil motor, an image sensor, and a micro memory alloy optical image stabilizer. The lens is fixed on the autofocus voice coil motor, the image sensor converts the optical scene obtained by the lens into image data, the autofocus voice coil motor is mounted on the micro memory alloy optical image stabilizer, and the processor of the electronic device drives the micro memory alloy optical image stabilizer according to the lens shake data detected by a gyroscope, realizing shake compensation of the lens;

the micro memory alloy optical image stabilizer includes a movable plate and a substrate. The autofocus voice coil motor is mounted on the movable plate, the size of the substrate is larger than that of the movable plate, the movable plate is mounted on the substrate, and a plurality of movable supports are provided between the movable plate and the substrate. The substrate has four side walls around it, a notch is provided in the middle of each side wall, and a micro switch is mounted at each notch. The movable member of the micro switch can open or close the notch under the instruction of the processor. The side of the movable member close to the movable plate is provided with a strip-shaped electrical contact arranged along the width direction of the movable member, the substrate is provided with a temperature control circuit connected to the electrical contact, and the processor controls the opening and closing of the temperature control circuit according to the lens shake direction detected by the gyroscope. A shape memory alloy wire is provided in the middle of each of the four sides of the movable plate; one end of each shape memory alloy wire is fixedly connected to the movable plate, and the other end is in sliding fit with the electrical contact. An elastic member is provided between each inner side wall of the substrate and the movable plate. When a temperature control circuit on the substrate is switched on, the shape memory alloy wire connected to that circuit elongates; at the same time, the movable member of the micro switch away from that shape memory alloy wire opens its notch, the elastic member on the same side as that shape memory alloy wire contracts, and the elastic member away from that shape memory alloy wire extends.
Further, the elastic member is a spring.
Further, the electronic device is a video camera mounted on a bracket, the bracket including a mounting seat, a support shaft, and three support frames hinged on the support shaft;

the mounting seat includes a first mounting plate and a second mounting plate perpendicular to each other, both of which can be used to mount the camera. The support shaft is vertically mounted on the bottom surface of the first mounting plate, and the bottom end of the support shaft away from the mounting seat is provided with a circumferential surface whose radial dimension is larger than that of the support shaft. The three support frames are mounted on the support shaft from top to bottom, and the horizontal projections of every two unfolded support frames form an included angle. The support shaft is a telescopic rod, including a tube body connected to the mounting seat and a rod body partially retractable into the tube body. The portion of the rod body extending into the tube body includes a first segment, a second segment, a third segment, and a fourth segment hinged in sequence, the first segment being connected to the tube body. The end of the first segment close to the second segment is provided with a mounting slot in which a locking member is hinged, and the end of the second segment close to the first segment is provided with a locking hole detachably engaging the locking member; the end of the second segment close to the third segment is provided with a mounting slot in which a locking member is hinged, and the end of the third segment close to the second segment is provided with a locking hole detachably engaging the locking member; the end of the third segment close to the fourth segment is provided with a mounting slot in which a locking member is hinged, and the end of the fourth segment close to the third segment is provided with a locking hole detachably engaging the locking member.

Further, the bottom end of each support frame is also connected with a distance-adjusting device, the distance-adjusting device including a bearing ring mounted on the bottom of the support frame, a rotating ring connected to the bearing ring, a tube body, a screw, a threaded sleeve, and a support rod. One end of the tube body is provided with a plug; part of the screw is installed in the tube body through the plug, the plug being provided with an internal thread matching the screw, and the other part of the screw is connected to the rotating ring. One end of the threaded sleeve is mounted in the tube body and threadedly connected to the screw, and the other end of the threaded sleeve extends out of the tube body and is fixedly connected to the support rod. The inner wall of the tube body is provided with a protrusion, and the outer side wall of the threaded sleeve is provided with a slideway matching the protrusion along its length direction. The tube body includes adjacent first and second portions, the inner diameter of the first portion being smaller than that of the second portion; the plug is arranged on the outer end of the second portion, and the end of the threaded sleeve close to the screw is provided with a limiting end whose outer diameter is larger than the inner diameter of the first portion.
Another aspect of the embodiments of the present invention provides a three-dimensional face model generation apparatus, including:

a recognition module, configured to recognize a face image in a picture; an obtaining module, configured to obtain, through a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points correspond to second feature points on a two-dimensional standard map, and the two-dimensional standard map is an unfolded topological map of a standard three-dimensional model; a mapping module, configured to map the feature information of the first feature points onto the two-dimensional standard map based on the correspondence, to generate a two-dimensional face image map; and a generation module, configured to paste the two-dimensional face image map onto the standard three-dimensional model, to obtain a three-dimensional face image corresponding to the face image.

Further, the feature information includes texture values, and the mapping module includes: a deformation sub-module, configured to deform the face image based on the correspondence; a search sub-module, configured to find, on the two-dimensional standard map and based on the correspondence, the second feature points corresponding to the deformed first feature points; and a mapping sub-module, configured to map the texture values corresponding to the deformed first feature points onto the second feature points.

Further, the generation module is specifically configured to map the two-dimensional face image map onto the standard three-dimensional model based on the mapping relationship between the two-dimensional standard map and the standard three-dimensional model, to obtain the three-dimensional face image corresponding to the face image.

Further, the apparatus further includes: a training module, configured to prepare a plurality of training samples according to the second feature points on the two-dimensional standard map, and to train the feature recognition model based on the training samples.

Further, the recognition module is specifically configured to recognize face key points in the picture using a face recognition model to obtain the coordinate positions of the key points, and to determine the face image according to the coordinate positions of the key points.
Yet another aspect of the embodiments of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein

the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the above three-dimensional face model generation methods of the embodiments of the present invention.

As can be seen from the above technical solutions, in the three-dimensional face model generation method, apparatus, and electronic device provided by the embodiments of the present invention, since the two-dimensional standard map is a two-dimensional image, deforming the likewise two-dimensional face image based on the two-dimensional standard map is fast. Meanwhile, since the two-dimensional standard map is an unfolded topological map of the standard three-dimensional model, a predetermined mapping relationship exists between the two. Once the face image has been mapped onto the two-dimensional standard map to generate the two-dimensional face image map, the map contains the feature information of the face image, and in subsequent steps this mapping relationship can be directly reused to paste the updated two-dimensional face image map onto the standard three-dimensional model. This avoids adjusting the standard three-dimensional model and computing transformed mapping relationships, improves generation efficiency, and saves storage space and processing resources. In addition, since a universal standard three-dimensional model is used, there is no need to look up a corresponding three-dimensional model for each different face image, which further improves processing efficiency.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings based on these drawings.

FIG. 1 is a flowchart of a three-dimensional face model generation method provided by an embodiment of the present invention;

FIG. 2 is a specific flowchart of step S103 provided by an embodiment of the present invention;

FIG. 3 is a flowchart of a three-dimensional face model generation method provided by an embodiment of the present invention;

FIG. 4 is a structural diagram of a three-dimensional face model generation apparatus provided by an embodiment of the present invention;

FIG. 5 is a structural diagram of a three-dimensional face model generation apparatus provided by an embodiment of the present invention;

FIG. 6 is a schematic diagram of the hardware structure of an electronic device that performs the three-dimensional face model generation method provided by a method embodiment of the present invention;

FIG. 7 is a structural diagram of a camera provided by an embodiment of the present invention;

FIG. 8 is a structural diagram of a micro memory alloy optical image stabilizer provided by an embodiment of the present invention;

FIG. 9 is a structural diagram of a working state of a micro memory alloy optical image stabilizer provided by an embodiment of the present invention;

FIG. 10 is a structural diagram of a bracket provided by an embodiment of the present invention;

FIG. 11 is a structural diagram of a support shaft provided by an embodiment of the present invention;

FIG. 12 is a structural diagram of a distance-adjusting device provided by an embodiment of the present invention.
Detailed Description of the Embodiments

In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.

The embodiments of the present invention are executed by an electronic device, which includes but is not limited to a mobile phone, a tablet computer, a laptop computer, a desktop computer with a camera, a server, and the like. Some embodiments of the present invention are described in detail below with reference to the drawings. Where no conflict arises, the following embodiments and the features in the embodiments may be combined with each other. FIG. 1 is a flowchart of a three-dimensional face model generation method provided by an embodiment of the present invention. As shown in FIG. 1, the three-dimensional face model generation method provided by this embodiment includes:
S101: Recognize a face image in a picture.

Usually, a picture contains other non-face content, such as a background image, so the face image in the picture needs to be recognized. In this step, the image may come from a picture captured in real time or from a picture stored locally on the terminal. The feature information of the face image includes but is not limited to the size of the face image and the rotation angle of the face image.

Many methods for recognizing a face image exist; for example, the extent of the face image can be identified from the edge information and/or color information of the image. In this embodiment, predefined key points may also be recognized, and the size and rotation angle of the face image determined from the positions of the detected key points. The eyebrows, eyes, nose, face contour, and mouth in the face image are each composed of several such key points; that is, the positions and contours of the eyebrows, eyes, nose, face contour, and mouth can be determined from the coordinate positions of the key points.

Specifically, positive and negative samples for face-image key point recognition may be prepared in advance, and a face recognition model trained on those samples. The picture to be recognized is input into the face recognition model, which outputs the key point coordinates of the face image in the picture. The coordinate system may take the lower-left corner of the picture as the origin, the rightward direction as the positive X axis, and the upward direction as the positive Y axis, with coordinate values measured in pixels. Based on the above key point coordinates and typical face proportions, a preliminary extent of the face image is derived; within this extent, a watershed image segmentation algorithm is applied to obtain the additional coordinate information of the forehead and chin, and integrating all the obtained key point coordinates yields a complete face image.
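A minimal sketch of the preliminary-extent step described above — deriving a face rectangle from detected key point coordinates and typical face proportions — might look as follows. The function name, the proportion constants, and the detected landmark set are illustrative assumptions; only the lower-left-origin, Y-up coordinate convention comes from the description:

```python
import numpy as np

def face_bounds_from_keypoints(keypoints, forehead_ratio=0.4, margin_ratio=0.1):
    """Estimate a preliminary face rectangle from detected key points.

    keypoints: (N, 2) array of (x, y) landmark coordinates in pixels, with
    the origin at the lower-left corner of the picture (Y axis points up).
    The eyebrow/eye/nose/mouth landmarks do not cover the forehead, so the
    box is extended upward by an assumed fraction of its height before the
    watershed step refines the forehead and chin.
    """
    pts = np.asarray(keypoints, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    w, h = x_max - x_min, y_max - y_min
    x_min -= margin_ratio * w            # widen slightly on both sides
    x_max += margin_ratio * w
    y_max += forehead_ratio * h          # extend upward past the eyebrows
    y_min -= margin_ratio * h
    return x_min, y_min, x_max, y_max
```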
S102: Obtain, through a feature recognition model, first feature points on the face image and feature information of the first feature points.

Here, the first feature points correspond to second feature points on a two-dimensional standard map, and the two-dimensional standard map is an unfolded topological map of a standard three-dimensional model.

Different faces share roughly the same features. The standard three-dimensional model is obtained by statistically modeling face data and extracting the common features of faces, and reflects the geometric structure of a typical face. In the embodiments of the present invention, a number of feature points are determined in advance on the standard three-dimensional model. These feature points reflect the geometric structure and texture information of each facial organ (for example, the eyes, nose, mouth, forehead, and face shape); for instance, feature points may be marked at the horizontal boundary of the eyes or at the corners of the eyebrows. Each organ may be composed of multiple feature points, and together these feature points identify a unique face object. The feature information in the embodiments of the present invention includes but is not limited to the texture values of the feature points.

In a specific implementation, the target positions that best reflect the geometric structure and texture information of each facial organ may be determined from statistics over multiple sets of face data, and those target positions taken as the feature points. The feature points may also be defined according to the FAP (Facial Animation Parameters) and FDP (Facial Definition Parameters) of MPEG-4, or determined using an ASM (active shape model) or an AAM (active appearance model). These techniques are well known to those skilled in the art and are not described here.

After the feature points of the standard three-dimensional model are determined, the model is unfolded to obtain the corresponding two-dimensional standard map and the second feature points on it, which correspond one-to-one with the feature points on the standard three-dimensional model.

Before this step, the method further includes: preparing a plurality of training samples according to the second feature points on the two-dimensional standard map, and training the feature recognition model based on those samples. Specifically, a number of different two-dimensional face images are prepared in advance; according to the relative positions of the second feature points on the two-dimensional standard map, the corresponding feature points are marked on each two-dimensional face image, and the feature information of each feature point on each sample is extracted, yielding multiple training samples. A convolutional neural network may then be trained on these samples to obtain the feature recognition model. After the face image obtained in step S101 is input into the feature recognition model, the first feature points on the face image and their feature information are obtained. Since the feature recognition model is trained on samples marked according to the second feature points, the first feature points correspond to the second feature points on the two-dimensional standard map. The first feature points mark the locations of the facial features and roughly describe the contour of the face and of the facial organs in the picture.
S103: Map the feature information of the first feature points onto the two-dimensional standard map based on the correspondence, to generate a two-dimensional face image map.

In this step, the second feature point corresponding to each first feature point is found from the correspondence, and the texture value of each first feature point is mapped onto its corresponding second feature point. When all first feature points have been mapped, the map of the two-dimensional face image is generated.
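The per-point texture transfer in this step can be sketched as follows; the dict-based representation of the correspondence and the list-of-lists map are assumptions made for illustration, not the patented implementation:

```python
def build_face_map(textures, correspondence, standard_map):
    """Generate the 2D face image map: copy the texture value of each first
    feature point onto its corresponding second feature point on (a copy of)
    the 2D standard map.

    textures:       texture value of first feature point i, indexed by i
    correspondence: assumed dict mapping first-point index i to the
                    (row, col) of its second feature point on the map
    standard_map:   2D standard map as a list of rows (left untouched)
    """
    face_map = [row[:] for row in standard_map]   # work on a copy
    for i, tex in enumerate(textures):
        r, c = correspondence[i]
        face_map[r][c] = tex                      # transfer one texture value
    return face_map
```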
When the face image and the two-dimensional standard map differ in size, the face image needs to be deformed so that it matches the two-dimensional standard map.

Specifically, this step may include the following steps:

S1031: Deform the face image based on the correspondence.

The face image is deformed based on the correspondence between the first feature points of the face image and the second feature points on the two-dimensional standard map. There are many ways to deform an image based on point correspondences, for example RBF (radial basis function) interpolation, TPS (thin plate spline) interpolation, or MLS (moving least squares) interpolation.

S1032: On the two-dimensional standard map, find the second feature points corresponding to the deformed first feature points based on the correspondence.

S1033: Map the texture values corresponding to the deformed first feature points onto the second feature points.

Determine the texture value corresponding to each deformed first feature point, find the corresponding second feature point from the correspondence, and map the texture value of the deformed first feature point onto it. When all first feature points have been mapped, the map of the two-dimensional face image is generated.
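As an illustration of S1031, a deformation driven purely by point correspondences can be built with RBF interpolation, one of the methods mentioned above; the sketch below uses a Gaussian kernel (the cited RBF/TPS/MLS methods differ mainly in kernel and solver) and an arbitrary, assumed `sigma`:

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, query_pts, sigma=50.0):
    """Warp arbitrary image coordinates with a Gaussian RBF fitted to
    control-point correspondences src_pts -> dst_pts (each (N, 2)).
    The warp reproduces the control points exactly and falls off to the
    identity far from them."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    q = np.asarray(query_pts, float)

    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    # Solve for RBF weights so the warp maps src exactly onto dst.
    K = kernel(src, src)
    weights = np.linalg.solve(K, dst - src)   # (N, 2) displacement weights
    return q + kernel(q, src) @ weights
```

In the method's setting, `src_pts` would be the first feature points, `dst_pts` the positions of their corresponding second feature points on the two-dimensional standard map, and `query_pts` every pixel coordinate of the face image.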
S104: Paste the two-dimensional face image map onto the standard three-dimensional model to obtain the three-dimensional face image corresponding to the face image.

Specifically, since the second feature points on the two-dimensional standard map have a mapping relationship with the feature points on the standard three-dimensional model, the two-dimensional face image map can be mapped onto the standard three-dimensional model based on this mapping relationship, to obtain the three-dimensional face image corresponding to the face image.

Further, to make the final three-dimensional face image more realistic, after the two-dimensional face image map is pasted onto the standard three-dimensional model, bump mapping may be applied to it: a further layer of texture is mapped onto the standard three-dimensional model already carrying the two-dimensional face image. The mapped texture has the same content as the two-dimensional face image but an offset position, which better expresses surface details such as pores and wrinkles.
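Because the two-dimensional standard map is the unfolded topology of the standard three-dimensional model, pasting the map onto the model amounts to a fixed per-vertex UV lookup that never changes between face images. A minimal sketch, assuming a [0, 1] UV convention with v = 1 at the top row of the map (an assumption, since the description does not fix a convention):

```python
import numpy as np

def sample_texture(uv, texture):
    """Look up per-vertex colors on a 2D face image map via UV coordinates.

    uv:      (V, 2) coordinates in [0, 1], one per vertex of the 3D model;
             fixed once by the unfolding, reused for every new face image
    texture: (H, W, 3) two-dimensional face image map
    Returns (V, 3) nearest-pixel vertex colors.
    """
    h, w = texture.shape[:2]
    uv = np.clip(np.asarray(uv, float), 0.0, 1.0)
    cols = np.minimum((uv[:, 0] * (w - 1)).round().astype(int), w - 1)
    rows = np.minimum(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), h - 1)
    return texture[rows, cols]
```

This fixed lookup is what lets the method skip recomputing any mapping per face: only the texture array changes from picture to picture.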
In the three-dimensional face model generation method provided by the embodiments of the present invention, since the two-dimensional standard map is a two-dimensional image, deforming the likewise two-dimensional face image based on it is fast. Meanwhile, since the two-dimensional standard map is an unfolded topological map of the standard three-dimensional model, a predetermined mapping relationship exists between the two. Once the face image has been mapped onto the two-dimensional standard map to generate the two-dimensional face image map, the map contains the feature information of the face image, and in subsequent steps this mapping relationship can be directly reused to paste the updated two-dimensional face image map onto the standard three-dimensional model. This avoids adjusting the standard three-dimensional model and computing transformed mapping relationships, improves generation efficiency, and saves storage space and processing resources. In addition, since a universal standard three-dimensional model is used, there is no need to look up a corresponding three-dimensional model for each different face image, which further improves processing efficiency.
FIG. 3 is a flowchart of a three-dimensional face model generation method provided by an embodiment of the present invention. As shown in FIG. 3, this embodiment is a specific implementation of the embodiments shown in FIG. 1 and FIG. 2, so the specific implementation methods and beneficial effects of each step are not repeated here. The three-dimensional face model generation method provided by this embodiment specifically includes:

S301: Recognize a face image in a picture.

S302: Obtain, through a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points correspond to second feature points on a two-dimensional standard map, and the two-dimensional standard map is an unfolded topological map of a standard three-dimensional model.

S303: Deform the face image based on the correspondence.

S304: On the two-dimensional standard map, find the second feature points corresponding to the deformed first feature points based on the correspondence.

S305: Map the texture values corresponding to the deformed first feature points onto the second feature points.

S306: Paste the two-dimensional face image map onto the standard three-dimensional model to obtain the three-dimensional face image corresponding to the face image.
FIG. 4 is a structural diagram of a three-dimensional face model generation apparatus provided by an embodiment of the present invention. As shown in FIG. 4, the apparatus specifically includes: a recognition module 100, an obtaining module 200, a mapping module 300, and a generation module 400. Among them,

the recognition module 100 is configured to recognize a face image in a picture; the obtaining module 200 is configured to obtain, through a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points correspond to second feature points on a two-dimensional standard map, and the two-dimensional standard map is an unfolded topological map of a standard three-dimensional model; the mapping module 300 is configured to map the feature information of the first feature points onto the two-dimensional standard map based on the correspondence, to generate a two-dimensional face image map; and the generation module 400 is configured to paste the two-dimensional face image map onto the standard three-dimensional model, to obtain a three-dimensional face image corresponding to the face image.

Optionally, the recognition module 100 is specifically configured to recognize face key points in the picture using a face recognition model to obtain the coordinate positions of the key points, and to determine the face image according to the coordinate positions of the key points.

Optionally, the apparatus further includes a training module, configured to prepare a plurality of training samples according to the second feature points on the two-dimensional standard map and to train the feature recognition model based on the training samples.

The three-dimensional face model generation apparatus provided by this embodiment of the present invention is specifically configured to perform the method provided by the embodiments shown in FIG. 1 and FIG. 2; its implementation principles, methods, and functional uses are similar to those of the embodiments shown in FIG. 1 and FIG. 2 and are not repeated here.
FIG. 5 is a structural diagram of a three-dimensional face model generation apparatus provided by an embodiment of the present invention. As shown in FIG. 5, the apparatus specifically includes: a recognition module 100, an obtaining module 200, a mapping module 300, and a generation module 400, configured as in the embodiment shown in FIG. 4.

The mapping module 300 includes: a deformation sub-module 310, a search sub-module 320, and a mapping sub-module 330.

The deformation sub-module 310 is configured to deform the face image based on the correspondence; the search sub-module 320 is configured to find, on the two-dimensional standard map and based on the correspondence, the second feature points corresponding to the deformed first feature points; and the mapping sub-module 330 is configured to map the texture values corresponding to the deformed first feature points onto the second feature points.

Optionally, the generation module 400 is specifically configured to map the two-dimensional face image map onto the standard three-dimensional model based on the mapping relationship between the two-dimensional standard map and the standard three-dimensional model, to obtain the three-dimensional face image corresponding to the face image.

The three-dimensional face model generation apparatus provided by this embodiment of the present invention is specifically configured to perform the method provided by the embodiment shown in FIG. 3; its implementation principles, methods, and functional uses are similar to those of the embodiment shown in FIG. 3 and are not repeated here.
上述这些本发明实施例的三维人脸模型生成装置可以作为其中一个软件或者硬件功能单元,独立设置在上述电子设备中,也可以作为整合在处理器中的其中一个功能模块,执行本发明实施例的人脸识别和三维模型匹配的方法。
图6为执行本发明方法实施例提供的三维人脸模型生成方法的电子设备的硬件结构示意图。根据图6所示,该电子设备包括:
一个或多个处理器610以及存储器620,图6中以一个处理器610为例。
执行所述的三维人脸模型生成的电子设备还可以包括:输入装置630和输出装置630。
处理器610、存储器620、输入装置630和输出装置640可以通过总线或者其他方式连接,图6中以通过总线连接为例。
存储器620作为一种非易失性计算机可读存储介质,可用于存储非易失性软件程序、非易失性计算机可执行程序以及模块,如本发明实施例中的所述三维人脸模型生成方法对应的程序指令/模块。处理器610通过运行存储在存储器620中的非易失性软件程序、指令以及模块,从而执行服务器的各种功能应用以及数据处理,即实现所述三维人脸模型生成方法。
存储器620可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储根据本发明实施例提供的三维人脸模型生成装置的使用所创建的数据等。此外,存储器620可以包括高速随机存取存储器620,还可以包括非易失性存储器620,例如至少一个磁盘存储器620件、闪存器件、或其他非易失性固态存储器620件。在一些实施例中,存储器620可选包括相对于处理器66远程设置的存储器620,这些远程存储器620可以通过网络连接至所述三维人脸模型生成装置。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
输入装置630可接收输入的数字或字符信息,以及产生与三维人脸模型用户设置以及功能控制有关的键信号输入。输入装置630可包括按压模组等设备。
所述一个或者多个模块存储在所述存储器620中,当被所述一个或者多个处理器610执行时,执行所述三维人脸模型生成方法。
本发明实施例的电子设备以多种形式存在,包括但不限于:
(1)移动通信设备:这类设备的特点是具备移动通信功能,并且以提供话音、数据通信为主要目标。这类终端包括:智能手机(例如iPhone)、多媒体手机、功能性手机,以及低端手机等。
(2)超移动个人计算机设备:这类设备属于个人计算机的范畴,有计算和处理功能,一般也具备移动上网特性。这类终端包括:PDA、MID和UMPC设备等,例如iPad。
(3)便携式娱乐设备:这类设备可以显示和播放多媒体内容。该类设备包括:数码相机、音频、视频播放器(例如iPod),掌上游戏机,电子书,以及智能玩具和便携式车载导航设备。
(4)服务器:提供计算服务的设备,服务器的构成包括处理器610、硬盘、内存、系统总线等,服务器和通用的计算机架构类似,但是由于需要提供高可靠的服务,因此在处理能力、稳定性、可靠性、安全性、可扩展性、可管理性等方面要求较高。
(5)其他具有数据交互功能的电子装置。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性的劳动的情况下,即可以理解并实施。
本发明实施例提供了一种非暂态计算机可读存存储介质,所述计算机存储介质存储有计算机可执行指令,其中,当所述计算机可执行指令被电子设备执行时,使所述电子设备上执行上述任意方法实施例中的三维人脸模型生成方法。
本发明实施例提供了一种计算机程序产品,其中,所述计算机程序产品包括存储在非暂态计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,其中,当所述程序指令被电子设备执行时,使所述电子设备执行上述任意方法实施例中的三维人脸模型生成方法。
在另一实施例中,为了便于上述实施例对图片的三维处理,还提供了 一种具有更好防抖性能的电子装置的摄像头,通过该摄像头获取的图片相比于普通摄像头更加清晰,更能满足美颜用户的需求。特别是本实施例中的摄像头获取的图片用于上述实施例中的三维人脸模型生成方法时,效果更佳。
具体的,现有的电子装置摄像头(电子装置为手机或摄像机等)包括镜头1、自动聚焦音圈马达2、图像传感器3为本领域技术人员公知的现有技术,因此这里不过多描述。通常采用微型记忆合金光学防抖器是因为现有的防抖器大多由通电线圈在磁场中产生洛伦磁力驱动镜头移动,而要实现光学防抖,需要在至少两个方向上驱动镜头,这意味着需要布置多个线圈,会给整体结构的微型化带来一定挑战,而且容易受外界磁场干扰,进而影响防抖效果,一些现有技术通过温度变化实现记忆合金丝的拉伸和缩短,以此拉动自动聚焦音圈马达移动,实现镜头的抖动补偿,微型记忆合金光学防抖致动器的控制芯片可以控制驱动信号的变化来改变记忆合金丝的温度,以此控制记忆合金丝的伸长和缩短,并且根据记忆合金丝的电阻来计算致动器的位置和移动距离。当微型记忆合金光学防抖致动器上移动到指定位置后反馈记忆合金丝此时的电阻,通过比较这个电阻值与目标值的偏差,可以校正微型记忆合金光学防抖致动器上的移动偏差。但是申请人发现,由于抖动的随机性和不确定性,仅仅依靠上述技术方案的结构是无法实现在多次抖动发生的情况下能够对镜头进行精确的补偿,这是由于形状记忆合金的升温和降温均需要一定的时间,当抖动向第一方向发生时,上述技术方案可以实现镜头对第一方向抖动的补偿,但是当随之而来的第二方向的抖动发生时,由于记忆合金丝来不及在瞬间变形,因此容易造成补偿不及时,无法精准实现对多次抖动和不同方向的连续抖动的镜头抖动补偿,这导致了获取的图片质量不佳,因此需要对摄像头或摄像机结构上进行改进。
如图7所示,本实施例的所述摄像头包括镜头1、自动聚焦音圈马达2、图像传感器3以及微型记忆合金光学防抖器4,所述镜头1固装在所述自动聚焦音圈马达2上,所述图像传感器3将所述镜头1获取的图像传输至所述识别模块100,所述自动聚焦音圈马达2安装在所述微型记忆合金光学防抖器4上,所述电子装置内部处理器根据电子装置内部陀螺仪(图中未示出)检测到的镜头抖动驱动所述微型记忆合金光学防抖器4的动作,实现镜头的抖动补偿;
结合附图8所示,对所述微型记忆合金光学防抖器的改进之处介绍如下:
所述微型记忆合金光学防抖器包括活动板5和基板6,活动板5和基 板6均为矩形板状件,所述自动聚焦音圈马达2安装在所述活动板5上,所述基板6的尺寸大于所述活动板5的尺寸,所述活动板5安装在所述基板6上,所述活动板5和所述基板6之间设有多个活动支撑7,所述活动支撑7具体为设置在所述基板6四个角处凹槽内的滚珠,便于活动板5在基板6上的移动,所述基板6的四周具有四个侧壁,每个所述侧壁的中部均设有一缺口8,所述缺口8处安装有微动开关9,所述微动开关9的活动件10可以在所述处理模块的指令下打开或封闭所述缺口,所述活动件10靠近所述活动板5的侧面设有沿所述活动件10宽度方向布设的条形的电触点11,所述基板6设有与所述电触点11相连接的温控电路(图中未示出),所述处理模块可以根据陀螺仪检测到的镜头抖动方向控制所述温控电路的开闭,所述活动板5的四个侧边的中部均设有形状记忆合金丝12,所述形状记忆合金丝12一端与所述活动板5固定连接,另一端与所述电触点11滑动配合,所述基板6的四周的内侧壁与所述活动板5之间均设有用于复位的弹性件13,具体的,本实施例的所述弹性件优选为微型的弹簧。
下面结合上述结构对本实施例的微型记忆合金光学防抖器的工作过程进行详细的描述:以镜头两次方向相反的抖动为例,当镜头发生向第一方向抖动时,陀螺仪将检测到的镜头抖动方向和距离反馈给所述处理器,处理器计算出需要控制可以补偿该抖动的形状记忆合金丝的伸长量,并驱动相应的温控电路对该形状记忆合金丝进行升温,该形状记忆合金丝伸长并带动活动板向可补偿第一方向抖动的方向运动,与此同时与该形状记忆合金丝相对称的另一形状记忆合金丝没有变化,但是与该另一形状记忆合金丝相连接的活动件会打开与其对应的缺口,便于所述另一形状记忆合金丝在活动板的带动下向缺口外伸出,此时,两个形状记忆合金丝附近的弹性件分别拉伸和压缩(如图9所示),当微型记忆合金光学防抖致动器上移动到指定位置后反馈该形状记忆合金丝的电阻,通过比较这个电阻值与目标值的偏差,可以校正微型记忆合金光学防抖致动器上的移动偏差;而当第二次抖动发生时,处理器首先通过与另一形状以及合金丝相抵接的活动件关闭缺口,并且打开与处于伸长状态的该形状记忆合金丝相抵接的活动件,与另一形状以及合金丝相抵接活动件的转动可以推动另一形状记忆合金丝复位,与处于伸长状态的该形状记忆合金丝相抵接的活动件的打开可以便于伸长状态的形状记忆合金丝伸出,并且在上述的两个弹性件的弹性作用下可以保证活动板迅速复位,同时处理器再次计算出需要控制可以补偿第二次抖动的形状记忆合金丝的伸长量,并驱动相应的温控电路对另一形状记忆合金丝进行升温,另一形状记忆合金丝伸长并带动活动板向可 补偿第二方向抖动的方向运动,由于在先伸长的形状记忆合金丝处的缺口打开,因此不会影响另一形状以及合金丝带动活动板运动,而由于活动件的打开速度和弹簧的复位作用,因此在发生多次抖动时,本实施例的微型记忆合金光学防抖器均可做出精准的补偿,其效果远远优于现有技术中的微型记忆合金光学防抖器。
The above is, of course, only the simple case of two shakes. When more shakes occur, or when the shake direction is not a back-and-forth motion, two adjacent shape-memory-alloy wires can be driven to elongate so as to compensate for the shake; the underlying working process follows the same principle described above and is not repeated here. Likewise, the detection and feedback of the shape-memory-alloy resistance and of the gyroscope are prior art and are not elaborated here.
In another embodiment, the electronic device includes a video camera, which can be mounted on a camera support. The applicant found in use, however, that existing camera supports have the following defects: 1. Existing camera supports all use tripod legs, but when set up on markedly uneven ground a tripod cannot keep the mounting seat level, and it tends to shake or tilt, adversely affecting shooting. 2. Existing supports cannot serve as shoulder-mounted camera rigs; their structure and function are monolithic, and a separate shoulder rig must be provided when shoulder-mounted shooting is needed.
The applicant therefore improved the support structure. As shown in FIGS. 10 and 11, the support of this embodiment includes a mounting seat 14, a support shaft 15, and three support legs 16 hinged on the support shaft. The mounting seat 14 includes a first mounting plate 141 and a second mounting plate 142 perpendicular to each other, both of which can be used for mounting the camera. The support shaft 15 is mounted perpendicularly on the bottom surface of the first mounting plate 141, and the bottom end of the support shaft 15 remote from the mounting seat 14 is provided with a circumferential face 17 slightly larger in radial dimension than the support shaft. The three support legs 16 are mounted on the support shaft 15 from top to bottom, and the horizontal projections of each pair of deployed support legs 16 form an included angle. With this structure, when erecting the support, the circumferential face 17 is first set on a relatively flat small patch of the uneven ground, and the support is then leveled by opening and adjusting the positions of the three telescopic support legs. The support can thus be leveled quickly even on uneven ground, adapting to various terrain and keeping the mounting seat horizontal.
More advantageously, the support shaft 15 of this embodiment is also a telescopic member. It includes a tube body 151 connected with the mounting seat 14 and a rod body 152 partly retractable into the tube body 151. The portion of the rod body 152 extending into the tube body includes a first segment 1521, a second segment 1522, a third segment 1523, and a fourth segment 1524 hinged in sequence, the first segment 1521 being connected with the tube body 151. The end of the first segment 1521 adjacent the second segment 1522 is provided with a mounting groove 18 in which a locking member 19 is hinged, and the end of the second segment 1522 adjacent the first segment 1521 is provided with a locking hole 20 detachably engaging the locking member 19. Likewise, the end of the second segment 1522 adjacent the third segment 1523 is provided with a mounting groove 18 with a hinged locking member 19, and the end of the third segment 1523 adjacent the second segment 1522 with a matching locking hole 20; the end of the third segment 1523 adjacent the fourth segment 1524 is provided with a mounting groove 18 with a hinged locking member 19, and the end of the fourth segment 1524 adjacent the third segment 1523 with a matching locking hole 20. Each locking member can be hidden in its mounting groove; when the locking member is needed, it is rotated and snapped onto the locking hole. Specifically, the locking member 19 may be a strip-shaped member having one projection sized to fit the locking hole; pressing the projection into the locking hole fixes the positions of the two adjacent segments (for example, the first segment and the second segment) and prevents relative rotation. Through the cooperation of the first segment 1521, the second segment 1522, the third segment 1523, and the fourth segment 1524, this portion can be folded into the shape shown in the inline figure
(PCTCN2018094072-appb-000001),
with the locking members 19 fixing the relative positions of the segments. Soft material may further be provided on the bottom of this structure. When the support is to be used as a shoulder-mounted camera rig, this portion is placed on the user's shoulder, and by gripping one of the three support legs as the handheld part of the shoulder rig, switching from a fixed support to a shoulder rig is achieved quickly and conveniently.
In addition, the applicant found that with most telescopic support legs the telescopic portion is pulled out by hand to adjust the extended length, but that distance is uncontrolled and largely arbitrary, so adjustment is often inconvenient, particularly when the extended length needs fine-tuning. The applicant therefore also optimized the structure of the support legs 16. As shown in FIG. 12, the bottom end of each support leg 16 of this embodiment is further connected with a distance-adjusting device 21. The distance-adjusting device 21 includes a bearing ring 211 mounted at the bottom of the support leg 16, a rotating ring 212 connected with the bearing ring 211, a tube body 213, a screw 214, a threaded sleeve 215, and a support rod 216. One end of the tube body 213 is provided with a plug 217; one part of the screw 214 is mounted in the tube body 213 through the plug 217, the plug 217 being provided with an internal thread matching the screw 214, and the other part of the screw 214 is connected with the rotating ring 212. One end of the threaded sleeve 215 is mounted in the tube body 213 and threadedly connected with the screw 214, and the other end of the threaded sleeve 215 extends out of the tube body 213 and is fixedly connected with the support rod 216. The inner wall of the tube body 213 is provided with a projection 218, and the outer side wall of the threaded sleeve 215 is provided, along its length direction, with a slide way 219 matching the projection. The tube body 213 includes adjacent first and second portions 2131 and 2132, the inner diameter of the first portion 2131 being smaller than that of the second portion 2132; the plug 217 is set on the outer end of the second portion 2132, and the end of the threaded sleeve 215 adjacent the screw 214 is provided with a limit end 2151 whose outer diameter is larger than the inner diameter of the first portion. Turning the rotating ring 212 turns the screw 214 within the tube body 213 and transmits the rotational tendency to the threaded sleeve 215; since the engagement of the projection 218 with the slide way 219 prevents the sleeve from rotating, the rotation is converted into outward linear movement, which in turn drives the support rod 216 and achieves fine adjustment of the length at the bottom end of the support leg, making it easy for the user to level the support and its mounting seat and providing a sound basis for subsequent shooting.
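The screw-and-sleeve mechanism converts ring rotation into a controlled linear travel of the support rod: one full turn advances the sleeve by one thread pitch. The pitch value below is an assumption for illustration only; the patent does not specify dimensions.

```python
# Travel of the non-rotating threaded sleeve per rotation of ring 212.
# pitch_mm is an assumed thread pitch, not a value from the patent.

def support_extension_mm(turns, pitch_mm=0.5):
    """Linear extension of sleeve 215 (and support rod 216) for a
    given number of full rotations of the adjusting ring."""
    return turns * pitch_mm
```

With a 0.5 mm pitch, four turns extend the leg by 2 mm, and a quarter turn gives a 0.125 mm adjustment, which is the fine control the hand-pulled telescopic legs lack.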
From the above description of the embodiments, those skilled in the art will clearly understand that the embodiments can be implemented by software plus a necessary general-purpose hardware platform, or, of course, by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing over the prior art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, the computer-readable recording medium including any mechanism for storing or transmitting information in a form readable by a machine (for example, a computer). For example, a machine-readable medium includes read-only memory (ROM), random-access memory (RAM), magnetic-disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals). The computer software product includes a number of instructions for causing a computing device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments merely illustrate, rather than limit, the technical solutions of the embodiments of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A method for generating a three-dimensional face model, comprising:
    recognizing a face image in a picture;
    obtaining, through a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points have a correspondence with second feature points on a two-dimensional standard map, the two-dimensional standard map being an unfolded topological map of a standard three-dimensional model;
    mapping the feature information of the first feature points onto the two-dimensional standard map based on the correspondence, to generate a two-dimensional face image map;
    applying the two-dimensional face image map to the standard three-dimensional model, to obtain a three-dimensional face image corresponding to the face image.
  2. The method according to claim 1, wherein the feature information comprises texture values, and mapping the feature information of the first feature points onto the two-dimensional standard map based on the correspondence comprises:
    deforming the face image based on the correspondence;
    finding, on the two-dimensional standard map and based on the correspondence, the second feature points corresponding to the deformed first feature points;
    mapping the texture values corresponding to the deformed first feature points onto the second feature points.
  3. The method according to claim 2, wherein applying the two-dimensional face image map to the standard three-dimensional model to obtain the three-dimensional face image corresponding to the face image comprises:
    mapping the two-dimensional face image map onto the standard three-dimensional model based on the mapping relationship between the two-dimensional standard map and the standard three-dimensional model, to obtain the three-dimensional face image corresponding to the face image.
  4. The method according to any one of claims 1 to 3, further comprising: preparing a plurality of training samples according to the second feature points on the two-dimensional standard map, and training the feature recognition model with the training samples.
  5. The method according to any one of claims 1 to 3, wherein recognizing the face image in the picture comprises:
    recognizing face key points in the picture using a face recognition model, to obtain coordinate positions of the key points;
    determining the face image according to the coordinate positions of the key points.
  6. An apparatus for generating a three-dimensional face model, comprising:
    a recognition module configured to recognize a face image in a picture;
    an obtaining module configured to obtain, through a feature recognition model, first feature points on the face image and feature information of the first feature points, wherein the first feature points have a correspondence with second feature points on a two-dimensional standard map, the two-dimensional standard map being an unfolded topological map of a standard three-dimensional model;
    a mapping module configured to map the feature information of the first feature points onto the two-dimensional standard map based on the correspondence, to generate a two-dimensional face image map;
    a generating module configured to apply the two-dimensional face image map to the standard three-dimensional model, to obtain a three-dimensional face image corresponding to the face image.
  7. The apparatus according to claim 6, wherein the feature information comprises texture values, and the mapping module comprises:
    a deformation sub-module configured to deform the face image based on the correspondence;
    a finding sub-module configured to find, on the two-dimensional standard map and based on the correspondence, the second feature points corresponding to the deformed first feature points;
    a mapping sub-module configured to map the texture values corresponding to the deformed first feature points onto the second feature points.
  8. The apparatus according to claim 7, wherein the generating module is specifically configured to map the two-dimensional face image map onto the standard three-dimensional model based on the mapping relationship between the two-dimensional standard map and the standard three-dimensional model, to obtain the three-dimensional face image corresponding to the face image.
  9. The apparatus according to any one of claims 6 to 8, further comprising:
    a training module configured to prepare a plurality of training samples according to the second feature points on the two-dimensional standard map, and to train the feature recognition model with the training samples.
  10. An electronic device, comprising: at least one processor; and
    a memory communicatively connected with the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for generating a three-dimensional face model according to any one of claims 1 to 5.
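The claimed pipeline can be sketched in a few lines: landmark (first feature point) texture values from the face image are copied, through the landmark correspondence, onto the matching second feature points of the 2D standard (UV) map, which the standard 3D model then samples. This is a minimal toy sketch; all names, coordinates, and texture values are illustrative, and a real implementation would use dense warping rather than per-landmark copying.

```python
# Toy sketch of the claimed mapping step: first feature points on the
# face image -> second feature points on the 2D standard map.
# Landmark positions and texture values below are made up.

def build_uv_texture(face_landmarks, uv_landmarks, face_texture, uv_size):
    """face_landmarks[i] corresponds to uv_landmarks[i]. face_texture
    maps a landmark index to its texture (grey) value taken from the
    face image. Returns a sparse UV texture keyed by the coordinate of
    each second feature point on the standard map."""
    uv_texture = {}
    for i, (u, v) in enumerate(uv_landmarks):
        assert 0 <= u < uv_size and 0 <= v < uv_size
        uv_texture[(u, v)] = face_texture[i]  # copy the texture value
    return uv_texture

landmarks_img = [(120, 80), (200, 80), (160, 150)]  # eyes and nose (toy)
landmarks_uv = [(64, 40), (192, 40), (128, 100)]    # their UV positions
textures = {0: 0.8, 1: 0.8, 2: 0.6}                 # sampled grey values
uv_tex = build_uv_texture(landmarks_img, landmarks_uv, textures, 256)
```

The resulting `uv_tex` plays the role of the two-dimensional face image map; applying it to the standard 3D model is then a fixed UV lookup, since the standard map is the model's unfolded topology.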
PCT/CN2018/094072 2018-04-18 2018-07-02 Method and apparatus for generating a three-dimensional face model, and electronic device WO2019200719A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810347796.6 2018-04-18
CN201810347796.6A CN108596827B (zh) 2018-04-18 2018-04-18 Method and apparatus for generating a three-dimensional face model, and electronic device

Publications (1)

Publication Number Publication Date
WO2019200719A1 true WO2019200719A1 (zh) 2019-10-24

