CN108171803B - Image making method and related device - Google Patents

Image making method and related device

Info

Publication number
CN108171803B
CN108171803B (application CN201711161804.XA)
Authority
CN
China
Prior art keywords
image
printing
person
camera
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711161804.XA
Other languages
Chinese (zh)
Other versions
CN108171803A (en)
Inventor
刘岱昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Langxing Digital Technology Co ltd
Original Assignee
Shenzhen Langxing Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Langxing Digital Technology Co ltd filed Critical Shenzhen Langxing Digital Technology Co ltd
Priority to CN201711161804.XA priority Critical patent/CN108171803B/en
Publication of CN108171803A publication Critical patent/CN108171803A/en
Application granted granted Critical
Publication of CN108171803B publication Critical patent/CN108171803B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The embodiment of the application discloses an image making method, which comprises the following steps: acquiring a first person image captured by a camera at a first position, and identifying the identification point closest to the top of the head in the first person image; calculating, by the terminal device, the height difference between the top of the head in the first person image and the identification point; determining the adjustment direction of the camera according to the identification point, determining the adjustment height of the camera according to the height difference, and adjusting the camera to a second position; and acquiring a second person image captured by the camera at the second position, and extracting a face image from it. The embodiment of the application also provides the related terminal device. With this embodiment, the height of the camera can be adjusted automatically according to the user's height, improving the quality of the captured image; the image can then be used to make an entertainment poster, improving the image's appeal and the user's experience.

Description

Image making method and related device
Technical Field
The present application relates to the field of image processing, and in particular, to a method for producing an entertainment poster and a related device.
Background
With the development of science and technology, cameras have become increasingly widespread, and users' expectations of their functions have grown accordingly. For example, some shopping malls and entertainment venues install terminal devices with a shooting function so that visitors can take photos, leaving them with higher-quality images and adding to their enjoyment.
However, because of position and angle constraints, the cameras in some terminal devices sometimes cannot capture satisfactory images, and the captured images are not subsequently utilized, so they cannot be applied to further entertainment projects, which reduces the visitors' experience.
Disclosure of Invention
The invention provides an image making method and terminal device that automatically adjust the height of a camera according to the user's height, improving the quality of the captured image, and that use the image to make an entertainment poster, thereby improving the image's appeal and the user's experience.
In a first aspect, the present invention provides an image producing method, including:
acquiring a first person image captured by a camera at a first position, and identifying the identification point nearest above the top of the head in the first person image;
acquiring the height difference between the top of the head in the first person image and the identification point;
determining the adjustment direction of the camera according to the identification point, determining the adjustment height of the camera according to the height difference, and adjusting the camera to a second position;
and acquiring a second person image captured by the camera at the second position, and extracting a face image from the second person image.
In a second aspect, the present invention provides a terminal device for image production, comprising:
an acquisition unit, configured to acquire a first person image and identify the identification point closest to the top of the head in the first person image, the first person image being captured by a camera at a first position;
a calculation unit, configured to calculate the height difference between the top of the head in the first person image and the identification point;
a determining unit, configured to determine the adjustment direction of the camera according to the identification point, determine the adjustment height of the camera according to the height difference, and adjust the camera to a second position;
and an extraction unit, configured to extract the face image of a second person image, the second person image being captured by the camera at the second position.
In a third aspect, embodiments of the present application provide a terminal device, including one or more processors, one or more memories, one or more transceivers, and one or more programs stored in the memories and configured to be executed by the one or more processors, the programs including instructions for performing the steps in the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform a method according to the first aspect.
By adopting the embodiment of the application, the following beneficial effects are achieved:
In the prior art, entertainment machines in shopping malls capture images of users at a fixed position, without taking differences in user height into account. In this application, a series of identification points is arranged on the back panel of the seat in the image capture area, so that the height of the entertainment machine's camera can be adjusted automatically according to the user's height and a high-quality image can be captured from the most suitable position.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image forming method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating the arrangement of the identification points of the image capturing area provided in the embodiment of the present application;
FIG. 3 is a schematic diagram of a 3D model diagram selection icon provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of another image production method provided in the embodiments of the present application;
fig. 5 is a schematic structural diagram of another terminal device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of another terminal device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a processing unit according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Hereinafter, some terms in the present invention are explained to facilitate understanding by those skilled in the art.
A terminal device is a device that provides voice and/or data connectivity to a user, e.g., a handheld device or a vehicle-mounted device with wireless connectivity. Common terminal devices include, for example: mobile phones, tablet computers, notebook computers, palmtop computers, Mobile Internet Devices (MIDs), and wearable devices such as smart watches, smart bracelets, and pedometers.
First, referring to fig. 1, fig. 1 is a schematic flow chart of an image manufacturing method provided in an embodiment of the present application, where the method includes:
step 101: the method comprises the steps that terminal equipment obtains a first person image, and identifies a mark point which is closest to the overhead side in the first person image, wherein the first person image is obtained by shooting at a first position through a camera.
The first position is the fixed position of the camera when it is not in operation, and is determined by the user's sitting height and the height of the seat in the image capture area.
For example, among people over three years old, the minimum sitting height is typically 0.45 m and the maximum is typically 1.05 m. If the seat height is set to 0.35 m, the setting range for the camera's first position is 0.8 m to 1.4 m; specifically, the first position of the camera may be set at 1.0 m.
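The arithmetic above can be sketched in a few lines (the function name is illustrative; the values are those from the example):

```python
# Derive the camera's first-position range (in metres) from the sitting-height
# extremes and the seat height, as in the example above.
def first_position_range(min_sitting, max_sitting, seat_height):
    """Camera first-position bounds: sitting-height extremes plus seat height."""
    return (min_sitting + seat_height, max_sitting + seat_height)

low, high = first_position_range(0.45, 1.05, 0.35)
low, high = round(low, 2), round(high, 2)
# low, high come out at 0.8 and 1.4, matching the 0.8 m to 1.4 m range above.
```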
Further, the camera captures a first person image at the first position, and the identification points in the first person image are recognized. It can be understood that a plurality of identification points may be disposed on the back panel behind the seat in the capture area, with identification points at different positions representing different heights.
Furthermore, the plurality of identification points on the back panel behind the seat lie along a vertical straight line, and the identification points are distinguishable from one another, for example by shape or by color.
Further, when the terminal device recognizes the identification points in the first person image, if all the identification points on the seat can be detected, this indicates that the user is not blocking any identification point and has moved out of the image capture area, so the terminal device cannot normally capture the user's first person image. Prompt information is then displayed on the terminal's display interface, asking the user to adjust position and sit at the center of the seat, so that the camera can better capture the user's first person image.
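The visibility check just described can be sketched as follows (a minimal illustration; the marker count matches the Fig. 2 example, and the function name is an assumption):

```python
TOTAL_MARKERS = 6  # identification points on the seat back panel (Fig. 2 example)

def user_out_of_area(detected_markers, total=TOTAL_MARKERS):
    """If every marker is visible, nothing is occluded: the user has left
    the capture area and a reposition prompt should be shown."""
    return len(set(detected_markers)) == total

# A seated user occludes the lower markers, so only some are detected:
user_out_of_area([5, 6])               # False: capture can proceed
user_out_of_area([1, 2, 3, 4, 5, 6])   # True: prompt the user to re-seat
```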
For example, as shown in fig. 2, six identification points are disposed on the back panel behind the seat in the capture area: the first identification point represents a height of 0.6 m, the second 0.8 m, the third 1.0 m, the fourth 1.2 m, the fifth 1.4 m, and the sixth 1.6 m. The first position is set at the 1.0 m height corresponding to the third identification point, i.e. the camera is initially aligned with the third identification point. When the user sits at the center of the seat in the image capture area, the camera starts to capture the user's first person image, and the identification points in it are recognized. It can be understood that, because the user blocks some identification points, the first person image may contain only the sixth identification point, or the sixth and fifth identification points, and so on.
For example, if the fourth, fifth, and sixth identification points are included in the first person image, the fourth identification point is the one closest to the top of the user's head.
Step 102: the terminal equipment calculates the height difference between the top of the head in the first person image and the identification point.
In calculating the height difference between the identification point and the top of the user's head in the first person image, the height difference is determined from the distance between the top of the head and the identification point in the image; it may be, for example, 0.01 m, 0.02 m, 0.04 m, 0.06 m, or another value.
Step 103: and the terminal equipment determines the camera adjusting direction according to the identification point, determines the adjusting height of the camera according to the height difference, and adjusts the camera to a second position.
The specific implementation of determining the adjustment direction of the camera according to the identification point includes: after the terminal device recognizes the identification point, it compares it with the third identification point; if the recognized point lies below the third identification point, the adjustment direction of the camera is downward, and if it lies above the third identification point, the adjustment direction is upward.
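A minimal sketch of this direction rule (the marker indices and names are illustrative; index 3 corresponds to the third identification point the camera initially faces):

```python
REFERENCE_INDEX = 3  # the camera is initially aligned with the third marker

def adjustment_direction(nearest_marker_index):
    """Direction rule: move toward the marker nearest the user's head top."""
    if nearest_marker_index > REFERENCE_INDEX:
        return "up"
    if nearest_marker_index < REFERENCE_INDEX:
        return "down"
    return "none"

adjustment_direction(4)  # "up": the fourth marker sits above the reference
adjustment_direction(2)  # "down": the second marker sits below it
```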
The specific implementation of determining the adjustment height of the camera according to the height difference includes: the terminal device reads the spacing h between any two adjacent identification points in the first person image, and, given that the real spacing between any two adjacent identification points on the back panel is 0.2 m, the scale factor of the first person image is determined as k = 0.2/h.
After the terminal device obtains the height difference h1 between the top of the head in the first person image and the identification point, the adjustment height of the camera can be calculated as h2 = k × h1, and the camera is adjusted to the second position according to the adjustment direction and the adjustment height.
For example, if the identification point the terminal device recognizes as closest to the top of the user's head is the fourth identification point, the adjustment direction of the camera is determined to be upward. If the terminal device reads that the spacing h between any two adjacent identification points in the first person image is 0.05 m, the scale factor is k = 0.2/0.05 = 4. If the terminal device reads that the height difference h1 between the top of the user's head and the identification point is 0.04 m, the adjustment distance of the camera is h2 = 4 × 0.04 = 0.16 m; the terminal device adjusts the camera upward by 0.16 m, and the position of the adjusted camera is taken as the second position.
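The scale-factor arithmetic of this example can be checked with a few lines (a sketch; units are metres, and the 0.2 m spacing constant comes from the description):

```python
REAL_SPACING = 0.2  # real distance between adjacent markers on the back panel

def camera_adjustment(image_spacing_h, image_height_diff_h1):
    """Scale factor k = 0.2 / h, then adjustment height h2 = k * h1."""
    k = REAL_SPACING / image_spacing_h
    return k * image_height_diff_h1

h2 = camera_adjustment(0.05, 0.04)
# k = 0.2 / 0.05 = 4, so h2 = 4 * 0.04 = 0.16 m, the upward adjustment above.
```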
Step 104: the terminal equipment acquires a second person image, and extracts a face image of the second person image, wherein the second person image is obtained by shooting at a second position by the camera.
After the camera is adjusted to the second position, it captures a second person image of the user there. The terminal device performs image segmentation on the second person image, dividing it into two regions: the person image and the background image. The segmentation may use, for example, threshold-based, edge-based, or region-based methods.
Further, the specific implementation of extracting the face image from the second person image includes: converting the second person image into a grayscale image; determining a grayscale threshold, which is the boundary value between the grayscale of the background image and the grayscale of the person image; comparing the grayscale value of each pixel in the second person image with the threshold; and classifying each pixel according to the comparison result. Pixels whose grayscale value is greater than the threshold are assigned to the person region and their values are set to 1; pixels whose grayscale value is less than the threshold are assigned to the background region and their values are set to 0.
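The thresholding step amounts to a per-pixel comparison; a minimal sketch on a toy grayscale array (pure Python, no imaging library assumed):

```python
def threshold_segment(gray, threshold):
    """Binary mask: 1 for person pixels (above threshold), 0 for background."""
    return [[1 if px > threshold else 0 for px in row] for row in gray]

toy = [[12, 200, 190],
       [15, 210, 180],
       [10,  30,  20]]
mask = threshold_segment(toy, 128)
# mask == [[0, 1, 1], [0, 1, 1], [0, 0, 0]]: the bright region is the person.
```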
Further, when the user blinks, turns the head, and so on, the second person image captured by the camera may show closed eyes, blurred facial features, etc. To ensure that the captured second person image is a normal person image, the camera may be controlled to capture a plurality of images continuously within a predetermined time T after being adjusted to the second position, for example within 3 s of reaching the second position. Image recognition and image quality evaluation are then performed on these images, and the image with the best quality is selected as the second person image.
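Selecting the best of a burst reduces to a maximum over a quality score; the score below is a hypothetical stand-in (intensity spread as a crude sharpness proxy), not the evaluation the patent specifies:

```python
def pick_best_frame(frames, quality_score):
    """Return the frame whose quality score is highest."""
    return max(frames, key=quality_score)

# Toy burst of three "frames"; the middle one has the largest spread.
frames = [[10, 12, 11], [5, 200, 90], [40, 60, 50]]
best = pick_best_frame(frames, quality_score=lambda f: max(f) - min(f))
# best == [5, 200, 90]
```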
In an example, after extracting the face image of the second person image, the method further includes:
acquiring position parameters of the face image, wherein the position parameters include: a position parameter of the eyes and a position parameter of the hair;
sequentially setting the selected 3D model maps at the corresponding positions of the face image according to the position parameters to generate a head image, wherein the 3D model maps include at least one of the following: a 3D model of hair, a 3D model of glasses, or a 3D model of a hat.
The specific implementation of sequentially setting the 3D model maps at the corresponding positions of the face image includes: setting each selected 3D model map at the corresponding position, according to the position parameters of the face image, in the order in which the user selected the 3D model maps on the map selection interface of the terminal device.
For example, as shown in fig. 3, when the user first clicks the hair 3D model map icon on the map selection interface, three hair type selection icons pop up. If the user clicks the mushroom-head selection icon, the terminal device sets the selected mushroom-head 3D model map at the corresponding position of the face image according to the position parameters of the hair. If the user then clicks the glasses 3D model map icon, three glasses type selection icons pop up; if the user clicks the sunglasses selection icon, the terminal device sets the selected sunglasses 3D model map at the corresponding position according to the position parameters of the eyes. If the user thirdly clicks the hat 3D model map icon, three hat type selection icons pop up; if the user clicks the peaked-cap selection icon, the terminal device sets the selected peaked-cap 3D model map at the corresponding position according to the position parameters of the hair.
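The selection-order placement can be sketched as a lookup from prop type to facial anchor (the names and pixel coordinates are illustrative assumptions, not from the patent):

```python
# Each prop type anchors to a facial position parameter: hair props to the
# hair position, glasses to the eye position.
ANCHOR_FOR = {"hair": "hair", "hat": "hair", "glasses": "eyes"}

def place_props(selection_order, face_positions):
    """Pair each selected prop with its anchor position, in click order."""
    return [(prop, face_positions[ANCHOR_FOR[prop]]) for prop in selection_order]

placements = place_props(
    ["hair", "glasses", "hat"],            # mushroom head, sunglasses, peaked cap
    {"hair": (64, 10), "eyes": (64, 48)},  # hypothetical pixel coordinates
)
# placements == [('hair', (64, 10)), ('glasses', (64, 48)), ('hat', (64, 10))]
```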
In an example, after generating the head image, the method further comprises:
and acquiring a background image and position parameters of the head in the background image, setting the head image at the position of the head in the background image, and generating a second person image.
For example, the user clicks a "snow train" icon on the map selection interface of the terminal device, and the terminal device reads the background image made from the "snow train" poster. The background image has been processed in advance: a person to be replaced has been designated and that person's head removed. The terminal device obtains the position parameter of the head of the person to be replaced in the background image according to the mapping relation between the background image and the person to be replaced, sets the head image at the position of the head in the background image according to that position parameter, and generates the second person image.
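A sketch of the poster lookup described above (the poster name, stored coordinates, and data layout are all illustrative assumptions):

```python
# Pre-processed posters: person-to-replace already removed, head position stored.
POSTERS = {"snow train": {"head_position": (132, 40)}}

def head_slot(poster_name):
    """Where the generated head image should be pasted in the background."""
    return POSTERS[poster_name]["head_position"]

head_slot("snow train")  # (132, 40)
```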
In an example, after the generating of the second personal image, the method further includes:
displaying a printing icon on a first display interface, wherein the printing icon is used for reminding a user of selecting a printing mode, and the printing mode at least comprises one of the following modes: printing one second person image directly, printing a plurality of second person images directly, printing one second person image in color, or printing a plurality of second person images in color.
And sending the second person image to a printing end, and printing the second person image according to the printing mode selected by the user.
In one example, after the printing of the second personal image in the user-selected printing mode, the method further includes:
if a request instruction for finishing the operation is received, adjusting the camera to the first position;
and if a request instruction for continuously manufacturing the image is received, displaying a picture selection icon on a second display interface, wherein the picture selection icon comprises a 3D model picture selection icon and a background image selection icon.
In the prior art, entertainment machines in shopping malls capture images of users at a fixed position, without taking differences in user height into account. In this application, a series of identification points is arranged on the back panel of the seat in the image capture area, so that the height of the entertainment machine's camera can be adjusted automatically according to the user's height and a high-quality image can be captured from the most suitable position.
The embodiment of the present application further provides another more detailed method flow, as shown in fig. 4, the method includes:
step 401: the camera is used for acquiring a first person image at a first position.
Step 402: and the terminal equipment acquires the first person image acquired by the camera.
Step 403: and the terminal equipment identifies the nearest identification point above the top of the head in the first person image.
Step 404: and the terminal equipment acquires the height difference between the head of the first person image and the identification point.
Step 405: and the terminal equipment determines the adjusting parameters of the camera according to the identification points and the height difference.
Step 406: and the terminal equipment adjusts the camera to a second position according to the adjusting parameter.
Step 407: and the camera acquires a second person image at the second position.
Step 408: and the terminal equipment acquires the second character image acquired by the camera at a second position and extracts the face image of the second character image.
Step 409: the terminal equipment acquires the position parameters of the face image, and the position parameters comprise: a position parameter of the eyes and a position parameter of the hair.
Step 410: the terminal equipment sequentially sets the selected serialized 3D model graph at the corresponding position of the face image according to the position parameter to generate a head image, wherein the D model graph at least comprises one of the following images: a 3D model of hair, a 3D model of glasses, or a 3D model of a hat.
Step 411: the terminal equipment displays a printing icon, the printing icon is used for reminding a user of selecting a printing mode, and the printing mode at least comprises one of the following modes: printing one second person image directly, printing a plurality of second person images directly, printing one second person image in color, or printing a plurality of second person images in color.
Step 412: and the terminal equipment sends the second character image to a printing end and prints the second character image according to the printing mode selected by the user.
Step 413: and the terminal equipment judges whether to continue to make the image or not.
If yes, go to step 410;
if not, go to step 414.
Step 414: and the terminal equipment adjusts the camera to the first position.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 5, fig. 5 is a terminal device 500 according to an embodiment of the present application, including: at least one processor, at least one memory, and at least one communication interface; and one or more programs;
the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of:
acquiring a first person image captured by a camera at a first position, and identifying the identification point nearest above the top of the head in the first person image;
acquiring the height difference between the top of the head in the first person image and the identification point;
determining the adjustment direction of the camera according to the identification point, determining the adjustment height of the camera according to the height difference, and adjusting the camera to a second position;
and acquiring a second person image captured by the camera at the second position, and extracting a face image from the second person image.
In an example, the program is further configured to, after extracting the face image of the second person image, execute the instructions of:
acquiring position parameters of the face image, wherein the position parameters comprise: a position parameter of the eyes and a position parameter of the hair;
sequentially setting the selected 3D model maps at the corresponding positions of the face image according to the position parameters to generate a head image, wherein the 3D model maps include at least one of the following: a 3D model of hair, a 3D model of glasses, or a 3D model of a hat.
In an example, after the step of sequentially arranging the selected serialized 3D model maps at the positions corresponding to the face images and generating the head images, the program is further configured to execute the following steps:
and acquiring a background image and position parameters of the head in the background image, setting the head image at the position of the head in the background image, and generating a second person image.
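Setting the head image at the head position of the background image amounts to computing a paste region from the background's head position parameter and the head image's size. A minimal sketch follows; centre-anchored placement is an assumption, since the patent does not specify the anchor point.

```python
def paste_box(head_size, bg_head_pos):
    """Pixel box where the head image is placed in the background image.

    head_size: (width, height) of the generated head image.
    bg_head_pos: (x, y) position parameter of the head in the background image,
    treated here as the centre of the head region (an assumption).
    """
    w, h = head_size
    cx, cy = bg_head_pos
    left, top = cx - w // 2, cy - h // 2
    return (left, top, left + w, top + h)  # (left, top, right, bottom)
```

With an image library such as Pillow, the returned box could be passed to `Image.paste` to composite the head image into the background and produce the second person image.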
In an example, after the head image is set at the position of the head in the background image and the second person image is generated, the program is further configured to execute the following steps:
displaying a printing icon on a first display interface, wherein the printing icon is used for reminding a user of selecting a printing mode, and the printing mode at least comprises one of the following modes: printing one second person image directly, printing a plurality of second person images directly, printing one second person image in color, or printing a plurality of second person images in color.
And sending the second person image to a printing end, and printing the second person image according to the printing mode selected by the user.
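The four printing modes can be modelled as a small lookup that the printing end dispatches on. The mode identifiers and copy counts below are hypothetical; the patent says "a plurality" without fixing a number.

```python
# Hypothetical identifiers for the four printing modes listed above.
PRINT_MODES = {
    "mono_single":  {"color": False, "copies": 1},
    "mono_multi":   {"color": False, "copies": 4},   # "a plurality": count assumed
    "color_single": {"color": True,  "copies": 1},
    "color_multi":  {"color": True,  "copies": 4},
}

def build_print_job(image_id, mode):
    """Build the job sent to the printing end for the user-selected mode."""
    opts = PRINT_MODES[mode]
    return {"image": image_id, "color": opts["color"], "copies": opts["copies"]}
```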
In one example, the program is further configured to, after printing the second person image according to the printing mode selected by the user, execute instructions for:
if a request instruction for finishing the operation is received, adjusting the camera to the first position;
and if a request instruction for continuously manufacturing the image is received, displaying a picture selection icon on a second display interface, wherein the picture selection icon comprises a 3D model picture selection icon and a background image selection icon.
It should be noted that the specific implementation of the content described in this embodiment may refer to the method described above, and details are not repeated here.
The above description has introduced the solution of the embodiments of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to implement the above functions, the terminal device includes corresponding hardware structures and/or software modules for performing each function. Those of skill in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware, or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the terminal device may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of integrated units, fig. 6 shows a block diagram of a possible functional-unit composition of the terminal device involved in the above embodiments. The terminal device 600 includes: a processing unit 601, a communication unit 602, and a storage unit 603; the processing unit 601 includes an acquisition unit 6011, a calculation unit 6012, a determination unit 6013, an extraction unit 6014, a head image generation unit 6015, a person image generation unit 6016, a printing unit 6017, and an adjustment unit 6018, as illustrated in fig. 7. The storage unit 603 is used to store program codes and data of the terminal device. The communication unit 602 is configured to support communication between the terminal device and other devices. The above units (the acquisition unit 6011, the calculation unit 6012, the determination unit 6013, the extraction unit 6014, the head image generation unit 6015, the person image generation unit 6016, the printing unit 6017, and the adjustment unit 6018) are used to perform the relevant steps of the above-described method.
The Processing Unit 601 may be a Processor or a controller (e.g., a Central Processing Unit (CPU), a general purpose Processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof). The storage unit 603 may be a memory, and the communication unit 602 may be a transceiver, a transceiver circuit, a radio frequency chip, a communication interface, or the like.
The acquisition unit 6011 is configured to acquire the first person image and identify the identification point closest to the top of the head in the first person image, where the first person image is captured by a camera at a first position;
the calculation unit 6012 is configured to calculate the height difference between the top of the head in the first person image and the identification point;
the determining unit 6013 is configured to determine an adjustment direction of the camera according to the identification point, determine an adjustment height of the camera according to the height difference, and adjust the camera to a second position;
the extracting unit 6014 is configured to extract a face image of the second person image, where the second person image is obtained by shooting at a second position with the camera.
In an example, after the extracting unit 6014 extracts the face image of the second person image, the terminal device further includes:
a head image generating unit 6015, configured to acquire position parameters of the face image, where the position parameters include: a position parameter of the eyes and a position parameter of the hair;
in an example, the head image generation unit 6015 is further configured to:
sequentially setting the serialized 3D model maps at the corresponding positions of the face image according to the position parameters to generate a head image, wherein the 3D model maps include at least one of the following: a 3D model of hair, a 3D model of glasses, or a 3D model of a hat.
In an example, after the head image generation unit 6015 generates the head image, the terminal device further includes:
a person image generating unit 6016, configured to obtain a background image and a position parameter of the head in the background image, set the head image at the position of the head in the background image according to the position parameter of the head in the background image, and generate a second person image.
In an example, after the person image generation unit 6016 generates the second person image, the terminal device further includes:
a printing unit 6017, configured to display a printing icon on the first display interface, where the printing icon is used to remind a user to select a printing mode, and the printing mode at least includes one of the following: printing one second person image directly, printing a plurality of second person images directly, printing one second person image in color, or printing a plurality of second person images in color.
In one example, the printing unit 6017 is further configured to:
and sending the second person image to a printing end, and printing the second person image according to the printing mode selected by the user.
In an example, after the printing unit 6017 prints the second person image according to the printing mode selected by the user, the terminal device further includes:
an adjusting unit 6018, configured to adjust the camera to the first position if a request instruction for ending the operation is received.
In an example, the adjusting unit 6018 is further configured to:
and if a request instruction for continuously manufacturing the image is received, displaying a picture selection icon on a second display interface, wherein the picture selection icon comprises a 3D model picture selection icon and a background image selection icon.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes a terminal device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as set out in the above method embodiments. The computer program product may be a software installation package, said computer comprising terminal equipment.
The steps of a method or algorithm described in the embodiments of the present application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a Compact Disc Read Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in an access network device, a target network device, or a core network device. Of course, the processor and the storage medium may also reside as discrete components in an access network device, a target network device, or a core network device.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Video Disc (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the embodiments of the present application in further detail, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (10)

1. An image production method, comprising:
acquiring a first person image, and identifying the identification point closest to the top of the head in the first person image, wherein the first person image is captured by a camera at a first position;
the terminal equipment calculates the height difference between the top of the head in the first person image and the identification point;
determining the adjusting direction of the camera according to the identification point, determining the adjusting height of the camera according to the height difference, and adjusting the camera to a second position;
the terminal equipment acquires a second person image, and extracts a face image of the second person image, wherein the second person image is obtained by shooting at a second position by the camera;
wherein the determining the adjustment height of the camera according to the height difference comprises: the terminal device reads the distance between any two adjacent identification points in the first person image, determines the scaling of the first person image in combination with the real distance between any two adjacent identification points in the background image, and determines the adjustment height of the camera according to the product of the height difference between the top of the head and the identification point in the first person image and the scaling.
2. The method of claim 1, further comprising:
acquiring position parameters of the face image, wherein the position parameters comprise: a position parameter of the eyes and a position parameter of the hair;
sequentially setting the serialized 3D model maps at the corresponding positions of the face image according to the position parameters to generate a head image, wherein the 3D model maps include at least one of the following: a 3D model of hair, a 3D model of glasses, or a 3D model of a hat.
3. The method of claim 2, further comprising:
acquiring a background image and position parameters of the head in the background image, and setting the head image at the position of the head in the background image according to the position parameters of the head in the background image to generate a second person image.
4. The method of claim 3, wherein after generating the second person image, the method further comprises:
displaying a printing icon on a first display interface, wherein the printing icon is used for reminding a user of selecting a printing mode, and the printing mode at least comprises one of the following modes: printing one second person image directly, printing a plurality of second person images directly, printing one second person image in color, or printing a plurality of second person images in color;
and sending the second person image to a printing end, and printing the second person image according to the printing mode selected by the user.
5. The method of claim 4, wherein after the printing of the second person image according to the printing mode selected by the user, the method further comprises:
if a request instruction for finishing the operation is received, adjusting the camera to the first position;
and if a request instruction for continuously manufacturing the image is received, displaying a picture selection icon on a second display interface, wherein the picture selection icon comprises a 3D model picture selection icon and a background image selection icon.
6. A terminal device for image production, comprising:
an acquisition unit, configured to acquire a first person image and identify the identification point closest to the top of the head in the first person image, wherein the first person image is captured by a camera at a first position;
a calculation unit, configured to calculate the height difference between the top of the head in the first person image and the identification point;
a determination unit, configured to determine the adjustment direction of the camera according to the identification point, determine the adjustment height of the camera according to the height difference, and adjust the camera to a second position;
an extraction unit, configured to extract a face image from a second person image, wherein the second person image is captured by the camera at the second position;
wherein the determination unit is specifically configured to read the distance between any two adjacent identification points in the first person image, determine the scaling of the first person image in combination with the real distance between any two adjacent identification points in the background image, and determine the adjustment height of the camera according to the product of the height difference between the top of the head and the identification point in the first person image and the scaling.
7. The terminal device according to claim 6, wherein the terminal device further comprises:
a head image generation unit, configured to acquire position parameters of the face image, where the position parameters include: a position parameter of the eyes and a position parameter of the hair;
the head image generating unit is further configured to sequentially set the serialized 3D model map at a position corresponding to the face image according to the position parameter, so as to generate a head image, where the 3D model map at least includes one of: a 3D model of hair, a 3D model of glasses, or a 3D model of a hat.
8. The terminal device according to claim 7, wherein the terminal device further comprises:
and the person image generating unit is used for acquiring the background image and the position parameters of the head in the background image, setting the head image at the position of the head in the background image according to the position parameters of the head in the background image and generating a second person image.
9. The terminal device according to claim 8, wherein after the person image generating unit generates the second person image, the terminal device further comprises:
the printing unit is used for displaying a printing icon on a first display interface, the printing icon is used for reminding a user of selecting a printing mode, and the printing mode at least comprises one of the following modes: printing one second person image directly, printing a plurality of second person images directly, printing one second person image in color, or printing a plurality of second person images in color;
the printing unit is further configured to send the second person image to a printing end, and print the second person image according to the printing mode selected by the user.
10. The terminal device according to claim 9, wherein after the printing unit prints the second person image according to the printing mode selected by the user, the terminal device further comprises:
the adjusting unit is used for adjusting the camera to the first position if a request instruction for finishing the operation is received;
the adjusting unit is further configured to display a picture selection icon on a second display interface if a request instruction for continuing to produce the image is received, where the picture selection icon includes a 3D model picture selection icon and a background image selection icon.
CN201711161804.XA 2017-11-21 2017-11-21 Image making method and related device Active CN108171803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711161804.XA CN108171803B (en) 2017-11-21 2017-11-21 Image making method and related device


Publications (2)

Publication Number Publication Date
CN108171803A CN108171803A (en) 2018-06-15
CN108171803B true CN108171803B (en) 2021-09-21

Family

ID=62527128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711161804.XA Active CN108171803B (en) 2017-11-21 2017-11-21 Image making method and related device

Country Status (1)

Country Link
CN (1) CN108171803B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1503567A (en) * 2002-11-26 2004-06-09 Matsushita Electric Industrial Co Ltd Method and apparatus for processing image
CN203827438U (en) * 2014-02-21 2014-09-10 北京海鑫科金高科技股份有限公司 Portrait shooting system
CN104408702A (en) * 2014-12-03 2015-03-11 浩云星空信息技术(北京)有限公司 Image processing method and device
CN106419923A (en) * 2016-10-27 2017-02-22 南京阿凡达机器人科技有限公司 Height measurement method based on monocular machine vision
CN107197149A (en) * 2017-06-14 2017-09-22 深圳传音通讯有限公司 The generation method and device of certificate photograph

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014230051A (en) * 2013-05-22 2014-12-08 ソニー株式会社 Information processing apparatus, information processing method, and program
JP6418444B2 (en) * 2014-10-06 2018-11-07 フリュー株式会社 Photo sticker creating apparatus and image providing method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera; Eun Som Jeon et al.; Sensors 2015; 2015-12-31; pp. 6763-6788 *

Also Published As

Publication number Publication date
CN108171803A (en) 2018-06-15

Similar Documents

Publication Publication Date Title
US11893689B2 (en) Automated three dimensional model generation
US20190206031A1 (en) Facial Contour Correcting Method and Device
KR102279813B1 (en) Method and device for image transformation
CN111754415B (en) Face image processing method and device, image equipment and storage medium
CN104898832B (en) Intelligent terminal-based 3D real-time glasses try-on method
US7653220B2 (en) Face image creation device and method
KR20170008638A (en) Three dimensional content producing apparatus and three dimensional content producing method thereof
CN111008935B (en) Face image enhancement method, device, system and storage medium
JP7342366B2 (en) Avatar generation system, avatar generation method, and program
CN111357034A (en) Point cloud generation method, system and computer storage medium
CN111160309A (en) Image processing method and related equipment
CN107734207B (en) Video object transformation processing method and device and computing equipment
CN113850726A (en) Image transformation method and device
EP2863337B1 (en) Methods and systems for detecting biometric characteristics in an image
CN112733579A (en) Object reconstruction method, device, equipment and storage medium
CN107767326B (en) Method and device for processing object transformation in image and computing equipment
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium
CN108171803B (en) Image making method and related device
CN111988525A (en) Image processing method and related device
CN107492068B (en) Video object transformation real-time processing method and device and computing equipment
CN108010038B (en) Live-broadcast dress decorating method and device based on self-adaptive threshold segmentation
CN109670422A (en) Face datection information display method, device, equipment and storage medium
CN105631938B (en) Image processing method and electronic equipment
JP6650998B2 (en) Mirror, image display method and program
CN113709537B (en) User interaction method based on 5G television, 5G television and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant