WO2020215854A1 - Image rendering method, apparatus, electronic device, and computer-readable storage medium - Google Patents

Image rendering method, apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2020215854A1
WO2020215854A1 (PCT/CN2020/074443)
Authority
WO
WIPO (PCT)
Prior art keywords
parameter
image
target object
rendering
computer
Prior art date
Application number
PCT/CN2020/074443
Other languages
English (en)
French (fr)
Inventor
李润祥
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2020215854A1


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/005 — General purpose rendering architectures
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/11 — Region-based segmentation
    • G06T 7/194 — Segmentation involving foreground-background segmentation
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image
    • G06T 2207/10024 — Color image
    • G06T 2207/30 — Subject of image; Context of image processing
    • G06T 2207/30196 — Human being; Person

Definitions

  • the present disclosure relates to the field of information processing, and in particular, to a method, apparatus, electronic device, and computer-readable storage medium for rendering images.
  • the smart terminal also has powerful data processing capabilities.
  • the image obtained by the smart terminal can be processed in real time through the image segmentation algorithm to identify the target object in the captured image.
  • computer equipment such as smart terminals can process each frame of a video in real time through a human image segmentation algorithm and accurately identify the contour of a person object in the image and each key point of the person object, so as to determine the positions of the face, right hand, and other parts of the person object in the image. Such recognition is already accurate to the pixel level.
  • for example, a target width parameter can be preset for the face object; when the face of the person object in the image is round, rendering the face according to the target width parameter achieves a "face thinning" effect. However, performing the same "face thinning" rendering operation according to that target width parameter on a face object whose eyes are set farther apart may not achieve the beautification effect, or may even backfire. This is because, in the prior art, face objects in an image are rendered according to preset rendering parameters: the rendering method is not flexible enough and does not consider the differences between the face objects of different individuals.
  • the embodiments of the present disclosure provide a method, apparatus, electronic device, and computer-readable storage medium for rendering an image, which can correct the first parameter of the target object according to other parameters of the target object in the image and render the target object according to the corrected first parameter, making the rendering method more flexible.
  • an embodiment of the present disclosure provides a method for rendering an image, which includes: acquiring an image; determining a first parameter of a target object in the image; determining a second parameter of the target object in the image; correcting the first parameter according to the second parameter; and rendering the target object in the image according to the corrected first parameter.
  • an embodiment of the present disclosure provides an apparatus for rendering an image, comprising: an image acquisition module for acquiring an image; a first parameter determination module for determining a first parameter of a target object in the image; a second parameter determination module for determining a second parameter of the target object in the image; a correction module for correcting the first parameter according to the second parameter; and a rendering module for rendering the target object in the image according to the corrected first parameter.
  • an embodiment of the present disclosure provides an electronic device, including: a memory configured to store computer-readable instructions; and one or more processors configured to execute the computer-readable instructions such that, when the instructions are run, the processor implements the method for rendering an image of any one of the foregoing first aspect.
  • the embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer instructions which, when executed by a computer, cause the computer to execute the method for rendering an image of any one of the foregoing first aspect.
  • the present disclosure discloses a method, apparatus, electronic device, and computer-readable storage medium for rendering an image.
  • the method for rendering an image includes: acquiring an image; determining a first parameter of a target object in the image; determining a second parameter of the target object in the image; correcting the first parameter according to the second parameter; and rendering the target object in the image according to the corrected first parameter.
  • FIG. 1 is a flowchart of Embodiment 1 of the method for rendering an image provided by an embodiment of the disclosure;
  • FIG. 2 is a flowchart of Embodiment 2 of the method for rendering an image provided by an embodiment of the disclosure;
  • FIG. 3 is a schematic structural diagram of an embodiment of an image rendering apparatus provided by an embodiment of the disclosure.
  • Fig. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
  • the illustrations provided in the following embodiments only illustrate the basic concept of the present disclosure in a schematic manner.
  • the illustrations show only the components related to the present disclosure, rather than the number, shape, and size of the components in an actual implementation.
  • the type, quantity, and ratio of each component may vary in an actual implementation, and the component layout may also be more complicated.
  • FIG. 1 is a flowchart of Embodiment 1 of a method for rendering an image provided by an embodiment of the disclosure.
  • the method for rendering an image provided in this embodiment can be executed by an image rendering apparatus, which can be implemented as software, as hardware, or as a combination of software and hardware.
  • the image rendering apparatus includes a computer device (for example, a smart terminal), so that the method for rendering an image provided in this embodiment is executed by the computer device.
  • the method for rendering an image in an embodiment of the present disclosure includes the following steps:
  • Step S101 acquiring an image
  • in step S101, the image rendering apparatus acquires the image, so as to implement the method of rendering the image through the current and/or subsequent steps.
  • the apparatus for rendering an image may include a photographing device, in which case the image acquired in step S101 includes an image photographed by the photographing device. The apparatus may instead be communicatively connected with a photographing device, in which case acquiring the image in step S101 includes acquiring the image taken by the photographing device over the communication connection. The apparatus may also acquire the image from a preset storage location, so as to implement the method of rendering the image through the current and/or subsequent steps. The embodiments of the present disclosure place no limitation on the way the image is acquired.
  • obtaining an image in step S101 includes obtaining an image from the video.
  • Step S102 Determine the first parameter of the target object in the image
  • the target object includes a person object, or an object that includes a key part of the human body, for example, a face object, a facial-features object, a torso object, an arm object, and the like.
  • as mentioned above, computer equipment in the prior art has powerful data processing capabilities. The pixel area and/or key points of the target object in the image can be identified through image segmentation algorithms, and key point positioning technology can be used to recognize the key points of the target object in the image. Therefore, the apparatus for rendering the image in the embodiment of the present disclosure can determine the target object in the image and/or the first parameter of the target object based on an image segmentation algorithm and/or key point positioning technology.
  • the image in the embodiments of the present disclosure is composed of pixels, and each pixel in the image can be characterized by position parameters and color parameters. The aforementioned image segmentation algorithm can therefore determine the pixel region of the target object in the image based on the position parameters and/or color parameters of the pixels of the image, and the aforementioned key point positioning technology can match preset key point features (such as color features and/or shape features) against the position parameters and/or color parameters of the pixels of the image to determine the key points of the target object.
  • a typical representation method is to represent a pixel of an image by a five-tuple (x, y, r, g, b), where the coordinates x and y are the position parameters of the pixel, the color components r, g, and b are the values of the pixel in RGB space, and the color of the pixel is obtained by superimposing r, g, and b.
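As a minimal sketch of the five-tuple representation described above (the Python names here are our own illustration, not the patent's implementation):

```python
from typing import NamedTuple


class Pixel(NamedTuple):
    """Five-tuple (x, y, r, g, b): position parameters plus RGB color components."""
    x: int
    y: int
    r: int
    g: int
    b: int


def position(p: Pixel) -> tuple:
    # The coordinates x and y serve as the pixel's position parameters.
    return (p.x, p.y)


def color(p: Pixel) -> tuple:
    # Superimposing the r, g, and b components yields the pixel's color in RGB space.
    return (p.r, p.g, p.b)


p = Pixel(x=10, y=20, r=255, g=128, b=0)
```

Other encodings (polar or UV coordinates for position, Lab or CMY for color) would simply swap the fields of the tuple.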
  • the position parameter and color parameter of a pixel can also be expressed in other ways: for example, the position parameter can be expressed in polar coordinates or UV coordinates, and the color parameter can be expressed in Lab space or CMY space. The embodiments of the present disclosure do not limit this.
  • the target object and/or the first parameter of the target object are determined in the image based on an image segmentation algorithm.
  • a common image segmentation algorithm divides the image into regions based on the similarity or homogeneity of the color parameters of the pixels in the image, determines the pixels included in the merged region as the pixel region of the target object through region merging, and can then determine the key points of the target object and other first parameters of the target object based on that pixel region. Alternatively, a basic region of the target object can be determined according to the color feature and/or shape feature of the target object; then, starting from that basic region, the contour of the target object is found according to the discontinuity and abruptness of the color parameters of the target object, and spatial extension is performed according to the position of the contour. That is, image segmentation is performed according to the feature points, lines, and surfaces of the image to determine the contour of the target object; the region within the contour is the pixel region of the target object, and the key points of the target object and other first parameters of the target object can be determined based on that pixel region.
  • other image segmentation algorithms can also be used. The embodiments of the present disclosure do not limit the choice of image segmentation algorithm; any existing or future image segmentation algorithm can be used in the embodiments of the present disclosure to determine the target object in the image and/or the first parameter of the target object.
  • optionally, the target object and/or the first parameter of the target object is determined through key point positioning technology based on the color feature and/or shape feature of the target object. For example, if the target object includes a facial object of a human body, the contour key points of the facial object can be characterized by color features and/or shape features; feature extraction is then performed on the position parameters and/or color parameters of the pixels of the image according to those color and/or shape features to determine the contour key points of the facial object. Since a key point occupies only a very small area in the image (usually only a few to dozens of pixels), the area occupied on the image by the color and/or shape features corresponding to a key point is usually very limited and local.
  • after the contour key points are determined, the contour of the target object may be searched for based on the contour key points and the discontinuity and abruptness of the color parameters of the target object, or other first parameters of the target object may be determined based on the contour key points of the facial object.
  • the first parameter of the target object includes, but is not limited to, one or more of the following: a color parameter, position parameter, length parameter, width parameter, shape parameter, scale parameter, type parameter, expression parameter, posture parameter, or other parameter.
  • the position parameter and/or color parameter of the pixel in the pixel region of the target object in the image can be used to calculate, characterize, or determine the first parameter of the target object.
  • for example, the first parameter of the target object includes the length parameter of an eye object, where the length parameter includes the length from the outer corner to the inner corner of the left-eye object or the right-eye object (for example, the length parameter is characterized by the number of pixels: the number of pixels between the outer corner and the inner corner of the left-eye or right-eye object). As yet another example, the first parameter of the target object includes the scale parameter of the eye object, where the scale parameter includes the ratio between the length from the outer corner to the inner corner of the left-eye or right-eye object and the width of the face corresponding to the eye object. As another example, the first parameter of the target object includes the color parameter of the face object, where the color parameter includes the average of the color parameters of all pixels in the pixel region of the face object (for example, if the pixels use RGB channels, the color parameter of the face object is (r, g, b), where r is the sum of the values of all pixels in the face object's pixel region on the r channel divided by the number of those pixels; g and b are calculated in the same way).
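The per-channel averaging just described can be sketched as follows (a hypothetical helper of our own naming, assuming the pixel region is given as a list of (r, g, b) tuples):

```python
def face_color_parameter(pixels):
    """Average the RGB color parameters over all pixels in a pixel region.

    `pixels` is a list of (r, g, b) tuples; for each channel, the sum of the
    channel values over the region is divided by the number of pixels.
    """
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (r, g, b)
```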
  • as another example, the first parameter of the target object includes the type parameter of the facial object, where the type parameters include a round face type, a pointed face type, and a standard face type. For example, the ratio of the face width of the face object to the face width at the cheekbones of the face object determines the first parameter: when the ratio is less than 0.6, the first parameter is determined to be the pointed face type; when the ratio is greater than 0.8, the first parameter is determined to be the round face type; otherwise, the first parameter is determined to be the standard face type.
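The ratio-based classification above can be sketched as a small function. Note that the patent does not state exactly which two face widths are compared, so the parameter names here are an assumption; only the thresholds (0.6 and 0.8) come from the text:

```python
def face_type(face_width: float, cheekbone_width: float) -> str:
    """Classify a face object's type parameter from a width ratio.

    `face_width` and `cheekbone_width` are hypothetical names for the two
    widths the text compares; the 0.6 / 0.8 thresholds follow the example.
    """
    ratio = face_width / cheekbone_width
    if ratio < 0.6:
        return "pointed"   # pointed face type
    if ratio > 0.8:
        return "round"     # round face type
    return "standard"      # standard face type
```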
  • the embodiment of the present disclosure does not limit the form and content of the first parameter of the target object, and the first parameter of the target object includes any parameter that can characterize the target object.
  • Step S103 Determine a second parameter of the target object in the image
  • the second parameter of the target object in the image includes but is not limited to one or more of the following parameters: color parameter, position parameter, length parameter, width parameter, shape parameter , Scale parameters, type parameters, expression parameters, posture parameters, and other parameters.
  • the second parameter is different from the first parameter.
  • step S102 can be executed before step S103, after step S103, or simultaneously with step S103.
  • optionally, determining the second parameter of the target object in the image in step S103 includes: determining the second parameter corresponding to the first parameter according to a preset first correspondence relationship.
  • for example, step S102 is first performed to determine the first parameter of the target object in the image. If the first parameter of the target object includes the color parameter of the face object of the human body, and the first correspondence relationship indicates that the second parameter corresponding to the color parameter includes the face shape parameter, then the face shape parameter of the facial object is determined in step S103.
  • the first correspondence relationship may be implemented, for example, by storing a correspondence table. After the first parameter is determined in step S102, the second parameter corresponding to the first parameter may be determined by querying the correspondence table.
  • optionally, determining the first parameter of the target object in the image in step S102 includes: determining the first parameter corresponding to the second parameter according to a preset second correspondence relationship.
  • step S102 can be performed after step S103.
  • for example, step S103 is first performed to determine the second parameter of the target object in the image. If the second parameter of the target object includes the hairstyle parameter of the person object, and the second correspondence relationship indicates that the first parameter corresponding to the hairstyle parameter includes the face shape parameter, then the face shape parameter of the person object is determined in step S102.
  • for the second correspondence relationship, reference may be made to the same or corresponding description of the first correspondence relationship, which will not be repeated here.
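A correspondence table of either direction can be sketched as a plain lookup. The table contents below are hypothetical examples echoing the text (color → face shape, hairstyle → face shape), not the patent's actual tables:

```python
# Hypothetical correspondence tables; keys and values are illustrative only.
FIRST_CORRESPONDENCE = {
    # first parameter -> second parameter to determine in step S103
    "color": "face_shape",
}
SECOND_CORRESPONDENCE = {
    # second parameter -> first parameter to determine in step S102
    "hairstyle": "face_shape",
}


def second_param_for(first_param: str) -> str:
    """Query the first correspondence table after step S102."""
    return FIRST_CORRESPONDENCE[first_param]


def first_param_for(second_param: str) -> str:
    """Query the second correspondence table after step S103."""
    return SECOND_CORRESPONDENCE[second_param]
```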
  • Step S104 Correct the first parameter according to the second parameter
  • after the first parameter and the second parameter of the target object in the image are determined through steps S102 and S103, the first parameter is corrected according to the second parameter in step S104.
  • in the prior art, the facial width parameter of a facial object in an image is often corrected based on a preset target width parameter, but this correction method does not consider that different individuals may have different characteristics, and correcting different individuals with a single unified target parameter may not yield a good correction effect. Therefore, in step S104 the first parameter is corrected according to the second parameter of the target object, so that the first parameter can be corrected according to the characteristics of the target object in the image to obtain a better correction effect.
  • optionally, correcting the first parameter according to the second parameter includes: determining a correction rule associated with the first parameter according to the second parameter; and correcting the first parameter according to the correction rule.
  • the correction rule may be a preset correction rule.
  • for example, the first parameter of the target object includes the eyebrow shape parameter of the face object, and the second parameter of the target object includes the face shape parameter of the face object. If the face shape parameter determined in step S103 is a round face, and a round face is suited to flat and/or thick eyebrows, then the preset correction rule may include correcting the eyebrow shape parameter of the target object to flat eyebrows and/or thick eyebrows. The correction rule may be pre-stored in the form of a correspondence table; after the second parameter is determined, the correction rule corresponding to the second parameter is determined by querying the correspondence table, the correction rule being associated with the first parameter.
  • the correction rule includes a value range of the first parameter; and correcting the first parameter according to the correction rule includes: correcting the first parameter according to the value range.
  • for example, the first parameter of the target object includes the eyebrow shape parameter of the face object, and the second parameter of the target object includes the face shape parameter of the face object. The correction rule includes a width range for the corrected eyebrow shape, the width range including, for example, a minimum width and a maximum width. When the first parameter, that is, the eyebrow shape parameter, is corrected according to the correction rule, it must be ensured that the eyebrow width in the corrected eyebrow shape parameter is greater than or equal to the minimum width and less than or equal to the maximum width of the width range. For example, the eyebrow width in the eyebrow shape parameter of the face object may be corrected to the minimum width, the maximum width, or the middle value of the width range.
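Enforcing a value range like the eyebrow width range above amounts to a clamp. A minimal sketch (function name is ours; snapping to the nearest boundary is one of the options the text mentions):

```python
def clamp_to_range(value: float, min_v: float, max_v: float) -> float:
    """Correct a parameter so it falls inside the rule's value range.

    Values below the minimum are raised to the minimum; values above the
    maximum are lowered to the maximum; in-range values pass through.
    """
    return max(min_v, min(value, max_v))
```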
  • optionally, before correcting the first parameter according to the second parameter, the method further includes: determining a target parameter corresponding to the first parameter. Correcting the first parameter according to the value range then includes: when the target parameter belongs to the value range, determining the target parameter as the corrected first parameter.
  • optionally, correcting the first parameter according to the value range includes: when the target parameter does not belong to the value range, correcting the first parameter according to a boundary value of the value range and the target parameter.
  • the target parameter corresponding to the first parameter includes, for example, color parameter, position parameter, length parameter, width parameter, shape parameter, scale parameter, type parameter, expression parameter, posture parameter, other parameters, etc.
  • the target parameter is preset and can be determined by comparing and analyzing a large number of images, so that when the first parameter is corrected according to the target parameter, there is a high probability of obtaining a good correction effect.
  • for example, the first parameter of the target object includes the face length parameter of the person object: the number of pixels between the chin key point and the top-of-head key point among the facial contour key points of the person object, determined here to be 60 pixels. The second parameter of the target object includes the height parameter of the person object, for example 500 pixels. The correction rule determined according to the height parameter includes: the ratio of the face length parameter to the height parameter lies in [0.125, 0.2]. According to this rule, the value range of the face length parameter is therefore 62.5 pixels to 100 pixels (equivalently, the rule may directly state that the value range of the face length parameter is 62.5 to 100 pixels). The first parameter can then be corrected according to this value range: the 60 pixels can be corrected into the range of 62.5 to 100 pixels according to an appropriate algorithm or rule (for example, the face length parameter can be directly corrected to 62.5 pixels, 100 pixels, and so on).
  • optionally, the method also determines the target parameter corresponding to the first parameter, that is, the face length parameter. For example, the target parameter is 65 pixels (the target parameter may be determined according to a preset rule: the face width parameter of the person object is 50 pixels, and the preset rule sets the target parameter for face length to 1.3 times the face width parameter, that is, 65 pixels). Since 65 pixels belongs to the range of 62.5 to 100 pixels, the target parameter of 65 pixels can be determined as the corrected first parameter, that is, the corrected face length parameter. If instead the target parameter is 60 pixels (for example, the face width parameter of the person object is 50 pixels and the preset rule sets the target parameter to 1.2 times the face width parameter, that is, 60 pixels), then since 60 pixels does not belong to the range of 62.5 to 100 pixels, the face length parameter can be corrected according to the target parameter of 60 pixels and a boundary value of the range, that is, 62.5 pixels and/or 100 pixels (for example, the first parameter, that is, the face length parameter, is corrected to the average of the target parameter and a boundary value of the range).
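The worked example above can be sketched in code. The function name is ours; averaging the target parameter with the nearest boundary is the specific option the text gives in parentheses:

```python
def correct_first_parameter(target: float, min_v: float, max_v: float) -> float:
    """Use the target parameter directly when it lies in the value range;
    otherwise combine it with the nearest boundary (here: their average)."""
    if min_v <= target <= max_v:
        return target
    boundary = min_v if target < min_v else max_v
    return (target + boundary) / 2.0


# Numbers from the text: height 500 px, ratio range [0.125, 0.2].
height = 500.0
lo, hi = 0.125 * height, 0.2 * height          # value range: 62.5 .. 100 px
in_range = correct_first_parameter(65.0, lo, hi)      # 1.3 x 50 px face width
out_of_range = correct_first_parameter(60.0, lo, hi)  # 1.2 x 50 px face width
```

Here the 65-pixel target is kept as-is, while the 60-pixel target is averaged with the 62.5-pixel lower boundary.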
  • the correction rule includes a correction type corresponding to the first parameter; correcting the first parameter according to the correction rule includes: correcting the first parameter according to the correction type.
  • for example, the first parameter of the target object determined in step S102 includes the eye parameters of the face object, where the eye parameters include, for example, the contour parameter of the eyes in the face object, the eye corner position parameter, the eye length parameter, the eye shadow color parameter, and/or the width parameter of the widest part of the eye. The second parameter of the target object determined in step S103 includes the face shape parameter of the facial object; for example, the face shape parameter is a pointed face, and a pointed face is suited to the "Danfeng" (phoenix) eye type, that is, the correction type corresponding to the eye parameters in the correction rule is the "Danfeng" eye type. The eye parameters are then corrected according to the "Danfeng" eye type: for example, according to the requirements of that type, the eye corner position parameter among the eye parameters is corrected so that the outer corner of the eye is higher than the inner corner, and the ratio of the eye length parameter to the face width is corrected to reach a preset ratio.
  • Step S105 Render the target object in the image according to the corrected first parameter.
  • in step S104 the first parameter is corrected, so that in step S105 the target object in the image can be rendered according to the corrected first parameter to achieve image processing functions such as "beautification".
  • existing or future image processing technology can be used to render the image, for example, establishing a vector diagram of the image through color space conversion, smoothing the image, and changing the position parameters and/or color parameters of the pixels in the region of the person object in the image according to the type and content of the first parameter, which will not be repeated here.
  • the first parameter can be corrected according to the second parameter, and the image can be rendered based on the corrected first parameter.
  • in this way, the first parameter of the target object in the image is corrected according to other characteristics of that target object before the target object is rendered, so that each target object can be rendered with unique, different, or more appropriate first parameters, and the rendering method is more flexible.
  • optionally, after step S105 of rendering the target object in the image according to the corrected first parameter, the method further includes step S201: displaying the image and/or storing the image. Since the function of rendering the target object in the image is implemented in step S105 (for example, image processing such as beautification is performed on the human object in the image captured by the photographing device), in step S201 the beautified image is displayed and/or stored, so that the user can immediately browse the rendered image and persist it.
  • FIG. 3 shows a schematic structural diagram of an embodiment of an image rendering apparatus 300 provided by an embodiment of the disclosure.
  • the image rendering apparatus 300 includes an image acquisition module 301, a first parameter determination module 302, a second parameter determination module 303, a correction module 304, and a rendering module 305.
  • the image acquisition module 301 is used to acquire an image
  • the first parameter determination module 302 is used to determine the first parameter of the target object in the image
  • the second parameter determination module 303 is used to determine the second parameter of the target object in the image
  • the correction module 304 is configured to correct the first parameter according to the second parameter
  • the rendering module 305 is configured to render the target object in the image according to the corrected first parameter.
  • the apparatus for rendering images further includes: a display module 305 and/or a storage module 306, wherein the display module 305 is used to display the image, and the storage module 306 is used to store The image.
  • the device shown in FIG. 3 can execute the method of the embodiment shown in FIG. 1 and/or FIG. 2.
  • for parts not described in detail in this embodiment, please refer to the related description of the embodiment shown in FIG. 1 and/or FIG. 2.
  • for the implementation process and technical effects of this technical solution, please refer to the description of the embodiment shown in FIG. 1 and/or FIG. 2, which will not be repeated here.
  • FIG. 4 shows a schematic structural diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure.
  • Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 4 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 400 may include a processing device (such as a central processing unit, a graphics processor, etc.) 401, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403.
  • the RAM 403 also stores various programs and data required for the operation of the electronic device 400.
  • the processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus or a communication line 404.
  • An input/output (I/O) interface 405 is also connected to the bus or communication line 404.
  • the following devices can be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer, a gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
  • the communication device 409 may allow the electronic device 400 to perform wireless or wired communication with other devices to exchange data.
  • although FIG. 4 shows an electronic device 400 having various devices, it should be understood that it is not required to implement or have all the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • the process described above with reference to the flowchart can be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 409, or installed from the storage device 408, or installed from the ROM 402.
  • when the computer program is executed by the processing device 401, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
  • the aforementioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the foregoing computer-readable medium carries one or more programs, and when the foregoing one or more programs are executed by the electronic device, the electronic device is caused to execute the image rendering method in the foregoing embodiment.
  • the computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
  • the above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • in some alternative implementations, the functions marked in the blocks may also occur in an order different from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and any combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented in software or in hardware; in some cases, the name of a unit does not constitute a limitation on the unit itself.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus for rendering an image, an electronic device, and a computer-readable storage medium, relating to the field of information processing. The method for rendering an image includes: acquiring an image (S101); determining a first parameter of a target object in the image (S102); determining a second parameter of the target object in the image (S103); correcting the first parameter according to the second parameter (S104); and rendering the target object in the image according to the corrected first parameter (S105). The method can correct the first parameter of the target object according to another parameter of the target object in the image and render the target object according to the corrected first parameter, so that the rendering manner is more flexible.

Description

Method and apparatus for rendering an image, electronic device, and computer-readable storage medium
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to the Chinese patent application No. 201910331282.6, filed on April 23, 2019 and entitled "Method and apparatus for rendering an image, electronic device, and computer-readable storage medium", the entire content of which is incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of information processing, and in particular to a method and apparatus for rendering an image, an electronic device, and a computer-readable storage medium.
BACKGROUND
With the development of computer technology, the range of applications of intelligent terminals has expanded greatly; for example, images and videos can be captured with an intelligent terminal.
At the same time, intelligent terminals have powerful data processing capabilities. For example, when a target object is photographed with an intelligent terminal, the image captured by the intelligent terminal can be processed in real time by an image segmentation algorithm to identify the target object in the captured image. Taking the processing of a video by a human-body image segmentation algorithm as an example, a computer device such as an intelligent terminal can process every frame of the video in real time and accurately identify the contour of a human object and each of its key points, so that the positions in the image of the human object's face, right hand, and so on can be determined; such identification can already be accurate to the pixel level.
In the prior art, a human object identified in an image can also be "beautified", for example by rendering a face object with preset rendering parameters to achieve a beautifying effect. As an example, a target width parameter of the face object may be preset; when the face of the human object in the image is relatively round, the face of the human object in the image is rendered according to the target width parameter to achieve a "face-slimming" effect. However, for a face object with a large distance between the two eyes, performing the "face-slimming" rendering operation according to the target width parameter may fail to achieve a beautifying effect or may even be counterproductive. This is because the prior-art manner of rendering a face object in an image according to preset rendering parameters is not flexible enough and does not take into account the differences between the face objects of different individuals.
SUMMARY
Embodiments of the present disclosure provide a method and apparatus for rendering an image, an electronic device, and a computer-readable storage medium, which can correct a first parameter of a target object in an image according to another parameter of the target object and render the target object according to the corrected first parameter, so that the rendering manner is more flexible.
In a first aspect, an embodiment of the present disclosure provides a method for rendering an image, including: acquiring an image; determining a first parameter of a target object in the image; determining a second parameter of the target object in the image; correcting the first parameter according to the second parameter; and rendering the target object in the image according to the corrected first parameter.
In a second aspect, an embodiment of the present disclosure provides an apparatus for rendering an image, including: an image acquisition module for acquiring an image; a first parameter determination module for determining a first parameter of a target object in the image; a second parameter determination module for determining a second parameter of the target object in the image; a correction module for correcting the first parameter according to the second parameter; and a rendering module for rendering the target object in the image according to the corrected first parameter.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a memory for storing computer-readable instructions; and one or more processors for executing the computer-readable instructions, such that the processors, when running, implement the method for rendering an image according to any one of the foregoing first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions which, when executed by a computer, cause the computer to perform the method for rendering an image according to any one of the foregoing first aspect.
The present disclosure discloses a method and apparatus for rendering an image, an electronic device, and a computer-readable storage medium. The method for rendering an image includes: acquiring an image; determining a first parameter of a target object in the image; determining a second parameter of the target object in the image; correcting the first parameter according to the second parameter; and rendering the target object in the image according to the corrected first parameter. The embodiments of the present disclosure can correct the first parameter of the target object according to another parameter of the target object in the image and render the target object according to the corrected first parameter, so that the rendering manner is more flexible.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be understood more clearly and implemented according to the contents of the specification, and in order to make the above and other objects, features, and advantages of the present disclosure more apparent, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to explain the technical solutions of the embodiments of the present disclosure or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present disclosure; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of Embodiment 1 of the method for rendering an image provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of Embodiment 2 of the method for rendering an image provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an embodiment of the apparatus for rendering an image provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
The implementations of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The present disclosure may also be implemented or applied through other different specific implementations, and various details in this specification may be modified or changed based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that an aspect described herein may be implemented independently of any other aspect, and two or more of these aspects may be combined in various ways. For example, a device may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such a device may be implemented and/or such a method may be practiced using structures and/or functionality other than one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments only explain the basic concept of the present disclosure in a schematic manner. The illustrations show only the components related to the present disclosure rather than being drawn according to the number, shape, and size of the components in actual implementation; in actual implementation, the type, quantity, and proportion of each component may change arbitrarily, and the component layout may also be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects may be practiced without these specific details.
FIG. 1 is a flowchart of Embodiment 1 of the method for rendering an image provided by an embodiment of the present disclosure. The method for rendering an image provided by this embodiment may be performed by an apparatus for rendering an image; the apparatus may be implemented as software, as hardware, or as a combination of software and hardware. For example, the apparatus for rendering an image includes a computer device (e.g., an intelligent terminal), so that the method for rendering an image provided by this embodiment is performed by the computer device.
As shown in FIG. 1, the method for rendering an image of the embodiment of the present disclosure includes the following steps:
Step S101: acquiring an image.
In step S101, the apparatus for rendering an image acquires an image so as to implement the method for rendering an image through the current and/or subsequent steps. The apparatus for rendering an image may include a photographing device, in which case the image acquired in step S101 includes an image captured by the photographing device. Alternatively, the apparatus may not include a photographing device but be communicatively connected with one, in which case acquiring an image in step S101 includes acquiring, through the communication connection, an image captured by the photographing device. The apparatus may also acquire an image from a preset storage location so as to implement the method for rendering an image through the current and/or subsequent steps; the embodiments of the present disclosure do not limit the manner of acquiring the image.
Those skilled in the art can understand that a video consists of a series of image frames, and each image frame may also be called an image; therefore, acquiring an image in step S101 includes acquiring an image from a video.
Step S102: determining a first parameter of a target object in the image.
Optionally, the target object includes a human object, or includes a key body-part object, for example a face object, a facial-feature object, a torso object, an arm object, and so on. As described in the background of the present disclosure, computer devices in the prior art have powerful data processing capabilities; for example, the pixel region and/or key points of a target object in an image can be identified by an image segmentation algorithm, and the key points of the target object in the image can also be identified by other key-point locating techniques. Therefore, the apparatus for rendering an image in the embodiments of the present disclosure may determine the target object in the image and/or the first parameter of the target object based on an image segmentation algorithm and/or a key-point locating technique.
As understood by those skilled in the art, the image in the embodiments of the present disclosure consists of pixels, and each pixel in the image can be characterized by a position parameter and a color parameter. The aforementioned image segmentation algorithm can thus determine the pixel region of the target object in the image based on the position parameters and/or color parameters of the pixels of the image, and the aforementioned key-point locating technique can match preset key-point features (for example, color features and/or shape features) against the position parameters and/or color parameters of the pixels of the image to determine the key points of the target object. A typical characterization represents one pixel of the image by a five-tuple (x, y, r, g, b), in which the coordinates x and y serve as the position parameter of the pixel, and the color components r, g, and b are the values of the pixel in RGB space; superimposing r, g, and b yields the color of the pixel. Of course, the position parameter and color parameter of a pixel may also be expressed in other ways, for example the position parameter by polar coordinates or UV coordinates, and the color parameter according to Lab space or CMY space; the embodiments of the present disclosure do not limit this.
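As a minimal illustration of the five-tuple representation described above, the average color parameter of a pixel region can be computed directly from (x, y, r, g, b) tuples. The helper name is hypothetical and not part of the disclosed method; it is only a sketch of the region-average color parameter used later in the examples:

```python
def mean_color(pixels):
    """Average the r, g, b components of a list of (x, y, r, g, b) tuples.

    `pixels` stands in for the pixel region of a target object; the
    result is the region's color parameter as described in the text.
    """
    n = len(pixels)
    r = sum(p[2] for p in pixels) / n
    g = sum(p[3] for p in pixels) / n
    b = sum(p[4] for p in pixels) / n
    return (r, g, b)
```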
As an example, the target object and/or the first parameter of the target object are determined in the image based on an image segmentation algorithm. A common image segmentation algorithm may divide the image into regions according to the similarity or homogeneity of the color parameters of the pixels in the image, and then determine, by region merging, the pixels included in the merged region as the pixel region of the target object; the key points of the target object and other first parameters of the target object can then be determined based on the pixel region. Alternatively, a basic region of the target object may be determined according to the color features and/or shape features of the target object, and then, starting from the basic region, the contour of the target object may be sought according to the discontinuity and abruptness of the color parameters of the target object and extended spatially according to the position of the contour; in other words, the image is segmented according to its feature points, lines, and surfaces to determine the contour of the target object. The region within the contour of the target object is the pixel region of the target object, from which the key points of the target object and other first parameters of the target object can then be determined. Of course, other image segmentation algorithms may also be used; the embodiments of the present disclosure do not limit the various image segmentation algorithms, and any existing or future image segmentation algorithm may be used in the embodiments of the present disclosure to determine the target object in the image and/or the first parameter of the target object.
As another example, the target object and/or the first parameter of the target object are determined based on the color features and/or shape features of the target object through a key-point locating technique. For example, if the target object includes a human face object, the contour key points of the face object may be characterized by color features and/or shape features, and feature extraction is then performed in the image by matching the color features and/or shape features against the position parameters and/or color parameters of the pixels of the image to determine the contour key points of the face object. Since a key point occupies only a very small area in the image (usually only a few to a few dozen pixels), the region occupied on the image by the color features and/or shape features corresponding to the key point is usually also very limited and local. Two feature extraction manners are currently used: (1) one-dimensional range image feature extraction along a direction perpendicular to the contour; and (2) two-dimensional range image feature extraction over a square neighborhood of the key point. The above two manners have many implementation methods, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on, which are not specifically limited in the embodiments of the present disclosure. After the contour key points of the face object are identified, the contour of the target object may further be sought based on the contour key points and the discontinuity and abruptness of the color parameters of the target object, or other first parameters of the target object may be determined based on the contour key points of the face object.
In the embodiments of the present disclosure, optionally, the first parameter of the target object includes, but is not limited to, one or more of the following parameters: a color parameter, a position parameter, a length parameter, a width parameter, a shape parameter, a proportion parameter, a type parameter, an expression parameter, a posture parameter, and other parameters. Optionally, the first parameter of the target object may be calculated, characterized, or determined from the position parameters and/or color parameters of the pixels in the pixel region of the target object in the image. As an example, the first parameter of the target object includes a length parameter of an eye object, the length parameter including the length from the outer eye corner to the inner eye corner of a left-eye or right-eye object (for example, if the length parameter is characterized by a pixel count, the length parameter includes the number of pixels between the outer eye corner and the inner eye corner of the left-eye or right-eye object). As another example, the first parameter of the target object includes a proportion parameter of an eye object, the proportion parameter including the ratio of the length from the outer eye corner to the inner eye corner of the left-eye or right-eye object to the width of the face corresponding to the eye object. As a further example, the first parameter of the target object includes a color parameter of a face object, the color parameter including the average of the color parameters of all pixels within the pixel region of the face object (for example, if pixel colors are characterized by RGB channels, the color parameter of the face object is (r, g, b), where r is the sum of the values of all pixels in the face object's pixel region on the r channel divided by the number of pixels, and g and b are calculated in the same way). As yet another example, the first parameter of the target object includes a type parameter of a face object, the type parameter including a round-face type, a pointed-face type, and a standard-face type; the first parameter may be determined according to the ratio of the face width at the lateral extension line of the mouth corners of the face object to the face width at the cheekbones of the face object. For example, the first parameter is determined as the pointed-face type when the ratio is less than 0.6, as the round-face type when the ratio is greater than 0.8, and otherwise as the standard-face type. The embodiments of the present disclosure do not limit the form and content of the first parameter of the target object; the first parameter of the target object includes any parameter that can characterize the target object.
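The round/pointed/standard face-type rule in the example above can be sketched as follows. Function and label names are hypothetical; only the thresholds 0.6 and 0.8 are taken from the text:

```python
def face_type(mouth_line_width, cheekbone_width):
    """Classify a face object by the ratio of the face width at the
    mouth-corner extension line to the face width at the cheekbones:
    below 0.6 is pointed, above 0.8 is round, otherwise standard."""
    ratio = mouth_line_width / cheekbone_width
    if ratio < 0.6:
        return "pointed"
    if ratio > 0.8:
        return "round"
    return "standard"
```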
Step S103: determining a second parameter of the target object in the image.
In the embodiments of the present disclosure, optionally, the second parameter of the target object in the image includes, but is not limited to, one or more of the following parameters: a color parameter, a position parameter, a length parameter, a width parameter, a shape parameter, a proportion parameter, a type parameter, an expression parameter, a posture parameter, and other parameters. As to how to determine the target object and/or the second parameter of the target object from the image, reference may be made to the same or corresponding description in step S102 regarding determining the target object and/or the first parameter of the target object, which is not repeated here. Optionally, the second parameter differs from the first parameter.
It is worth noting that although the steps are numbered in the embodiments of the present disclosure, the numbering order does not imply the execution order of the steps. Taking steps S102 and S103 as an example, step S102 may be performed before step S103, after step S103, or simultaneously with step S103.
In an optional embodiment, step S103, determining the second parameter of the target object in the image, includes: determining the second parameter corresponding to the first parameter according to a preset first correspondence. For example, step S102, determining the first parameter of the target object in the image, is performed first; if the first parameter of the target object includes a color parameter of a human face object, and the first correspondence indicates that the second parameter corresponding to the color parameter includes a face-shape parameter, then the face-shape parameter of the face object is determined in step S103. The first correspondence may be implemented, for example, by storing a correspondence table; after the first parameter is determined in step S102, the second parameter corresponding to the first parameter may be determined by looking up the correspondence table.
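A correspondence table of the kind described above can be sketched as a plain dictionary lookup. The keys and values here are hypothetical examples, not a mapping prescribed by the disclosure:

```python
# Hypothetical first correspondence: first parameter -> second parameter.
FIRST_CORRESPONDENCE = {
    "face_color": "face_shape",
    "eyebrow_shape": "face_shape",
}

def second_parameter_for(first_parameter):
    """Look up which second parameter should be determined for a given
    first parameter, per the preset first correspondence."""
    return FIRST_CORRESPONDENCE.get(first_parameter)
```

The second correspondence described in the next paragraph would be the same construction with the roles of the two parameters reversed.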
In another optional embodiment, step S102, determining the first parameter of the target object in the image, includes: determining the first parameter corresponding to the second parameter according to a preset second correspondence. As described above, step S102 may be performed after step S103. After step S103, determining the second parameter of the target object in the image, is performed, if the second parameter of the target object includes a hairstyle parameter of a human object, and the second correspondence indicates that the first parameter corresponding to the hairstyle parameter includes a face-shape parameter, then the face-shape parameter of the human object is determined in step S102. For the implementation of the second correspondence, reference may be made to the same or corresponding description of the first correspondence, which is not repeated here.
Step S104: correcting the first parameter according to the second parameter.
After the first parameter and the second parameter of the target object in the image are determined through steps S102 and S103, the first parameter is corrected according to the second parameter in step S104. As described in the background of the present disclosure, when a "face-slimming" beautification operation is performed on the face of a human object, for example, the face-width parameter of the face object in the image is often corrected based on a preset target width parameter; however, this correction manner does not consider that different individuals may have different characteristics, and correcting different individuals with a uniform target parameter may not achieve a good correction effect. Therefore, in step S104, the first parameter is corrected according to the second parameter of the target object, so that the correction of the first parameter can be tailored to the characteristics of the target object in the image in order to achieve a better correction effect.
In an optional embodiment, correcting the first parameter according to the second parameter includes: determining, according to the second parameter, a correction rule associated with the first parameter; and correcting the first parameter according to the correction rule. The correction rule may be a preset correction rule. As an example, the first parameter of the target object includes an eyebrow-shape parameter of a face object, and the second parameter of the target object includes a face-shape parameter of the face object. If the face-shape parameter determined in step S103 is a round face, and a round face suits flat and/or thick eyebrows, the preset correction rule may include correcting the eyebrow-shape parameter of the target object to flat and/or thick eyebrows. Optionally, the correction rule may be pre-stored in a correspondence table; after the second parameter is determined, the correction rule corresponding to the second parameter, which is associated with the first parameter, is determined by looking up the correspondence table.
Optionally, the correction rule includes a value range of the first parameter, and correcting the first parameter according to the correction rule includes: correcting the first parameter according to the value range. Based on the foregoing embodiment, for example, the first parameter of the target object includes an eyebrow-shape parameter of a face object, and the second parameter of the target object includes a face-shape parameter of the face object. Since a round face suits thick eyebrows, the correction rule includes a width range of the corrected eyebrow shape, the width range including, for example, a minimum width and a maximum width of the eyebrow shape. Thus, when the first parameter, namely the eyebrow-shape parameter, is corrected according to the correction rule, it must be ensured that the eyebrow width in the corrected eyebrow-shape parameter is greater than or equal to the minimum width of the width range and less than or equal to the maximum width of the width range; for example, the eyebrow width in the eyebrow-shape parameter of the face object may be corrected to the minimum width, the maximum width, or an intermediate value of the width range.
Optionally, before correcting the first parameter according to the second parameter, the method further includes: determining a target parameter corresponding to the first parameter; and correcting the first parameter according to the value range includes: determining the target parameter as the corrected first parameter in a case where the target parameter falls within the value range. Optionally, correcting the first parameter according to the value range includes: correcting the first parameter according to a boundary value of the value range and the target parameter in a case where the target parameter does not fall within the value range. In the embodiments of the present disclosure, the target parameter corresponding to the first parameter includes, for example, a color parameter, a position parameter, a length parameter, a width parameter, a shape parameter, a proportion parameter, a type parameter, an expression parameter, a posture parameter, or other parameters. The target parameter is, for example, preset and may be determined through comparison and analysis of a large number of images; for example, when the first parameter is corrected according to the target parameter, there is a high probability of achieving a good correction effect.
As an example, the first parameter of the target object includes a face-length parameter of a human object; for example, the number of pixels, 60 pixels, between the chin key point and the top key point among the facial contour key points of the human object is determined as the face-length parameter of the human object. The second parameter of the target object includes a height parameter of the human object, for example 500 pixels. The correction rule determined according to the height parameter includes: the ratio of the face-length parameter to the height parameter is [0.125, 0.2]; according to this rule, the value range of the face-length parameter can therefore be determined as 62.5 pixels to 100 pixels, or the rule includes the value range of the face-length parameter being 62.5 pixels to 100 pixels. The first parameter can then be corrected according to the value range, for example by correcting the 60 pixels into the range of 62.5 pixels to 100 pixels according to an appropriate algorithm or rule (for example, directly correcting the face-length parameter to 62.5 pixels, 100 pixels, etc.). If the method further determines a target parameter corresponding to the first parameter, namely the face-length parameter, for example a target parameter of 65 pixels (the target parameter corresponding to the first parameter may be determined according to a preset rule; for example, the face-width parameter of the human object is 50 pixels, and the preset rule includes determining the target parameter corresponding to the face-length parameter as 1.3 times the face-width parameter, i.e., 65 pixels), then since 65 pixels falls within the range of 62.5 pixels to 100 pixels, the target parameter of 65 pixels may be determined as the corrected first parameter, namely the face-length parameter. If the target parameter is 60 pixels (for example, the face-width parameter of the human object is 50 pixels, and the preset rule includes determining the target parameter corresponding to the face-length parameter as 1.2 times the face-width parameter, i.e., 60 pixels), then since 60 pixels does not fall within the range of 62.5 pixels to 100 pixels, the face-length parameter may be corrected according to the target parameter of 60 pixels and the boundary values of the range, 62.5 pixels and/or 100 pixels (for example, by correcting the first parameter, namely the face-length parameter, to the average of the target parameter and one boundary value of the range).
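The numeric example above (value range [62.5, 100] pixels, target parameter 65 or 60 pixels) can be sketched as follows. The averaging-with-the-nearest-boundary step is only one of the "appropriate algorithms or rules" the text allows, and the function name is hypothetical:

```python
def correct_with_target(target, value_range):
    """Return the corrected first parameter: keep the target parameter
    if it falls within the value range, otherwise average it with the
    nearest boundary value of the range."""
    lo, hi = value_range
    if lo <= target <= hi:
        return target
    nearest = lo if target < lo else hi
    return (target + nearest) / 2
```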
Optionally, the correction rule includes a correction type corresponding to the first parameter, and correcting the first parameter according to the correction rule includes: correcting the first parameter according to the correction type. As an example, the first parameter of the target object determined in step S102 includes an eye parameter of a face object; for example, the eye parameter includes a contour parameter of the eyes, an eye-corner position parameter, an eye-length parameter, an eye-shadow color parameter, and/or a widest-eye-width parameter of the face object. The second parameter of the target object determined in step S103 includes a face-shape parameter of the face object; for example, the face-shape parameter is a pointed face, and a pointed face suits the "phoenix eye" type, that is, the correction type included in the correction rule corresponding to the eye parameter is the "phoenix eye" type. Therefore, in step S104, the eye parameter is corrected according to the "phoenix eye" type; for example, according to the requirements of the "phoenix eye" type, the eye-corner position parameter in the eye parameter is corrected so that the outer eye corner is higher than the inner eye corner, and the ratio of the eye-length parameter to the face width is corrected to reach a preset ratio.
Step S105: rendering the target object in the image according to the corrected first parameter.
The first parameter has been corrected in step S104, so in step S105 the target object in the image can be rendered according to the corrected first parameter to implement image processing functions such as "beautification". In the process of rendering the target object in step S105, existing or future image processing techniques may be used to process the image, for example establishing a vector map of the image through color-space conversion, smoothing the image, and changing the position parameters and/or color parameters of the pixels in the region of the human object in the image according to the type and content of the first parameter, which is not repeated here.
Through the technical solution of the embodiments of the present disclosure, after the first parameter and the second parameter of the target object in the image are determined, the first parameter can be corrected according to the second parameter, and the target object in the image can be rendered based on the corrected first parameter. In other words, the first parameter of the target object is corrected according to other characteristics of the target object in the image and the target object is rendered accordingly, so that for different target objects, unique, differentiated, or more suitable first parameters can be used to render them, making the rendering manner more flexible.
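Steps S102 to S105 can be summarized as a toy pipeline. All names are hypothetical, parameters are reduced to plain numbers, and "rendering" is reduced to returning the corrected value; this is a sketch of the control flow, not of the disclosed image processing itself:

```python
def derive_value_range(second_parameter, ratio_range):
    """S104 prelude: scale a ratio range by the second parameter
    (e.g. body height) to obtain the value range of the first parameter."""
    return (ratio_range[0] * second_parameter,
            ratio_range[1] * second_parameter)

def clamp(value, value_range):
    """Constrain a value into the range given by the correction rule."""
    lo, hi = value_range
    return max(lo, min(hi, value))

def correct_and_render(first_parameter, second_parameter, ratio_range):
    """Correct the first parameter from the second (S104), then stand in
    for rendering (S105) by returning the corrected value."""
    value_range = derive_value_range(second_parameter, ratio_range)
    return clamp(first_parameter, value_range)
```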
FIG. 2 is a flowchart of Embodiment 2 of the method for rendering an image provided by an embodiment of the present disclosure. In this Embodiment 2, after step S105, rendering the target object in the image according to the corrected first parameter, the method further includes step S201: displaying the image and/or storing the image. Since step S105 implements the function of rendering the target object in the image, for example performing image processing such as beautification on the human object in the image captured by the photographing device, the beautified image may be displayed and/or stored in step S201, so that the user can instantly browse the rendered image effect and persist the rendered image.
FIG. 3 is a schematic structural diagram of an embodiment of an apparatus 300 for rendering an image provided by an embodiment of the present disclosure. As shown in FIG. 3, the apparatus 300 for rendering an image includes an image acquisition module 301, a first parameter determination module 302, a second parameter determination module 303, a correction module 304, and a rendering module 305. The image acquisition module 301 is used to acquire an image; the first parameter determination module 302 is used to determine a first parameter of a target object in the image; the second parameter determination module 303 is used to determine a second parameter of the target object in the image; the correction module 304 is used to correct the first parameter according to the second parameter; and the rendering module 305 is used to render the target object in the image according to the corrected first parameter.
In an optional embodiment, the apparatus for rendering an image further includes a display module 305 and/or a storage module 306, wherein the display module 305 is used to display the image and the storage module 306 is used to store the image.
The apparatus shown in FIG. 3 can perform the method of the embodiment shown in FIG. 1 and/or FIG. 2; for parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in FIG. 1 and/or FIG. 2. For the implementation process and technical effects of this technical solution, see the description in the embodiment shown in FIG. 1 and/or FIG. 2, which is not repeated here.
Referring now to FIG. 4, a schematic structural diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure is shown. Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 4, the electronic device 400 may include a processing device (such as a central processing unit, a graphics processor, etc.) 401, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus or communication line 404. An input/output (I/O) interface 405 is also connected to the bus or communication line 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer, a gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to perform wireless or wired communication with other devices to exchange data. Although FIG. 4 shows an electronic device 400 having various devices, it should be understood that it is not required to implement or have all the illustrated devices; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 409, installed from the storage device 408, or installed from the ROM 402. When the computer program is executed by the processing device 401, the above-mentioned functions defined in the method of the embodiments of the present disclosure are performed.
It should be noted that the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or it may exist alone without being assembled into the electronic device.
The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to perform the method for rendering an image in the above embodiments.
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof; the programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from the order marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented in software or in hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the disclosure scope involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.

Claims (12)

  1. A method for rendering an image, comprising:
    acquiring an image;
    determining a first parameter of a target object in the image;
    determining a second parameter of the target object in the image;
    correcting the first parameter according to the second parameter; and
    rendering the target object in the image according to the corrected first parameter.
  2. The method for rendering an image according to claim 1, wherein determining the second parameter of the target object in the image comprises:
    determining, according to a preset first correspondence, the second parameter corresponding to the first parameter.
  3. The method for rendering an image according to claim 1, wherein determining the first parameter of the target object in the image comprises:
    determining, according to a preset second correspondence, the first parameter corresponding to the second parameter.
  4. The method for rendering an image according to any one of claims 1 to 3, wherein correcting the first parameter according to the second parameter comprises:
    determining, according to the second parameter, a correction rule associated with the first parameter; and
    correcting the first parameter according to the correction rule.
  5. The method for rendering an image according to claim 4, wherein the correction rule comprises a value range of the first parameter; and
    correcting the first parameter according to the correction rule comprises:
    correcting the first parameter according to the value range.
  6. The method for rendering an image according to claim 5, wherein before correcting the first parameter according to the second parameter, the method further comprises:
    determining a target parameter corresponding to the first parameter; and
    correcting the first parameter according to the value range comprises:
    determining the target parameter as the corrected first parameter in a case where the target parameter falls within the value range.
  7. The method for rendering an image according to claim 6, wherein correcting the first parameter according to the value range comprises:
    correcting the first parameter according to a boundary value of the value range and the target parameter in a case where the target parameter does not fall within the value range.
  8. The method for rendering an image according to claim 4, wherein the correction rule comprises a correction type corresponding to the first parameter; and
    correcting the first parameter according to the correction rule comprises:
    correcting the first parameter according to the correction type.
  9. An apparatus for rendering an image, comprising:
    an image acquisition module for acquiring an image;
    a first parameter determination module for determining a first parameter of a target object in the image;
    a second parameter determination module for determining a second parameter of the target object in the image;
    a correction module for correcting the first parameter according to the second parameter; and
    a rendering module for rendering the target object in the image according to the corrected first parameter.
  10. An electronic device, comprising:
    a memory for storing computer-readable instructions; and
    a processor for executing the computer-readable instructions, such that the processor, when running, implements the method for rendering an image according to any one of claims 1 to 8.
  11. A non-transitory computer-readable storage medium for storing computer-readable instructions which, when executed by a computer, cause the computer to perform the method for rendering an image according to any one of claims 1 to 8.
  12. [Corrected under Rule 26, 18.03.2020]
PCT/CN2020/074443 2019-04-23 2020-02-06 Method and apparatus for rendering an image, electronic device, and computer-readable storage medium WO2020215854A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910331282.6 2019-04-23
CN201910331282.6A CN110097622B (zh) 2019-04-23 2019-04-23 Method and apparatus for rendering an image, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020215854A1 true WO2020215854A1 (zh) 2020-10-29

Family

ID=67445687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/074443 WO2020215854A1 (zh) 2019-04-23 2020-02-06 渲染图像的方法、装置、电子设备和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN110097622B (zh)
WO (1) WO2020215854A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097622B (zh) * 2019-04-23 2022-02-25 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680033A (zh) * 2017-09-08 2018-02-09 北京小米移动软件有限公司 Picture processing method and apparatus
CN108734126A (zh) * 2018-05-21 2018-11-02 深圳市梦网科技发展有限公司 Beautification method, beautification apparatus, and terminal device
CN108921856A (zh) * 2018-06-14 2018-11-30 北京微播视界科技有限公司 Image cropping method and apparatus, electronic device, and computer-readable storage medium
CN110097622A (zh) * 2019-04-23 2019-08-06 北京字节跳动网络技术有限公司 Method and apparatus for rendering an image, electronic device, and computer-readable storage medium

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004062651A (ja) * 2002-07-30 2004-02-26 Canon Inc Image processing apparatus, image processing method, recording medium therefor, and program therefor
WO2004110264A1 (ja) * 2003-06-11 2004-12-23 Kose Corporation Skin evaluation method and image simulation method
KR20100056270A (ko) * 2008-11-19 2010-05-27 삼성전자주식회사 Digital image signal processing method performing color correction, and digital image signal processing apparatus executing the method
JP2013179464A (ja) * 2012-02-28 2013-09-09 Nikon Corp Electronic camera
CN103605975B (zh) * 2013-11-28 2018-10-19 小米科技有限责任公司 Image processing method, apparatus, and terminal device
CN104715236A (zh) * 2015-03-06 2015-06-17 广东欧珀移动通信有限公司 Beautification photographing method and apparatus
CN105279487B (zh) * 2015-10-15 2022-03-15 Oppo广东移动通信有限公司 Beautification tool screening method and system
CN106169172A (zh) * 2016-07-08 2016-11-30 深圳天珑无线科技有限公司 Image processing method and system
CN108229278B (zh) * 2017-04-14 2020-11-17 深圳市商汤科技有限公司 Face image processing method and apparatus, and electronic device
CN109419140A (zh) * 2017-08-31 2019-03-05 丽宝大数据股份有限公司 Recommended eyebrow shape display method and electronic apparatus
CN107886484B (zh) * 2017-11-30 2020-01-10 Oppo广东移动通信有限公司 Beautification method and apparatus, computer-readable storage medium, and electronic device
CN108665521B (zh) * 2018-05-16 2020-06-02 京东方科技集团股份有限公司 Image rendering method, apparatus, system, computer-readable storage medium, and device
CN108876732A (zh) * 2018-05-25 2018-11-23 北京小米移动软件有限公司 Face beautification method and apparatus
CN108765352B (zh) * 2018-06-01 2021-07-16 联想(北京)有限公司 Image processing method and electronic device
CN109584151B (zh) * 2018-11-30 2022-12-13 腾讯科技(深圳)有限公司 Face beautification method, apparatus, terminal, and storage medium


Also Published As

Publication number Publication date
CN110097622B (zh) 2022-02-25
CN110097622A (zh) 2019-08-06

Similar Documents

Publication Publication Date Title
CN111242881B (zh) Method, apparatus, storage medium, and electronic device for displaying special effects
WO2020186935A1 (zh) Virtual object display method and apparatus, electronic device, and computer-readable storage medium
US10599914B2 (en) Method and apparatus for human face image processing
WO2020024483A1 (zh) Method and apparatus for processing images
US20230401682A1 (en) Styled image generation method, model training method, apparatus, device, and medium
JP2022542668A (ja) Target object matching method and apparatus, electronic device, and storage medium
CN111414879B (zh) Face occlusion degree recognition method and apparatus, electronic device, and readable storage medium
CN110070063B (zh) Action recognition method and apparatus for a target object, and electronic device
WO2020248900A1 (zh) Panoramic video processing method and apparatus, and storage medium
EP3917131A1 (en) Image deformation control method and device and hardware device
CN110084154B (zh) Method and apparatus for rendering an image, electronic device, and computer-readable storage medium
CN110062157B (zh) Method and apparatus for rendering an image, electronic device, and computer-readable storage medium
US20240104810A1 (en) Method and apparatus for processing portrait image
CN110211195B (zh) Method and apparatus for generating an image set, electronic device, and computer-readable storage medium
CN110619656B (zh) Face detection and tracking method and apparatus based on a binocular camera, and electronic device
CN111199169A (zh) Image processing method and apparatus
US20240095886A1 (en) Image processing method, image generating method, apparatus, device, and medium
CN109981989B (zh) Method and apparatus for rendering an image, electronic device, and computer-readable storage medium
WO2020215854A1 (zh) Method and apparatus for rendering an image, electronic device, and computer-readable storage medium
CN110047126B (zh) Method and apparatus for rendering an image, electronic device, and computer-readable storage medium
WO2020155984A1 (zh) Facial expression image processing method and apparatus, and electronic device
CN110059739B (zh) Image synthesis method and apparatus, electronic device, and computer-readable storage medium
CN116684394A (zh) Media content processing method, apparatus, device, readable storage medium, and product
CN110264431A (zh) Video beautification method and apparatus, and electronic device
CN110288552A (zh) Video beautification method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20796189

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.02.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20796189

Country of ref document: EP

Kind code of ref document: A1