WO2021098107A1 - Image processing method and apparatus, electronic device and storage medium - Google Patents

Image processing method and apparatus, electronic device and storage medium

Info

Publication number
WO2021098107A1
WO2021098107A1 (PCT/CN2020/080924, CN2020080924W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
rendered
area
feature point
reference image
Prior art date
Application number
PCT/CN2020/080924
Other languages
English (en)
French (fr)
Inventor
苏柳
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority to KR1020207036012A priority Critical patent/KR20210064114A/ko
Priority to SG11202012481UA priority patent/SG11202012481UA/en
Priority to JP2020570186A priority patent/JP7159357B2/ja
Priority to EP20820759.7A priority patent/EP3852069A4/en
Priority to US17/117,230 priority patent/US11403788B2/en
Publication of WO2021098107A1 publication Critical patent/WO2021098107A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation
    • G06T15/205: Image-based rendering
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing
    • G06T2207/30041: Eye; Retina; Ophthalmic
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular to an image processing method and device, electronic equipment, and storage medium.
  • Image rendering is the process of converting a three-dimensional light-energy transfer computation into a two-dimensional image. Scenes and entities are represented in three-dimensional form, which is closer to the real world and is easy to manipulate and transform. However, in the process of rendering a target object, rendering overflow is prone to occur, that is, the rendered material spills over onto areas outside the target object.
  • the present disclosure proposes a technical solution for image processing.
  • an image processing method including:
  • the target material is used to render the region to be rendered to generate a rendering result.
  • the generating a reference image including a mark according to the target image and the second object includes:
  • the marking the coverage area of the second object in the first initial image to obtain the reference image includes:
  • At least one pixel included in the coverage area of the second object is changed to a target pixel value to obtain the reference image.
  • the determining the region to be rendered according to the reference image and the first object includes:
  • a region to be rendered is determined according to the reference image and the first object.
  • determining the area to be rendered according to the reference image and the first object includes:
  • the area formed by the pixels to be rendered is used as the area to be rendered.
  • the identifying the first object to be rendered in the target image and the second object to which the first object belongs in the target image includes:
  • the second feature points included in the second feature point set are connected in a second preset manner to obtain an area corresponding to the second object.
  • the connecting the first feature points included in the first feature point set according to a first preset manner to obtain the area corresponding to the first object includes:
  • the area covered by the at least one first grid is taken as the area corresponding to the first object.
  • the connecting the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object includes:
  • the area covered by the at least one second grid is taken as the area corresponding to the second object.
  • the method further includes:
  • the rendering result and the modification result are merged to obtain a merged image.
  • the first object includes a pupil
  • the second object includes an eye
  • the target material includes a material for beautifying the pupil
  • an image processing device including:
  • a recognition module configured to recognize a first object to be rendered in a target image, and a second object to which the first object belongs in the target image;
  • a reference image generating module configured to generate a reference image including a mark according to the target image and the second object, wherein the mark is used to record the coverage area of the second object;
  • a to-be-rendered area determining module configured to determine a to-be-rendered area according to the reference image and the first object, wherein the to-be-rendered area is located within a coverage area corresponding to the marker;
  • the rendering module is used to render the region to be rendered by using the target material to generate a rendering result.
  • the reference image generation module is used to:
  • the reference image generation module is further configured to:
  • At least one pixel included in the coverage area of the second object is changed to a target pixel value to obtain the reference image.
  • the to-be-rendered area determination module is used to:
  • a region to be rendered is determined according to the reference image and the first object.
  • the to-be-rendered area determination module is further configured to:
  • the area formed by the pixels to be rendered is used as the area to be rendered.
  • the identification module is used to:
  • the second feature points included in the second feature point set are connected in a second preset manner to obtain an area corresponding to the second object.
  • the identification module is further configured to:
  • the area covered by the at least one first grid is taken as the area corresponding to the first object.
  • the identification module is further configured to:
  • the area covered by the at least one second grid is taken as the area corresponding to the second object.
  • the device further includes a fusion module, and the fusion module is configured to:
  • the rendering result and the modification result are merged to obtain a merged image.
  • the first object includes a pupil
  • the second object includes an eye
  • the target material includes a material for beautifying the pupil
  • an electronic device including:
  • a memory for storing processor executable instructions
  • the processor is configured to execute the above-mentioned image processing method.
  • a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the foregoing image processing method is implemented.
  • a computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the above-mentioned image processing method.
  • the first object to be rendered in the target image and the second object to which the first object belongs are recognized, a reference image including a mark is generated according to the target image and the second object, and the area to be rendered is determined according to the reference image and the first object within the coverage area corresponding to the mark, so that the target material is used to render the area to be rendered to generate a rendering result.
  • Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • Fig. 2 shows a schematic diagram of a target image according to an embodiment of the present disclosure.
  • Fig. 3 shows a schematic diagram of a target material according to an embodiment of the present disclosure.
  • Fig. 4 shows a schematic diagram of a first feature point set according to an embodiment of the present disclosure.
  • Fig. 5 shows a schematic diagram of a second feature point set according to an embodiment of the present disclosure.
  • Fig. 6 shows a schematic diagram of a second object according to an embodiment of the present disclosure.
  • Fig. 7 shows a schematic diagram of a reference image according to an embodiment of the present disclosure.
  • Fig. 8 shows a schematic diagram of a rendering result according to an embodiment of the present disclosure.
  • Fig. 9 shows a schematic diagram of a fusion result according to an embodiment of the present disclosure.
  • Fig. 10 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • Fig. 11 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 12 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • the method may be applied to an image processing apparatus, which may be a terminal device, a server, or other processing equipment.
  • terminal devices can be User Equipment (UE), mobile devices, user terminals, terminals, cellular phones, cordless phones, personal digital assistants (PDAs), handheld devices, computing devices, vehicle-mounted devices, portable wearable devices, etc.
  • the image processing method may be implemented by a processor invoking computer-readable instructions stored in the memory.
  • the image processing method may include:
  • Step S11: Identify the first object to be rendered in the target image and the second object to which the first object belongs in the target image.
  • Step S12: Generate a reference image including a mark according to the target image and the second object, where the mark is used to record the coverage area of the second object.
  • Step S13: Determine an area to be rendered according to the reference image and the first object, where the area to be rendered is located within the coverage area corresponding to the mark.
  • Step S14: Use the target material to render the area to be rendered to generate a rendering result.
  • the first object to be rendered in the target image and the second object to which the first object belongs are recognized, a reference image including a mark is generated according to the target image and the second object, and the area to be rendered is determined according to the reference image and the first object within the coverage area corresponding to the mark, so that the target material is used to render the area to be rendered to generate a rendering result.
  • the implementation of the target image is not limited; any image with rendering requirements can be used as the target image.
  • the target image may be an image containing a human face area, such as an avatar image, a half-length image, or a full-body image.
  • Fig. 2 shows a schematic diagram of a target image according to an embodiment of the present disclosure.
  • the target image may be an avatar image containing a human face.
  • the specific object content of the first object to be rendered in the target image can also be confirmed according to actual rendering requirements, which is not limited in the embodiment of the present disclosure.
  • when the target image is an image containing a face area, the first object can be a certain part of the face, such as the pupils, the bridge of the nose, the earlobes, or the lips, determined according to actual needs.
  • the specific content of the second object to which the first object belongs can be flexibly determined according to the actual situation of the first object.
  • for example, when the first object is the pupil, the second object can be the eye; when the first object is the bridge of the nose, the second object may be the nose; when the first object is an earlobe, the second object may be the ear; and when the first object is the lips, the second object may be the mouth.
  • the number of target images is not limited.
  • the target image can include a single picture or multiple pictures; that is, objects in multiple pictures can be rendered in batch at one time.
  • the reference image generated according to the target image and the second object, the mark included in the reference image, and the area to be rendered determined according to the reference image and the first object can all change flexibly with different implementations of the target image; they are not explained here, and details can be found in the following disclosed embodiments.
  • the rendering result can be obtained by rendering the area to be rendered based on the target material.
  • the realization method of the target material can also be flexibly set according to the actual situation of the first object, and is not limited by the following disclosed embodiments.
  • for example, when the first object is the pupil, the target material can be a color contact lens material; when the first object is the bridge of the nose, the target material can be a shadow material; when the first object is an earlobe, the target material can be an earring material; and when the first object is the lips, the target material can be a lipstick material.
  • the first object may include pupil objects
  • the second object may include eye objects
  • the target material may include materials for beautifying pupils.
  • the material for beautifying pupils may be a color contact lens material.
  • the specific content of the color contact material can be flexibly selected and set, and its acquisition method can also be flexibly determined according to the actual situation.
  • the color contact material can be randomly selected from the color contact material library.
  • the material of the beauty contact lenses may also be a specific material selected according to requirements.
  • Fig. 3 shows a schematic diagram of a target material according to an embodiment of the present disclosure. It can be seen from the figure that, in an example of the present disclosure, the target material may be a purple color contact lens material with a certain gloss texture (due to the limitations of the drawing, the purple color cannot be seen in the figure).
  • the image processing method proposed in the embodiment of the present disclosure can be applied to the color pupil processing of the face image.
  • the rendering range is effectively restricted, and the possibility of rendering the beauty pupil material to the area outside the pupil during the rendering process is reduced.
  • the image processing method of the embodiment of the present disclosure is not limited to the processing applied to the image containing the face region, and can be applied to any image processing, which is not limited in the present disclosure.
  • the method of acquiring the target image is not limited, and is not limited by the following disclosed embodiments.
  • the target image can be obtained by reading or receiving; in a possible implementation manner, the target image can be obtained by active shooting or other active acquisition methods.
  • the first object and the second object in the target image can be identified through step S11.
  • the implementation manner of step S11 is not limited.
  • the first object and the second object to be rendered in the target image can be determined by performing object detection in the target image.
  • in an example, object detection for the first object and object detection for the second object can be performed separately in the target image; in another example, object detection for the second object can be performed first, and the target image can then be cropped according to the detection result to retain the image of the second object. Since the first object is part of the second object, object detection can be further performed in the cropped image to obtain the area corresponding to the first object.
  • the method of object detection is not limited in the embodiments of the present disclosure; any method that can detect the target object from the image can be used as an implementation of object detection.
  • object detection can be implemented by feature extraction.
  • step S11 may include:
  • Step S111: Perform feature extraction on the target image to obtain a first feature point set corresponding to the first object and a second feature point set corresponding to the second object, respectively.
  • Step S112: Connect the first feature points included in the first feature point set in a first preset manner to obtain an area corresponding to the first object.
  • Step S113: Connect the second feature points included in the second feature point set in a second preset manner to obtain an area corresponding to the second object.
  • the first feature point set and the second feature point set are obtained by feature extraction on the target image; the points in the first feature point set are then connected in the first preset manner to obtain the area corresponding to the first object, and the points in the second feature point set are connected in the second preset manner to obtain the area corresponding to the second object. The area corresponding to the first object may also be referred to as the coverage area of the first object, and the area corresponding to the second object may also be referred to as the coverage area of the second object.
  • the implementation of step S111 is not limited; that is, the specific method of feature extraction is not limited in the embodiments of the present disclosure, and any calculation method that can perform feature extraction can be used as an implementation of step S111.
  • the order of object detection for the first object and the second object can be chosen flexibly according to the actual situation. Since feature extraction is a possible implementation of object detection, feature extraction of the first object and of the second object can be performed at the same time, or feature extraction can be performed on the second object first and then further on the first object; the feature extraction methods of the two can be the same or different, which will not be repeated here. For the above reasons, the first feature point set and the second feature point set may or may not have common feature points, and neither case affects the subsequent processing.
  • Fig. 4 shows a schematic diagram of a first feature point set according to an embodiment of the present disclosure, and Fig. 5 shows a schematic diagram of a second feature point set according to an embodiment of the present disclosure. In an example, the first feature point set in Fig. 4 is the point set obtained by pupil feature point extraction on the face image in Fig. 2, and the second feature point set in Fig. 5 is the point set obtained by eye feature point extraction on the face image in Fig. 2.
  • after the first feature point set and the second feature point set are obtained, the area corresponding to the first object and the area corresponding to the second object can be obtained through step S112 and step S113. The order of execution of steps S112 and S113 is not limited; they can be executed simultaneously or sequentially, and when executed sequentially, either step may be executed first.
  • the process of obtaining the area corresponding to the first object may be to connect the first feature point set according to the first preset manner. The specific implementation form of the first preset manner is not limited in the embodiments of the present disclosure and may be determined according to the actual situation of the first object, which is not restricted here.
  • the second preset method for obtaining the area corresponding to the second object may also be flexibly selected according to actual conditions.
  • the second preset method may be the same as the first preset method or may be different.
  • step S112 may include:
  • the first feature points included in the first feature point set are connected into at least one first grid according to the sequence of the first preset manner, and the area covered by the first grid is taken as the area corresponding to the first object.
  • in a possible implementation manner, the first feature points included in the first feature point set can be connected in the order given by the first preset manner. As proposed above, the first preset manner can be set according to the actual situation, so the connection order of the first feature points is also determined by that preset manner. After the first feature points are connected in the preset order, at least one grid can be obtained; in the embodiments of the present disclosure, this grid may be referred to as the first grid. The number and shape of the first grids can be determined according to the number of first feature points contained in the first feature point set and the first preset manner.
  • in an example, the first preset manner may be to divide every three first feature points in the first feature point set into a group and connect them to form multiple triangular grids; there is no specific restriction on which three points are divided into a group, and it can be set according to the actual situation.
  • in an example, the first preset manner may also be to divide every four first feature points in the first feature point set into a group and connect them to form multiple quadrilateral grids; likewise, the division method is not limited.
  • step S112 may include:
  • Step S1121: Take at least three first feature points as a group and divide the first feature point set to obtain at least one group of first feature point subsets.
  • Step S1122: Connect the first feature points included in the at least one group of first feature point subsets in sequence to obtain at least one first grid.
  • Step S1123: Use the area covered by the at least one first grid as the area corresponding to the first object.
  • the number of the first feature points as a group can be 3, or more than 3, such as 4, 5 or 6, etc., which is not limited here.
  • step S113 may include:
  • the second feature points included in the second feature point set are connected into at least one second grid according to the sequence of the second preset manner, and the area covered by the second grid is taken as the area corresponding to the second object.
  • step S113 may include:
  • Step S1131: Take at least three second feature points as a group and divide the second feature point set to obtain at least one group of second feature point subsets.
  • Step S1132: Connect the second feature points included in the at least one group of second feature point subsets in sequence to obtain at least one second grid.
  • Step S1133: Use the area covered by the at least one second grid as the area corresponding to the second object.
  • Fig. 6 shows a schematic diagram of a second object according to an embodiment of the present disclosure. It can be seen from the figure that, in an example, when the second feature points in the second feature point set in Fig. 5 are connected in a preset manner, multiple triangular grids can be obtained, and these triangular grids together constitute the area corresponding to the second object.
  • in three-dimensional computer graphics, a mesh (such as the first mesh and the second mesh) is a collection of vertices and polygons representing the shape of a polyhedron; it is also called an unstructured grid.
  • in a possible implementation manner, the first grid and the second grid may take the form of triangular grids; the area corresponding to the first object and the area corresponding to the second object are each enclosed by triangular grids to facilitate subsequent rendering.
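As an illustration of the grid construction described above, the following sketch rasterizes the area covered by a set of triangular grids built from grouped feature points. This is not part of the patent: Python with NumPy and OpenCV is chosen here only for concreteness, and the helper name and the triangle grouping passed in are assumptions.

```python
import numpy as np
import cv2

def region_from_feature_points(image_shape, points, triangles):
    """Rasterize the area covered by a set of triangular grids.

    points    -- (N, 2) array of feature point coordinates (x, y)
    triangles -- groups of three point indices; the grouping is the
                 "preset manner" (which three points form one grid)
    Returns a binary mask in which the covered area is 1.
    """
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    for tri in triangles:
        vertices = points[list(tri)].astype(np.int32)
        cv2.fillConvexPoly(mask, vertices, 1)  # one first/second grid
    return mask
```

The same helper can produce both the area corresponding to the first object (e.g., the pupil) and the area corresponding to the second object (e.g., the eye) by passing the respective feature point set and grouping.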
  • step S12 may be used to generate a reference image including a mark according to the target image and the second object, wherein the mark is used to record the coverage area of the second object.
  • the implementation of step S12 can be flexibly determined according to the actual conditions of the first object and the second object, and is not limited to the following disclosed embodiments. In a possible implementation manner, step S12 may include:
  • Step S121: Generate a first initial image with the same size as the target image.
  • Step S122: In the first initial image, mark the coverage area of the second object to obtain the reference image.
  • the reference image is obtained by generating a first initial image with the same size as the target image, and marking the coverage area of the second object in the first initial image.
  • This process uses an image newly constructed with the same size as the target image to obtain the reference image, which effectively marks the position of the second object in the target image. In the subsequent rendering process, it can then be determined from the mark whether rendered pixels exceed the position constraint of the second object when the first object is rendered, reducing the possibility of rendering overflow and improving the reliability and authenticity of the rendering.
  • the size of the first initial image is the same as that of the target image, and its specific image content is not limited in the embodiments of the present disclosure.
  • the first initial image may be a blank image.
  • the first initial image may be an image covered by a certain texture, and the covered texture is not limited in the embodiment of the present disclosure, and can be flexibly set according to actual conditions.
  • texture is one or several two-dimensional graphics representing the details of the surface of an object, and can also be referred to as texture mapping.
  • the first initial image may be an image covered by a certain solid color texture, and the color of the texture may also be flexibly set, which may be black, white, or red.
  • the generation process of the first initial image is not limited in the embodiment of the present disclosure, and may be to create an image of the same size after reading the size of the target image.
  • step S122 may be used to mark the coverage area of the second object in the first initial image to obtain the reference image. Since the size of the first initial image and the target image are the same, the position of the marked area in the first initial image is consistent with the position of the second object in the target image.
  • the marking method is not limited in the embodiments of the present disclosure; any marking method that can distinguish the area corresponding to the second object from the rest of the first initial image can be used as a marking implementation. In a possible implementation manner, the marking method may be to add a marker at the position to be marked in the first initial image, and the marker may be a symbol, data, a texture, etc., which is not limited here.
  • step S122 may include:
  • At least one pixel included in the coverage area of the second object is changed to a target pixel value to obtain a reference image.
  • the image area with the target pixel value is the image area including the mark in the reference image.
  • in the first initial image, areas other than the coverage area of the second object can also be adjusted to pixel values different from the target pixel value, in order to achieve a clear distinction between the two areas.
  • by changing at least one pixel included in the coverage area of the second object to the target pixel value in the first initial image, the reference image can be obtained. The contrast is achieved simply by changing the color of the coverage area of the second object, so marking the coverage area of the second object in this way is simple and cost-saving.
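A minimal sketch of this marking step, under the same illustrative assumptions as above; the black first initial image and the red target pixel value mirror the example in the embodiments, but the concrete values are not mandated:

```python
import numpy as np

def make_reference_image(target_shape, coverage_mask, target_pixel=(0, 0, 255)):
    """Generate a first initial image with the same size as the target image,
    then change the pixels inside the second object's coverage area to the
    target pixel value (red in BGR here) to obtain the reference image."""
    reference = np.zeros((*target_shape[:2], 3), dtype=np.uint8)  # black initial image
    reference[coverage_mask.astype(bool)] = target_pixel          # the mark
    return reference
```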
  • step S122 may also include:
  • the coverage area of the second object is rendered through a preset texture to obtain a reference image.
  • the specific implementation of the preset texture used to render the area corresponding to the second object is not limited in the embodiments of the present disclosure, as long as the area corresponding to the second object can be distinguished from the other areas of the first initial image. In one example, the preset texture may be any texture that does not contain blank areas; in another example, when the first initial image is covered by a solid color texture of a certain color, the preset texture can be any texture that does not contain that color.
  • Fig. 7 shows a schematic diagram of a reference image according to an embodiment of the present disclosure. As can be seen from the figure, in an example of the present disclosure, the first initial image is an image covered by a black texture, and the second object is rendered with a red texture to obtain the reference image shown in the figure.
  • the reference image can be obtained by rendering the coverage area of the second object with a preset texture in the first initial image.
  • in this way, the position of the second object can be marked in the reference image for subsequent use. Because the mark is applied to the second object by rendering, the entire image processing flow can be completed with the same rendering method, reducing additional resource consumption, improving the overall efficiency of image processing, and saving costs.
  • step S13 can be used to determine the area to be rendered.
  • step S13 can include:
  • Step S131: Generate a second initial image with the same size as the target image.
  • Step S132: In the second initial image, determine the area to be rendered according to the reference image and the first object.
  • the area to be rendered is determined according to the reference image and the first object in the second initial image with the same size as the target image.
  • step S132 may include:
  • Step S1321: In the second initial image, use the area corresponding to the first object as the initial area to be rendered.
  • Step S1322: Traverse the pixels of the initial area to be rendered; when the corresponding position of a pixel in the reference image includes a mark, take the pixel as a pixel to be rendered.
  • Step S1323: Use the area formed by the pixels to be rendered as the area to be rendered.
  • in step S131, the generation process and implementation manner of the second initial image are not limited; reference can be made to the generation process and implementation manner of the first initial image proposed in the above disclosed embodiments, which will not be repeated here. It should be noted that the implementation of the second initial image may be the same as or different from that of the first initial image.
  • step S1322 can be used to traverse the pixels of the initial area to be rendered.
  • when each pixel is traversed, the pixel at the same position in the reference image can be found. The traversed pixel can be recorded as the traversed pixel, and the corresponding pixel in the reference image as the reference pixel; it can then be determined whether the reference pixel is marked. As disclosed in the above embodiments, the marked area in the reference image corresponds to the position of the second object in the target image. Therefore, if the reference pixel is marked, the traversed pixel of the initial area to be rendered is within the range of the second object, and the traversed pixel can be taken as a pixel to be rendered, awaiting subsequent rendering with the target material; if the reference pixel is not marked, the traversed pixel exceeds the range of the second object, rendering it might exceed the preset range, and it is therefore not taken as a pixel to be rendered.
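The traversal just described amounts to intersecting the initial area to be rendered with the marked area of the reference image. A vectorized sketch, with the same assumed mark color as above and an illustrative helper name:

```python
import numpy as np

def pixels_to_render(initial_mask, reference, target_pixel=(0, 0, 255)):
    """A traversed pixel of the initial area to be rendered is kept only if
    the reference pixel at the same position carries the mark, so the second
    object's coverage area constrains the rendering."""
    marked = np.all(reference == target_pixel, axis=-1)        # where the mark is
    return np.logical_and(initial_mask.astype(bool), marked)   # area to be rendered
```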
  • step S14 may be used to render the area to be rendered using the target material to obtain a rendering result.
  • the specific method of rendering is not limited in the embodiments of the present disclosure, and any applicable rendering method can be used as an implementation manner. In one example, the rendering can be implemented in a shader through OpenGL; the rendering with the preset texture in the first initial image proposed in the above disclosed embodiments may adopt the same rendering manner.
  • Fig. 8 shows a schematic diagram of a rendering result according to an embodiment of the present disclosure. It can be seen from the figure that the second initial image is an image covered by a black texture and is rendered with the color contact material shown in Fig. 3, thereby obtaining the rendering result shown in the figure.
  • through the above process, a pixel whose corresponding position in the reference image includes a mark is taken as a pixel to be rendered, and the area formed by the pixels to be rendered is the area to be rendered. Based on this process, the part of the target material that actually needs to be rendered can be effectively cropped, reducing the possibility that this area exceeds the range of the second object to which the first object belongs, thereby improving the reliability and authenticity of the rendering result.
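To make the cropping concrete, the following sketch draws the target material onto a second initial image and clips it to the area to be rendered. The bounding-box placement of the material is an assumption for illustration; the embodiments perform this step in a shader:

```python
import numpy as np
import cv2

def render_material(target_shape, material, material_box, render_mask):
    """Render the target material into a (black) second initial image, but
    only at pixels of the area to be rendered; everything outside the mask
    stays on the initial image, which crops the material to the first object.
    material_box -- (x, y, w, h), assumed here to be the bounding box of
    the first object (e.g., the pupil)."""
    x, y, w, h = material_box
    canvas = np.zeros((*target_shape[:2], 3), dtype=np.uint8)
    canvas[y:y + h, x:x + w] = cv2.resize(material, (w, h))  # fit material to the box
    canvas[~render_mask] = 0                                 # clip to the area to be rendered
    return canvas
```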
  • the rendering result can also be combined with the original target image, so that the target image can be further modified or perfected.
  • the specific combination method can be determined according to the actual situation and requirements of the target image and the target material. For example, when the target image is an image including a face area and the target material is a color contact material, the combination can be a fusion of the rendering result and the target image. Therefore, in a possible implementation manner, the method proposed in the embodiments of the present disclosure may further include:
  • Step S15: Change the transparency of the target image to obtain a modified result.
  • Step S16: Fuse the rendering result with the modified result to obtain a fused image.
  • the specific value for changing the transparency of the target image is not limited here and can be flexibly set according to the actual situation, as long as the fusion effect of the rendering result and the target image is guaranteed. Since the second initial image from which the rendering result is generated has the same size as the target image, the rendering result can be effectively fused to the corresponding position in the target image. For example, when the target image is a face image and the target material is a color contact material, in the fusion result the position where the color contact material is rendered is the same as the position of the pupil in the face image.
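A sketch of the transparency fusion of steps S15 and S16; the alpha value is illustrative and not specified by the embodiments:

```python
import cv2

def fuse(target, rendering, render_mask, alpha=0.6):
    """Change the transparency of the target image, then merge it with the
    rendering result; only pixels of the area to be rendered are replaced,
    so the material lands exactly on the first object's position (both
    images have the same size)."""
    blended = cv2.addWeighted(target, alpha, rendering, 1.0 - alpha, 0)
    fused = target.copy()
    fused[render_mask] = blended[render_mask]
    return fused
```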
  • Fig. 9 shows a schematic diagram of a fusion result according to an embodiment of the present disclosure. It can be seen from the figure that, through the image processing methods proposed in the above disclosed embodiments, the color contact material in Fig. 3 can be effectively and accurately fused to the pupil position of the face image in Fig. 2.
  • Applying makeup to face images is currently a mainstream method in face image processing, such as applying cosmetic contact lenses to face images, adding lipstick to the lips of the face image, or adding shadows to the bridge of the nose.
  • taking color contact lenses as an example, when rendering the color contact material on a face image, because the degree of opening and closing of the eyes in the face image may differ, the material is very likely to be rendered onto the eyelids or areas outside the eyes during the process, leading to inaccurate and less realistic color contact results.
  • therefore, an image processing process based on a reliable rendering method can greatly improve the quality of the color contact effect, thereby expanding the quality and application range of the image processing method.
  • the embodiment of the present disclosure proposes an image processing method, and the specific process of this processing method may be:
  • Fig. 2 is a face image (that is, the target image mentioned in the embodiments of the present disclosure) to which color contact lenses are to be applied.
  • the pupil contour points of the face image are extracted through feature point extraction (as shown in Fig. 4) as the first feature point set of the first object, and the eye contour points (as shown in Fig. 5) as the second feature point set of the second object.
  • the eye contour points can be connected in a preset order to form an eye triangular mesh as the second mesh (as shown in Fig. 6).
  • the pupil contour points can likewise be connected in a preset order to form a pupil triangular mesh as the first mesh.
  • after the eye area is marked (for example, rendered red on a black texture, as in Fig. 7) to obtain the mask texture of the eye contour, this mask texture, the vertex coordinates of the previously obtained pupil triangular mesh, and the color contact material shown in Fig. 3 can be passed to the shader together to render the beauty pupil material.
  • the rendering process of the beauty pupil material can be as follows: first generate a black texture map as the second initial image; then determine the position of the pupil according to the vertex coordinates of the pupil triangular mesh; and then traverse each pixel at the pupil position in turn. As each pixel is traversed, whether the color at this pixel's position in the mask texture of the eye contour is red can be checked; if it is, the pixel is within the range of the eye and is a pixel to be rendered, and the color contact material can be rendered to this pixel.
  • after the traversal, the rendering result can be obtained. This rendering result can be regarded as a cropped color contact material whose position is the same as that of the pupil in the face image; the rendering result is shown in Fig. 8.
  • after the rendering result is obtained, it can be transparently fused with the original face image. Since the position of the rendering result is consistent with the position of the pupil in the face image, the color contact lens can be accurately fused onto the face image; the fusion result is shown in Fig. 9. It can be seen from the figure that a reliable result is obtained through the above process: the fused beauty pupil neither exceeds the range of the pupil nor is rendered to areas outside the eye.
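Putting the sketches above together, the beauty-pupil flow of this example could be outlined as follows. Every helper name and input here is hypothetical, tying together the illustrative functions from earlier in this section; the embodiments themselves perform the rendering in an OpenGL shader:

```python
import cv2
import numpy as np

face = cv2.imread("face.jpg")                    # target image (cf. Fig. 2)
material = cv2.imread("contact_material.png")    # target material (cf. Fig. 3)

# Feature extraction and triangle groupings are assumed given (e.g., from a
# face landmark detector); pupil_tris / eye_tris encode the preset manners.
pupil_pts, eye_pts = extract_landmarks(face)
pupil_mask = region_from_feature_points(face.shape, pupil_pts, pupil_tris)  # first object
eye_mask = region_from_feature_points(face.shape, eye_pts, eye_tris)        # second object

reference = make_reference_image(face.shape, eye_mask)     # eye area marked red (cf. Fig. 7)
render_mask = pixels_to_render(pupil_mask, reference)      # clipped to the eye
box = cv2.boundingRect(pupil_pts.astype(np.int32))         # where the material goes
rendering = render_material(face.shape, material, box, render_mask)  # cf. Fig. 8
result = fuse(face, rendering, render_mask)                # cf. Fig. 9
```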
  • the image processing method of the embodiment of the present disclosure is not limited to the above-mentioned image processing that includes the face area, nor is it limited to the above-mentioned process of applying cosmetic pupil processing to the face image, and can be applied to any image processing. This disclosure does not limit this.
  • the writing order of the steps does not imply a strict execution order, nor does it constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible inner logic.
  • Fig. 10 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • the image processing apparatus may be a terminal device, a server, or other processing equipment.
  • terminal devices can be User Equipment (UE), mobile devices, user terminals, terminals, cellular phones, cordless phones, personal digital assistants (PDAs), handheld devices, computing devices, vehicle-mounted devices, portable wearable devices, etc.
  • the image processing apparatus may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • the image processing apparatus 20 may include:
  • the recognition module 21 is configured to recognize the first object to be rendered in the target image and the second object to which the first object belongs in the target image.
  • the reference image generating module 22 is configured to generate a reference image including a mark according to the target image and the second object, wherein the mark is used to record the coverage area of the second object.
  • the to-be-rendered area determining module 23 is configured to determine the to-be-rendered area according to the reference image and the first object, where the to-be-rendered area is located within the coverage area corresponding to the marker.
  • the rendering module 24 is used for rendering the area to be rendered by using the target material to generate a rendering result.
  • the reference image generation module is used to generate a first initial image with the same size as the target image; in the first initial image, mark the coverage area of the second object to obtain the reference image.
  • the reference image generation module is further configured to: in the first initial image, change at least one pixel included in the coverage area of the second object to a target pixel value to obtain the reference image.
  • the to-be-rendered area determination module is configured to: generate a second initial image with the same size as the target image; in the second initial image, determine the to-be-rendered area according to the reference image and the first object.
  • the to-be-rendered area determination module is further configured to: in the second initial image, use the area corresponding to the first object as the initial to-be-rendered area; traverse the pixels of the initial to-be-rendered area, and when the corresponding position of a pixel in the reference image includes a mark, regard the pixel as a pixel to be rendered; and use the area formed by the pixels to be rendered as the to-be-rendered area.
  • the recognition module is used to: perform feature extraction on the target image to obtain a first feature point set corresponding to the first object and a second feature point set corresponding to the second object; connect the first feature points included in the first feature point set in a first preset manner to obtain the area corresponding to the first object; and connect the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object.
  • the recognition module is further configured to: take at least three first feature points as a group and divide the first feature point set to obtain at least one group of first feature point subsets; sequentially connect the first feature points included in the at least one group of first feature point subsets to obtain at least one first grid; and take the area covered by the at least one first grid as the area corresponding to the first object.
  • the recognition module is further configured to: take at least three second feature points as a group and divide the second feature point set to obtain at least one group of second feature point subsets; sequentially connect the second feature points included in the at least one group of second feature point subsets to obtain at least one second grid; and take the area covered by the at least one second grid as the area corresponding to the second object.
  • the device further includes a fusion module, which is used to: change the transparency of the target image to obtain a modified result; fuse the rendering result with the modified result to obtain a fused image.
  • the first object includes a pupil
  • the second object includes an eye
  • the target material includes a material for beautifying the pupil
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above-mentioned method.
  • the electronic device can be provided as a terminal, server or other form of device.
  • FIG. 11 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
  • the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, and a sensor component 814 , And communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 806 provides power for various components of the electronic device 800.
  • the power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), and when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
  • the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, for example, the display and the keypad of the electronic device 800. The sensor component 814 can also detect a position change of the electronic device 800 or a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • in an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components to implement the above methods.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
  • FIG. 12 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server, as shown in Fig. 12.
  • the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
  • the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
  • a non-volatile computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
  • the present disclosure may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
  • the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the foregoing.
  • the computer-readable storage medium used here is not to be interpreted as transient signals themselves, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber-optic cables), or electrical signals transmitted through wires.
  • the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
  • the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • in some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized with the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
  • these computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, thereby producing a machine such that, when executed by the processor of the computer or other programmable data processing apparatus, these instructions produce an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction that contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: identifying a first object to be rendered in a target image, and a second object to which the first object belongs in the target image; generating a reference image including a mark according to the target image and the second object, where the mark is used to record the coverage area of the second object; determining an area to be rendered according to the reference image and the first object, where the area to be rendered lies within the coverage area corresponding to the mark; and rendering the area to be rendered with a target material to generate a rendering result. Through the above process, the reference image including the mark, generated from the second object to which the first object belongs, can be used to constrain the rendering range when the first object is rendered, thereby improving the reliability and realism of the rendering result.

Description

Image processing method and apparatus, electronic device and storage medium
The present disclosure claims priority to Chinese patent application No. 201911154806.5, entitled "Image processing method and apparatus, electronic device and storage medium" and filed with the Chinese Patent Office on November 22, 2019, the entire contents of which are incorporated into the present disclosure by reference.
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Image rendering is the process of converting three-dimensional light-energy transfer into a two-dimensional image. Scenes and entities are represented in three-dimensional form, which is closer to the real world and easier to manipulate and transform. However, in the process of rendering a target object, rendering overflow is prone to occur.
Summary
The present disclosure proposes a technical solution for image processing.
According to one aspect of the present disclosure, an image processing method is provided, including:
identifying a first object to be rendered in a target image, and a second object to which the first object belongs in the target image;
generating a reference image including a mark according to the target image and the second object, where the mark is used to record the coverage area of the second object;
determining an area to be rendered according to the reference image and the first object, where the area to be rendered lies within the coverage area corresponding to the mark;
rendering the area to be rendered with a target material to generate a rendering result.
In a possible implementation, generating a reference image including a mark according to the target image and the second object includes:
generating a first initial image with the same size as the target image;
marking the coverage area of the second object in the first initial image to obtain the reference image.
In a possible implementation, marking the coverage area of the second object in the first initial image to obtain the reference image includes:
changing at least one pixel included in the coverage area of the second object in the first initial image to a target pixel value to obtain the reference image.
In a possible implementation, determining an area to be rendered according to the reference image and the first object includes:
generating a second initial image with the same size as the target image;
determining the area to be rendered in the second initial image according to the reference image and the first object.
In a possible implementation, determining the area to be rendered in the second initial image according to the reference image and the first object includes:
taking the area corresponding to the first object in the second initial image as an initial area to be rendered;
traversing the pixels of the initial area to be rendered, and when a pixel's corresponding position in the reference image includes the mark, taking the pixel as a pixel to be rendered;
taking the area formed by the pixels to be rendered as the area to be rendered.
In a possible implementation, identifying a first object to be rendered in the target image, and a second object to which the first object belongs in the target image, includes:
performing feature extraction on the target image to obtain a first feature point set corresponding to the first object and a second feature point set corresponding to the second object, respectively;
connecting the first feature points included in the first feature point set in a first preset manner to obtain the area corresponding to the first object;
connecting the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object.
In a possible implementation, connecting the first feature points included in the first feature point set in a first preset manner to obtain the area corresponding to the first object includes:
dividing the first feature point set by taking at least three first feature points as one group to obtain at least one first feature point subset;
connecting in sequence the first feature points included in each of the at least one first feature point subset to obtain at least one first mesh;
taking the area covered by the at least one first mesh as the area corresponding to the first object.
In a possible implementation, connecting the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object includes:
dividing the second feature point set by taking at least three second feature points as one group to obtain at least one second feature point subset;
connecting in sequence the second feature points included in each of the at least one second feature point subset to obtain at least one second mesh;
taking the area covered by the at least one second mesh as the area corresponding to the second object.
In a possible implementation, the method further includes:
changing the transparency of the target image to obtain a change result;
fusing the rendering result with the change result to obtain a fused image.
In a possible implementation, the first object includes a pupil, the second object includes an eye, and the target material includes a material for beautifying the pupil.
According to one aspect of the present disclosure, an image processing apparatus is provided, including:
an identification module configured to identify a first object to be rendered in a target image, and a second object to which the first object belongs in the target image;
a reference image generation module configured to generate a reference image including a mark according to the target image and the second object, where the mark is used to record the coverage area of the second object;
an area-to-be-rendered determination module configured to determine an area to be rendered according to the reference image and the first object, where the area to be rendered lies within the coverage area corresponding to the mark;
a rendering module configured to render the area to be rendered with a target material to generate a rendering result.
In a possible implementation, the reference image generation module is configured to:
generate a first initial image with the same size as the target image;
mark the coverage area of the second object in the first initial image to obtain the reference image.
In a possible implementation, the reference image generation module is further configured to:
change at least one pixel included in the coverage area of the second object in the first initial image to a target pixel value to obtain the reference image.
In a possible implementation, the area-to-be-rendered determination module is configured to:
generate a second initial image with the same size as the target image;
determine the area to be rendered in the second initial image according to the reference image and the first object.
In a possible implementation, the area-to-be-rendered determination module is further configured to:
take the area corresponding to the first object in the second initial image as an initial area to be rendered;
traverse the pixels of the initial area to be rendered, and when a pixel's corresponding position in the reference image includes the mark, take the pixel as a pixel to be rendered;
take the area formed by the pixels to be rendered as the area to be rendered.
In a possible implementation, the identification module is configured to:
perform feature extraction on the target image to obtain a first feature point set corresponding to the first object and a second feature point set corresponding to the second object, respectively;
connect the first feature points included in the first feature point set in a first preset manner to obtain the area corresponding to the first object;
connect the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object.
In a possible implementation, the identification module is further configured to:
divide the first feature point set by taking at least three first feature points as one group to obtain at least one first feature point subset;
connect in sequence the first feature points included in each of the at least one first feature point subset to obtain at least one first mesh;
take the area covered by the at least one first mesh as the area corresponding to the first object.
In a possible implementation, the identification module is further configured to:
divide the second feature point set by taking at least three second feature points as one group to obtain at least one second feature point subset;
connect in sequence the second feature points included in each of the at least one second feature point subset to obtain at least one second mesh;
take the area covered by the at least one second mesh as the area corresponding to the second object.
In a possible implementation, the apparatus further includes a fusion module configured to:
change the transparency of the target image to obtain a change result;
fuse the rendering result with the change result to obtain a fused image.
In a possible implementation, the first object includes a pupil, the second object includes an eye, and the target material includes a material for beautifying the pupil.
According to one aspect of the present disclosure, an electronic device is provided, including:
a processor;
a memory for storing processor-executable instructions;
where the processor is configured to execute the above image processing method.
According to one aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above image processing method.
According to one aspect of the present disclosure, computer-readable code is provided, where when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the image processing method.
In the image processing method of the embodiments of the present disclosure, a first object to be rendered in a target image and a second object to which the first object belongs are identified; a reference image including a mark is generated according to the target image and the second object; an area to be rendered, lying within the coverage area corresponding to the mark, is determined according to the reference image and the first object; and the area to be rendered is rendered with a target material to generate a rendering result. Through the above process, the reference image including the mark, generated from the second object to which the first object belongs, can be used to constrain the rendering range when the first object is rendered, thereby improving the reliability and realism of the rendering result.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure. Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings here are incorporated into and constitute a part of this specification. They illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a target image according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of a target material according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of a first feature point set according to an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of a second feature point set according to an embodiment of the present disclosure.
Fig. 6 shows a schematic diagram of a second object according to an embodiment of the present disclosure.
Fig. 7 shows a schematic diagram of a reference image according to an embodiment of the present disclosure.
Fig. 8 shows a schematic diagram of a rendering result according to an embodiment of the present disclosure.
Fig. 9 shows a schematic diagram of a fusion result according to an embodiment of the present disclosure.
Fig. 10 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 11 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 12 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features, and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" used here means "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" need not be construed as superior to or better than other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the term "at least one" herein indicates any one of multiple items, or any combination of at least two of multiple items; for example, including at least one of A, B, and C may indicate including any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following detailed description in order to better explain the present disclosure. Those skilled in the art should understand that the present disclosure can also be implemented without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. The method can be applied to an image processing apparatus, which may be a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like.
In some possible implementations, the image processing method can be implemented by a processor invoking computer-readable instructions stored in a memory.
As shown in Fig. 1, the image processing method may include:
Step S11: identifying a first object to be rendered in a target image, and a second object to which the first object belongs in the target image.
Step S12: generating a reference image including a mark according to the target image and the second object, where the mark is used to record the coverage area of the second object.
Step S13: determining an area to be rendered according to the reference image and the first object, where the area to be rendered lies within the coverage area corresponding to the mark.
Step S14: rendering the area to be rendered with a target material to generate a rendering result.
In the image processing method of the embodiments of the present disclosure, a first object to be rendered in a target image and a second object to which the first object belongs are identified; a reference image including a mark is generated according to the target image and the second object; an area to be rendered, lying within the coverage area corresponding to the mark, is determined according to the reference image and the first object; and the area to be rendered is rendered with a target material to generate a rendering result. Through the above process, the reference image including the mark, generated from the second object to which the first object belongs, can be used to constrain the rendering range when the first object is rendered, thereby improving the reliability and realism of the rendering result.
In the above disclosed embodiments, the implementation of the target image is not limited; any image with a rendering requirement can serve as an implementation of the target image. In a possible implementation, the target image may be an image containing a face area, such as a portrait image, a half-body image, or a full-body image.
Fig. 2 shows a schematic diagram of a target image according to an embodiment of the present disclosure. As shown in the figure, in this embodiment of the present disclosure the target image may be a portrait image containing a face. The specific content of the first object to be rendered in the target image can also be determined according to actual rendering requirements, and is not limited in the embodiments of the present disclosure.
In a possible implementation, when the target image is an image containing a face area, the first object may be a certain part of the face, such as a pupil, the bridge of the nose, an earlobe, or the lips, determined according to actual requirements. The specific content of the second object to which the first object belongs can be flexibly determined according to the actual situation of the first object: when the first object is a pupil, the second object may be an eye; when the first object is the bridge of the nose, the second object may be the nose; when the first object is an earlobe, the second object may be an ear; and when the first object is the lips, the second object may be the mouth. In addition, the number of target images is not limited. In a possible implementation, the target image may include a single picture or multiple pictures, that is, objects in multiple pictures may be rendered in one batch.
Since the implementations of the target image, the first object, and the second object can all be flexibly decided according to the actual situation, the reference image generated from the target image and the second object, the mark included in the reference image, and the area to be rendered determined from the reference image and the first object can all vary flexibly with the implementation of the target image and so on. They are not expanded on here; see the following disclosed embodiments for details.
As can be seen from step S14 of the above disclosed embodiments, the rendering result can be obtained by rendering the area to be rendered with the target material. The implementation of the target material can also be flexibly set according to the actual situation of the first object, and is not limited to the following disclosed embodiments. For example, when the first object is a pupil, the target material may be a beauty contact lens material; when the first object is the bridge of the nose, the target material may be a shadow material; when the first object is an earlobe, the target material may be an earring material; and when the first object is the lips, the target material may be a lipstick material.
In a possible implementation, the first object may include a pupil object, the second object may include an eye object, and the target material may include a material for beautifying the pupil. In one example, the material for beautifying the pupil may be a beauty contact lens material, whose specific content can be flexibly selected and set, and whose acquisition method can also be flexibly decided according to the actual situation. In one example, the beauty contact lens material may be randomly selected from a beauty contact lens material library; in another example, it may be a specific material selected as required. Fig. 3 shows a schematic diagram of a target material according to an embodiment of the present disclosure. As can be seen from the figure, in this example of the present disclosure the target material may be a purple beauty contact lens material with a certain glossy texture (due to the limitations of the drawings, the purple color cannot be seen from the figure).
With the first object including a pupil object, the second object including an eye object, and the target material including a beauty contact lens material, the image processing method proposed in the embodiments of the present disclosure can be applied to the process of applying beauty contact lenses to a face image. The marked reference image generated from the eye object can then effectively constrain the rendering range, reducing the possibility of rendering the beauty contact lens material onto areas outside the pupil.
It should be noted that the image processing method of the embodiments of the present disclosure is not limited to processing images containing a face area; it can be applied to arbitrary image processing, which is not limited by the present disclosure.
The acquisition method of the target image is not limited, and is not restricted by the following disclosed embodiments. In a possible implementation, the target image can be obtained by reading or receiving; in another possible implementation, it can be obtained by active shooting or other active acquisition.
Based on the target image obtained in the above disclosed embodiments, the first object and the second object in the target image can be identified through step S11. The implementation of step S11 is not limited. In a possible implementation, the first object to be rendered and the second object can be determined by performing object detection in the target image. In one example, object detection oriented to the first object and object detection oriented to the second object can each be performed in the target image. In another example, object detection oriented to the second object can be performed first; then, according to the detection result of the second object, the target image is cropped to retain the image of the second object. Since the first object is a part of the second object, object detection can then be performed in the cropped image to obtain the area corresponding to the first object.
The manner of object detection is also not limited in the embodiments of the present disclosure; any way of detecting a target object from an image can serve as an implementation of object detection. In a possible implementation, object detection can be realized by feature extraction.
Therefore, in a possible implementation, step S11 may include:
Step S111: performing feature extraction on the target image to obtain a first feature point set corresponding to the first object and a second feature point set corresponding to the second object, respectively.
Step S112: connecting the first feature points included in the first feature point set in a first preset manner to obtain the area corresponding to the first object.
Step S113: connecting the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object.
The first feature point set and the second feature point set are obtained by performing feature extraction on the target image; the points in the first feature point set are then connected in the first preset manner to obtain the area corresponding to the first object, and the points in the second feature point set are connected in the second preset manner to obtain the area corresponding to the second object. The area corresponding to the first object may also be called the coverage area of the first object, and the area corresponding to the second object may also be called the coverage area of the second object.
Through the above process, feature extraction can be used to quickly locate the positions of the first object and the second object in the target image based on the first and second feature point sets, so as to obtain the area corresponding to the first object and the area corresponding to the second object to which the first object belongs. This facilitates subsequently constraining the rendering process of the first object by the position of the second object, improving the reliability of the rendering result.
In the above disclosed embodiments, the implementation of step S111 is not limited; that is, the specific manner of feature extraction is not limited in the embodiments of the present disclosure, and any computation method capable of feature extraction can serve as an implementation of step S111.
As mentioned in the above disclosed embodiments, the order of object detection for the first object and the second object can be flexibly chosen according to the actual situation. Since feature extraction is one possible implementation of object detection, feature extraction can be performed on the first object and the second object simultaneously, or feature extraction can be performed on the second object first and then on the first object; the two feature extraction manners may be the same or different, and details are not repeated here. For the above reasons, the first feature point set and the second feature point set may or may not share common feature points; whether common feature points exist does not affect the subsequent process of obtaining the areas corresponding to the first and second objects, so the relationship between the two feature point sets is not limited in the embodiments of the present disclosure. Fig. 4 shows a schematic diagram of a first feature point set according to an embodiment of the present disclosure, and Fig. 5 shows a schematic diagram of a second feature point set according to an embodiment of the present disclosure. As shown in the figures, in one example the first feature point set in Fig. 4 is the point set obtained by extracting pupil feature points from the face image in Fig. 2, and the second feature point set in Fig. 5 is the point set obtained by extracting eye feature points from the face image in Fig. 2. As can be seen from the figures, the first and second feature point sets in this example are each obtained by feature extraction on the target image, and they share coincident feature points at the boundary between the pupil and the eye.
After the first feature point set and the second feature point set are obtained, the area corresponding to the first object and the area corresponding to the second object can be obtained through steps S112 and S113. In the embodiments of the present disclosure, the execution order of steps S112 and S113 is not limited; they can be executed simultaneously or sequentially, and when executed sequentially, either step S112 or step S113 may be executed first.
As can be seen from step S112, the area of the first object can be obtained by connecting the first feature point set in the first preset manner. The specific form of the first preset manner is not limited in the embodiments of the present disclosure and can be determined according to the actual situation of the first object. Likewise, the second preset manner for obtaining the area corresponding to the second object can also be flexibly chosen according to the actual situation; the second preset manner may be the same as or different from the first preset manner.
In a possible implementation, step S112 may include:
connecting the first feature points included in the first feature point set, in the order of the first preset manner, into at least one first mesh, and taking the area covered by the first mesh as the area corresponding to the first object.
As can be seen from the above disclosed embodiments, when obtaining the area corresponding to the first object, the first feature points contained in the first feature point set can be connected in the order of the first preset manner. As already stated, the first preset manner can be set according to the actual situation, so the connection order of the first feature points is also determined in a preset way. After the first feature points are connected in the preset order, at least one mesh can be obtained, which is called a first mesh in the embodiments of the present disclosure. The number and shape of first meshes can be determined according to the number of first feature points contained in the first feature point set and the first preset manner. In one example, the first preset manner may be to divide every three first feature points in the first feature point set into one group for connection, forming multiple triangle meshes; which three points form a group is not specifically limited here and can be set according to the actual situation. In another example, the first preset manner may be to divide every four first feature points into one group for connection, forming multiple quadrilateral meshes; the manner of division is likewise not limited.
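The mesh construction just described lends itself to a short sketch. The following is a minimal illustration, not the disclosed implementation: the names `points`, `triangles`, and `region_from_meshes` are placeholders introduced here, the preset three-point grouping is assumed to be given, and OpenCV's `fillPoly` is used to rasterize the area the triangle meshes cover.

```python
import numpy as np
import cv2

def region_from_meshes(points, triangles, image_shape):
    """Rasterize the area covered by triangle meshes built from feature points.

    points      -- (N, 2) array of (x, y) feature point coordinates
    triangles   -- preset grouping: a list of 3-tuples of indices into
                   `points` (which points form a group is a hypothetical
                   input of this sketch)
    image_shape -- shape of the target image, e.g. (H, W, 3)
    """
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    for tri in triangles:
        # Connect each group of three feature points in sequence to form
        # one triangle mesh, and fill its interior.
        polygon = points[list(tri)].astype(np.int32)
        cv2.fillPoly(mask, [polygon], color=255)
    # Non-zero pixels form the area covered by the meshes, i.e. the
    # area corresponding to the object.
    return mask
```

Since only the point set and its grouping differ, the same helper can produce the coverage area of either the first object or the second object.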
Based on the above disclosed examples of the preset manner, it can be seen that in a possible implementation, step S112 may include:
Step S1121: dividing the first feature point set by taking at least three first feature points as one group to obtain at least one first feature point subset.
Step S1122: connecting in sequence the first feature points included in each of the at least one first feature point subset to obtain at least one first mesh.
Step S1123: taking the area covered by the at least one first mesh as the area corresponding to the first object.
The number of first feature points in one group may be three, or more than three, such as four, five, or six, which is not limited here. Through the above process, the area where the first object is located can be completely covered according to the actual situation of the first feature point set, and the first object can be determined quickly and efficiently without excessive computing resources, preparing for the subsequent rendering process.
Similarly, in a possible implementation, step S113 may include:
connecting the second feature points included in the second feature point set, in the order of the second preset manner, into at least one second mesh, and taking the area covered by the second mesh as the area corresponding to the second object.
The specific implementation process of the above disclosed embodiment can refer to the implementation process for the first object. In a possible implementation, step S113 may include:
Step S1131: dividing the second feature point set by taking at least three second feature points as one group to obtain at least one second feature point subset.
Step S1132: connecting in sequence the second feature points included in each of the at least one second feature point subset to obtain at least one second mesh.
Step S1133: taking the area covered by the at least one second mesh as the area corresponding to the second object.
Fig. 6 shows a schematic diagram of a second object according to an embodiment of the present disclosure. As can be seen from the figure, in one example, by connecting the second feature points in the second feature point set in Fig. 5 in the second preset manner, multiple triangle meshes can be obtained, and these triangle meshes together constitute the area corresponding to the second object.
The above first meshes and second meshes, as polygon meshes, are collections of vertices and polygons representing polyhedral shapes in three-dimensional computer graphics, also called unstructured meshes. Exemplarily, the first mesh and the second mesh can take the form of triangle meshes. The triangle meshes enclose the area corresponding to the first object and the area corresponding to the second object, facilitating subsequent rendering.
After the first object and the second object are determined, a reference image including a mark can be generated according to the target image and the second object through step S12, where the mark is used to record the coverage area of the second object. The implementation of step S12 can be flexibly decided according to the actual situation of the first object and the second object and is not limited to the following disclosed embodiments. In a possible implementation, step S12 may include:
Step S121: generating a first initial image with the same size as the target image.
Step S122: marking the coverage area of the second object in the first initial image to obtain the reference image.
By generating a first initial image with the same size as the target image and marking the coverage area of the second object in it to obtain the reference image, this process can use a reconstructed image of the same size as the target image to effectively mark the position of the second object in the target image. In the subsequent rendering process, the mark can then be used to determine whether a rendered pixel exceeds the position constraint of the second object when the first object is rendered, reducing the possibility of rendering overflow and improving the reliability and realism of rendering.
In the above disclosed embodiments, the first initial image has the same size as the target image, and its specific image content is not limited in the embodiments of the present disclosure. In one example, the first initial image may be a blank image; in another example, it may be an image covered by some texture, and the covering texture is not limited and can be flexibly set according to the actual situation. A texture is one or more two-dimensional graphics representing the surface details of an object, also called a texture map (texture mapping). In one example, the first initial image may be an image covered by a solid-color texture, and the color of that texture can also be flexibly set, such as black, white, or red. The generation process of the first initial image is not limited in the embodiments of the present disclosure; for example, the size of the target image may be read and an image of the same size then created.
After the first initial image is generated, the coverage area of the second object can be marked in the first initial image through step S122 to obtain the reference image. Since the first initial image has the same size as the target image, the position of the marked area in the first initial image is consistent with the position of the second object in the target image. The manner of marking is not limited in the embodiments of the present disclosure; any marking manner that can distinguish, in the first initial image, the area corresponding to the second object from the remaining area of the first initial image itself can serve as an implementation of marking. In a possible implementation, the marking may be adding a marker at the position to be marked in the first initial image; the marker may be a symbol, data, or a texture, which is not limited here.
In a possible implementation, marking can be realized by adjusting pixel values in the image. In this implementation, step S122 may include:
changing at least one pixel included in the coverage area of the second object in the first initial image to a target pixel value to obtain the reference image.
The image area having the target pixel value is then the image area including the mark in the reference image. In addition, besides marking the coverage area of the second object with the target pixel value in the first initial image, the areas outside the coverage area of the second object can also be adjusted to other pixel values different from the target pixel value, so as to clearly distinguish the two kinds of area.
By changing at least one pixel included in the coverage area of the second object in the first initial image to the target pixel value to obtain the reference image, the coverage area of the second object can be marked in a simple way, merely by changing the color of that area; the process is simple and cost-saving.
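As a hedged sketch of this marking step, the helper below assumes the eye coverage mask comes from a function like `region_from_meshes` above, and takes red as the target pixel value, an assumption borrowed from the Figure 7 example.

```python
import numpy as np

def make_reference_image(target_shape, eye_mask, target_value=(0, 0, 255)):
    """Generate the reference image: a first initial image covered by a
    solid black texture, with the pixels inside the second object's
    coverage area changed to a target pixel value (red in BGR order
    here, matching the Figure 7 example)."""
    reference = np.zeros(target_shape, dtype=np.uint8)  # first initial image
    reference[eye_mask > 0] = target_value              # the mark
    return reference
```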
In a possible implementation, step S122 may also include:
rendering the coverage area of the second object in the first initial image with a preset texture to obtain the reference image.
In the above disclosed embodiments, the specific implementation of the preset texture used to render the area corresponding to the second object is not limited in the embodiments of the present disclosure; it suffices that the area corresponding to the second object can be distinguished from the other areas of the first initial image. In one example, when the first initial image is a blank image, the preset texture may be any texture containing no blank area; in another example, when the first initial image is covered by a solid-color texture of a certain color, the preset texture may be any texture not containing that color. Fig. 7 shows a schematic diagram of a reference image according to an embodiment of the present disclosure. As can be seen from the figure, in this example of the present disclosure the first initial image is an image covered by a black texture, and the second object is rendered with a red texture, thereby obtaining the reference image shown in the figure.
By rendering the coverage area of the second object in the first initial image with a preset texture to obtain the reference image, the position of the second object can be marked in the reference image in a relatively simple way, fully preparing for the subsequent rendering of the area to be rendered according to the mark. At the same time, applying this rendering-based marking of the second object to the overall rendering process allows the whole image processing to be completed with the same rendering means, reducing extra resource consumption, improving the overall efficiency of image processing, and saving costs.
After the reference image is obtained, the area to be rendered can be determined through step S13. In a possible implementation, step S13 may include:
Step S131: generating a second initial image with the same size as the target image.
Step S132: determining the area to be rendered in the second initial image according to the reference image and the first object.
In the above disclosed embodiments, the area to be rendered is determined in a second initial image with the same size as the target image, according to the reference image and the first object. Through this process, since the reference image is generated according to the position of the second object in the target image, the area to be rendered can be constrained by the position of the second object, reducing the possibility that rendering overflows the required rendering range and improving the reliability of the rendering result.
How exactly the area to be rendered is determined according to the reference image and the first object can be flexibly decided according to the actual situation of the mark in the reference image. In a possible implementation, step S132 may include:
Step S1321: taking the area corresponding to the first object in the second initial image as an initial area to be rendered.
Step S1322: traversing the pixels of the initial area to be rendered, and when a pixel's corresponding position in the reference image includes the mark, taking the pixel as a pixel to be rendered.
Step S1323: taking the area formed by the pixels to be rendered as the area to be rendered.
In step S1321, the generation process and implementation of the second initial image are not limited; reference can be made to the generation process and implementation of the first initial image proposed in the above disclosed embodiments, and details are not repeated here. It should be noted that the implementation of the second initial image may be the same as or different from that of the first initial image.
After the second initial image is generated, the pixels of the initial area to be rendered can be traversed through step S1322. For each traversed pixel, the pixel at the same position can be looked up in the reference image; in the embodiments of the present disclosure, the traversed pixel may be denoted the traversal pixel, and the corresponding pixel in the reference image the reference pixel. It can then be judged whether the reference pixel is marked. As disclosed in the above embodiments, the marked area in the reference image is the position of the second object in the target image. Therefore, if the reference pixel is marked, the traversal pixel in the area to be rendered lies within the range of the second object; in this case, the traversal pixel can serve as a pixel to be rendered, waiting to be subsequently rendered with the target material. If the reference pixel is not marked, the traversal pixel exceeds the range of the second object; rendering it might cause rendering beyond the preset range, so this pixel is not taken as a pixel to be rendered.
After the pixels in the initial area to be rendered have been traversed, all the pixels to be rendered can be obtained, and these pixels together constitute the area to be rendered.
After the area to be rendered is obtained, it can be rendered with the target material through step S14 to obtain the rendering result. The specific manner of rendering is not limited in the embodiments of the present disclosure; any applicable rendering method can serve as an implementation. In one example, rendering can be implemented in a shader through OpenGL; the rendering with a preset texture in the first initial image proposed in the above disclosed embodiments can adopt the same rendering manner. Fig. 8 shows a schematic diagram of a rendering result according to an embodiment of the present disclosure. As can be seen from the figure, the second initial image is an image covered by a black texture and is rendered with the beauty contact lens material shown in Fig. 3, thereby obtaining the rendering result shown in the figure.
By taking the area corresponding to the first object in the second initial image as the initial area to be rendered, traversing the pixels in the initial area to be rendered, taking a pixel as a pixel to be rendered when its corresponding pixel in the reference image is marked, and taking the area formed by the pixels to be rendered as the area to be rendered, this process effectively crops the area the target material needs to render, reducing the possibility that this area exceeds the constraint range of the second object to which the first object belongs, thereby improving the reliability and realism of the rendering result.
After the rendering result is obtained, it can also be combined with the original target image, so as to further modify or refine the target image. The specific combination manner can be determined according to the actual situation and requirements of the target image and the target material. For example, when the target image is an image including a face area and the target material is a beauty contact lens material, the combination may be fusing the rendering result with the target image. Therefore, in a possible implementation, the method proposed in the embodiments of the present disclosure may further include:
Step S15: changing the transparency of the target image to obtain a change result.
Step S16: fusing the rendering result with the change result to obtain a fused image.
In the above disclosed embodiments, the specific value to which the transparency of the target image is changed is not limited here and can be flexibly set according to the actual situation, as long as the fusion effect of the rendering result and the target image is guaranteed. Since the second initial image that produces the rendering result has the same size as the target image, the rendering result can be effectively fused to the corresponding position in the target image. For example, when the target image is a face image and the target material is a beauty contact lens, the position where the beauty contact lens material is rendered in the fusion result is consistent with the position of the pupil in the face image. Therefore, when the rendering result is fused with the transparency-changed target image, the rendered beauty contact lens material is naturally fused to the position of the pupil in the face image, achieving the beauty contact lens effect for the face image. Fig. 9 shows a schematic diagram of a fusion result according to an embodiment of the present disclosure. As can be seen from the figure, through the image processing method proposed in the above disclosed embodiments, the beauty contact lens material in Fig. 3 can be effectively and accurately fused to the pupil position of the face image in Fig. 2.
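A minimal sketch of the fusion of steps S15 and S16 follows, assuming a simple uniform alpha; the disclosure leaves the transparency value open, so `alpha=0.5` is only a placeholder.

```python
import cv2

def fuse(target, render_result, render_mask, alpha=0.5):
    """Fuse the rendering result with the transparency-changed target
    image. The alpha value is a tunable assumption of this sketch."""
    blended = cv2.addWeighted(target, alpha, render_result, 1.0 - alpha, 0)
    fused = target.copy()
    # Only the rendered pixels are blended in, so the material lands
    # exactly on the pupil position while the rest of the face keeps
    # its original appearance (a design choice of this sketch).
    fused[render_mask] = blended[render_mask]
    return fused
```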
Application Scenario Example
Applying cosmetic effects to face images is currently a mainstream approach in face image processing, such as applying beauty contact lenses, adding lipstick to the lips, or adding shadow to the bridge of the nose. Taking beauty contact lenses as an example, when rendering the beauty contact lens material on a face image, since the degree to which the eyes are open may differ across face images, the material is very likely to be rendered onto the eyelid or areas outside the eyelid, leading to inaccurate results and reducing their realism.
Therefore, an image processing procedure based on a reliable rendering manner can greatly improve the quality of beauty contact lens effects, thereby expanding the quality and scope of application of the image processing method.
As shown in Figs. 2 to 9, the embodiments of the present disclosure propose an image processing method, the specific process of which may be as follows:
Fig. 2 is the face image to which beauty contact lenses are to be applied (that is, the target image mentioned in the above disclosed embodiments). In this application example of the present disclosure, feature point extraction is first performed to extract the pupil contour points of the face image (as shown in Fig. 4) as the first feature point set of the first object, and the eye contour points (as shown in Fig. 5) as the second feature point set of the second object. After the eye contour points are obtained, they can be connected in a preset order into eye triangle meshes as the second meshes (as shown in Fig. 6); similarly, the pupil contour points can be connected in a preset order into pupil triangle meshes as the first meshes.
After the eye triangle meshes shown in Fig. 6 are obtained, a black texture image of the same size as Fig. 2 can be created as the first initial image. The black texture image and the vertex coordinates of the eye triangle meshes in Fig. 6 are then passed into the shader, and through OpenGL the pixels in the black texture image at the positions corresponding to the eye triangle meshes are rendered red as the mark, while the pixels at the other positions remain in their original state, thereby obtaining the mask texture of the eye contour, that is, the reference image including the mark (as shown in Fig. 7).
After the mask texture of the eye contour is obtained, this mask texture, the previously obtained vertex coordinates of the pupil triangle meshes, and the beauty contact lens material shown in Fig. 3 can be passed into the shader together, so as to render the beauty contact lens material. In this application example, the rendering process of the beauty contact lens material may be: first generating a black texture image as the second initial image; then determining the position of the pupil according to the vertex coordinates of the pupil triangle meshes; and then traversing each pixel at the pupil position in turn. For each traversed pixel, it can be checked whether the color of the corresponding pixel in the eye-contour mask texture is red. If so, the pixel is within the range of the eye and is a pixel to be rendered, and the beauty contact lens material can be rendered onto it; if not, the pixel is outside the range of the eye, and the material is not rendered onto it. After every pixel at the pupil position has been traversed, the rendering result can be obtained. This rendering result can be regarded as a cropped beauty contact lens material whose position is consistent with the position of the pupil in the face image; the rendering result is shown in Fig. 8.
After the rendering result is obtained, it can be fused with the original face image by transparency. Since the position of the rendering result is consistent with the position of the pupil in the face image, after fusion the beauty contact lens can be accurately fused into the pupil of the face image, achieving the beauty contact lens effect for the face. The fusion result is shown in Fig. 9. As can be seen from the figure, through the above process a reliable result can be obtained: the fused beauty contact lens does not exceed the range of the pupil, let alone being rendered onto the area around the eye.
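Tying the sketches above together, a hypothetical end-to-end pass might look as follows; `eye_points`, `pupil_points`, and their preset triangle groupings are assumed to come from a landmark detector and are not defined here, and the file names are placeholders.

```python
import cv2

# Hypothetical inputs from a landmark detector (not defined here):
# eye_points, eye_triangles, pupil_points, pupil_triangles = ...

target = cv2.imread("face.png")    # target image (cf. Fig. 2)
material = cv2.imread("lens.png")  # beauty contact lens material (cf. Fig. 3)

eye_mask = region_from_meshes(eye_points, eye_triangles, target.shape)
pupil_mask = region_from_meshes(pupil_points, pupil_triangles, target.shape)

reference = make_reference_image(target.shape, eye_mask)            # cf. Fig. 7
render_result, render_mask = render_material(target.shape, pupil_mask,
                                             reference, material)   # cf. Fig. 8
fused = fuse(target, render_result, render_mask)                    # cf. Fig. 9
```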
It should be noted that the image processing method of the embodiments of the present disclosure is not limited to the above processing of images containing a face area, nor to the above process of applying beauty contact lenses to face images; it can be applied to arbitrary image processing, which is not limited by the present disclosure.
It can be understood that the above method embodiments mentioned in the present disclosure can be combined with one another to form combined embodiments without violating principles and logic; due to space limitations, details are not repeated in the present disclosure.
Those skilled in the art can understand that, in the above methods of the detailed description, the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Fig. 10 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus may be a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like.
In some possible implementations, the image processing apparatus can be implemented by a processor invoking computer-readable instructions stored in a memory.
As shown in Fig. 10, the image processing apparatus 20 may include:
an identification module 21 configured to identify a first object to be rendered in a target image, and a second object to which the first object belongs in the target image;
a reference image generation module 22 configured to generate a reference image including a mark according to the target image and the second object, where the mark is used to record the coverage area of the second object;
an area-to-be-rendered determination module 23 configured to determine an area to be rendered according to the reference image and the first object, where the area to be rendered lies within the coverage area corresponding to the mark;
a rendering module 24 configured to render the area to be rendered with a target material to generate a rendering result.
In a possible implementation, the reference image generation module is configured to: generate a first initial image with the same size as the target image; and mark the coverage area of the second object in the first initial image to obtain the reference image.
In a possible implementation, the reference image generation module is further configured to: change at least one pixel included in the coverage area of the second object in the first initial image to a target pixel value to obtain the reference image.
In a possible implementation, the area-to-be-rendered determination module is configured to: generate a second initial image with the same size as the target image; and determine the area to be rendered in the second initial image according to the reference image and the first object.
In a possible implementation, the area-to-be-rendered determination module is further configured to: take the area corresponding to the first object in the second initial image as an initial area to be rendered; traverse the pixels of the initial area to be rendered, and when a pixel's corresponding position in the reference image includes the mark, take the pixel as a pixel to be rendered; and take the area formed by the pixels to be rendered as the area to be rendered.
In a possible implementation, the identification module is configured to: perform feature extraction on the target image to obtain a first feature point set corresponding to the first object and a second feature point set corresponding to the second object, respectively; connect the first feature points included in the first feature point set in a first preset manner to obtain the area corresponding to the first object; and connect the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object.
In a possible implementation, the identification module is further configured to: divide the first feature point set by taking at least three first feature points as one group to obtain at least one first feature point subset; connect in sequence the first feature points included in each of the at least one first feature point subset to obtain at least one first mesh; and take the area covered by the at least one first mesh as the area corresponding to the first object.
In a possible implementation, the identification module is further configured to: divide the second feature point set by taking at least three second feature points as one group to obtain at least one second feature point subset; connect in sequence the second feature points included in each of the at least one second feature point subset to obtain at least one second mesh; and take the area covered by the at least one second mesh as the area corresponding to the second object.
In a possible implementation, the apparatus further includes a fusion module configured to: change the transparency of the target image to obtain a change result; and fuse the rendering result with the change result to obtain a fused image.
In a possible implementation, the first object includes a pupil, the second object includes an eye, and the target material includes a material for beautifying the pupil.
An embodiment of the present disclosure also proposes a computer-readable storage medium on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above method. The computer-readable storage medium may be a non-volatile computer-readable storage medium.
An embodiment of the present disclosure also proposes an electronic device, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to execute the above method.
The electronic device may be provided as a terminal, a server, or a device in another form.
Fig. 11 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, or a personal digital assistant.
Referring to Fig. 11, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or some of the steps of the above methods. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations on the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 806 provides power for the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action but also detect the duration and pressure related to the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects for the electronic device 800. For example, the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor component 814 can also detect a change in position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above methods.
Fig. 12 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to Fig. 12, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. An application stored in the memory 1932 may include one or more modules each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions to perform the above methods.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device. The computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disks (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the foregoing. The computer-readable storage medium used here is not to be interpreted as transient signals themselves, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber-optic cables), or electrical signals transmitted through wires.
The computer-readable program instructions described here can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in the computer-readable storage medium in the respective computing/processing device.
The computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is personalized with the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Various aspects of the present disclosure are described here with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to the embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, thereby producing a machine such that, when executed by the processor of the computer or other programmable data processing apparatus, these instructions produce an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, so that the instructions executed on the computer, other programmable data processing apparatus, or other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the drawings show the possible architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or part of an instruction that contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
The embodiments of the present disclosure have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and changes will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The choice of terms used herein is intended to best explain the principles of the embodiments, their practical applications, or technical improvements over technologies on the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

  1. An image processing method, characterized by comprising:
    identifying a first object to be rendered in a target image, and a second object to which the first object belongs in the target image;
    generating a reference image including a mark according to the target image and the second object, wherein the mark is used to record the coverage area of the second object;
    determining an area to be rendered according to the reference image and the first object, wherein the area to be rendered lies within the coverage area corresponding to the mark;
    rendering the area to be rendered with a target material to generate a rendering result.
  2. The method according to claim 1, wherein generating a reference image including a mark according to the target image and the second object comprises:
    generating a first initial image with the same size as the target image;
    marking the coverage area of the second object in the first initial image to obtain the reference image.
  3. The method according to claim 2, wherein marking the coverage area of the second object in the first initial image to obtain the reference image comprises:
    changing at least one pixel included in the coverage area of the second object in the first initial image to a target pixel value to obtain the reference image.
  4. The method according to any one of claims 1 to 3, wherein determining an area to be rendered according to the reference image and the first object comprises:
    generating a second initial image with the same size as the target image;
    determining the area to be rendered in the second initial image according to the reference image and the first object.
  5. The method according to claim 4, wherein determining the area to be rendered in the second initial image according to the reference image and the first object comprises:
    taking the area corresponding to the first object in the second initial image as an initial area to be rendered;
    traversing the pixels of the initial area to be rendered, and when a pixel's corresponding position in the reference image includes the mark, taking the pixel as a pixel to be rendered;
    taking the area formed by the pixels to be rendered as the area to be rendered.
  6. The method according to any one of claims 1 to 5, wherein identifying a first object to be rendered in the target image, and a second object to which the first object belongs in the target image, comprises:
    performing feature extraction on the target image to obtain a first feature point set corresponding to the first object and a second feature point set corresponding to the second object, respectively;
    connecting the first feature points included in the first feature point set in a first preset manner to obtain the area corresponding to the first object;
    connecting the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object.
  7. The method according to claim 6, wherein connecting the first feature points included in the first feature point set in a first preset manner to obtain the area corresponding to the first object comprises:
    dividing the first feature point set by taking at least three first feature points as one group to obtain at least one first feature point subset;
    connecting in sequence the first feature points included in the at least one first feature point subset to obtain at least one first mesh;
    taking the area covered by the at least one first mesh as the area corresponding to the first object.
  8. The method according to claim 6 or 7, wherein connecting the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object comprises:
    dividing the second feature point set by taking at least three second feature points as one group to obtain at least one second feature point subset;
    connecting in sequence the second feature points included in the at least one second feature point subset to obtain at least one second mesh;
    taking the area covered by the at least one second mesh as the area corresponding to the second object.
  9. The method according to any one of claims 1 to 8, further comprising:
    changing the transparency of the target image to obtain a change result;
    fusing the rendering result with the change result to obtain a fused image.
  10. The method according to any one of claims 1 to 9, wherein the first object comprises a pupil, the second object comprises an eye, and the target material comprises a material for beautifying the pupil.
  11. An image processing apparatus, characterized by comprising:
    an identification module configured to identify a first object to be rendered in a target image, and a second object to which the first object belongs in the target image;
    a reference image generation module configured to generate a reference image including a mark according to the target image and the second object, wherein the mark is used to record the coverage area of the second object;
    an area-to-be-rendered determination module configured to determine an area to be rendered according to the reference image and the first object, wherein the area to be rendered lies within the coverage area corresponding to the mark;
    a rendering module configured to render the area to be rendered with a target material to generate a rendering result.
  12. The apparatus according to claim 11, wherein the reference image generation module is configured to:
    generate a first initial image with the same size as the target image;
    mark the coverage area of the second object in the first initial image to obtain the reference image.
  13. The apparatus according to claim 12, wherein the reference image generation module is further configured to:
    change at least one pixel included in the coverage area of the second object in the first initial image to a target pixel value to obtain the reference image.
  14. The apparatus according to any one of claims 11 to 13, wherein the area-to-be-rendered determination module is configured to:
    generate a second initial image with the same size as the target image;
    determine the area to be rendered in the second initial image according to the reference image and the first object.
  15. The apparatus according to claim 14, wherein the area-to-be-rendered determination module is further configured to:
    take the area corresponding to the first object in the second initial image as an initial area to be rendered;
    traverse the pixels of the initial area to be rendered, and when a pixel's corresponding position in the reference image includes the mark, take the pixel as a pixel to be rendered;
    take the area formed by the pixels to be rendered as the area to be rendered.
  16. The apparatus according to any one of claims 11 to 15, wherein the identification module is configured to:
    perform feature extraction on the target image to obtain a first feature point set corresponding to the first object and a second feature point set corresponding to the second object, respectively;
    connect the first feature points included in the first feature point set in a first preset manner to obtain the area corresponding to the first object;
    connect the second feature points included in the second feature point set in a second preset manner to obtain the area corresponding to the second object.
  17. The apparatus according to any one of claims 11 to 16, wherein the first object comprises a pupil, the second object comprises an eye, and the target material comprises a material for beautifying the pupil.
  18. An electronic device, characterized by comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to invoke the instructions stored in the memory to execute the method according to any one of claims 1 to 10.
  19. A computer-readable storage medium on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 10.
  20. A computer program comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the method according to any one of claims 1 to 10.
PCT/CN2020/080924 2019-11-22 2020-03-24 图像处理方法及装置、电子设备和存储介质 WO2021098107A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
KR1020207036012A KR20210064114A (ko) 2019-11-22 2020-03-24 이미지 처리 방법 및 장치, 전자 기기 및 저장 매체
SG11202012481UA SG11202012481UA (en) 2019-11-22 2020-03-24 Image processing method and apparatus, electronic device, and storage medium
JP2020570186A JP7159357B2 (ja) 2019-11-22 2020-03-24 画像処理方法及び装置、電子機器並びに記憶媒体
EP20820759.7A EP3852069A4 (en) 2019-11-22 2020-03-24 IMAGE PROCESSING METHOD AND DEVICE, ELECTRONIC DEVICE AND STORAGE MEDIUM
US17/117,230 US11403788B2 (en) 2019-11-22 2020-12-10 Image processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911154806.5 2019-11-22
CN201911154806.5A CN111091610B (zh) 2019-11-22 2019-11-22 图像处理方法及装置、电子设备和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/117,230 Continuation US11403788B2 (en) 2019-11-22 2020-12-10 Image processing method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021098107A1 true WO2021098107A1 (zh) 2021-05-27

Family

ID=70393606

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/080924 WO2021098107A1 (zh) 2019-11-22 2020-03-24 图像处理方法及装置、电子设备和存储介质

Country Status (7)

Country Link
EP (1) EP3852069A4 (zh)
JP (1) JP7159357B2 (zh)
KR (1) KR20210064114A (zh)
CN (1) CN111091610B (zh)
SG (1) SG11202012481UA (zh)
TW (1) TWI752473B (zh)
WO (1) WO2021098107A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657361A (zh) * 2021-07-23 2021-11-16 阿里巴巴(中国)有限公司 页面异常检测方法、装置及电子设备

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111569418B (zh) * 2020-06-10 2023-04-07 网易(杭州)网络有限公司 对于待输出内容的渲染方法、装置、介质及电子设备
CN113763324A (zh) * 2021-08-02 2021-12-07 阿里巴巴达摩院(杭州)科技有限公司 图像处理方法、计算机可读存储介质、处理器和系统
CN114240742A (zh) * 2021-12-17 2022-03-25 北京字跳网络技术有限公司 图像处理方法、装置、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150206339A1 (en) * 2014-01-22 2015-07-23 Hankookin, Inc. Object Oriented Image Processing And Rendering In A Multi-dimensional Space
US20180164586A1 (en) * 2016-12-12 2018-06-14 Samsung Electronics Co., Ltd. Methods and devices for processing motion-based image
CN109981989A (zh) * 2019-04-04 2019-07-05 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质
CN109977868A (zh) * 2019-03-26 2019-07-05 深圳市商汤科技有限公司 图像渲染方法及装置、电子设备和存储介质
CN110062157A (zh) * 2019-04-04 2019-07-26 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质
CN110084154A (zh) * 2019-04-12 2019-08-02 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4435809B2 (ja) 2002-07-08 2010-03-24 株式会社東芝 仮想化粧装置及びその方法
JP3747918B2 (ja) * 2003-04-25 2006-02-22 ヤマハ株式会社 光ディスク記録装置
US7379071B2 (en) * 2003-10-14 2008-05-27 Microsoft Corporation Geometry-driven feature point-based image synthesis
US7289119B2 (en) * 2005-05-10 2007-10-30 Sony Computer Entertainment Inc. Statistical rendering acceleration
JP2010250420A (ja) 2009-04-13 2010-11-04 Seiko Epson Corp 顔の特徴部位の座標位置を検出する画像処理装置
JP4760999B1 (ja) 2010-10-29 2011-08-31 オムロン株式会社 画像処理装置、画像処理方法、および制御プログラム
JP2012181688A (ja) * 2011-03-01 2012-09-20 Sony Corp 情報処理装置、情報処理方法、情報処理システムおよびプログラム
CN102184108A (zh) * 2011-05-26 2011-09-14 成都江天网络科技有限公司 一种利用计算机程序进行虚拟化妆的方法及化妆模拟程序
CN102915308B (zh) * 2011-08-02 2016-03-09 阿里巴巴集团控股有限公司 一种页面渲染的方法及装置
CN104679509B (zh) * 2015-02-06 2019-11-15 腾讯科技(深圳)有限公司 一种渲染图形的方法和装置
CN107924579A (zh) * 2015-08-14 2018-04-17 麦特尔有限公司 生成个性化3d头部模型或3d身体模型的方法
US10360708B2 (en) * 2016-06-30 2019-07-23 Snap Inc. Avatar based ideogram generation
KR20200026808A (ko) * 2017-07-13 2020-03-11 시쉐이도 아메리카스 코포레이션 가상 얼굴 메이크업 제거, 빠른 얼굴 검출, 및 랜드마크 추적
CN109583263A (zh) * 2017-09-28 2019-04-05 丽宝大数据股份有限公司 结合扩增实境的身体信息分析装置及其眉型预览方法
JP2019070872A (ja) 2017-10-05 2019-05-09 カシオ計算機株式会社 画像処理装置、画像処理方法及びプログラム
JP7087331B2 (ja) 2017-10-05 2022-06-21 カシオ計算機株式会社 画像処理装置、画像処理方法及びプログラム
JP2019070870A (ja) 2017-10-05 2019-05-09 カシオ計算機株式会社 画像処理装置、画像処理方法及びプログラム
CN107818305B (zh) * 2017-10-31 2020-09-22 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备和计算机可读存储介质
CN108564531B (zh) * 2018-05-08 2022-07-08 麒麟合盛网络技术股份有限公司 一种图像处理方法及装置
CN109242785A (zh) * 2018-08-10 2019-01-18 广州二元科技有限公司 一种基于神经网络和颜色筛选的精准人像上唇彩的方法
CN109302618A (zh) * 2018-11-27 2019-02-01 网易(杭州)网络有限公司 移动终端中的直播画面渲染方法、装置和存储介质
CN110211211B (zh) * 2019-04-25 2024-01-26 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150206339A1 (en) * 2014-01-22 2015-07-23 Hankookin, Inc. Object Oriented Image Processing And Rendering In A Multi-dimensional Space
US20180164586A1 (en) * 2016-12-12 2018-06-14 Samsung Electronics Co., Ltd. Methods and devices for processing motion-based image
CN109977868A (zh) * 2019-03-26 2019-07-05 深圳市商汤科技有限公司 图像渲染方法及装置、电子设备和存储介质
CN109981989A (zh) * 2019-04-04 2019-07-05 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质
CN110062157A (zh) * 2019-04-04 2019-07-26 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质
CN110084154A (zh) * 2019-04-12 2019-08-02 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3852069A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657361A (zh) * 2021-07-23 2021-11-16 阿里巴巴(中国)有限公司 页面异常检测方法、装置及电子设备

Also Published As

Publication number Publication date
EP3852069A1 (en) 2021-07-21
TW202121343A (zh) 2021-06-01
TWI752473B (zh) 2022-01-11
SG11202012481UA (en) 2021-06-29
CN111091610A (zh) 2020-05-01
JP2022512048A (ja) 2022-02-02
JP7159357B2 (ja) 2022-10-24
KR20210064114A (ko) 2021-06-02
EP3852069A4 (en) 2021-08-18
CN111091610B (zh) 2023-04-11

Similar Documents

Publication Publication Date Title
US11367307B2 (en) Method for processing images and electronic device
WO2021098107A1 (zh) 图像处理方法及装置、电子设备和存储介质
CN110189340B (zh) 图像分割方法、装置、电子设备及存储介质
WO2021051857A1 (zh) 目标对象匹配方法及装置、电子设备和存储介质
WO2019101021A1 (zh) 图像识别方法、装置及电子设备
WO2022179026A1 (zh) 图像处理方法及装置、电子设备和存储介质
WO2021056808A1 (zh) 图像处理方法及装置、电子设备和存储介质
US10007841B2 (en) Human face recognition method, apparatus and terminal
WO2022179025A1 (zh) 图像处理方法及装置、电子设备和存储介质
US20220237812A1 (en) Item display method, apparatus, and device, and storage medium
US11030733B2 (en) Method, electronic device and storage medium for processing image
WO2016192325A1 (zh) 视频文件的标识处理方法及装置
CN111243011A (zh) 关键点检测方法及装置、电子设备和存储介质
WO2021218121A1 (zh) 图像处理方法、装置、电子设备及存储介质
KR20230121919A (ko) 증강 현실 콘텐츠를 생성하기 위한 응시 방향의 결정
KR20230127326A (ko) 증강 현실 콘텐츠에서의 디스플레이 스크린들의 검출및 난독화
JP2021501924A (ja) 顔画像の処理方法および装置、電子機器ならびに記憶媒体
US20220270352A1 (en) Methods, apparatuses, devices, storage media and program products for determining performance parameters
US11403788B2 (en) Image processing method and apparatus, electronic device, and storage medium
WO2022142298A1 (zh) 关键点检测方法及装置、电子设备和存储介质
CN111325220A (zh) 图像生成方法、装置、设备及存储介质
CN112613447A (zh) 关键点检测方法及装置、电子设备和存储介质
US20220319061A1 (en) Transmitting metadata via invisible light
US20220270313A1 (en) Image processing method, electronic device and storage medium
WO2022042160A1 (zh) 图像处理方法及装置

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020570186

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020820759

Country of ref document: EP

Effective date: 20201217

NENP Non-entry into the national phase

Ref country code: DE