WO2023193664A1 - 图像处理方法、装置及电子设备 - Google Patents

图像处理方法、装置及电子设备

Info

Publication number
WO2023193664A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
area
color value
image
target
Prior art date
Application number
PCT/CN2023/085565
Other languages
English (en)
French (fr)
Inventor
黎小凤
Original Assignee
北京字跳网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Publication of WO2023193664A1 publication Critical patent/WO2023193664A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular, to an image processing method, device and electronic equipment.
  • the related augmented reality (AR) shoe try-on technology can draw shoes onto the feet in the video through image processing.
  • A first aspect provides an image processing method, including:
  • obtaining an original image; when it is determined that a foot feature area exists in the original image, determining a foot cover model based on the foot feature area of the original image, where the foot cover model is used to mark an area to be optimized within the foot feature area; performing background completion on the area to be optimized in the original image to obtain a completed image; determining a shoe model according to the foot feature area in the completed image; and rendering the shoe model in the foot feature area of the completed image to obtain a target image.
  • A second aspect provides an image processing device, including:
  • an acquisition module, configured to obtain an original image;
  • a processing module, configured to: when it is determined that a foot feature area exists in the original image, determine a foot cover model based on the foot feature area of the original image, where the foot cover model is used to mark an area to be optimized within the foot feature area; perform background completion on the area to be optimized of the original image to obtain a completed image; determine a shoe model according to the foot feature area in the completed image; and render the shoe model in the foot feature area of the completed image to obtain a target image.
  • A third aspect provides an electronic device, including: a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the image processing method described in the first aspect or any optional implementation thereof.
  • A fourth aspect provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image processing method described in the first aspect or any optional implementation thereof.
  • A fifth aspect provides a computer program product which, when run on a computer, causes the computer to implement the image processing method described in the first aspect or any optional implementation thereof.
  • Figure 1 is a schematic diagram of a show-through situation when shoes are drawn onto the feet in a video, provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of a foot cover model provided by an embodiment of the present disclosure.
  • Figure 4 is a schematic diagram of a completed image provided by an embodiment of the present disclosure.
  • Figure 5A is a schematic diagram of a background completion method provided by an embodiment of the present disclosure.
  • Figure 5B is a schematic diagram of another background completion method provided by an embodiment of the present disclosure.
  • Figure 6 is a schematic diagram of obtaining a target image provided by an embodiment of the present disclosure.
  • Figure 7 is a structural block diagram of an image processing device provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • In the embodiments of the present disclosure, words such as “exemplary” or “such as” are used to represent examples, illustrations or explanations. Any embodiment or design described as “exemplary” or “such as” in the present disclosure should not be construed as being more preferable or advantageous than other embodiments or designs. Rather, use of the words “exemplary” or “such as” is intended to present the relevant concept in a concrete manner. Furthermore, in the description of the embodiments of the present disclosure, unless otherwise specified, “plurality” means two or more.
  • In the related art, in order to achieve the special effect of trying on shoes, the shoes can be drawn onto the feet in a video through image processing.
  • However, for some shoes with special structures, when the shoes are drawn onto the feet in the video, some foot features may show through (i.e., the foot clips through the shoe) and be exposed in areas not covered by the shoe. There is therefore a need for an image processing method that avoids such show-through when drawing shoes onto the feet in a video.
  • For example, for shoes with a special structure such as high heels, when the shoes are drawn onto the feet in the video, part of the toe or heel features may show through and be exposed where the shoe does not cover them. Figure 1 is a schematic diagram of a show-through situation when shoes are drawn onto the feet in a video, provided by an embodiment of the present disclosure. As shown in Figure 1, after the shoe model 12 is drawn onto the foot feature area 11 in the video, the resulting image contains a show-through area 13.
  • In order to solve, or at least partially solve, the above technical problem, the present disclosure provides an image processing method, device and electronic device that, for shoes with special structures, can avoid the show-through problem that occurs when the shoes are drawn onto the feet in a video through image processing.
  • Embodiments of the present disclosure provide an image processing method that, before image processing is performed to put shoes on the feet in an image, first determines the area to be optimized within the foot feature area of the original image through a foot cover model and performs background completion on it. In this way, after the shoe model is subsequently rendered onto the original image, even if the shoe has a special structure, the foot features will not be exposed outside the rendered shoe model, because their background has been completed in advance; the resulting target image is more consistent with a real scene of a foot wearing the shoe, thereby avoiding show-through.
  • the image processing method provided in the embodiments of the present disclosure can be implemented by an image processing device or an electronic device.
  • the image processing device can be a functional module or functional entity in the electronic device.
  • The above-mentioned electronic devices may include: mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), personal computers (PC), etc.; the embodiments of the present disclosure do not specifically limit this.
  • As shown in FIG. 2, which is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure, the method includes the following steps 201 to 205.
  • Step 201 Obtain the original image.
  • the above original image may be a frame of a video image acquired in real time.
  • Step 202: When it is determined that a foot feature area exists in the original image, determine a foot cover model based on the foot feature area of the original image.
  • The foot cover model is used to mark the area to be optimized within the foot feature area.
  • The foot cover model may be a model that covers the heel and the forefoot.
  • The area covered by the foot cover model is the area that may cause show-through when a shoe is drawn on the foot feature area.
  • the foot cover model may be a preset model.
  • For some shoe models with very special structures, corresponding foot cover models may need to be set; that is, different shoe models may correspond to different foot cover models. For example, shoes with a triangular outline may correspond to one foot cover model, while shoes with a diamond outline correspond to another.
  • As shown in FIG. 3, which is a schematic diagram of a foot cover model provided by an embodiment of the present disclosure, a foot cover model 32 is determined based on the foot feature area 31 present in the image; the area indicated by the foot cover model 32 is the area to be optimized within the foot feature area.
  • Step 203: Perform background completion on the area to be optimized of the original image to obtain a completed image.
  • As shown in Figure 4, which is a schematic diagram of a completed image provided by an embodiment of the present disclosure, the area to be optimized within the foot feature area can be determined from the foot cover model, and background completion can then be performed on that area to obtain the completed image 41.
  • In some embodiments, performing background completion on the area to be optimized in the original image includes: calculating offset screen coordinates corresponding to a target pixel based on the initial screen coordinates of the target pixel in the area to be optimized of the original image, the width of the foot feature area on the screen, and the length of the foot feature area on the screen; and replacing the color value of the target pixel with the color value of the pixel corresponding to the offset screen coordinates, so as to complete the background of the area to be optimized.
  • The target pixel is any pixel among at least some of the pixels in the area to be optimized, where at least some of the pixels include all of the pixels in the area to be optimized or a subset of them.
  • the color values involved refer to the values of the three channels of R (red), G (green), and B (blue).
  • In the above embodiment, a texture-mapping approach is used: the initial screen coordinates of the target pixel in the area to be optimized, the width of the foot feature area on the screen, and the length of the foot feature area on the screen are used to sample the color value of a pixel in the non-foot feature area, and that color value then replaces the color value of the pixel in the area to be optimized. This method treats the pixels of the non-foot feature area as background and replaces the color values of the area to be optimized with the color values of the non-foot area, thereby completing the background of the area to be optimized.
  • In some embodiments, the color values of pixels in the area to be optimized can be replaced with the color values of pixels whose distance from the foot feature area is within a preset range. This method treats the pixels whose distance from the foot feature area is within a preset range as background, thereby completing the background of the area to be optimized.
  • For example, the color value of a pixel in the area to be optimized after background completion can be calculated using the following formula (1):
  • Color = texture2D(u_inputTex, g_vary_sp_uv - u_widthVector + u_heightVector)        (1)
  • In formula (1), Color represents the color value of a pixel in the area to be optimized after background completion; texture2D represents 2D texture sampling; u_inputTex is the input texture of the image corresponding to the video, i.e., the pixel's initial color values; g_vary_sp_uv is the screen coordinate of the currently rendered pixel, i.e., the screen coordinate corresponding to the pixel; u_widthVector is the width of the foot feature area on the screen; and u_heightVector is the length of the foot feature area on the screen.
  • As shown in FIG. 5A, which is a schematic diagram of a background completion method provided by an embodiment of the present disclosure, the offset screen coordinates corresponding to pixel 52 in the area to be optimized can be calculated from the initial screen coordinates of pixel 52, the width of the foot feature area on the screen, and the length of the foot feature area on the screen; the corresponding pixel 51 is determined from the offset screen coordinates, and the color value of pixel 52 is replaced with the color value of pixel 51.
  • For each pixel in the area to be optimized indicated by the foot cover model in Figure 5A, the color value can be replaced in the above manner, so as to complete the background of the pixels in the area to be optimized.
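  • For illustration only, the offset-sampling completion of formula (1) can be sketched as a GLSL ES fragment shader; this is a minimal sketch, not the patent's implementation. The names u_inputTex, u_widthVector, u_heightVector and g_vary_sp_uv are taken from formula (1); treating g_vary_sp_uv as a varying and adding a mask texture u_footMask that marks the area to be optimized indicated by the foot cover model are assumptions made here for clarity.

      precision mediump float;

      uniform sampler2D u_inputTex;   // current video frame (input texture from formula (1))
      uniform sampler2D u_footMask;   // assumed: 1.0 inside the area to be optimized, 0.0 elsewhere
      uniform vec2 u_widthVector;     // width of the foot feature area in screen/UV space
      uniform vec2 u_heightVector;    // length of the foot feature area in screen/UV space

      varying vec2 g_vary_sp_uv;      // screen coordinate of the pixel currently being rendered

      void main() {
          vec4 original   = texture2D(u_inputTex, g_vary_sp_uv);
          // Formula (1): sample a background pixel offset away from the foot feature area.
          vec4 background = texture2D(u_inputTex, g_vary_sp_uv - u_widthVector + u_heightVector);
          float inArea    = texture2D(u_footMask, g_vary_sp_uv).r;
          // Inside the area to be optimized, replace the pixel color with the sampled background color.
          gl_FragColor = mix(original, background, inArea);
      }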
  • In some embodiments, performing background completion on the area to be optimized of the original image includes: determining a final color value corresponding to a target pixel according to the acquired initial color value of the target pixel in the area to be optimized and the initial color values of pixels adjacent to the target pixel; and replacing the initial color value of the target pixel with the final color value corresponding to the target pixel, so as to complete the background of the area to be optimized.
  • The target pixel is any pixel among at least some of the pixels in the area to be optimized, where at least some of the pixels include all of the pixels in the area to be optimized or a subset of them.
  • In some embodiments, the final color value corresponding to the target pixel may be the weighted sum of the initial color value of the target pixel and the initial color values of its adjacent pixels. In this way, the final color values of the parts of the area to be optimized that are closer to the background become closer to the real background, so that the area to be optimized blends into the background and the boundary between them is blurred; this also achieves background completion of the area to be optimized.
  • In some embodiments, the weight of the current pixel to be optimized (the target pixel) can be set larger, and the weights of its adjacent pixels can be set smaller, which achieves a better display effect. A sketch of this variant is given below.
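  • As a comparison, a minimal GLSL ES sketch of the weighted-neighbor variant follows the description of Figure 5B, in which a pixel A is blended with its four adjacent pixels B1 to B4. The specific weights (0.6 for the center, 0.1 for each neighbor) and the one-texel step u_texelSize are illustrative assumptions; the description only requires the center weight to be larger than the neighbor weights.

      precision mediump float;

      uniform sampler2D u_inputTex;   // current video frame
      uniform vec2 u_texelSize;       // assumed: size of one texel in UV space

      varying vec2 g_vary_sp_uv;      // screen coordinate of the pixel currently being rendered

      void main() {
          vec4 center = texture2D(u_inputTex, g_vary_sp_uv);
          vec4 left   = texture2D(u_inputTex, g_vary_sp_uv - vec2(u_texelSize.x, 0.0));
          vec4 right  = texture2D(u_inputTex, g_vary_sp_uv + vec2(u_texelSize.x, 0.0));
          vec4 up     = texture2D(u_inputTex, g_vary_sp_uv + vec2(0.0, u_texelSize.y));
          vec4 down   = texture2D(u_inputTex, g_vary_sp_uv - vec2(0.0, u_texelSize.y));
          // Weighted sum: the target pixel keeps the largest weight, the four neighbors share the rest.
          gl_FragColor = 0.6 * center + 0.1 * (left + right + up + down);
      }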
  • Step 204: Determine the shoe model based on the foot feature area in the completed image.
  • Step 205: Render the shoe model in the foot feature area of the completed image to obtain the target image.
  • As shown in Figure 6, which is a schematic diagram of obtaining a target image provided by an embodiment of the present disclosure, after the completed image 61 is obtained, the shoe model can be further determined and, on the basis of the completed image 61, rendered onto the foot feature area of the completed image; the target image 62 shown in Figure 6 is then obtained, that is, an image with the effect of trying on high heels.
  • The image processing method provided by the embodiments of the present disclosure can obtain an original image; when it is determined that a foot feature area exists in the original image, determine a foot cover model based on the foot feature area, the foot cover model being used to mark the area to be optimized within the foot feature area; perform background completion on the area to be optimized of the original image to obtain a completed image; determine a shoe model based on the foot feature area in the completed image; and render the shoe model in the foot feature area of the completed image to obtain a target image.
  • With this solution, before image processing is performed to put shoes on the feet in the image, the area to be optimized within the foot feature area of the original image can be determined through the foot cover model and its background completed, so that after the shoe model is subsequently rendered onto the original image, even if the shoe has a special structure, the foot features will not be exposed outside the rendered shoe model, because their background has been completed in advance; the resulting target image is more consistent with a real picture of a foot wearing the shoe, thereby avoiding show-through.
  • In the image processing method provided by the embodiments of the present disclosure, after the shoe model is drawn in the foot feature area of the image according to steps 201 to 205, some special display effects can also be superimposed on the shoe model: special effects can be overlaid by sampling effect maps, and a gradual-appearance effect can be achieved while the shoe model is being drawn.
  • For example, a starry-sky effect can be superimposed on the shoe model, and the shoe can be made to appear gradually from heel to toe, or from toe to heel, while the shoe model is being drawn.
  • In some embodiments, after the shoe model is rendered in the foot feature area of the completed image to obtain the target image in step 205, the following steps may also be included: obtaining the initial color value of a first pixel, the first pixel being any pixel corresponding to the shoe model in the completed image; sampling a target effect map according to a first coordinate corresponding to the first pixel to obtain a first color value; calculating a final color value of the first pixel according to the initial color value of the first pixel and the first color value; and replacing the initial color value of the first pixel with the final color value.
  • For example, the first coordinate includes any of the following: two-dimensional space (UV) coordinates, world space coordinates, or screen control coordinates.
  • In some embodiments, the final color value of the first pixel can be calculated from the initial color value of the first pixel and the first color value according to the following formula (2):
  • finalRGB.xyz = refractionRGB.xyz * galaxyColor * 2.0 * (1.0 - cfg) + (1.0 - (1.0 - galaxyColor) * (1.0 - refractionRGB.xyz) * 2.0)        (2)
  • In formula (2), finalRGB.xyz represents the final color value of the first pixel; refractionRGB.xyz represents the initial color value of the first pixel; galaxyColor represents the first color value obtained by sampling the target effect map according to the first coordinate corresponding to the first pixel, where the target effect map may be a starry-sky effect map; and cfg is the superposition parameter of the initial color value of the first pixel and the first color value, which represents the ratio in which the two are superimposed. The superposition parameter can be set according to actual needs and is not limited in the embodiments of the present disclosure.
  • In some embodiments, assuming the target effect map is a starry-sky effect map: when galaxyColor is sampled based on two-dimensional space (UV) coordinates, the starry-sky effect is fixed on the shoe model; when galaxyColor is sampled based on world space coordinates, the starry-sky effect flows as the model moves; and when galaxyColor is sampled based on screen control coordinates, the starry-sky effect is displayed at a fixed position on the screen.
  • the target effect can be superimposed on the shoe model, so that the rendered shoes have a better display effect.
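  • For illustration only, the overlay blend of formula (2) can be sketched as a GLSL ES fragment shader; this is a minimal sketch under stated assumptions, not the patent's implementation. The names refractionRGB, galaxyColor and cfg come from formula (2); the textures u_shoeColorTex (the rendered shoe color) and u_galaxyTex (the starry-sky effect map), and the choice of UV-based sampling so that the effect stays fixed on the shoe model, are assumptions made here.

      precision mediump float;

      uniform sampler2D u_shoeColorTex;  // assumed: rendered shoe color for this fragment
      uniform sampler2D u_galaxyTex;     // assumed: target (starry-sky) effect map
      uniform float cfg;                 // superposition parameter, set according to actual needs

      varying vec2 v_uv;                 // two-dimensional (UV) coordinate of the fragment

      void main() {
          vec3 refractionRGB = texture2D(u_shoeColorTex, v_uv).xyz;  // initial color value of the first pixel
          vec3 galaxyColor   = texture2D(u_galaxyTex, v_uv).xyz;     // first color value sampled from the effect map
          // Formula (2): cfg balances a multiply-like term against a screen-like term.
          vec3 finalRGB = refractionRGB * galaxyColor * 2.0 * (1.0 - cfg)
                        + (1.0 - (1.0 - galaxyColor) * (1.0 - refractionRGB) * 2.0);
          gl_FragColor = vec4(finalRGB, 1.0);
      }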
  • In some embodiments, rendering the shoe model in the foot feature area of the completed image to obtain the target image includes: establishing a shoe model space based on the shoe model; determining a target noise value according to a second coordinate in the shoe model space, the second coordinate being any model space coordinate in the shoe model space; and, if the target noise value is greater than a preset value, rendering the second pixel corresponding to the second coordinate in the foot feature area.
  • In some embodiments, the above preset value changes from large to small within a first duration, where the first duration is the total duration for drawing the entire shoe model.
  • In the above embodiment, a dissolve-style gradual appearance can be implemented based on an axial direction of the shoe model space, and noise can be added to increase the randomness of the edges, thereby achieving the effect of the shoe gradually appearing from toe to heel.
  • First, a shoe model space is established based on the shoe model, in which the direction from heel to toe can be used as one axis of the shoe model space (for example, the Y axis). Then, for any model space coordinate in the shoe model space, the value of that coordinate on the Y axis is determined and used as the corresponding target noise value. For example, if the toe corresponds to a value of 1 on the Y axis and the heel to 0, the target noise value is a value between 0 and 1.
  • Then, the preset value is set to a value that gradually changes from 1 to 0 over time within the first duration. If the target noise value is greater than the preset value, the second pixel corresponding to the second coordinate is rendered in the foot feature area; correspondingly, if the target noise value is less than or equal to the preset value, the second pixel is not rendered. In this way, the display effect gradually appears from the toe to the heel.
  • In some embodiments, determining the target noise value based on the second coordinate in the shoe model space includes: determining a first noise value according to the position of the second coordinate along a target axis of the shoe model space; generating a random number according to the second coordinate; and calculating the target noise value from the first noise value and the random number.
  • First, a shoe model space is established based on the shoe model, in which the direction from heel to toe can be used as one axis of the shoe model space (for example, the Y axis). Then, for the second coordinate in the shoe model space, the value of that model space coordinate on the Y axis is determined and used as the corresponding first noise value. For example, if the toe corresponds to a value of 1 on the Y axis and the heel to 0, the first noise value is a value between 0 and 1.
  • Next, a random number between -0.05 and 0.05 is generated according to the second coordinate, and the result of adding this random number to the first noise value is used as the target noise value, as in the following formula (3):
  • noise3 = noise1 + noise2        (3)
  • In formula (3), noise3 represents the target noise value, noise1 represents the first noise value, and noise2 represents the above random number.
  • Then, the preset value is set to a value that gradually changes from 1 to 0 over time within the first duration. After the target noise value is determined, if it is greater than the preset value, the second pixel corresponding to the second coordinate is rendered in the foot feature area; correspondingly, if it is less than or equal to the preset value, the second pixel is not rendered. In this way, the display effect gradually appears from the toe to the heel.
  • Since the target noise value is obtained by adding a random number to the first noise value, when the shoe model is drawn, not only does the display effect gradually appear from toe to heel, but pixels whose coordinates are the same along the heel-to-toe axis are not all drawn at the same moment. The toe-to-heel appearance therefore unfolds without a perfectly aligned, uniform fade-in front, which would otherwise make the display effect look unnatural and not smooth. A sketch of this gradual-appearance rendering is given below.
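  • For illustration only, the noise-based gradual appearance can be sketched as a GLSL ES fragment shader; this is a minimal sketch, not the patent's implementation. Here noise1 is the normalized heel-to-toe (Y-axis) position, noise2 is a small pseudo-random offset derived from the model space coordinate as in formula (3), and u_threshold plays the role of the preset value, driven from 1 down to 0 over the first duration by the host application. The varying and uniform names, the assumption that the Y coordinate is already normalized to [0, 1], and the particular hash function are illustrative assumptions.

      precision mediump float;

      uniform float u_threshold;     // preset value, animated from 1.0 to 0.0 by the host application
      uniform sampler2D u_shoeTex;   // assumed: shoe model texture

      varying vec3 v_modelPos;       // model space coordinate (the second coordinate), Y assumed in [0, 1]
      varying vec2 v_uv;

      // Assumed hash: maps the model space coordinate to a pseudo-random value in [-0.05, 0.05].
      float hashNoise(vec3 p) {
          return (fract(sin(dot(p, vec3(12.9898, 78.233, 45.164))) * 43758.5453) - 0.5) * 0.1;
      }

      void main() {
          float noise1 = v_modelPos.y;          // first noise value: 1 at the toe, 0 at the heel
          float noise2 = hashNoise(v_modelPos); // random number generated from the second coordinate
          float noise3 = noise1 + noise2;       // formula (3): target noise value

          // Only pixels whose target noise value exceeds the preset value are drawn, so the shoe
          // appears gradually from toe to heel as u_threshold decreases over the first duration.
          if (noise3 <= u_threshold) {
              discard;
          }
          gl_FragColor = texture2D(u_shoeTex, v_uv);
      }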
  • As shown in Figure 7, an embodiment of the present disclosure provides a structural block diagram of an image processing device, which includes:
  • an acquisition module 701, configured to obtain an original image;
  • a processing module 702, configured to: when it is determined that a foot feature area exists in the original image, determine a foot cover model based on the foot feature area of the original image, where the foot cover model is used to mark an area to be optimized within the foot feature area; perform background completion on the area to be optimized of the original image to obtain a completed image; determine a shoe model according to the foot feature area in the completed image; and render the shoe model in the foot feature area of the completed image to obtain a target image.
  • As an optional implementation of an embodiment of the present disclosure, the processing module 702 may be configured to:
  • calculate, based on the initial screen coordinates of a target pixel in the area to be optimized of the original image, the width of the foot feature area on the screen, and the length of the foot feature area on the screen, offset screen coordinates corresponding to the target pixel, the target pixel being any pixel among at least some of the pixels in the area to be optimized; and
  • replace the color value of the target pixel with the color value of the pixel corresponding to the offset screen coordinates, so as to complete the background of the area to be optimized.
  • As an optional implementation of an embodiment of the present disclosure, the processing module 702 may be configured to:
  • determine a final color value corresponding to a target pixel according to the acquired initial color value of the target pixel in the area to be optimized of the original image and the initial color values of pixels adjacent to the target pixel, the target pixel being any pixel among at least some of the pixels in the area to be optimized; and
  • replace the initial color value of the target pixel with the final color value corresponding to the target pixel, so as to complete the background of the area to be optimized.
  • As an optional implementation of an embodiment of the present disclosure, the processing module 702 is further configured to: obtain the initial color value of a first pixel, the first pixel being any pixel corresponding to the shoe model in the completed image; sample a target effect map according to a first coordinate corresponding to the first pixel to obtain a first color value; calculate a final color value of the first pixel according to the initial color value of the first pixel and the first color value; and replace the initial color value of the first pixel with the final color value.
  • The first coordinate includes any of the following: two-dimensional space coordinates, world space coordinates, or screen control coordinates.
  • As an optional implementation of an embodiment of the present disclosure, the processing module 702 is specifically configured to: establish a shoe model space based on the shoe model; determine a target noise value according to a second coordinate in the shoe model space, the second coordinate being any model space coordinate in the shoe model space; and, if the target noise value is greater than a preset value, render the second pixel corresponding to the second coordinate in the foot feature area.
  • As an optional implementation of an embodiment of the present disclosure, the processing module 702 may be configured to: determine a first noise value according to the position of the second coordinate along a target axis of the shoe model space; generate a random number according to the second coordinate; and calculate the target noise value from the first noise value and the random number.
  • As shown in FIG. 8, an embodiment of the present disclosure provides an electronic device, which includes: a processor 801, a memory 802, and a computer program stored in the memory 802 and executable on the processor 801.
  • When the computer program is executed by the processor 801, each process of the image processing method in the above method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
  • Embodiments of the present invention provide a computer-readable storage medium, on which a computer program is stored. When the computer program is executed by a processor, each process of the image processing method in the above method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
  • the computer-readable storage medium can be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, etc.
  • Embodiments of the present invention provide a computer program product storing a computer program. When the computer program is executed by a processor, each process of the image processing method in the above method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
  • An embodiment of the present disclosure provides a computer program, including: instructions that, when executed by a processor, cause the processor to perform an image processing method according to any embodiment of the present disclosure.
  • Those skilled in the art should understand that embodiments of the present disclosure may be provided as methods, systems, or computer program products. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media having computer-usable program code embodied therein.
  • In the present disclosure, the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • In the present disclosure, the memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
  • In the present disclosure, computer-readable media include persistent and non-persistent, removable and non-removable storage media. Storage media may store information by any method or technology, and the information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image processing method, relating to the field of image processing technology, including: obtaining an original image (201); when it is determined that a foot feature area exists in the original image, determining a foot cover model according to the foot feature area of the original image, the foot cover model being used to mark an area to be optimized within the foot feature area (202); performing background completion on the area to be optimized of the original image to obtain a completed image (203); determining a shoe model according to the foot feature area in the completed image (204); and rendering the shoe model in the foot feature area of the completed image to obtain a target image (205). The method solves the show-through problem that occurs when shoes are drawn onto the feet in a video.

Description

图像处理方法、装置及电子设备
相关申请的交叉引用
本申请是以中国申请号为202210369250.7,申请日为2022年4月8日、题目为“一种图像处理方法、装置及电子设备”的申请为基础,并主张其优先权,该中国申请的公开内容在此作为整体引入本申请中。
技术领域
本公开涉及图像处理技术领域,尤其涉及一种图像处理方法、装置及电子设备。
背景技术
相关的增强现实(Augmented Reality,AR)鞋子试穿技术,可以通过对图像的处理,将鞋子绘制到视频中的脚部。
发明内容
本公开实施例提供的技术方案如下:
第一方面,提供一种图像处理方法,包括:
获取原始图像;在确定所述原始图像中存在脚部特征区域的情况下,根据所述原始图像的脚部特征区域,确定脚套模型,所述脚套模型用于标示所述脚部特征区域的待优化区域;对所述原始图像的所述待优化区域进行背景补全,以得到补全图像;根据所述补全图像中的所述脚部特征区域,确定鞋子模型;在所述补全图像中的所述脚部特征区域,渲染所述鞋子模型,以得到目标图像。
第二方面,提供一种图像处理装置,包括:
获取模块,用于获取原始图像;
处理模块,用于在确定所述原始图像中存在脚部特征区域的情况下,根据所述原始图像的脚部特征区域,确定脚套模型,所述脚套模型用于标示所述脚部特征区域的待优化区域;对所述原始图像的所述待优化区域进行背景补全,以得到补全图像;根据所述补全图像中的所述脚部特征区域,确定鞋子模型;在所述补 全图像中的所述脚部特征区域,渲染所述鞋子模型,以得到目标图像。
第三方面,提供一种电子设备,包括:处理器、存储器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如第一方面或其任意一种可选的实施方式所述的图像处理方法。
第四方面,提供一种计算机可读存储介质,包括:所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如第一方面或其任意一种可选的实施方式所述的图像处理方法。
第五方面,提供一种计算机程序产品,包括:当所述计算机程序产品在计算机上运行时,使得所述计算机实现如第一方面或其任意一种可选的实施方式所述的图像处理方法。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。
为了更清楚地说明本公开实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本公开实施例提供的一种将鞋子绘制到视频中的脚部时穿帮情况的示意图;
图2为本公开实施例提供的一种图像处理方法的流程示意图;
图3为本公开实施例提供的一种脚套模型的示意图;
图4为本公开实施例提供的一种补全图像的示意图;
图5A为本公开实施例提供的一种背景补全方式的示意图;
图5B为本公开实施例提供的另一种背景补全方式的示意图;
图6为本公开实施例提供的一种得到目标图像的示意图;
图7为本公开实施例提供的一种图像处理装置的结构框图;
图8为本公开实施例提供的一种电子设备的结构示意图。
具体实施方式
为了能够更清楚地理解本公开的上述目的、特征和优点,下面将对本公开的方案进行进一步描述。需要说明的是,在不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本公开,但本公开还可以采用其他不同于在此描述的方式来实施;显然,说明书中的实施例只是本公开的一部分实施例,而不是全部的实施例。
在本公开实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本公开实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。此外,在本公开实施例的描述中,除非另有说明,“多个”的含义是指两个或两个以上。
相关技术中,为了实现试穿鞋子的特效效果,可以通过对图像的处理,将鞋子绘制到视频中的脚部,但是对于一些具有特殊结构的鞋子,将鞋子绘制到视频中的脚部时,可能会有部分脚部的特征会穿帮,露在鞋子没有覆盖的位置。基于此,亟需一种不穿帮的图像处理方法,在将鞋子绘制到视频中的脚部时不穿帮。
示例性的,假设针对高跟鞋这种具有特殊结构的鞋子,在将鞋子绘制到视频中的脚部时,脚尖或者脚跟部分会有部分特征会穿帮,露在鞋子没有覆盖的位置。图1为本公开实施例提供的一种将鞋子绘制到视频中的脚部时穿帮情况的示意图。如图1所示,将鞋子模型12绘制到视频中的脚部特征区域11之后,得到的图像中,会存在如图1中所示的穿帮区域13。
为了解决上述技术问题或者至少部分地解决上述技术问题,本公开提供了一种图像处理方法、装置及电子设备,对于一些具有特殊结构的鞋子,可以避免通过对图像的处理,将鞋子绘制到视频中的脚部时出现的穿帮问题。
在一些实施例中,本公开实施例提供了一种图像处理方法,可以在进行图像处理以将鞋子穿在图像中脚部之前,先通过脚套模型确定原始图像中脚部特征区域中的待优化区域,并进行背景补全。这样在后续将鞋子模型渲染到原始图像上之后,即使鞋子为特殊构造的鞋子,但是由于提前对脚部特征进行了背景补全,因此在渲染鞋子模型之后,脚部特征不会露出在渲染后的鞋子模型之外,得到的 目标图像更加符合实际场景中脚部穿上鞋子的画面,从而避免了穿帮的情况。
本公开实施例中提供的图像处理方法,可以通过图像处理装置或者电子设备实现,该图像处理装置可以为电子设备中的功能模块或者功能实体。上述电子设备可以包括:手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本或者个人数字助理(personal digital assistant,PDA)、个人计算机(personal computer,PC)等,本公开实施例对此不作具体限定。
如图2所示,为本公开实施例提供的一种图像处理方法的流程示意图,该方法包括以下步骤201至步骤205。
步骤201、获取原始图像。
上述原始图像可以为实时获取的视频图像中的一帧图像。
步骤202、在确定原始图像中存在脚部特征区域的情况下,根据原始图像的脚部特征区域,确定脚套模型,该脚套模型用于标示脚部特征区域的待优化区域。
该脚套模型可以为一个可以覆盖脚跟和前脚掌的模型,该脚套模型所覆盖区域是在脚部特征区域上绘制鞋子时,可能会造成穿帮的区域。该脚套模型可以是一个预先设置好的模型。
在一些实施例中,针对一些结构非常特殊的鞋子模型,可能需要设置与这些鞋子模型对应的脚套模型,也就是说不同鞋子模型对应的脚套模型可能不一样。例如,轮廓为三角形的鞋子对应一种脚套模型,轮廓为菱形的鞋子对应另一种脚套模型。
在一些实施例中,如图3所示,为本公开实施例提供的一种脚套模型的示意图。基于图3中图像中存在的脚部特征区域31,确定脚套模型32,该脚套模型32所指示的区域为脚部特征区域的待优化区域。
返回图2,步骤203、对原始图像的待优化区域进行背景补全,以得到补全图像。
如图4所示,为本公开实施例提供的一种补全图像的示意图。在一些实施例中,可以在根据脚套模型,确定出脚部特征区域的待优化区域,然后可以对该区域进行背景补全,可以得到补全图像41。
在本公开实施例中,具体的背景补全方式可以包括多种,以下对几种可能的 实现方式进行举例说明。
在一些实施例中,对原始图像的待优化区域进行背景补全的方式包括:根据原始图像的待优化区域中目标像素点的初始屏幕坐标、脚部特征区域在屏幕上的宽度、脚部特征区域在屏幕上的长度,计算目标像素点对应的偏移屏幕坐标;将目标像素点的颜色值,替换为偏移屏幕坐标对应的像素点的颜色值,以实现对待优化区域进行背景补全。目标像素点为待优化区域中至少部分像素点中的任意像素点,待优化区域中至少部分像素点包括:待优化区域中的全部像素点,或者,待优化区域中的部分像素点。
本公开实施例中,所涉及的颜色值是指R(红色)、G(绿色)、B(蓝色)这三个通道的值。
上述实施例中,采用贴图方式,利用原始图像的待优化区域中目标像素点的初始屏幕坐标、脚部特征区域在屏幕上的宽度、脚部特征区域在屏幕上的长度,采样非脚部特征区域的像素点的颜色值,并采用非脚部特征区域的像素点的颜色值,来替换待优化区域中像素点的颜色值。这种方式将非脚部特征区域的像素点确定为背景,将待优化区域的颜色值替换为非脚部区域的颜色值,从而实现对待优化区域进行背景补全。
在一些实施例中,可以利用与脚部特征区域的距离处于预设范围内的像素点的颜色值,来替换待优化区域中像素点的颜色值。这种方式将与脚部特征区域的距离处于预设范围内的像素点确定为背景,从而实现对待优化区域进行背景补全。
例如,可以采用以下公式(1)计算待优化区域中像素点进行背景补全后的颜色值:
Color=texture2D(u_inputTex,g_vary_sp_uv-u_widthVector+
u_heightVector)        (1)
在公式(1)中,Color表示待优化区域中一个像素点进行背景补全后的颜色值,texture2D表示进行2D纹理贴图,u_inputTex为视频对应图像的输入纹理,也即该一个像素点的初始颜色值,g_vary_sp_uv为当前渲染的该一个像素点的屏幕坐标,也即该一个像素点对应的屏幕坐标,u_widthVector为脚部特征区域在屏幕上的宽度,u_heightVector脚部特征区域在屏幕上的长度。
如图5A所示,为本公开实施例提供的一种背景补全方式的示意图。例如,可 以基于待优化区域中的像素点52的初始屏幕坐标、脚部特征区域在屏幕上的宽度、脚部特征区域在屏幕上的长度,计算该像素点52对应的偏移屏幕坐标,根据该偏移屏幕坐标确定对应的像素点51,并将像素点52的颜色值替换为像素点51的颜色值。针对图5A中脚套模型所指示的待优化区域中的每个像素点均可以通过上述方式进行颜色值的替换,以实现对待优化区域中像素点进行背景补全。
在一些实施例中,对原始图像的待优化区域进行背景补全的方式包括:根据所获取的原始图像的待优化区域中目标像素点的初始颜色值,以及目标像素点的相邻像素点的初始颜色值,确定目标像素点对应的最终颜色值;将目标像素点的初始颜色值,替换为目标像素点对应的最终颜色值,以实现对待优化区域进行背景补全。目标像素点为待优化区域中至少部分像素点中的任意像素点。待优化区域中至少部分像素点包括:待优化区域中的全部像素点,或者,待优化区域中的部分像素点。
在一些实施例中,可以将待优化区域中目标像素点的初始颜色值与其相邻像素点的初始颜色值进行加权求和之后的结果,作为目标像素点对应的最终颜色值。这样针对待优化区域中越靠近背景的区域的最终颜色值就会越接近真实背景,这样使得待优化区域可以与背景融为一体,使得待优化区域与背景的边界被模糊,这样也可以实现对待优化区域进行背景补全。
在一些实施例中,可以将待优化区域中待优化的当前像素点(目标像素点)的权重值设置的较大,将当前像素点的相邻像素点的权重值设置的更小,这样可以达到更好的显示效果。
如图5B所示,假设针对原始图像中的像素点A,获取该像素点A的初始颜色值,并且获取与该像素点A相邻的像素点B1、像素点B2、像素点B3,以及像素点B4的这四个像素点的初始颜色值,之后对像素点A、像素点B1、像素点B2、像素点B3,以及像素点B4这五个像素点的初始颜色值进行加权求和之后,计算出最终颜色值,之后将像素点A的颜色值替换为该最终颜色值。
返回图2,步骤204、根据补全图像中的脚部特征区域,确定鞋子模型。
步骤205、在补全图像中的脚部特征区域,渲染鞋子模型,以得到目标图像。
在一些实施例中,如图6所示,为本公开实施例提供的一种得到目标图像的示意图。在得到补全图像61之后,还可以进一步的确定鞋子模型,并在补全图像 61的基础上,渲染鞋子模型到补全图像的脚部特征区域上,之后会得到如图6中所示的目标图像62,即试穿高跟鞋效果的图像。
本公开实施例提供的图像处理方法,可以获取原始图像;在确定原始图像中存在脚部特征区域的情况下,根据原始图像的脚部特征区域,确定脚套模型,脚套模型用于标示脚部特征区域的待优化区域;对原始图像的待优化区域进行背景补全,以得到补全图像;根据补全图像中的脚部特征区域,确定鞋子模型;在补全图像中的脚部特征区域,渲染鞋子模型,以得到目标图像。通过该方案,可以在进行图像处理将鞋子穿在图像中脚部之前,可以先通过脚套模型确定原始图像中脚部特征区域中的待优化区域,并进行背景补全,这样在后续将鞋子模型渲染到原始图像上之后,即使鞋子为特殊构造的鞋子,但是由于提前对脚部特征进行了背景补全,因此在渲染鞋子模型之后,脚步特征不会露出在渲染后的鞋子模型之外,得到的目标图像更加符合实际中脚部穿上鞋子的画面,从而避免了穿帮的情况。
本公开实施例提供的图像处理方法中,在根据上述步骤201至步骤205实现在图像中脚部特征区域绘制鞋子模型之后,还可以给鞋子模型叠加一些特殊的显示效果,可以基于一些特殊效果图采样,进行特殊效果的叠加,还可以在绘制鞋子模型时实现逐渐出现的效果。在一些实施例中,可以在鞋子模型的基础上叠加星空效果,在绘制鞋子模型时实现从鞋跟至鞋尖逐渐出现的效果,或者,在绘制鞋子模型时实现从鞋尖至鞋跟逐渐出现的效果。
在一些实施例中,上述步骤205中在补全图像中的脚部特征区域,渲染鞋子模型,以得到目标图像之后,还可以包括以下步骤:获取第一像素点的初始颜色值,第一像素点为补全图像中鞋子模型对应的任一个像素点;根据第一像素点对应的第一坐标,从目标效果图中进行采样,得到第一颜色值;根据第一像素点的初始颜色值和第一颜色值,计算第一像素点的最终颜色值;将第一像素点的初始颜色值替换为最终颜色值。
例如,第一坐标包括以下任一种:二维空间坐标、世界空间坐标、屏幕控件坐标。
在一些实施例中,可以根据以下公式(2),第一像素点的初始颜色值和第一颜色值,计算第一像素点的最终颜色值。
finalRGB.xyz=refractionRGB.xyz*galaxyColor*2.0*(1.0-cfg)+(1.0-
(1.0-galaxyColor)*(1.0-refractionRGB.xyz)*2.0)   (2)
在公式(2)中,finalRGB.xyz表示第一像素点的最终颜色值,refractionRGB.xyz表示第一像素点的初始颜色值,galaxyColor表示根据第一像素点对应的第一坐标,从目标效果图中进行采样得到第一颜色值,该目标效果图可以是星空效果图,cfg是第一像素点的初始颜色值与第一颜色值的叠加参数,该叠加参数用于表示第一像素点的初始颜色值与第一颜色值叠加时的比例,该叠加参数可以根据实际需求进行设定,本公开实施例不作限定。
在一些实施例中,假设上述目标效果图为星空效果图,那么在galaxyColor为基于二维空间坐标(UV坐标)采样得到的颜色值时,星空效果会固定在鞋子模型上;在galaxyColor为基于世界空间坐标采样得到的颜色值时,星空效果会在模型运动时实现流动效果;在galaxyColor为基于屏幕控件坐标采样得到的颜色值时,星空效果在屏幕中固定位置显示。
上述实施例中,通过采样目标效果图,可以在鞋子模型上叠加目标效果,使得渲染出的鞋子的显示效果更佳。
在一些实施例中,在补全图像中的脚部特征区域,渲染鞋子模型,以得到目标图像的方式包括:基于鞋子模型,建立鞋子模型空间;根据鞋子模型空间中的第二坐标,确定目标噪声值,第二坐标为鞋子模型空间中的任一模型空间坐标;若目标噪声值大于预设值,则在脚部特征区域渲染第二坐标对应的第二像素点。
在一些实施例中,上述预设值在第一时长内从大到小变化。该第一时长为绘制完整个鞋子模型的总时长。
在一些实施例中,上述实施例中可以基于鞋子模型空间的轴向方向,实现溶解渐现,并且添加噪声来增加边缘的随机性,从而实现鞋子从鞋尖到鞋跟渐现的效果。
首先,基于鞋子模型建立鞋子模型空间,其中可以将从鞋跟到鞋尖的的方向作为该鞋子模型空间的一个轴向(例如可以是鞋子模型空间的Y轴),之后针对鞋子模型空间中的任一模型空间坐标,确定该模型空间坐标在该Y轴上对应的取值,将该取值作为对应的目标噪声值。例如,假设在Y轴上鞋尖对应的取值为1,鞋跟对应的取值为0,那么目标噪声值为一个0到1之间的一个取值,将该0到1 之间的一个取值确定为目标噪声值。
然后,将预设值设置为在第一时长内,随着时间从1逐渐变成0的值,若目标噪声值大于预设值,则在脚部特征区域渲染第二坐标对应的第二像素点,相应的,若目标噪声值小于或等于预设值,则不在脚部特征区域渲染第二坐标对应的第二像素点。这样就可以使得呈现出鞋尖到鞋跟逐渐出现的显示效果。
在一些实施例中,上述根据鞋子模型空间中的第二坐标,确定目标噪声值的方式包括:根据第二坐标在鞋子模型空间中目标轴向上的位置,确定第一噪声值;根据第二坐标生成随机数;根据第一噪声值和随机数,计算得到目标噪声值。
首先,基于鞋子模型建立鞋子模型空间,其中可以将从鞋跟到鞋尖的方向作为该鞋子模型空间的一个轴向(例如可以是鞋子模型空间的Y轴),之后针对鞋子模型空间中的第二坐标,确定该模型空间坐标在该Y轴上对应的取值,将该取值作为对应的第一噪声值。例如,假设在Y轴上鞋尖对应的取值为1,鞋跟对应的取值为0,那么目标噪声值为一个0到1之间的一个取值,将该0到1之间的一个取值确定为第一噪声值。
然后,根据上述第二坐标,生成一个-0.05到0.05之间的随机数,然后在第一噪声值的基础上叠加该随机数得到的结果作为上述目标噪声值。
在一些实施例中,可以根据以下公式(3)计算上述目标噪声值:
noise3=noise1+noise2        (3)
在公式(3)中,noise3表示目标噪声值,noise1表示第一噪声值,noise2表示上述随机数。
在一些实施例中,将预设值设置为在第一时长内,随着时间从1逐渐变成0的值,在确定目标噪声值之后,若目标噪声值与大于预设值,则在脚部特征区域渲染第二坐标对应的第二像素点,相应的,若目标噪声值与小于或等于预设值,则不在脚部特征区域渲染第二坐标对应的第二像素点。这样就可以使得呈现出鞋尖到鞋跟逐渐出现的显示效果。
由于上述实施例中,目标噪声值是在第一噪声值的基础上还增加的随机数,因此在绘制鞋子模型时,不仅会呈现出鞋尖到鞋跟逐渐出现的显示效果,并且对于在从鞋跟到鞋尖的方向对应的轴向上坐标一致的像素点,进行处理时,会使得这些像素点不会同时绘制,呈现鞋尖到鞋跟逐渐出现的过程中,避免因为出现平 齐的渐现效果,而使得显示效果不自然,不流畅。
如图7所示,本公开实施例提供一种图像处理装置的结构框图,该装置包括:
获取模块701,用于获取原始图像;
处理模块702,用于在确定所述原始图像中存在脚部特征区域的情况下,根据所述原始图像的脚部特征区域,确定脚套模型,所述脚套模型用于标示所述脚部特征区域的待优化区域;对所述原始图像的所述待优化区域进行背景补全,以得到补全图像;根据所述补全图像中的所述脚部特征区域,确定鞋子模型;在所述补全图像中的所述脚部特征区域,渲染所述鞋子模型,以得到目标图像。
作为本公开实施例一种可选的实施方式,所述处理模块702例如可以用于:
根据所述原始图像的所述待优化区域中目标像素点的初始屏幕坐标、所述脚部特征区域在屏幕上的宽度、所述脚部特征区域在屏幕上的长度,计算所述目标像素点对应的偏移屏幕坐标,所述目标像素点为所述待优化区域中至少部分像素点中的任意像素点;
将所述目标像素点的颜色值,替换为所述偏移屏幕坐标对应的像素点的颜色值,以实现对所述待优化区域进行背景补全。
作为本公开实施例一种可选的实施方式,所述处理模块702,例如可以用于:
根据所获取的所述原始图像的所述待优化区域中目标像素点的初始颜色值,以及所述目标像素点的相邻像素点的初始颜色值,确定目标像素点对应的最终颜色值,所述目标像素点为所述待优化区域中至少部分像素点中的任意像素点;
将所述目标像素点的初始颜色值,替换为所述目标像素点对应的最终颜色值,以实现对所述待优化区域进行背景补全。
作为本公开实施例一种可选的实施方式,所述处理模块702,还用于:
获取第一像素点的初始颜色值,所述第一像素点为所述补全图像中所述鞋子模型对应的任一个像素点;
根据所述第一像素点对应的第一坐标,从目标效果图中进行采样,得到第一颜色值;
根据所述第一像素点的初始颜色值和所述第一颜色值,计算所述第一像素点的最终颜色值;
将第一像素点的初始颜色值替换为所述最终颜色值。
作为本公开实施例一种可选的实施方式,所述第一坐标包括以下任一种:
二维空间坐标、世界空间坐标、屏幕控件坐标。
作为本公开实施例一种可选的实施方式,所述处理模块702,具体用于:基于所述鞋子模型,建立鞋子模型空间;
根据所述鞋子模型空间中的第二坐标,确定目标噪声值,所述第二坐标为所述鞋子模型空间中的任一模型空间坐标;
若所述目标噪声值与大于预设值,则在所述脚部特征区域渲染所述第二坐标对应的第二像素点。
作为本公开实施例一种可选的实施方式,所述处理模块702,例如可以用于:
根据所述第二坐标在所述鞋子模型空间中目标轴向上的位置,确定第一噪声值;
根据所述第二坐标生成随机数;
根据所述第一噪声值和所述随机数,计算得到所述目标噪声值。
如图8所示,本公开实施例提供一种电子设备,该电子设备包括:处理器801、存储器802及存储在所述存储器802上并可在所述处理器801上运行的计算机程序,所述计算机程序被所述处理器801执行时实现上述方法实施例中的图像处理方法的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本发明实施例提供一种计算机可读存储介质,该计算机可读存储介质上存储计算机程序,该计算机程序被处理器801执行时实现上述方法实施例中图像处理方法的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
在一些实施例中,该计算机可读存储介质可以为只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
本发明实施例提供一种计算程序产品,该计算机程序产品存储有计算机程序,计算机程序被处理器执行时实现上述方法实施例中图像处理方法的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本公开实施例提供一种计算机程序,包括:指令,所述指令当由处理器执行时使所述处理器执行根据本公开任意实施例中的图像处理方法
本领域技术人员应明白,本公开的实施例可提供为方法、系统、或计算机程 序产品。因此,本公开可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本公开可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质上实施的计算机程序产品的形式。
本公开中,处理器可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
本公开中,存储器可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。存储器是计算机可读介质的示例。
本公开中,计算机可读介质包括永久性和非永久性、可移动和非可移动存储介质。存储介质可以由任何方法或技术来实现信息存储,信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。根据本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
需要说明的是,在本文中,诸如“第一”和“第二”等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括要素的过程、方 法、物品或者设备中还存在另外的相同要素。
以上仅是本公开的具体实施方式,使本领域技术人员能够理解或实现本公开。对这些实施例的多种修改对本领域的技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本公开的精神或范围的情况下,在其它实施例中实现。因此,本公开将不会被限制于本文的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (20)

  1. 一种图像处理方法,包括:
    获取原始图像;
    在确定所述原始图像中存在脚部特征区域的情况下,根据所述原始图像的脚部特征区域,确定脚套模型,所述脚套模型用于标示所述脚部特征区域的待优化区域;
    对所述原始图像的所述待优化区域进行背景补全,以得到补全图像;
    根据所述补全图像中的所述脚部特征区域,确定鞋子模型;
    在所述补全图像中的所述脚部特征区域,渲染所述鞋子模型,以得到目标图像。
  2. 根据权利要求1所述的图像处理方法,其中,所述对所述原始图像的所述待优化区域进行背景补全,包括:
    根据所述原始图像的所述待优化区域中目标像素点的初始屏幕坐标、所述脚部特征区域在屏幕上的宽度、所述脚部特征区域在屏幕上的长度,计算所述目标像素点对应的偏移屏幕坐标,所述目标像素点为所述待优化区域中至少部分像素点中的任意像素点;
    将所述目标像素点的颜色值,替换为所述偏移屏幕坐标对应的像素点的颜色值,以实现对所述待优化区域进行背景补全。
  3. 根据权利要求1所述的图像处理方法,其中,所述对所述原始图像的所述待优化区域进行背景补全,包括:
    根据所获取的所述原始图像的所述待优化区域中目标像素点的初始颜色值,以及所述目标像素点的相邻像素点的初始颜色值,确定所述目标像素点对应的最终颜色值,所述目标像素点为所述待优化区域中至少部分像素点中的任意像素点;
    将所述目标像素点的初始颜色值,替换为所述目标像素点对应的最终颜色值,以实现对所述待优化区域进行背景补全。
  4. 根据权利要求3所述的图像处理方法,其中,所述确定所述目标像素点对应的最终颜色值包括:
    将所述待优化区域中目标像素点的初始颜色值与其相邻像素点的初始颜色值进行加权求和;
    根据所述加权求和之后的结果,确定所述目标像素点对应的最终颜色值。
  5. 根据权利要求4所述的图像处理方法,其中,所述待优化区域中目标像素点对应的权重值大于所述目标像素点的相邻像素点对应的权重值。
  6. 根据权利要求1-5任一项所述的图像处理方法,其中,所述在所述补全图像中的所述脚部特征区域,渲染所述鞋子模型,以得到目标图像之后,所述图像处理方法还包括:
    获取第一像素点的初始颜色值,所述第一像素点为所述补全图像中所述鞋子模型对应的任一个像素点;
    根据所述第一像素点对应的第一坐标,从目标效果图中进行采样,得到第一颜色值;
    根据所述第一像素点的初始颜色值和所述第一颜色值,计算所述第一像素点的最终颜色值;
    将第一像素点的初始颜色值替换为所述最终颜色值。
  7. 根据权利要求6所述的图像处理方法,其中,所述根据所述第一像素点的初始颜色值和所述第一颜色值,计算所述第一像素点的最终颜色值包括:
    根据叠加参数,对所述第一像素点的初始颜色值和所述第一颜色值进行叠加,得到所述第一像素点的最终颜色,其中,所述叠加参数用于表示所述第一像素点的初始颜色值与所述第一颜色值叠加时的比例。
  8. 根据权利要求6-7任一项所述的图像处理方法,其中,所述第一坐标包括以下任一种:
    二维空间坐标、世界空间坐标、屏幕控件坐标。
  9. 根据权利要求1-8任一项所述的图像处理方法,其中,所述在所述补全图像中的所述脚部特征区域,渲染所述鞋子模型,以得到目标图像,包括:
    基于所述鞋子模型,建立鞋子模型空间;
    根据所述鞋子模型空间中的第二坐标,确定目标噪声值,所述第二坐标为所述鞋子模型空间中的任一模型空间坐标;
    若所述目标噪声值大于预设值,则在所述脚部特征区域渲染所述第二坐标对应的第二像素点,所述预设值在第一时长内从大到小变化。
  10. 根据权利要求9所述的图像处理方法,其中,所述根据所述鞋子模型空间中的第二坐标,确定目标噪声值,包括:
    根据所述第二坐标在所述鞋子模型空间中目标轴向上的位置,确定第一噪声值;
    根据所述第二坐标生成随机数;
    根据所述第一噪声值和所述随机数,计算得到所述目标噪声值。
  11. 一种图像处理装置,包括:
    获取模块,用于获取原始图像;
    处理模块,用于在确定所述原始图像中存在脚部特征区域的情况下,根据所述原始图像的脚部特征区域,确定脚套模型,所述脚套模型用于标示所述脚部特征区域的待优化区域;对所述原始图像的所述待优化区域进行背景补全,以得到补全图像;根据所述补全图像中的所述脚部特征区域,确定鞋子模型;在所述补全图像中的所述脚部特征区域,渲染所述鞋子模型,以得到目标图像。
  12. 根据权利要求11所述的图像处理装置,其中,所述处理模块被配置为:
    根据所述原始图像的所述待优化区域中目标像素点的初始屏幕坐标、所述脚部特征区域在屏幕上的宽度、所述脚部特征区域在屏幕上的长度,计算所述目标像素点对应的偏移屏幕坐标,所述目标像素点为所述待优化区域中至少部分像素点中的任意像素点;
    将所述目标像素点的颜色值,替换为所述偏移屏幕坐标对应的像素点的颜色值,以实现对所述待优化区域进行背景补全。
  13. 根据权利要求11所述的图像处理装置,其中,所述处理模块被配置为:
    根据所获取的所述原始图像的所述待优化区域中目标像素点的初始颜色值,以及所述目标像素点的相邻像素点的初始颜色值,确定所述目标像素点对应的最终颜色值,所述目标像素点为所述待优化区域中至少部分像素点中的任意像素点;
    将所述目标像素点的初始颜色值,替换为所述目标像素点对应的最终颜色值,以实现对所述待优化区域进行背景补全。
  14. 根据权利要求11-13任一项所述的图像处理装置,其中,所述处理模块还被配置为:
    获取第一像素点的初始颜色值,所述第一像素点为所述补全图像中所述鞋子模型对应的任一个像素点;
    根据所述第一像素点对应的第一坐标,从目标效果图中进行采样,得到第一颜色值;
    根据所述第一像素点的初始颜色值和所述第一颜色值,计算所述第一像素点的最终颜色值;
    将第一像素点的初始颜色值替换为所述最终颜色值。
  15. 根据权利要求14所述的图像处理装置,其中,所述第一坐标包括以下任一种:
    二维空间坐标、世界空间坐标、屏幕控件坐标。
  16. 根据权利要求11-15任一项所述的图像处理装置,其中,所述处理模块被配置为:
    基于所述鞋子模型,建立鞋子模型空间;
    根据所述鞋子模型空间中的第二坐标,确定目标噪声值,所述第二坐标为所述鞋子模型空间中的任一模型空间坐标;
    若所述目标噪声值大于预设值,则在所述脚部特征区域渲染所述第二坐标对应的第二像素点,所述预设值在第一时长内从大到小变化。
  17. 根据权利要求16所述的图像处理装置,其中,所述处理模块被配置为:
    根据所述第二坐标在所述鞋子模型空间中目标轴向上的位置,确定第一噪声值;
    根据所述第二坐标生成随机数;
    根据所述第一噪声值和所述随机数,计算得到所述目标噪声值。
  18. 一种电子设备,包括:处理器、存储器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如权利要求1至10中任一项所述的图像处理方法。
  19. 一种计算机可读存储介质,包括:所述计算机可读存储介质上存储计算 机程序,所述计算机程序被处理器执行时实现如权利要求1至10中任一项所述的图像处理方法。
  20. 一种计算机程序,包括:指令,所述指令当由处理器执行时使所述处理器执行根据权利要求1-10任一项所述的图像处理方法。
PCT/CN2023/085565 2022-04-08 2023-03-31 图像处理方法、装置及电子设备 WO2023193664A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210369250.7A CN114742978A (zh) 2022-04-08 2022-04-08 一种图像处理方法、装置及电子设备
CN202210369250.7 2022-04-08

Publications (1)

Publication Number Publication Date
WO2023193664A1 true WO2023193664A1 (zh) 2023-10-12

Family

ID=82279790

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/085565 WO2023193664A1 (zh) 2022-04-08 2023-03-31 图像处理方法、装置及电子设备

Country Status (2)

Country Link
CN (1) CN114742978A (zh)
WO (1) WO2023193664A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114742978A (zh) * 2022-04-08 2022-07-12 北京字跳网络技术有限公司 一种图像处理方法、装置及电子设备

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190050427A1 (en) * 2016-08-10 2019-02-14 Zeekit Online Shopping Ltd. Method, System, and Device of Virtual Dressing Utilizing Image Processing, Machine Learning, and Computer Vision
WO2018049430A2 (en) * 2016-08-11 2018-03-15 Integem Inc. An intelligent interactive and augmented reality based user interface platform
CN108961386A (zh) * 2017-05-26 2018-12-07 腾讯科技(深圳)有限公司 虚拟形象的显示方法及装置
CN107977885A (zh) * 2017-12-12 2018-05-01 北京小米移动软件有限公司 虚拟试穿的方法及装置
CN108961015A (zh) * 2018-07-27 2018-12-07 朱培恒 一种在线虚拟试鞋方法
CN111985995A (zh) * 2020-08-14 2020-11-24 足购科技(杭州)有限公司 基于微信小程序的鞋子虚拟试穿方法及装置
CN112581567A (zh) * 2020-12-25 2021-03-30 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN114742978A (zh) * 2022-04-08 2022-07-12 北京字跳网络技术有限公司 一种图像处理方法、装置及电子设备

Also Published As

Publication number Publication date
CN114742978A (zh) 2022-07-12

Similar Documents

Publication Publication Date Title
US9639945B2 (en) Depth-based application of image effects
US20200160493A1 (en) Image filtering based on image gradients
US10726580B2 (en) Method and device for calibration
US9818201B2 (en) Efficient lens re-distortion
WO2023193664A1 (zh) 图像处理方法、装置及电子设备
CN110363837B (zh) 游戏中纹理图像的处理方法及装置、电子设备、存储介质
WO2024001360A1 (zh) 绿幕抠图方法、装置及电子设备
CN112153303B (zh) 一种视觉数据处理方法、装置、图像处理设备和存储介质
CN114742931A (zh) 渲染图像的方法、装置、电子设备及存储介质
CN112561777A (zh) 图像添加光斑的方法及装置
CN115908685A (zh) 一种场景渲染方法、装置、设备和存储介质
CN109615583B (zh) 一种游戏地图的生成方法及装置
CN110378948B (zh) 3d模型重建方法、装置及电子设备
WO2023160487A1 (zh) 一种地形区域的拼接方法、装置、计算机设备及存储介质
CN116712727A (zh) 同屏画面渲染方法、装置及电子设备
WO2019148894A1 (zh) 一种利用图像斑块追踪测量偏移的方法、装置及存储介质
CN111127618A (zh) 一种实时渲染时的纹理调色方法及装置
US11120774B1 (en) Subpixel text rendering
CN116843812A (zh) 一种图像渲染方法、装置及电子设备
JPH11242753A (ja) 3次元描画方法および装置
CN112419147B (zh) 图像渲染方法及装置
CN112785699A (zh) 图像绘制方法与设备
CN116894933B (zh) 一种三维模型比较方法、装置、设备和存储介质
CN112070656B (zh) 帧数据的修改方法及装置
KR102107060B1 (ko) 이미지 데이터 처리 방법, 장치 및 시스템

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23784228

Country of ref document: EP

Kind code of ref document: A1