CN117952817A - Image comparison display method and related device - Google Patents

Image comparison display method and related device

Info

Publication number
CN117952817A
CN117952817A
Authority
CN
China
Prior art keywords
image
contrast
layer
operation object
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410349687.3A
Other languages
Chinese (zh)
Inventor
山芝涵
李洋华
郭金辉
肖文
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202410349687.3A
Publication of CN117952817A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The application provides an image comparison display method and a related device. The embodiment of the application can be applied to the field of image processing. The method comprises the following steps: responding to a comparison request of an operation object for comparing a first image and a second image, respectively converting the first image and the second image into a first element and a second element in an operation layer based on a cascading style sheet (CSS), wherein the second element is located on the upper layer of the first element; adding a first transformation declaration to the CSS attribute of the second element, wherein the first transformation declaration is used for extracting the second element as a contrast layer; setting the second element as a mixed mode, wherein the mixed mode is used for calculating color difference values of all pixel points at the positions where the second element corresponds to the first element; and visually displaying a contrast image to the operation object through the contrast layer, wherein the contrast image is rendered based on the color difference values. The method reduces the calculation and processing time in the image comparison process and speeds up display of the comparison result.

Description

Image comparison display method and related device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image comparison display method and a related device.
Background
With the rapid progress of computer technology, the development and update cycles of websites and applications have been greatly shortened. In this age of rapid iteration, the importance of user interface (UI) design is becoming increasingly prominent. UI design is not only a bridge between the user and the software, but also a key factor in determining the user experience. In order to ensure that the interface finally delivered by the developer is consistent with the design draft images made by the designer, interface walkthrough (or visual acceptance) has become an indispensable step in the development process.
In the UI walkthrough process, an acceptance person needs to inspect the images submitted by the developer, checking each detail one by one against the design draft images. This process typically involves careful comparison of color, font, layout, interactive elements, and other aspects.
Traditional UI walkthrough methods mainly rely on manual visual inspection. This approach, while intuitive, has a number of limitations. First, the same interface may receive different acceptance results from different reviewers, since their evaluation criteria may differ. This subjectivity results in both low accuracy and low efficiency of the walkthrough. The Open Source Computer Vision Library (OpenCV) is an open-source computer vision and machine learning software library that provides many functions for processing images. Its absdiff function can be used to calculate the absolute difference between two image matrices, and the industry generally uses absdiff to compare development draft images with design drafts; however, the function occupies significant CPU resources, the comparison process takes a long time, and the user experience is poor.
Disclosure of Invention
The embodiment of the application provides an image comparison display method and a related device, which reduce the calculation and processing time in the image comparison process and speed up display of the comparison result.
One aspect of the present application provides an image comparison display method, including:
Responding to a comparison request of an operation object for comparing a first image and a second image, and respectively converting the first image and the second image into a first element and a second element in an operation layer based on a Cascading Style Sheet (CSS), wherein the second element is positioned on the upper layer of the first element;
Adding a first transformation declaration in CSS attributes of the second element, wherein the first transformation declaration is used for extracting the second element as a contrast layer;
Setting the second element as a mixed mode, wherein the mixed mode is used for calculating color difference values of all pixel points at the positions of the second element and the first element;
and visually displaying a contrast image to the operation object through the contrast image layer, wherein the contrast image is rendered based on the color difference value.
In one possible implementation, one of the first image and the second image is a design draft image, and the other is a development draft image; the development draft image is developed based on the design draft image.
In one possible implementation,
Before responding to the comparison request of the operation object for comparing the first image and the second image, the method further includes:
acquiring a uniform resource locator (URL) input by the operation object;
and extracting the development draft image based on the web page corresponding to the URL.
In one possible implementation,
The visual display of the contrast image to the operation object through the contrast image layer comprises:
Determining the pixel point with the color difference value larger than a first threshold value as a target pixel point;
And rendering the target pixel point into a first color on the contrast image layer to obtain the contrast image and visually displaying the contrast image to the operation object.
In one possible implementation,
Before the pixel point with the color difference value larger than the first threshold value is determined as the target pixel point, the method further comprises the following steps:
Receiving the fault tolerance rate of the operation object input;
the first threshold is determined based on the fault tolerance.
In one possible implementation,
After responding to the comparison request of the operation object for comparing the first image and the second image, the method further comprises:
creating a filter element in the operation layer, the filter element being located on top of the second element;
after visually displaying the contrast image on the contrast layer, the method further comprises:
Setting the filter element as a background filter mode, wherein the background filter mode is used for generating a gray-scale image based on the contrast image;
And rendering the pixel points with the brightness lower than a second threshold value in the gray-scale image into a second color, and rendering other pixel points into a third color, wherein the second color and the third color are different.
In one possible implementation,
Before rendering the pixel points with the brightness lower than the second threshold value in the gray-scale image into the second color and rendering the other pixel points into the third color, the method further comprises:
Receiving the fault tolerance rate of the operation object input;
The second threshold is determined based on the fault tolerance.
In one possible implementation,
After creating the filter element in the operation layer, the method further comprises:
Adding a second transformation declaration in CSS attributes of the filter elements, wherein the second transformation declaration is used for extracting the filter elements as a filter layer;
After rendering the pixel points with brightness lower than the second threshold value in the gray-scale image into the second color and rendering the other pixel points into the third color, the method further includes:
and visually displaying a target image to the operation object through the filter layer, wherein the target image is obtained after the gray-scale image is rendered.
In one possible implementation,
After the target image is visually displayed to the operation object through the filter layer, the method further comprises:
And re-rendering the target image in response to a transformation operation of the operation object on the second element and/or the filter element.
In one possible implementation,
After the contrast image is visually displayed to the operation object through the contrast image layer, the method further comprises:
and re-rendering the contrast image in response to a transformation operation performed on the second element by the operation object.
In one possible implementation,
The transformation operation includes a rotation, scaling, tilting, or translation operation.
Another aspect of the present application provides an image comparison apparatus, comprising:
The conversion module is used for responding to a comparison request of an operation object for comparing a first image and a second image, and converting the first image and the second image into a first element and a second element in an operation layer based on a Cascading Style Sheet (CSS), wherein the second element is positioned on the upper layer of the first element;
An extraction module, configured to add a first transformation declaration to a CSS attribute of the second element, where the first transformation declaration is used to extract the second element as a contrast layer;
the computing module is used for setting the second element into a mixed mode, and the mixed mode is used for computing color difference values of all pixel points at the positions corresponding to the second element and the first element;
and the display module is used for visually displaying a contrast image to the operation object through the contrast image layer, and the contrast image is rendered based on the color difference value.
In one possible implementation, one of the first image and the second image is a design draft image, and the other is a development draft image; the development draft image is developed based on the design draft image.
In one possible implementation, the apparatus further includes:
the acquisition module, used for acquiring the uniform resource locator (URL) input by the operation object, and extracting the development draft image based on the web page corresponding to the URL.
In one possible implementation, the display module is specifically configured to determine a pixel point whose color difference value is greater than the first threshold as a target pixel point; and render the target pixel point into a first color on the contrast layer to obtain the contrast image and visually display it to the operation object.
In one possible implementation,
The acquisition module is also used for receiving the fault tolerance rate input by the operation object; a first threshold is determined based on the fault tolerance.
In one possible implementation, the apparatus further includes:
The creation module is used for creating a filter element in the operation layer, wherein the filter element is positioned on the upper layer of the second element;
a filter module for setting the filter element as a background filter mode for generating a gray image based on the contrast image; and rendering the pixel points with the brightness lower than the second threshold value in the gray-scale image into a second color, and rendering other pixel points into a third color, wherein the second color and the third color are different.
In one possible implementation,
The acquisition module is also used for receiving the fault tolerance rate input by the operation object; a second threshold is determined based on the fault tolerance.
In one possible implementation,
the extraction module is further used for adding a second transformation declaration into the CSS attribute of the filter element, and the second transformation declaration is used for extracting the filter element as a filter layer;
The display module is also used for visually displaying a target image to the operation object through the filter image layer, and the target image is obtained after the gray-scale image is rendered.
In one possible implementation, the apparatus further includes:
and the redrawing module is used for re-rendering the target image in response to the transformation operation of the operation object on the second element and/or the filter element.
In one possible implementation,
And the redrawing module is also used for re-rendering the contrast image in response to the transformation operation of the operation object on the second element.
In one possible implementation, the transformation operation includes a rotation, scaling, tilting, or translation operation.
Another aspect of the present application provides a computer apparatus comprising: a memory and a processor;
The memory stores instructions that, when executed on the processor, perform the methods of the above aspects.
Another aspect of the application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the methods of the above aspects.
Another aspect of the application provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the above aspects.
From the above technical solutions, the embodiment of the present application has the following advantages:
After a comparison request of an operation object for comparing a first image and a second image is obtained, the first image and the second image are respectively converted into a first element and a second element in an operation layer based on CSS, so that the CSS attributes of the first element and the second element can be modified; then, the second element is extracted as a contrast layer by adding the first transformation declaration to the CSS attribute of the second element, so that pixel calculation on the second element can be handled by the GPU; the second element is set to a mixed mode so as to calculate the color difference values of all pixel points at the positions where the second element corresponds to the first element; and finally, a contrast image rendered based on the color difference values is visually displayed to the operation object through the contrast layer. According to the method provided by the embodiment of the application, the images to be compared are converted into elements that can be edited through Cascading Style Sheets (CSS), and a transformation declaration is then added to the elements, so that transformation of the elements can be realized through GPU calculation. The many cores of the GPU are used to compute multiple pixels in parallel, which greatly reduces the calculation and processing time in the image comparison process, accelerates display of the comparison result, and improves the user experience.
Drawings
FIG. 1 is a flowchart of an image comparison display method according to an embodiment of the present application;
FIGS. 2a to 2f are schematic application diagrams of an image comparison display method according to an embodiment of the present application;
FIGS. 3a and 3b are schematic diagrams illustrating the setting of fault tolerance in an image comparison display method according to an embodiment of the present application;
FIG. 4 is a flowchart of an image comparison display method according to an embodiment of the present application;
FIGS. 5a to 5d are schematic diagrams illustrating the roles of elements in an image comparison display method according to an embodiment of the present application;
FIGS. 6a and 6b are schematic diagrams illustrating the setting of fault tolerance in an image comparison display method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an embodiment of an image comparison display device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a server structure according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides an image comparison display method and a related device, applied to web page design scenarios. Images to be compared are converted into elements that can be edited through Cascading Style Sheets (CSS), and a transformation declaration is then added to the elements, so that transformation of the elements can be realized through GPU calculation. The many cores of the GPU are used to compute multiple pixels in parallel, thereby improving computing efficiency and accelerating display of the comparison result.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "includes" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
The UI is the overall design of a piece of software's human-computer interaction, operation logic, and interface aesthetics. According to project requirements and design goals, the UI designer produces an interface visual-effect image, using illustration, graphic design, or other techniques, as the UI design draft. The effect image obtained when the developer implements the interface according to the code and files used by the design draft is the UI development draft. Consistency between the development draft and the design draft is significant for accurately conveying design intent, maintaining brand consistency, improving user experience, reducing development errors and costs, and facilitating maintenance and updates.
In order to ensure that the final development draft image delivered by the developer is consistent with the design draft image made by the designer, interface walkthrough (or visual acceptance) has become an indispensable part of the development process.
In the UI walkthrough process, an acceptance person needs to inspect the development draft images submitted by the developer, checking each element and its details one by one against the design draft images. This process usually involves careful comparison of color, font size, pixel size, position and layout, interaction effects, and other aspects.
Traditional UI walkthrough methods mainly rely on manual visual inspection. This approach, while intuitive, has a number of limitations. First, the same interface may receive different acceptance results from different reviewers, since their evaluation criteria may differ. This subjectivity results in both low accuracy and low efficiency of the walkthrough. The Open Source Computer Vision Library (OpenCV) is an open-source computer vision and machine learning software library that provides many functions for processing images. Its absdiff function can be used to calculate the absolute difference between two image matrices, and the industry generally uses absdiff to compare development draft images with design drafts; however, the function occupies significant CPU resources, the comparison process takes a long time, and the user experience is poor.
In order to solve the above problems, the application provides an image comparison display method and a related device, applied to web page design scenarios. Images to be compared are converted into web page elements that can be edited through Cascading Style Sheets (CSS), and a transformation declaration is then added to the web page elements, for example, setting the will-change attribute to transform, so that transformation of the web page elements can be realized through GPU calculation. The browser can use the many cores of the GPU to compute multiple pixels in parallel, thereby improving image processing speed and accelerating display of the comparison result.
The image comparison display method is described below, and can be implemented in software or a tool based on Web technology and an Electron framework. Referring to fig. 1, fig. 1 is a flowchart of a method for comparing and displaying images according to an embodiment of the present application, including:
101, in response to a comparison request of an operation object to compare a first image and a second image, the first image and the second image are respectively converted into a first element and a second element in an operation layer based on a cascading style sheet CSS, the second element being located at an upper layer of the first element.
It will be appreciated that the operation object is the entity that initiates the comparison request, which may be a person (e.g., a user or operator) or a computer system or other automated device. The visual interface of the software can provide a "compare" check box, and the operation object can start the comparison function, i.e., send a comparison request, by checking the "compare" check box. The software performs subsequent operations in response to the request.
In the embodiment of the application, when the operation object needs to compare the first image with the second image, the software creates two HTML elements to represent the two images after determining them. These elements are typically <div>, <img>, or other suitable HTML tags. For example, an image may be contained directly using an <img> tag, or displayed using a <div> tag in conjunction with a background image (the background-image CSS attribute). The images are then applied to these HTML elements through CSS. The second element is located on the upper layer of the first element; the CSS position property can be used to position both elements. In general, the position attribute of both elements may be set to relative or absolute, ensuring that the z-index value of the second element is higher than that of the first element. The z-index attribute determines the stacking order of elements along the z-axis (the direction perpendicular to the screen). The operation layer refers to the layer where the elements corresponding to the first image and the second image are located; after the comparison request is responded to, the first element and the second element are located in the operation layer, which represents the area for subsequent comparison and display.
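As a sketch of the layering described above (the file names and class names are hypothetical, not taken from the patent), the two elements might be set up like this:

```html
<!-- Operation layer containing the two images to compare -->
<div class="operation-layer" style="position: relative;">
  <!-- First element: lower layer -->
  <img src="design-draft.png" style="position: absolute; z-index: 1;">
  <!-- Second element: upper layer, higher z-index -->
  <img src="dev-draft.png" style="position: absolute; z-index: 2;">
</div>
```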
In one possible implementation, one of the first image and the second image is a design draft image, and the other is a development draft image; the development draft image is developed based on the design draft image.
It will be appreciated that CSS allows a developer to achieve consistent layout and style on different screen sizes and devices; by setting CSS attributes to compare differences between the development draft image and the design draft image, the development draft can accurately present the intent of the design draft on different devices.
Thus, before this step, the method also includes adding the design draft image and adding the development draft image. Taking the development draft image as an example, before step 101, the method further includes:
acquiring a uniform resource locator (URL) input by the operation object;
and extracting the development draft image based on the web page corresponding to the URL.
It can be understood that the operation object can access the web page under development by inputting its URL; the interface image corresponding to the web page is the development draft image. After the input URL is passed as a parameter to the webview component in the Electron framework, it can be displayed in the software as the main body of the UI comparison and restoration. Through the settings menu, the operation object can quickly set the HTTP proxy of the web page, switch the debugging environment, switch the host browser environment, and so on; these functions can be realized by configuring the User-Agent information, cookies, and the like of the webview container.
For the design draft, an operator can add it through the system clipboard. When the software reads the design draft, it can sequentially collect SVG strings, image binary data, and file paths from the system clipboard, read them, convert them into image-format files in a unified manner, and store them in the system. If the content is a file path, the system calls the file-system capability provided by Node.js to read the file. In addition, the operator may directly upload a file to import the design draft. Alternatively, when the design draft is produced by a UI design tool, it can be obtained directly through an interface by configuring the interface corresponding to that tool. For example, when the UI designer uses Figma, the software first accesses the OAuth 2.0 system to obtain authorization for the Figma account, calls the third-party interface provided by Figma to obtain the design draft details in the user account, uniformly converts them into SVG files, and finally downloads the converted files from the Figma server and imports them into the system.
After the design draft is imported, it may be placed as an overlay on top of the webview to facilitate visual comparison by the operation object. The operation object can drag the design draft image with the mouse; the system then controls the position, scaling, and other transformation operations of the design draft image through the transform in its CSS attributes, according to the mouse events and coordinates of the operation object. In addition, the software can provide a toolbar that lets the operation object freely adjust the transparency of the design draft image and lock or hide it; these operations can be realized by controlling the opacity, pointer-events, and visibility properties of the <img> tag.
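The toolbar behavior described above can be sketched with CSS alone (the class name is hypothetical); each declaration corresponds to one toolbar control:

```css
/* Hypothetical styles applied to the design-draft <img> by the toolbar */
.design-draft {
  opacity: 0.5;         /* transparency slider */
  pointer-events: none; /* "lock": the image ignores mouse events */
  visibility: hidden;   /* "hide": the image is not displayed */
}
```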
102, Adding a first transformation declaration in the CSS attribute of the second element, wherein the first transformation declaration is used for extracting the second element as a contrast layer.
It is to be understood that, after responding to the comparison request of the operation object for comparing the first image and the second image, the region to be compared is divided into one integral operation layer based on CSS, which includes a first element corresponding to the first image and a second element corresponding to the second image, arranged in layers within the operation layer. The first element serves as the raw material for image comparison and does not need to be processed.
In order to fully utilize the GPU computing capability of the browser, a first transformation declaration may be added to the CSS attribute of the second element. The first transformation declaration specifically refers to setting the will-change CSS attribute of the second element to transform, which prompts the browser to extract the second element into a separate layer, i.e., the contrast layer, so that the computing capability of the GPU can be enabled in the subsequent rendering steps.
It will be appreciated that setting will-change: transform on the second element hints to the browser that the element's transform property may change in the near future. Upon receiving this signal, the browser may optimize its rendering pipeline to handle these changes more effectively.
During browser rendering, each element is typically drawn on one or more layers. By default, elements on a page are drawn on the same layer, meaning that the entire page needs to be re-rendered when the position, size, or color of an element changes. This process can be slow, especially when dealing with complex visual effects or animations.
But when the browser is told (via the will-change attribute) that an element may change frequently, it may decide to promote the element onto a separate layer. This means that when the properties of this element change, the browser only needs to re-render this separate layer, not the entire page. This technique is known as layering or compositing.
When this element is layered, the browser can use the GPU to accelerate the composition of the layers. GPUs are adept at processing a large number of graphics computing tasks in parallel, meaning they can handle layer composition and rendering more quickly. When the transform attribute changes, the browser can use the GPU to compute the intermediate steps required to transition the element from one state to another, which is typically much faster than performing these computations on the CPU.
Therefore, by setting "will-change: transform;", the browser can be prompted to layer this element and to use the GPU to accelerate changes to the transform attribute, thereby improving the smoothness and performance of the animation.
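As a minimal sketch of the declarations described above (assembled in JavaScript only so the rule is easy to inspect; the positioning declaration is an illustrative assumption, and only will-change: transform comes from the text):

```javascript
// Builds the inline style the text describes for the second element.
// Only "will-change: transform" is taken from the source; the positioning
// declaration is an assumption of this sketch.
function contrastLayerHintStyle() {
  return [
    "position: absolute",     // stack the second element over the first
    "will-change: transform"  // hint: promote this element to its own layer
  ].join("; ");
}
```

Applying this string as the element's style attribute should cause the browser to promote the element to a separate compositing layer, enabling GPU-accelerated handling of later transform changes.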
103, Setting the second element to a mixed mode, wherein the mixed mode is used for calculating color difference values of the pixel points at corresponding positions of the second element and the first element.
In the embodiment of the application, the color difference value of each pixel point at the corresponding positions of the second element and the first element can be calculated by setting the second element to the mixed mode. It will be appreciated that this specifically means setting the CSS attribute mix-blend-mode of the second element to difference.
In CSS, the mix-blend-mode attribute is used to define how an element blends with its background. The difference value means the element color is subtracted from the background color. If the background is brighter than the element color, the resulting color will be darker; if the background is darker than the element color, the resulting color will be brighter. After the mix-blend-mode of the second element is set to difference, the browser calculates, pixel by pixel during drawing, a color difference value between the current element (the second element) and the object below it (the first element), and finally displays the difference value.
Specifically, for each color channel (the red, green, and blue channels, i.e., the RGB channels), the current color (the color of the pixel at the corresponding position of the second element) is subtracted from the background color (the color of the pixel at the corresponding position of the first element), and the absolute value of the result is taken, so a negative result is converted to a positive one. This blend mode generally produces a film-negative-like effect.
Assuming that a certain pixel point of the first element is white (RGB: 255,255,255) and the pixel point at the corresponding position of the second element is green (RGB: 0,128,0), when the mix-blend-mode of the second element is set to difference, the color difference value is calculated as:
result color = |background color - element color|;
red channel = |255 - 0| = 255;
green channel = |255 - 128| = 127;
blue channel = |255 - 0| = 255;
Thus, the resulting color after blending will be "RGB: 255,127,255", which corresponds to a magenta color.
It will be appreciated that if the color of a pixel of the first element is the same as the color of the pixel at the corresponding position of the second element, the resulting color will be "RGB: 0,0,0", which corresponds to black.
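The per-channel calculation above can be sketched as a small function (a plain re-implementation of the difference blend formula for illustration, not the browser's internal code):

```javascript
// Difference blend: result = |background - element| per RGB channel.
function differenceBlend(background, element) {
  return background.map((channel, i) => Math.abs(channel - element[i]));
}

// Worked example from the text: white background, green element.
// differenceBlend([255, 255, 255], [0, 128, 0]) -> [255, 127, 255]
```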
And 104, visually displaying a contrast image to the operation object through the contrast layer, wherein the contrast image is rendered based on the color difference value.
It will be appreciated that, after the second element is extracted as a separate layer (the contrast layer), the pixel operations can fully utilize the GPU computing capability of the browser. That is, the calculation of the color difference values in step 103 can be implemented by the GPU, and the calculation result is rendered on the contrast layer and visually displayed to the operation object, so that the operation object can locate the differences between the first image and the second image based on the contrast image.
For ease of understanding, an image contrast presentation method applied to UI design contrast is described below with reference to fig. 2a to 2 e.
As shown in fig. 2a, fig. 2a is a schematic diagram of a software interface developed based on the image comparison display method provided by the embodiment of the present application. In the software interface, a browser environment may be selected, which specifies the browser terminal to which the development interface applies; the scale at which the development interface is displayed in the software may also be set. When the operation object inputs a URL in the text box corresponding to the "development interface URL", the software displays, in the corresponding browser environment, the display effect of the development interface, that is, the development draft (which can be used as the first image).
Referring to fig. 2b, fig. 2b shows a design draft (which may be the second image) provided by the UI designer. It can be seen that, looking at the development draft and the design draft separately, it is difficult to find the differences between the two. Thus, the design draft can be imported into the software through the picture import function, as shown in fig. 2c.
The operation object may set the transparency of the design draft and then adjust its position by dragging the mouse to align it with the development draft, as shown in fig. 2d.
Then the operation object clicks the "image comparison" button in the software, which is equivalent to sending a comparison request to the software. After the software receives the request, it processes it using the image comparison display method provided by the embodiment of the application; since the pixel calculation is performed by the GPU, the processing result can be displayed immediately, as shown in fig. 2e. As can be seen from fig. 2e, in addition to the dislocation of the "immediately download" button, the color of the slogan "connection creation value" is inconsistent between the development draft and the design draft of the UI interface, and the operation object can modify the development draft image based on the image comparison display result.
In one possible implementation method, the method further includes:
And 105, re-rendering the contrast image in response to the transformation operation performed on the second element by the operation object.
It can be appreciated that, since the second element is a separate layer, the pixel calculations used by the transformation operation of the operation object on the second element can be processed by the GPU, so that the corresponding rendering result can be quickly displayed.
Among them, the transformation operation includes, but is not limited to, a rotation, a zoom, a tilt, or a pan operation.
As shown in fig. 2f, when the operation object moves the second element so that the "immediately download" buttons on the development draft image and the design draft image overlap, the other characters or patterns on the images become dislocated, and differences in pixel color appear.
According to the image comparison display method provided by the embodiment of the application, after the comparison request of the operation object for comparing the first image with the second image is obtained, the first image and the second image are respectively converted into the first element and the second element in the operation layer based on CSS, so that the CSS attributes of the first element and the second element can be modified; then, the second element is extracted as a contrast layer by adding the first transformation declaration to its CSS attributes, so that pixel calculations on the second element can be processed by the GPU; the second element is set to the mixed mode so as to calculate the color difference values of the pixel points at the corresponding positions of the second element and the first element; and finally, a contrast image rendered based on the color difference values is visually displayed to the operation object through the contrast layer. By the method provided by the embodiment of the application, the calculation and processing time in the image comparison process can be greatly reduced, the comparison display result is accelerated, and the user experience is improved.
In an optional embodiment of the image contrast displaying method provided in the corresponding embodiment of fig. 1 of the present application, step 104 specifically includes:
1041, determining a pixel point with a color difference value larger than a first threshold value as a target pixel point;
1042, rendering the target pixel point to a first color on the contrast layer, obtaining a contrast image and visually displaying the contrast image to the operation object.
In the embodiment of the application, when the first image and the second image differ only in color and the color difference is small, the colors rendered directly from the calculated pixel color difference values are difficult to perceive. Therefore, by setting a color difference threshold (the first threshold), all pixel points whose color difference value exceeds the threshold can be rendered in the same color (the first color), making the comparison result more obvious.
As shown in fig. 2e, the colors of the slogan "connection creation value" in the two images are similar, so the color displayed for the slogan after image comparison is not obvious. In this embodiment, by setting the first threshold, pixels with color difference values exceeding the first threshold are rendered in a more distinct color, so that the user can perceive the difference between the two images more intuitively, as shown in fig. 3a.
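Steps 1041 and 1042 can be sketched as follows (using the maximum channel difference as the pixel's difference magnitude is an assumption of this sketch; the source does not fix how the per-channel values are combined):

```javascript
// Repaint pixels whose color difference exceeds the first threshold with a
// single highlight color (the "first color"); leave the rest unchanged.
function highlightDifferences(diffPixels, firstThreshold, firstColor) {
  return diffPixels.map(pixel =>
    Math.max(...pixel) > firstThreshold ? firstColor : pixel
  );
}
```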
Correspondingly, before step 104, the method further includes:
Receiving the fault tolerance rate of the input of the operation object;
a first threshold is determined based on the fault tolerance.
It can be appreciated that in practical applications, a certain difference or inconsistency is allowed between the development draft image and the design draft image, and the degree of allowable difference can be determined by setting the fault tolerance.
In particular, the fault tolerance may be a value or threshold for quantifying an acceptable difference between two images. When the difference between the two images is less than or equal to this threshold, they can be considered similar or matching; when the differences exceed this threshold, they are considered dissimilar or mismatched. Similarly, in the embodiment of the present application, it may be determined whether the color difference between the corresponding pixel points of the first element and the second element is negligible based on the fault tolerance of the operation object input. It will be appreciated that the higher the fault tolerance, the higher the first threshold, and the lower the fault tolerance, the lower the first threshold.
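One possible mapping from the fault tolerance to the first threshold (the source only says the two rise and fall together; the linear scaling here is purely an illustrative assumption):

```javascript
// Map a fault tolerance in [0, 1] linearly onto the 0..255 channel range.
// Higher tolerance -> higher threshold -> more differences are ignored.
function thresholdFromFaultTolerance(faultTolerance) {
  return Math.round(faultTolerance * 255);
}
```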
As shown in fig. 3a and 3b, the operation object can decide a negligible color difference value by dragging the adjustment knob representing "fault tolerance". As shown in FIG. 3a, when the fault tolerance is set to be small, the "connection creation value" banner will be displayed; as shown in fig. 3b, when the fault tolerance is set to be large, the color difference between the banners is ignored and thus not displayed.
In an alternative embodiment of the image contrast display method provided in the corresponding embodiment of fig. 1 of the present application, referring to fig. 4, fig. 4 is a flowchart of the image contrast display method provided in the embodiment of the present application, including:
401, in response to a comparison request of an operation object to compare a first image and a second image, converting the first image and the second image into a first element and a second element in an operation layer based on a cascading style sheet CSS, respectively, the second element being located at an upper layer of the first element;
402, adding a first transformation declaration in the CSS attribute of the second element, wherein the first transformation declaration is used for extracting the second element as a contrast layer;
403, setting the second element as a mixed mode, wherein the mixed mode is used for calculating color difference values of all pixel points at the positions corresponding to the second element and the first element;
and 404, visually displaying a contrast image to the operation object through the contrast layer, wherein the contrast image is rendered based on the color difference value.
It will be appreciated that the steps 401 to 404 are similar to the steps 101 to 104 in the corresponding embodiment of fig. 1, and will not be described herein.
Wherein after responding to the comparison request of the operation object to compare the first image and the second image, the method further comprises:
405 creating a filter element in the operational layer, the filter element being located on top of the second element;
In the embodiment of the application, in order to make the image comparison and display result more flexible and convenient to adjust, a filter element is created in the operation layer. That is, the three elements in the operation layer are stacked: from bottom to top, the first element, the second element, and the filter element.
Specifically, the first element may correspond to the development draft image, the second element may correspond to the design draft image, and the filter element is used to make it convenient for the operation object to set the display form of the comparison result; the specific content is described later.
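The three-layer stack can be sketched as a fragment of markup (element ids, file names, and inline styles here are illustrative assumptions; only the stacking order and the two CSS hints come from the text):

```javascript
// Returns an HTML fragment with the three stacked elements, bottom to top:
// first element (development draft), second element (design draft, carrying
// the first transformation declaration and the mixed mode), filter element.
function buildOperationLayer() {
  return [
    '<div id="operation-layer" style="position: relative">',
    '  <img id="first-element" src="development-draft.png">',
    '  <img id="second-element" src="design-draft.png"',
    '       style="will-change: transform; mix-blend-mode: difference">',
    '  <div id="filter-element" style="will-change: transform"></div>',
    '</div>'
  ].join("\n");
}
```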
406, Adding a second transformation declaration to the CSS attribute of the filter element, the second transformation declaration for extracting the filter element as a filter layer.
It will be appreciated that, in order to improve the processing efficiency between the filter element and the other elements, the GPU may also be used for the various operations on it. Similar to adding the first transformation declaration to the second element, the will-change CSS attribute of the filter element is set to transform, so that when the position or size of the second element changes, redrawing (repaint) can be performed quickly.
After step 404, further includes:
407, setting the filter element as a background filter mode for generating a gray-scale image based on the contrast image;
And 408, rendering the pixel points with the brightness lower than the second threshold value in the gray-scale image into a second color, and rendering other pixel points into a third color, wherein the second color and the third color are different.
It can be understood that when the first image and the second image are both color images, each pixel point determines its final displayed color based on three channels (RGB), so the color corresponding to the color difference value obtained from the pixel calculation on the two elements is also a color, and the display effect is cluttered. Therefore, after the contrast image is determined, the filter element is set to the background filter mode, and a gray-scale image can be generated based on the contrast image.
It will be appreciated that this may also be achieved by setting a CSS attribute, for example, setting the backdrop-filter attribute of the filter element to grayscale(1). This operation converts the content beneath the filter element into a gray-scale image based on the contrast image, meaning that the image is completely converted to gray scale with no color information remaining.
After the gray-scale image is obtained, the difference between the pixel points is only reflected by the brightness, so that the pixel points with the brightness lower than the designated threshold value (second threshold value) can be rendered into one color (second color), and the other pixel points are rendered into the other color (third color).
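Steps 407 and 408 can be sketched as a per-pixel classification (the Rec. 601 luma weights used to compute brightness are an assumption of this sketch; the source only states that brightness is compared against the second threshold):

```javascript
// Classify a gray-input pixel: below the second threshold -> second color
// (the background), otherwise -> third color (the highlighted difference).
function classifyPixel(pixel, secondThreshold, secondColor, thirdColor) {
  const [r, g, b] = pixel;
  const luminance = 0.299 * r + 0.587 * g + 0.114 * b; // Rec. 601 weights
  return luminance < secondThreshold ? secondColor : thirdColor;
}
```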
For example, the backdrop-filter attribute of the filter element is set directly to grayscale(1) invert(1) brightness(1.02) invert(1) brightness(15) brightness(0.5) sepia(1) invert() saturate(3.5) hue-rotate(137deg) brightness(1.05).
Among them, grayscale(1) generates the gray-scale image, as described above;
invert(1) is used to invert the colors of the gray-scale image, i.e., black to white, white to black, with the other colors inverted accordingly. It can be understood that a pixel point with a color difference value of 0 corresponds to black, that is, the background is black; in order to improve the look and feel, it can be converted to white through the inversion operation.
brightness(1.02) is used to manipulate the brightness of the image, meaning that the brightness of the image will be increased by 2% relative to the original; it can be understood that the second threshold above determines the magnitude of this parameter. Similarly, brightness(15) and brightness(0.5) also manipulate the brightness of the image.
sepia(1) is used to convert the image to a dark brown tone; saturate(3.5) increases the saturation of the image, making colors more vivid; hue-rotate(137deg) rotates the hue of each pixel by 137 degrees, producing a color-filter-like effect. The filter functions above are used for toning, with the aim of turning the differing positions into a striking red through parameter adjustment, while positions with no difference, or with a difference below the second threshold, remain white.
The above operation is a filter operation that can be implemented to enhance the contrast effect of the first picture and the second picture, and in the actual operation process, those skilled in the art may modify the filter operation based on the actual requirement, and the present application is not limited to the above operation and the effect thereof.
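The filter chain quoted above can be assembled as a single string, which makes the individual steps easy to tweak; the function values are copied verbatim from the text:

```javascript
// Backdrop-filter chain from the text, one entry per filter function.
function backdropFilterChain() {
  return [
    "grayscale(1)",       // drop all color information
    "invert(1)",          // black background -> white
    "brightness(1.02)",   // slight lift, tied to the second threshold
    "invert(1)",
    "brightness(15)",
    "brightness(0.5)",
    "sepia(1)",           // tint toward dark brown
    "invert()",           // omitted amount defaults to 1 in CSS
    "saturate(3.5)",      // make the highlight color more vivid
    "hue-rotate(137deg)", // rotate hues toward red
    "brightness(1.05)"
  ].join(" ");
}
```

The result can be assigned to the filter element's backdrop-filter style; as the text notes, the exact values are tuning choices and may be modified to suit the actual requirement.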
409, Visually displaying a target image to an operation object through the filter layer, wherein the target image is obtained after rendering the gray-scale image.
It can be appreciated that after the filter elements are extracted as the filter layers, the original processing operations based on the filters can be implemented by the GPU, so as to increase the processing speed. Correspondingly, the rendering effect obtained by processing is also visually displayed on the filter layer.
410, Re-rendering the target image in response to a transformation operation of the second element and/or the filter element by the operation object.
It can be understood that, since the second element and the filter element are both a single layer, the pixel calculation used by the transformation operation of the operation object on the second element or the filter element can be processed by the GPU, so that the corresponding rendering result can be quickly displayed. Among them, the transformation operation includes, but is not limited to, a rotation, a zoom, a tilt, or a pan operation.
The method provided by the embodiment of the application provides a three-element stacking design to intuitively embody image differences, wherein the bottom layer is an image to be compared (a first image), the middle layer is a standard image (a second image), and the upper layer is a filter layer (a filter element). The second image calculates a pixel difference value between the second image and the first image through the difference attribute, the fault tolerance selected by a user is adapted through the filter layer, so that unimportant difference details and calculation errors are screened out, and finally, the pixel points with differences are obviously displayed.
For ease of understanding, the image comparison display method provided by the embodiment of the application is described below with reference to figs. 5a to 5d, as applied to the comparison between the development draft and the design draft.
As shown in fig. 5a, the operation area includes three layers of elements, wherein the webview layer is the development draft image extracted based on the URL, i.e., the first element; the contrast picture layer is the design draft image, i.e., the second element; and the filter layer represents the filter element. When only the webview layer is present, the interface displays the development draft image, as shown in fig. 5b. After the contrast picture layer is superimposed, the interface displays the contrast image obtained from the pixel color difference values after the design draft image and the development draft image are superimposed, as shown in fig. 5c. Finally, after the filter layer is superimposed, the interface displays the target image obtained after a series of processing based on the contrast image, which can clearly display the differences between the development draft image and the design draft image.
In one possible implementation, before step 408, the method further includes:
Receiving the fault tolerance rate of the input of the operation object;
A second threshold is determined based on the fault tolerance.
It will be appreciated that in practical applications, a certain difference or inconsistency is allowed between the development draft image and the design draft image, and the degree of allowable difference can be determined by setting the fault tolerance. For a description of the fault tolerance, please refer to the above; details are omitted here. In the embodiment of the application, the fault tolerance is used to determine the magnitude of the second threshold.
As shown in fig. 6a and 6b, the operation object can decide a negligible color difference value by dragging the adjustment knob representing "fault tolerance". As shown in FIG. 6a, when the fault tolerance is set to be small, the "connection creation value" banner will be displayed; as shown in fig. 6b, when the fault tolerance is set to be large, the color difference between the banners is ignored and thus not displayed.
The image contrast display device of the present application is described in detail below, referring to fig. 7. Fig. 7 is a schematic diagram of an embodiment of an image contrast display device 700 according to an embodiment of the present application, where the image contrast display device 700 includes:
A conversion module 701, configured to convert, based on a cascading style sheet CSS, the first image and the second image into a first element and a second element in an operation layer respectively in response to a comparison request for comparing the first image and the second image by an operation object, where the second element is located at an upper layer of the first element;
An extracting module 702, configured to add a first transformation declaration to the CSS attribute of the second element, where the first transformation declaration is used to extract the second element as a contrast layer;
A calculating module 703, configured to set the second element to a mixed mode, where the mixed mode is used to calculate a color difference value of each pixel point at a position corresponding to the first element and the second element;
And the display module 704 is used for visually displaying the contrast image to the operation object through the contrast image layer, wherein the contrast image is rendered based on the color difference value.
After a comparison request of an operation object for comparing a first image and a second image is obtained, the image comparison display device provided by the embodiment of the application converts the first image and the second image into a first element and a second element in an operation layer based on CSS through the conversion module 701, so that the CSS attributes of the first element and the second element can be modified; then, a first transformation declaration is added to the CSS attributes of the second element through the extraction module 702, so that the second element is extracted as a contrast layer and pixel calculations on the second element can be processed by the GPU; the second element is set to the mixed mode through the calculating module 703 so as to calculate the color difference values of the pixel points at the corresponding positions of the second element and the first element; finally, the display module 704 visually displays to the operation object a contrast image rendered based on the color difference values. By the device provided by the embodiment of the application, the calculation and processing time in the image comparison process can be greatly reduced, the comparison display result is accelerated, and the user experience is improved.
In one possible implementation, one of the first image and the second image is a design draft image, and the other is a development draft image; the development draft image is developed based on the design draft image.
In one possible implementation method, the method further includes:
the acquisition module is used for acquiring the uniform resource locator URL input by the operation object; and extracting the development draft image based on the webpage corresponding to the URL.
In one possible implementation method, the display module 704 is specifically configured to determine a pixel point with a color difference value greater than a first threshold value as a target pixel point; and rendering the target pixel point as a first color on the contrast image layer, obtaining a contrast image and visually displaying the contrast image to the operation object.
In this embodiment, when the first image and the second image differ only in color and the color difference is small, the colors rendered directly from the calculated pixel color difference values are difficult to perceive. Therefore, by setting a color difference threshold (the first threshold), all pixel points whose color difference value exceeds the threshold can be rendered in the same color (the first color), making the comparison result more obvious.
In one possible implementation method, the first and second modules,
The acquisition module is also used for receiving the fault tolerance rate input by the operation object; a first threshold is determined based on the fault tolerance.
In one possible implementation method, the method further includes:
The creation module is used for creating a filter element in the operation layer, wherein the filter element is positioned on the upper layer of the second element;
a filter module for setting the filter element as a background filter mode for generating a gray image based on the contrast image; and rendering the pixel points with the brightness lower than the second threshold value in the gray-scale image into a second color, and rendering other pixel points into a third color, wherein the second color and the third color are different.
In the embodiment of the application, in order to make the image comparison and display result more flexible and convenient to adjust, filter elements are created in the operation layer, namely 3 elements in the operation layer are arranged in a stacked manner, and the operation layer comprises a first element, a second element and a filter element from bottom to top. By adjusting parameters of the filter elements, the contrast effect of the first picture and the second picture can be enhanced.
In one possible implementation method, the first and second modules,
The acquisition module is also used for receiving the fault tolerance rate input by the operation object; a second threshold is determined based on the fault tolerance.
In one possible implementation method, the method further includes:
The extracting module 702 is further configured to add a second transformation declaration to the CSS attribute of the filter element, where the second transformation declaration is used to extract the filter element as a filter layer;
the display module 704 is further configured to visually display a target image to the operation object through the filter layer, where the target image is obtained by rendering the gray-scale image.
In the embodiment of the application, after the filter elements are extracted as the filter image layers, each processing operation based on the original filter can be realized by the GPU, so that the processing speed is increased. Correspondingly, the rendering effect obtained by processing is also visually displayed on the filter layer.
In one possible implementation method, the method further includes:
and the redrawing module is used for re-rendering the target image in response to the transformation operation of the operation object on the second element and/or the filter element.
In one possible implementation method, the first and second modules,
And the redrawing module is also used for re-rendering the contrast image in response to the transformation operation of the operation object on the second element.
It can be understood that, since the second element and the filter element are both a single layer, the pixel calculation used by the transformation operation of the operation object on the second element or the filter element can be processed by the GPU, so that the corresponding rendering result can be quickly displayed.
When the filter element is present, the target image needs to be rendered based on the transformation operation, and when the filter element is not present, the comparative image needs to be rendered based on the transformation operation. Wherein the transformation operation includes, but is not limited to, a rotation, a zoom, a tilt, or a pan operation.
Fig. 8 is a schematic diagram of a server structure provided in an embodiment of the present application. The server 300 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 322 (e.g., one or more processors), memory 332, and one or more storage media 330 (e.g., one or more mass storage devices) storing applications 342 or data 344. The memory 332 and the storage medium 330 may be transitory or persistent. The program stored on the storage medium 330 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Still further, the central processor 322 may be configured to communicate with the storage medium 330 and execute the series of instruction operations in the storage medium 330 on the server 300.
The server 300 may also include one or more power supplies 326, one or more wired or wireless network interfaces 350, one or more input/output interfaces 358, and/or one or more operating systems 341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 8.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the present embodiment, the term "module" or "unit" refers to a computer program or a part of a computer program having a predetermined function and working together with other relevant parts to achieve a predetermined object, and may be implemented in whole or in part by using software, hardware (such as a processing circuit or a memory), or a combination thereof. Also, a processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of the module or unit.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (15)

1. An image comparison display method, characterized by comprising the following steps:
in response to a comparison request from an operation object to compare a first image with a second image, converting the first image and the second image respectively into a first element and a second element in an operation layer based on a cascading style sheet (CSS), wherein the second element is located on the layer above the first element;
adding a first transformation declaration to the CSS attributes of the second element, wherein the first transformation declaration is used to extract the second element as a contrast layer;
setting the second element to a blend mode, wherein the blend mode is used to calculate color difference values of the pixel points at corresponding positions of the second element and the first element; and
visually displaying a contrast image to the operation object through the contrast layer, wherein the contrast image is rendered based on the color difference values.
2. The method according to claim 1, wherein one of the first image and the second image is a design draft image and the other is a development draft image, the development draft image being developed based on the design draft image.
3. The method according to claim 2, further comprising, before the responding to the comparison request of the operation object to compare the first image with the second image:
acquiring a uniform resource locator (URL) input by the operation object; and
extracting the development draft image from the web page corresponding to the URL.
4. The method according to claim 2, wherein the visually displaying the contrast image to the operation object through the contrast layer comprises:
determining the pixel points whose color difference value is greater than a first threshold as target pixel points; and
rendering the target pixel points in a first color on the contrast layer to obtain the contrast image, and visually displaying the contrast image to the operation object.
5. The method according to claim 4, further comprising, before the determining the pixel points whose color difference value is greater than the first threshold as target pixel points:
receiving a fault tolerance rate input by the operation object; and
determining the first threshold based on the fault tolerance rate.
6. The method according to claim 1, further comprising, after the responding to the comparison request of the operation object to compare the first image and the second image:
creating a filter element in the operation layer, the filter element being located on the layer above the second element;
and further comprising, after the visually displaying the contrast image on the contrast layer:
setting the filter element to a backdrop filter mode, wherein the backdrop filter mode is used to generate a grayscale image based on the contrast image; and
rendering the pixel points in the grayscale image whose luminance is below a second threshold in a second color, and rendering the other pixel points in a third color, the second color and the third color being different.
7. The method according to claim 6, further comprising, before the rendering the pixel points in the grayscale image whose luminance is below the second threshold in the second color and rendering the other pixel points in the third color:
receiving a fault tolerance rate input by the operation object; and
determining the second threshold based on the fault tolerance rate.
8. The method according to claim 6, further comprising, after the creating the filter element in the operation layer:
adding a second transformation declaration to the CSS attributes of the filter element, wherein the second transformation declaration is used to extract the filter element as a filter layer;
and further comprising, after the rendering the pixel points in the grayscale image whose luminance is below the second threshold in the second color and rendering the other pixel points in the third color:
visually displaying a target image to the operation object through the filter layer, wherein the target image is obtained by rendering the grayscale image.
9. The method according to claim 8, further comprising, after the visually displaying the target image to the operation object through the filter layer:
re-rendering the target image in response to a transformation operation performed by the operation object on the second element and/or the filter element.
10. The method according to claim 1, further comprising, after the visually displaying the contrast image to the operation object through the contrast layer:
re-rendering the contrast image in response to a transformation operation performed by the operation object on the second element.
11. The method according to claim 9 or 10, wherein the transformation operation comprises a rotation, scaling, tilting, or translation operation.
12. An image comparison display apparatus, characterized by comprising:
a conversion module configured to, in response to a comparison request from an operation object to compare a first image with a second image, convert the first image and the second image respectively into a first element and a second element in an operation layer based on a cascading style sheet (CSS), wherein the second element is located on the layer above the first element;
an extraction module configured to add a first transformation declaration to the CSS attributes of the second element, wherein the first transformation declaration is used to extract the second element as a contrast layer;
a calculation module configured to set the second element to a blend mode, wherein the blend mode is used to calculate color difference values of the pixel points at corresponding positions of the second element and the first element; and
a display module configured to visually display a contrast image to the operation object through the contrast layer, wherein the contrast image is rendered based on the color difference values.
13. A computer device, characterized by comprising: a memory and a processor;
wherein the memory stores instructions that, when executed by the processor, implement the image comparison display method according to any one of claims 1 to 11.
14. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the image comparison display method according to any one of claims 1 to 11.
15. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, performs the image comparison display method according to any one of claims 1 to 11.
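As an illustrative sketch only, the claimed pipeline maps naturally onto standard CSS features: a transform declaration promotes the upper element to its own compositing layer, the `difference` blend mode makes the browser compute the per-channel color difference against the layer beneath, and a backdrop filter on a third element converts the result to grayscale. The class names, the `translateZ(0)` trick, and the `invert()`/`contrast()` thresholding approximation below are assumptions for illustration and are not prescribed by the patent text.

```css
/* Hypothetical class names — illustrative only, not taken from the patent. */
.operation-layer {
  position: relative;
}
.operation-layer .base {        /* first element, e.g. the design draft image */
  position: absolute;
  inset: 0;
}
.operation-layer .compare {     /* second element, stacked above the first */
  position: absolute;
  inset: 0;
  transform: translateZ(0);     /* a "transformation declaration" commonly used to
                                   promote the element to its own compositing layer */
  mix-blend-mode: difference;   /* the compositor computes |upper − lower| per color
                                   channel for every pixel; identical pixels go black */
}
.operation-layer .filter {      /* optional filter element above both */
  position: absolute;
  inset: 0;
  backdrop-filter: grayscale(1) invert(1) contrast(1000%);
  /* grayscale(1) collapses the difference to luminance; the large contrast()
     roughly thresholds that luminance, pushing pixels toward two colors */
}
```

Because the blend and the filters are evaluated during compositing rather than in script, rotating, scaling, or translating the `.compare` element re-renders the comparison with no per-pixel work in application code, which is consistent with the stated goal of reducing computation and speeding up the displayed comparison result.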
CN202410349687.3A 2024-03-26 2024-03-26 Image comparison display method and related device Pending CN117952817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410349687.3A CN117952817A (en) 2024-03-26 2024-03-26 Image comparison display method and related device

Publications (1)

Publication Number Publication Date
CN117952817A true CN117952817A (en) 2024-04-30

Family

ID=90796546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410349687.3A Pending CN117952817A (en) 2024-03-26 2024-03-26 Image comparison display method and related device

Country Status (1)

Country Link
CN (1) CN117952817A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112711526A (en) * 2019-10-25 2021-04-27 腾讯科技(深圳)有限公司 UI test method, device, equipment and storage medium
CN113076165A (en) * 2021-04-16 2021-07-06 北京沃东天骏信息技术有限公司 Page checking method and device
CN113778429A (en) * 2020-09-28 2021-12-10 北京沃东天骏信息技术有限公司 Walk-through method, walk-through device and storage medium
CN116775015A (en) * 2023-06-26 2023-09-19 北京沃东天骏信息技术有限公司 Layer display method and device
CN116955138A (en) * 2022-08-19 2023-10-27 中移(成都)信息通信科技有限公司 Acceptance method, acceptance device, acceptance equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination