CN112070656B - Frame data modification method and device - Google Patents


Info

Publication number
CN112070656B
CN112070656B
Authority
CN
China
Prior art keywords
frame data
input information
color
reserved area
colors
Prior art date
Legal status
Active
Application number
CN202010795359.8A
Other languages
Chinese (zh)
Other versions
CN112070656A (en)
Inventor
林晓明
江金陵
唐大闰
Current Assignee
Shanghai Minglue Artificial Intelligence Group Co Ltd
Original Assignee
Shanghai Minglue Artificial Intelligence Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Minglue Artificial Intelligence Group Co Ltd
Priority to CN202010795359.8A
Publication of CN112070656A
Application granted
Publication of CN112070656B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/04 - Context-preserving transformations, e.g. by using an importance map
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G06T7/187 - Segmentation; Edge detection involving region growing, region merging or connected component labelling
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10024 - Color image
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method and a device for modifying frame data. The method comprises: acquiring first frame data to be processed and performing semantic segmentation on it to obtain second frame data; filling the second frame data into a palette and, according to input information, changing the background color of the non-reserved area in the second frame data to first colors, that is, the non-reserved area is painted according to the user's input, and one or more first colors can be found through a preset mapping relation; then displaying each first-color region as its corresponding object and restoring the reserved area to the object it showed before semantic segmentation, forming fourth frame data.

Description

Frame data modification method and device
Technical Field
The present application relates to the field of computers, and in particular, to a method and apparatus for modifying frame data.
Background
In the related art, some scenarios require advertisement images to be generated automatically, for example an advertisement for an automobile brand whose goal is to display a car in a freely chosen scene. The most basic approach is to photograph the car in a real-world scene, or to photograph the car and then manually draw the scene around it according to one's own ideas.
The above approach is clearly inefficient and out of step with the current state of the art. In the field of image processing, a picture generation technique based on generative adversarial networks can produce a detailed picture from a rough drawing. As shown in fig. 1, given a sketch, a model automatically generates a picture of a vehicle and scene. With this approach, however, it is difficult to guarantee the accuracy of the final picture; the result may well show a different model or brand of automobile.
The related art also provides a technical scheme in which image processing is realized with a neural-network semantic segmentation model: as shown in fig. 2, given the image on the left, semantic segmentation automatically produces the image on the right. However, this cannot generate a detailed picture of a desired scene, nor can the background be designed freely.
No effective solution has yet been proposed for the problems of low efficiency and low flexibility in the related-art schemes for adjusting images.
Disclosure of Invention
The application mainly aims to provide a method and a device for modifying frame data, which are used for solving the problems of low efficiency and low flexibility of a scheme for adjusting images in the related technology.
In order to achieve the above object, according to one aspect of the present application, there is provided a method of modifying frame data. The method comprises the following steps: acquiring first frame data, and performing semantic segmentation on the first frame data to obtain second frame data; filling the second frame data into a palette, changing the background color of a non-reserved area in the second frame data into one or more first colors according to input information, and forming third frame data, wherein the mapping relations between different first colors and different objects are preset; and, based on the mapping relation, displaying each region presenting a first color in the third frame data as its corresponding object, and restoring the reserved area to the object before semantic segmentation, forming fourth frame data.
According to another embodiment of the present application, there is also provided a frame data modification apparatus including: the acquisition module is used for acquiring first frame data, and performing semantic segmentation on the first frame data to obtain second frame data; the adjusting module is used for filling the second frame data into a palette, changing the background color of the unreserved region in the second frame data into one or more first colors according to input information, and forming third frame data, wherein the mapping relation between different first colors and different objects is preset; and the restoration module is used for displaying the area presenting the first color in the third frame data as a corresponding object based on the mapping relation, restoring the reserved area as an object before semantic segmentation, and forming fourth frame data.
According to another embodiment of the present application, there is also provided a computer-readable storage medium (or nonvolatile storage medium) including a stored program, wherein, when run, the program controls the device on which the storage medium is located to perform the method of modifying frame data according to any one of the above embodiments.
According to another embodiment of the present application, there is also provided a processor for running a program, wherein, when run, the program performs the method of modifying frame data according to any one of the above embodiments.
According to the application, the first frame data to be processed is acquired and semantically segmented to obtain the second frame data; the second frame data is filled into a palette, and the background color of the non-reserved area in the second frame data is changed to the first colors according to the input information. In other words, the non-reserved area is painted according to the user's input, and one or more first colors may be used. The object corresponding to each first color is then found from the mapping relation, each first-color region is displayed as its corresponding object, and the reserved area is restored to the object before semantic segmentation, forming the fourth frame data. Background replacement of frame data is thus realized efficiently and flexibly, solving the problems of low efficiency and low flexibility of the related-art schemes for adjusting images.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. In the drawings:
fig. 1 is a picture schematic diagram of a picture generation technique based on generation of a countermeasure network according to the related art;
FIG. 2 is a schematic diagram of implementing picture processing based on a semantic segmentation model according to the related art;
FIG. 3 is a flow chart of a method of modifying frame data according to an embodiment of the application;
FIG. 4 is a schematic illustration of an initial input image according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an image after semantic segmentation processing according to an embodiment of the present application;
FIG. 6 is a schematic diagram of selected reserved areas according to an embodiment of the present application;
FIG. 7 is a schematic illustration of a manually drawn picture according to an embodiment of the application;
FIG. 8 is a schematic diagram of a final generated image according to an embodiment of the application;
fig. 9 is a schematic diagram of a modification apparatus of frame data according to an embodiment of the present application.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the application herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The application aims to solve the following problem: for a specified object, the object is cut out of a real-scene picture, a new simple background picture is drawn, and a picture of a real scene is finally generated automatically from the simple background picture, while the specified object in the picture is kept unchanged. An optional application scenario: when producing an advertisement film, the same automobile needs to be displayed in different scenes, but placing the automobile in various real environments is costly. With the scheme of the application, the automobile can be cut out and different environment backgrounds filled in manually, achieving the purpose of producing the advertisement film.
For convenience of description, the following will describe some terms or terminology involved in the embodiments of the present application:
According to an embodiment of the present application, there is provided a method of modifying frame data; fig. 3 is a flowchart of the method. As shown in fig. 3, the method includes the following steps:
step S301, acquiring first frame data, and performing semantic segmentation on the first frame data to obtain second frame data;
the first frame of data is to-be-processed data, namely, an input picture containing a specified object, such as a picture of an automobile in a lawn, as shown in fig. 4, and a semantically-segmented image is shown in fig. 5. The first frame data may be processed using a semantic segmentation model in the related art, such as deeplabv3.
Step S302, filling the second frame data into a palette, changing the background color of a non-reserved area in the second frame data into one or more first colors according to input information, and forming third frame data, wherein the mapping relation between different first colors and different objects is preset;
the manual selection is set as the semantic content of the reserved area, the selected mode is directly selected by using a mouse, meanwhile, the content of the unselected area is converted into the background, as shown in fig. 6, the manual selection process is directly clicked by using the mouse, the optional flow is that the mouse clicks the image area, the algorithm obtains the abscissa of the mouse click, the area with the same color as the selected area of the mouse in the image is obtained as the reserved area, and the rest area is converted into the background color.
A palette is created for use by both the semantic segmentation model and the picture generation model, i.e., each object is represented by one color, such as sky blue, vehicles blue, trees green, and so on. The palette uses colors in RGB format, and to ensure the effect of the model the number of objects generally does not exceed one hundred. The drawing process may use a drawing board in the browser. The initial drawing is shown in fig. 7.
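The palette fill can be illustrated as below. The specific colors and object names in `PALETTE` are hypothetical; the patent only specifies that each object is represented by one RGB color:

```python
import numpy as np

# Hypothetical palette shared by the segmentation and generation models;
# the patent does not give concrete values, only "one RGB color per object".
PALETTE = {
    "sky":     (0, 0, 255),
    "vehicle": (0, 0, 128),
    "tree":    (0, 255, 0),
}

def fill_palette(class_map, class_names):
    """Turn a 2-D map of class indices (semantic segmentation output)
    into an RGB image by painting each class with its palette color."""
    h, w = class_map.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for idx, name in enumerate(class_names):
        rgb[class_map == idx] = PALETTE[name]
    return rgb
```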
Optionally, after filling the second frame data into the palette: first input information is received, the reserved area of the second frame data is determined according to the first input information, and the reserved area is set to a second color; then second input information is received, and the non-reserved area is set to one or more first colors according to the second input information, forming the third frame data.
Optionally, after the third frame data is formed: first pixel points whose red-green-blue (RGB) values do not correspond to any of the one or more first colors are detected in the third frame data; the absolute-value distance between the RGB value of each first pixel point and the RGB value of each first color is calculated and recorded as an absolute-distance set; and the RGB value of the first pixel point is adjusted to the RGB value of the first color corresponding to the minimum value in the absolute-distance set.
The first pixel points may be a subset of the pixels of the third frame data selected by a rule, or all of its pixels, in which case the absolute distances are calculated in a single pass over all pixels.
Optionally, there is an optimization step for manual drawing on the drawing board. When the palette is used at the front end of the interface, the colors of a manually drawn picture are not "pure". For example, a yellow region should be entirely pure yellow (pure yellow meaning the RGB value in the custom palette, e.g. pure yellow is "255,255,0"), but in practice the region may be mostly yellow while some places are not exactly yellow (yellow regions whose RGB does not equal "255,255,0"). Simply stated, after the canvas is drawn, some areas are actually "255,255,1" or "255,254,0" rather than pure yellow.
Therefore, further optimization is needed. An optional scheme: for each point in the picture, instead of keeping its original RGB value, calculate the absolute-value distance to the RGB value of each object in the palette, and then replace the point with the nearest object color in the palette. For example, suppose there are only two objects on the drawing board, object 1: "255,255,1" and object 2: "255,255,100", and a pixel point a is input as "255,255,2", which matches neither object. Since
Distance(a, object 1) = |255-255| + |255-255| + |2-1| = 1
Distance(a, object 2) = |255-255| + |255-255| + |2-100| = 98
Distance(a, object 1) < Distance(a, object 2), this pixel is replaced with "255,255,1". With this scheme the colors of the third frame data are made "pure".
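The nearest-color optimization can be sketched as follows. The vectorized NumPy form is an assumption; the patent only specifies the per-channel absolute-value distance and the minimum-distance replacement:

```python
import numpy as np

def snap_to_palette(img, palette_colors):
    """Replace each pixel with the palette color at minimum absolute-value
    (per-channel L1) distance, making the drawn colors 'pure'."""
    pixels = img.reshape(-1, 1, 3).astype(np.int32)
    palette = np.asarray(palette_colors, dtype=np.int32).reshape(1, -1, 3)
    # Sum of per-channel absolute differences: the absolute-distance set.
    dist = np.abs(pixels - palette).sum(axis=-1)
    nearest = dist.argmin(axis=1)          # index of the closest palette color
    snapped = np.asarray(palette_colors, dtype=np.uint8)[nearest]
    return snapped.reshape(img.shape)
```

With the worked example above, a pixel "255,255,2" snaps to object 1 ("255,255,1"), since its distance is 1 versus 98 for object 2.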
Step S303, displaying the region presenting the first color in the third frame data as a corresponding object based on the mapping relationship, and restoring the reserved region as an object before semantic segmentation to form fourth frame data.
Optionally, restoring the reserved area to the object before semantic segmentation includes: acquiring the original object image corresponding to the reserved area in the first frame data; and replacing the second color of the reserved area in the third frame data with the original object image.
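The restoration step can be sketched as a mask-based composite. The mask formulation is an assumption; the patent only states that the original object image replaces the second-color region:

```python
import numpy as np

def restore_reserved_area(generated, original, reserved_mask):
    """Paste the original (pre-segmentation) object pixels back over the
    generated picture wherever the reserved-area mask is True."""
    out = generated.copy()
    out[reserved_mask] = original[reserved_mask]
    return out
```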
Optionally, the performing semantic segmentation on the first frame data to obtain second frame data includes: and performing semantic segmentation on the first frame data by using a semantic segmentation model, and displaying second frame data after the semantic segmentation. Based on the mapping relation, displaying the region presenting the first color in the third frame data as a corresponding object includes: and displaying the area presenting the first color in the third frame data as a corresponding object by using an image generation model.
Based on the result of the manual drawing, i.e., the image of fig. 7, a picture of a real scene is generated with the image generation model. At the same time, the object whose semantic content was selected in the semantic selection module of the original picture replaces the corresponding region of the generated picture, yielding the final generation result shown in fig. 8. In the generated picture, the vehicle is the vehicle from the input picture, while the grassland, sky, mountains and the like are content produced by the picture generation model. Many different image generation models can be used, for example pix2pixHD.
By adopting the above scheme, the first frame data to be processed is acquired and semantically segmented to obtain the second frame data; the second frame data is filled into a palette, and the background color of the non-reserved area is changed to the first colors according to the input information, that is, the non-reserved area is painted according to the user's input, and one or more first colors may be used. The object corresponding to each first color is then found from the mapping relation, each first-color region is displayed as its corresponding object, and the reserved area is restored to the object before semantic segmentation, forming the fourth frame data. Background replacement of frame data is thus realized efficiently and flexibly, solving the problems of low efficiency and low flexibility of the related-art schemes for adjusting images.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order other than the one illustrated here.
The embodiment of the application also provides a device for modifying frame data, and the device for modifying frame data in the embodiment of the application can be used for executing the method for modifying frame data. The following describes a modification apparatus for frame data provided in an embodiment of the present application.
Fig. 9 is a schematic diagram of a modification apparatus of frame data according to an embodiment of the present application. As shown in fig. 9, the apparatus includes:
the obtaining module 92 is configured to obtain first frame data, and perform semantic segmentation on the first frame data to obtain second frame data;
the adjustment module 94 is configured to fill the second frame data into a palette, change a background color of a non-reserved area in the second frame data into one or more first colors according to input information, and form third frame data, where mapping relationships between different first colors and different objects are preset;
and a restoration module 96, configured to display, based on the mapping relationship, an area in the third frame data in which the first color is presented as a corresponding object, restore the reserved area as an object before semantic segmentation, and form fourth frame data.
By adopting the above scheme, the first frame data to be processed is acquired and semantically segmented to obtain the second frame data; the second frame data is filled into a palette, and the background color of the non-reserved area is changed to the first colors according to the input information, that is, the non-reserved area is painted according to the user's input, and one or more first colors may be used. The object corresponding to each first color is then found from the mapping relation, each first-color region is displayed as its corresponding object, and the reserved area is restored to the object before semantic segmentation, forming the fourth frame data. Background replacement of frame data is thus realized efficiently and flexibly, solving the problems of low efficiency and low flexibility of the related-art schemes for adjusting images.
Optionally, the adjustment module is further configured to receive first input information, determine the reserved area of the second frame data according to the first input information, and set the reserved area to a second color; and to receive second input information and set the non-reserved area to one or more first colors according to the second input information, forming the third frame data.
Optionally, after the third frame data is formed, the adjustment module is further configured to detect first pixel points in the third frame data whose RGB values do not correspond to any of the one or more first colors; to calculate the absolute-value distance between the RGB value of each first pixel point and the RGB value of each first color, recorded as an absolute-distance set; and to adjust the RGB value of the first pixel point to the RGB value of the first color corresponding to the minimum value in the absolute-distance set.
The frame data modification device includes a processor and a memory; the acquisition module 92, the adjustment module 94, the restoration module 96 and so on are stored in the memory as program units, and the processor executes these program units to implement the corresponding functions.
The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory. The kernel can be provided with one or more than one, and the background replacement of the frame data is realized efficiently and flexibly by adjusting the kernel parameters.
The memory may include volatile memory, random access memory (RAM), and/or nonvolatile memory such as read-only memory (ROM) or flash memory (flash RAM), among other forms of computer-readable media; the memory includes at least one memory chip.
The embodiment of the application provides a storage medium on which a program is stored which, when executed by a processor, implements a method of modifying frame data.
The embodiment of the application provides a processor for running a program, wherein, when run, the program executes the method of modifying frame data.
The embodiment of the application provides equipment, which comprises a processor, a memory and a program stored in the memory and capable of running on the processor, wherein the processor realizes the following steps when executing the program:
acquiring first frame data, and performing semantic segmentation on the first frame data to obtain second frame data; filling the second frame data into a palette, changing the background color of a non-reserved area in the second frame data into one or more first colors according to input information, and forming third frame data, wherein the mapping relation between different first colors and different objects is preset; and based on the mapping relation, displaying the region presenting the first color in the third frame data as a corresponding object, and restoring the reserved region as an object before semantic segmentation to form fourth frame data.
Optionally, filling the second frame data into a palette, changing the background color of the non-reserved area in the second frame data to one or more first colors according to input information, and forming third frame data includes: receiving first input information, determining the reserved area of the second frame data according to the first input information, and setting the reserved area to a second color; and receiving second input information, and setting the non-reserved area to the one or more first colors according to the second input information to form the third frame data.
Optionally, after the second input information is received, the non-reserved area is set to the one or more first colors according to it, and the third frame data is formed: first pixel points whose RGB values do not correspond to any of the one or more first colors are detected in the third frame data; the absolute-value distance between the RGB value of each first pixel point and the RGB value of each first color is calculated and recorded as an absolute-distance set; and the RGB value of the first pixel point is adjusted to the RGB value of the first color corresponding to the minimum value in the absolute-distance set.
Optionally, the restoring the reserved area to the object before semantic segmentation includes: acquiring an original object image corresponding to the reserved area in the first frame data; replacing the second color of the reserved area in the third frame data with the original object image.
Optionally, the performing semantic segmentation on the first frame data to obtain second frame data includes: performing semantic segmentation on the first frame data by using a semantic segmentation model, and displaying second frame data after the semantic segmentation; the displaying the region presenting the first color in the third frame data as a corresponding object based on the mapping relation includes: and displaying the area presenting the first color in the third frame data as a corresponding object by using an image generation model.
The technical scheme of the application can be operated on a server, a PC, a PAD, a mobile phone and the like.
The application also provides a computer program product which, when executed on a data processing device, is adapted to execute a program initialized with the following method steps:
acquiring first frame data, and performing semantic segmentation on the first frame data to obtain second frame data; filling the second frame data into a palette, changing the background color of a non-reserved area in the second frame data into one or more first colors according to input information, and forming third frame data, wherein the mapping relation between different first colors and different objects is preset; and based on the mapping relation, displaying the region presenting the first color in the third frame data as a corresponding object, and restoring the reserved region as an object before semantic segmentation to form fourth frame data.
Optionally, filling the second frame data into a palette, changing the background color of the non-reserved area in the second frame data to one or more first colors according to input information, and forming third frame data includes: receiving first input information, determining the reserved area of the second frame data according to the first input information, and setting the reserved area to a second color; and receiving second input information, and setting the non-reserved area to the one or more first colors according to the second input information to form the third frame data.
Optionally, after the second input information is received, the non-reserved area is set to the one or more first colors according to it, and the third frame data is formed: first pixel points whose RGB values do not correspond to any of the one or more first colors are detected in the third frame data; the absolute-value distance between the RGB value of each first pixel point and the RGB value of each first color is calculated and recorded as an absolute-distance set; and the RGB value of the first pixel point is adjusted to the RGB value of the first color corresponding to the minimum value in the absolute-distance set.
Optionally, restoring the reserved area to the object before semantic segmentation includes: acquiring the original object image corresponding to the reserved area in the first frame data; and replacing the second color of the reserved area in the third frame data with the original object image.
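Restoring the reserved area then reduces to a masked copy from the first frame data: every pixel still carrying the second color is overwritten with the original pixel at the same position. A sketch under the same illustrative second color as before (an assumption, not a value fixed by the patent):

```python
import numpy as np

SECOND_COLOR = np.array([255, 0, 255], dtype=np.uint8)  # assumed marker color

def restore_reserved_area(third_frame, first_frame):
    """Replace every second-color pixel in the third frame with the
    corresponding original pixel from the first frame."""
    mask = np.all(third_frame == SECOND_COLOR, axis=-1)  # reserved-area mask
    out = third_frame.copy()
    out[mask] = first_frame[mask]                        # masked copy-back
    return out
```

Because the reserved area is the only region painted with the second color, the mask recovers exactly the region to restore without needing the first input information again.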
Optionally, performing semantic segmentation on the first frame data to obtain the second frame data includes: performing semantic segmentation on the first frame data using a semantic segmentation model, and displaying the second frame data after semantic segmentation. Displaying the region presenting the first color in the third frame data as the corresponding object based on the mapping relation includes: displaying the region presenting the first color in the third frame data as the corresponding object using an image generation model.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (6)

1. A method of modifying frame data, comprising:
acquiring first frame data, and performing semantic segmentation on the first frame data to obtain second frame data;
filling the second frame data into a palette, changing the background color of a non-reserved area in the second frame data into one or more first colors according to input information, and forming third frame data, wherein the mapping relation between different first colors and different objects is preset;
based on the mapping relation, using an image generation model to display the region presenting the first color in the third frame data as the corresponding object, and restoring the reserved area to the object before semantic segmentation, to form fourth frame data;
wherein filling the second frame data into a palette, changing the background color of the non-reserved area in the second frame data to one or more first colors according to input information, and forming the third frame data comprises:
receiving first input information, determining the reserved area of the second frame data according to the first input information, and setting the reserved area to a second color;
receiving second input information and, according to the second input information, setting the area of the second frame data that is not the reserved area to the one or more first colors to form the third frame data;
wherein, after receiving the second input information, setting the non-reserved area to the one or more first colors according to the second input information, and forming the third frame data, the method further comprises:
detecting first pixel points in the third frame data whose red, green and blue (RGB) values do not correspond to any of the one or more first colors;
calculating the absolute-value distance between the RGB value of the first pixel point and the RGB value of each first color, and recording the distances as an absolute-value distance set; and
adjusting the RGB value of the first pixel point to the first-color RGB value corresponding to the minimum value in the absolute-value distance set.
2. The method of claim 1, wherein restoring the reserved area to the object before semantic segmentation comprises:
acquiring an original object image corresponding to the reserved area in the first frame data;
replacing the second color of the reserved area in the third frame data with the original object image.
3. The method of claim 1, wherein
performing semantic segmentation on the first frame data to obtain the second frame data comprises: performing semantic segmentation on the first frame data using a semantic segmentation model, and displaying the second frame data after semantic segmentation.
4. A frame data modification apparatus, comprising:
the acquisition module is used for acquiring first frame data, and performing semantic segmentation on the first frame data to obtain second frame data;
the adjusting module is used for filling the second frame data into a palette, and changing the background color of the non-reserved area in the second frame data to one or more first colors according to input information to form third frame data, wherein a mapping relation between different first colors and different objects is preset;
the restoration module is used for displaying the region presenting the first color in the third frame data as the corresponding object by using an image generation model based on the mapping relation, and restoring the reserved area to the object before semantic segmentation to form fourth frame data;
the adjusting module is further configured to receive first input information, determine the reserved area of the second frame data according to the first input information, and set the reserved area to a second color;
and to receive second input information and, according to the second input information, set the area of the second frame data that is not the reserved area to the one or more first colors to form the third frame data;
wherein, after the third frame data is formed, the adjusting module is further configured to detect first pixel points in the third frame data whose RGB values do not correspond to any of the one or more first colors;
to calculate the absolute-value distance between the RGB value of the first pixel point and the RGB value of each first color, and record the distances as an absolute-value distance set;
and to adjust the RGB value of the first pixel point to the first-color RGB value corresponding to the minimum value in the absolute-value distance set.
5. A "computer-readable storage medium" or a "nonvolatile storage medium", characterized in that the "computer-readable storage medium" or the "nonvolatile storage medium" includes a stored program, wherein the program, when run, controls the apparatus in which the "computer-readable storage medium" or the "nonvolatile storage medium" is located to execute the method of modifying frame data according to any one of claims 1 to 3.
6. A processor for running a program, wherein, when run, the program performs the method of modifying frame data according to any one of claims 1 to 3.
CN202010795359.8A 2020-08-10 2020-08-10 Frame data modification method and device Active CN112070656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010795359.8A CN112070656B (en) 2020-08-10 2020-08-10 Frame data modification method and device


Publications (2)

Publication Number Publication Date
CN112070656A CN112070656A (en) 2020-12-11
CN112070656B true CN112070656B (en) 2023-08-25

Family

ID=73661311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010795359.8A Active CN112070656B (en) 2020-08-10 2020-08-10 Frame data modification method and device

Country Status (1)

Country Link
CN (1) CN112070656B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004229261A (en) * 2002-11-27 2004-08-12 Canon Inc Image-compressing method, image-compressing device, program, and recording media
CN101777180A (en) * 2009-12-23 2010-07-14 中国科学院自动化研究所 Complex background real-time alternating method based on background modeling and energy minimization
WO2018049084A1 (en) * 2016-09-07 2018-03-15 Trustees Of Tufts College Methods and systems for human imperceptible computerized color transfer
CN109961453A (en) * 2018-10-15 2019-07-02 华为技术有限公司 A kind of image processing method, device and equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on Silhouette Stylization Algorithms for Images"; Li Henan; China Master's Theses Full-text Database (Information Science and Technology); pp. I138-1420 *


Similar Documents

Publication Publication Date Title
US11663733B2 (en) Depth determination for images captured with a moving camera and representing moving features
US20180308225A1 (en) Systems and techniques for automatic image haze removal across multiple video frames
CN110392904B (en) Method for dynamic image color remapping using alpha blending
CN111489322B (en) Method and device for adding sky filter to static picture
CN111970503B (en) Three-dimensional method, device and equipment for two-dimensional image and computer readable storage medium
CN111312141B (en) Color gamut adjusting method and device
CN108446089B (en) Data display method and device and display
CN110363837B (en) Method and device for processing texture image in game, electronic equipment and storage medium
US8908964B2 (en) Color correction for digital images
US20130182943A1 (en) Systems and methods for depth map generation
CN108961268B (en) Saliency map calculation method and related device
CN112396610A (en) Image processing method, computer equipment and storage medium
US11468548B2 (en) Detail reconstruction for SDR-HDR conversion
CN112070656B (en) Frame data modification method and device
US9460544B2 (en) Device, method and computer program for generating a synthesized image from input images representing differing views
CN111090384B (en) Soft keyboard display method and device
CN112686939A (en) Depth image rendering method, device and equipment and computer readable storage medium
US20120070080A1 (en) Color correction for digital images
CN113935891B (en) Pixel-style scene rendering method, device and storage medium
CN109993687B (en) Image information processing method and device
CN113362351A (en) Image processing method and device, electronic equipment and storage medium
CN114820834A (en) Effect processing method, device, equipment and storage medium
CN111492400A (en) Image processing apparatus, image processing method, and computer-readable recording medium
CN111402348A (en) Method and device for forming illumination effect and rendering engine
CN110928542B (en) Webpage adaptation method, device, system and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant