CN116580126A - Custom lamp effect configuration method and system based on key frame - Google Patents

Custom lamp effect configuration method and system based on key frame

Info

Publication number
CN116580126A
CN116580126A (application CN202310592906.6A)
Authority
CN
China
Prior art keywords
hand
curve image
image
drawn
drawn curve
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310592906.6A
Other languages
Chinese (zh)
Other versions
CN116580126B (en)
Inventor
任天游
赵春生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Lifesmart Technology Co ltd
Original Assignee
Hangzhou Lifesmart Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Lifesmart Technology Co ltd filed Critical Hangzhou Lifesmart Technology Co ltd
Priority to CN202310592906.6A priority Critical patent/CN116580126B/en
Publication of CN116580126A publication Critical patent/CN116580126A/en
Application granted granted Critical
Publication of CN116580126B publication Critical patent/CN116580126B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 — 2D [Two Dimensional] image generation
    • G06T11/20 — Drawing from basic elements, e.g. lines or circles
    • G06T11/203 — Drawing of straight lines or curves
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 — Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 — Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

A keyframe-based custom light effect configuration method and system: a light effect hand-drawn curve image input by a user is accepted; the start point and end point of the curve in the light effect hand-drawn curve image are taken as a first keyframe and a second keyframe; and an interpolation change rate from the first keyframe to the second keyframe is determined based on the shape of the curve in the light effect hand-drawn curve image. In this way, the hand-drawn graph input by the user can be optimized according to the user's intention, smoothing out drawing artifacts, so as to improve the final custom light effect.

Description

Custom lamp effect configuration method and system based on key frame
Technical Field
The present application relates to the technical field of intelligent configuration, and in particular to a keyframe-based custom light effect configuration method and system.
Background
In recent years, colored decorative lights have become popular on the market, offering various dynamic lighting effects; in particular, some smart decorative colored lights can receive light-control commands sent from an app. However, existing decorative colored lights support only a limited set of light effect types and are not flexible enough when a user wants to customize a light effect.
An optimized custom light effect configuration scheme is therefore desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the present application provide a keyframe-based custom light effect configuration method and system, which accept a light effect hand-drawn curve image input by a user; take the start point and end point of the curve in the light effect hand-drawn curve image as a first keyframe and a second keyframe; and determine an interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the light effect hand-drawn curve image. In this way, the hand-drawn graph input by the user can be optimized according to the user's intention, so as to improve the final custom light effect.
In a first aspect, a keyframe-based custom light effect configuration method is provided, including: receiving a light effect hand-drawn curve image input by a user; taking the start point and end point of the curve in the light effect hand-drawn curve image as a first keyframe and a second keyframe; and determining an interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the light effect hand-drawn curve image.
In the above keyframe-based custom light effect configuration method, determining the interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the light effect hand-drawn curve image includes: performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn curve image; performing image blocking on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks; passing each hand-drawn curve image block in the sequence through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices; arranging the plurality of hand-drawn curve image block feature matrices according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix; passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix; performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix; passing the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and determining the interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the optimized hand-drawn curve image.
In the above keyframe-based custom light effect configuration method, performing image noise reduction on the light effect hand-drawn curve image to obtain the noise-reduced hand-drawn curve image includes: performing bilinear filtering on the light effect hand-drawn curve image to obtain the noise-reduced hand-drawn curve image.
In the above keyframe-based custom light effect configuration method, performing image blocking on the noise-reduced hand-drawn curve image to obtain the sequence of hand-drawn curve image blocks includes: uniformly partitioning the noise-reduced hand-drawn curve image into image blocks to obtain the sequence of hand-drawn curve image blocks, where every hand-drawn curve image block in the sequence has the same size.
In the above keyframe-based custom light effect configuration method, passing each hand-drawn curve image block in the sequence through the shallow feature extractor based on a convolutional neural network model to obtain the plurality of hand-drawn curve image block feature matrices includes: using each layer of the shallow feature extractor to perform, in the forward pass, convolution, pooling, and nonlinear activation on its input data, with the shallow layers of the extractor outputting the plurality of hand-drawn curve image block feature matrices.
In the above keyframe-based custom light effect configuration method, the shallow feature extractor based on a convolutional neural network model comprises 3 to 5 convolutional layers.
In the above keyframe-based custom light effect configuration method, passing the hand-drawn curve image global feature matrix through the bidirectional attention mechanism to obtain the optimized hand-drawn curve image global feature matrix includes: pooling the hand-drawn curve image global feature matrix along the horizontal direction and the vertical direction to obtain a first pooling vector and a second pooling vector; performing association coding on the first pooling vector and the second pooling vector to obtain a bidirectional association matrix; passing the bidirectional association matrix through a Sigmoid activation function to obtain a bidirectional association weight matrix; and computing the point-by-point multiplication between the bidirectional association weight matrix and the hand-drawn curve image global feature matrix to obtain the optimized hand-drawn curve image global feature matrix.
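The four steps above can be sketched as follows. Mean pooling and an outer product for the "association coding" are assumptions of this sketch, since the text does not pin down either operation:

```python
import numpy as np

def bidirectional_attention(F):
    """Bidirectional attention over a 2D feature matrix F: pool along both
    spatial directions, combine the pooling vectors into an association
    matrix, squash it with a Sigmoid, and reweight F point by point."""
    first_pool = F.mean(axis=1)                  # pooling along the horizontal direction
    second_pool = F.mean(axis=0)                 # pooling along the vertical direction
    assoc = np.outer(first_pool, second_pool)    # bidirectional association matrix
    weights = 1.0 / (1.0 + np.exp(-assoc))       # Sigmoid -> association weight matrix
    return weights * F                           # point-by-point multiplication
```

Because the Sigmoid weights lie in (0, 1), the output preserves the shape of the input matrix while attenuating positions with weak row/column association.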
In the above keyframe-based custom light effect configuration method, performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain the re-optimized hand-drawn curve image global feature matrix includes applying an optimization formula in which μ and σ are the mean and standard deviation of the set of feature values at each position of the optimized hand-drawn curve image global feature matrix, f_i is the feature value at the i-th position of the optimized hand-drawn curve image global feature matrix, and f̂_i is the feature value at the i-th position of the re-optimized hand-drawn curve image global feature matrix.
In the above keyframe-based custom light effect configuration method, the decoder includes a plurality of deconvolution layers.
In a second aspect, a keyframe-based custom light effect configuration system is provided, including: an image receiving module for receiving a light effect hand-drawn curve image input by a user; a keyframe generation module for taking the start point and end point of the curve in the light effect hand-drawn curve image as a first keyframe and a second keyframe; and an interpolation change rate generation module for determining an interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the light effect hand-drawn curve image.
Compared with the prior art, the keyframe-based custom light effect configuration method and system provided by the present application accept a light effect hand-drawn curve image input by a user; take the start point and end point of the curve in the light effect hand-drawn curve image as a first keyframe and a second keyframe; and determine an interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the light effect hand-drawn curve image. In this way, the hand-drawn graph input by the user can be optimized according to the user's intention, so as to improve the final custom light effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a graph of a linear interpolation change rate according to an embodiment of the present application.
Fig. 2 is a graph of a curved interpolation change rate according to an embodiment of the present application.
Fig. 3 is a graph of a polyline interpolation change rate according to an embodiment of the present application.
Fig. 4 is a graph of a custom interpolation change rate according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a scenario of the keyframe-based custom light effect configuration method according to an embodiment of the present application.
Fig. 6 is a flowchart of the keyframe-based custom light effect configuration method according to an embodiment of the present application.
Fig. 7 is a flowchart of the sub-steps of step 130 in the keyframe-based custom light effect configuration method according to an embodiment of the present application.
Fig. 8 is a schematic diagram of the architecture of step 130 in the keyframe-based custom light effect configuration method according to an embodiment of the present application.
Fig. 9 is a flowchart of the sub-steps of step 135 in the keyframe-based custom light effect configuration method according to an embodiment of the present application.
Fig. 10 is a block diagram of the keyframe-based custom light effect configuration system according to an embodiment of the present application.
Detailed Description
The following description of the technical solutions according to the embodiments of the present application will be given with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Unless defined otherwise, all technical and scientific terms used in the embodiments of the application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In describing embodiments of the present application, unless otherwise indicated and limited thereto, the term "connected" should be construed broadly, for example, it may be an electrical connection, or may be a communication between two elements, or may be a direct connection, or may be an indirect connection via an intermediate medium, and it will be understood by those skilled in the art that the specific meaning of the term may be interpreted according to circumstances.
It should be noted that the terms "first", "second" and "third" in the embodiments of the present application are used merely to distinguish similar objects and do not imply a specific order; where permitted, "first", "second" and "third" may be interchanged so that the embodiments of the application described herein can be implemented in sequences other than those illustrated or described herein.
Colored decorative lights popular on the market offer various dynamic changing effects. A smart decorative colored light can receive light-control commands sent to its light strip through an app, allowing the user to define dynamic effects. The control path is: the app issues a command, the light stores the command, the light's MCU executes the command, the LED driver is driven, and the LEDs light up. For defining colored light effect commands, the industry uses keyframe technology, which makes it convenient for users to set custom light effects.
For customizing colored light effects, keyframe technology is common in the industry and makes it convenient for users to set custom light effects. As those of ordinary skill in the art will appreciate, for light to appear flicker-free to the naked eye, at least 24 frames per second are required, relying on persistence of vision. Decorative colored lights refresh at a higher rate; assume they run at 100 frames per second. The light effect of a given frame can be decomposed into hue (H), brightness (B), and saturation (S). If the user wants the light effect to change from H1B1S1 (the first keyframe) to H2B2S2 (the second keyframe) within one second, there is no need to set all 100 frames making up that second, nor the HSB parameters of every frame; it suffices to set the HSB at second 0 and at second 1, and the MCU interpolates the rest itself.
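The keyframe interpolation described above can be sketched as follows. This is only an illustration of the idea, not the MCU's actual implementation; the 100-frames-per-second rate comes from the example above, while the function name, the linear rate, and the concrete HSB values are assumptions:

```python
def interpolate_hsb(kf1, kf2, frames=100):
    """Linearly interpolate an HSB triple from the first keyframe to the
    second over `frames` steps, as the MCU would between second 0 and
    second 1. Only the two keyframes need to be set by the user."""
    seq = []
    for i in range(frames + 1):
        t = i / frames  # normalized time in [0, 1]
        seq.append(tuple(a + (b - a) * t for a, b in zip(kf1, kf2)))
    return seq

# From H1B1S1 = (0, 50, 100) to H2B2S2 = (120, 100, 50) in one second:
sequence = interpolate_hsb((0, 50, 100), (120, 100, 50))
```

Only the 0th-second and 1st-second HSB triples are supplied; every frame in between is computed.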
In particular, the technical solution of the present application introduces the concept of a rate of change, i.e., at what rate H1B1S1 (the first keyframe) changes into H2B2S2 (the second keyframe), together with a graphical way of setting it. Those of ordinary skill in the art will appreciate that the common interpolation method in the industry is arithmetic (equal-difference) interpolation, i.e., constant-speed change, which does not allow user customization. The technical solution of the present application offers a variety of interpolation change rates in graphical form and allows the user to customize the rate of change with a hand-drawn curve.
Specifically, among the curves provided by default in the technical solution of the present application: if the line is straight, as shown in Fig. 1, it represents arithmetic interpolation; if it is a polyline, as shown in Fig. 3, it represents a jump at a certain moment; and if it is curved, as shown in Fig. 2, it represents a more dynamic rate of change. In addition to selecting one of the curves provided by default, the user can manually draw a curve to customize the rate of change, and the system performs the interpolation operation by fitting.
Besides selecting one of the curves provided by default, the user may manually draw a curve to customize the rate of change, and the system performs the interpolation by fitting. Furthermore, from keyframe 1 to keyframe 2, the three HSB parameters follow three independent rates of change; for example, H changes uniformly, S follows a curved rate, and B jumps, as shown in Fig. 4. Further, the curve from keyframe 1 to keyframe 2 need not be monotonically increasing: in terms of value change, it may exceed the target value of keyframe 2 before returning to the target.
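A sketch of per-channel rates of change as in Fig. 4 — H uniform, S curved, B jumping — plus a curve that overshoots the keyframe-2 target before returning to it, as the paragraph above allows. The specific easing formulas are illustrative assumptions, not taken from the disclosure:

```python
import math

def ease_linear(t):
    return t  # uniform change (arithmetic interpolation)

def ease_smooth(t):
    return t * t * (3 - 2 * t)  # a smooth "curved" rate of change

def ease_jump(t, at=0.5):
    return 0.0 if t < at else 1.0  # polyline-style jump at a chosen moment

def ease_overshoot(t):
    # exceeds the target mid-way, then returns to it exactly at t = 1
    return t + 0.4 * math.sin(math.pi * t)

def value_at(t, v1, v2, ease):
    """Each of H, S and B may use its own independent rate-of-change curve."""
    return v1 + (v2 - v1) * ease(t)
```

Every easing function maps t = 1 back to the keyframe-2 value, so the channels stay synchronized at the keyframes even though they travel differently in between.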
In the present application, on the one hand, the abscissa is increasing: the change from keyframe 1 to keyframe 2 unfolds over time, and time cannot flow backwards. This constrains the drawing of the curve: there is one and only one point at any given abscissa. On the other hand, regarding the ordinate of the graph, keyframe 2's value being greater than keyframe 1's value is merely for convenience of demonstration; there is no such requirement in an actual configuration. Whether keyframe 2's value is greater than, equal to, or less than keyframe 1's value, the fitting can still follow the demonstration curve, for example by horizontally or vertically flipping the curve, or rotating it, at computation time. For example, adding 100% of full scale to the value (+360 for H, since the whole color wheel is 360; +100 for S or B) makes keyframe 2's value always greater than keyframe 1's; the value of each intermediate frame is then computed by curve fitting, and the added offset is subtracted from each value. These mechanisms are very simple to implement in software.
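The time-cannot-flow-backwards constraint — one and only one point per abscissa — can be checked directly on the raw stroke. Representing the stroke as (x, y) point pairs is an assumption of this sketch:

```python
def validate_stroke(points):
    """Accept a hand-drawn stroke only if each abscissa (time) value occurs
    at most once and the abscissas never decrease as drawn."""
    xs = [x for x, _ in points]
    return len(xs) == len(set(xs)) and xs == sorted(xs)
```

A stroke that doubles back over an earlier time value is rejected before any fitting takes place.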
The key points of the application are: 1. a curve graphically represents the way the interpolation from keyframe 1 to keyframe 2 is calculated; 2. the user is allowed to draw the curve by hand.
In particular, when generating the interpolation change rate between keyframe 1 and keyframe 2 from a user's hand-drawn curve, the input hand-drawn curve can deviate from the user's intention because of input deviations in the drawing software and because the user hesitates or lacks skill while drawing. An image processing scheme is therefore desired that optimizes the user's hand-drawn curve according to the user's intention, so as to improve the final light effect customization.
Specifically, in the technical solution of the present application, image noise reduction is first performed on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn curve image. It should be appreciated that, because of hesitation or insufficient skill while drawing the curve, the user may introduce many outlier pixels (which appear as image noise), so after the light effect hand-drawn curve image is received, it is first noise-reduced; for example, in a specific embodiment, bilinear filtering may be applied to the light effect hand-drawn curve image to achieve the noise reduction.
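A toy sketch of the noise-reduction step. The text names bilinear filtering; a plain mean (box) filter is used below as a stand-in, since any local smoothing plays the same role of suppressing stray outlier pixels:

```python
import numpy as np

def denoise(img, k=3):
    """Smooth a single-channel image with a k x k mean filter, using edge
    padding so the output keeps the input's shape. Stand-in for the
    bilinear filtering named in the text."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```

An isolated spike (one hesitant stray pixel) is spread over its neighborhood and drops to a fraction of its original amplitude, while flat regions pass through unchanged.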
Then, image blocking is performed on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks. It should be understood that, in the noise-reduced hand-drawn curve image, the curve can be regarded as a splice of multiple sub-curve segments; optimizing the segments individually and then performing overall collaborative optimization reduces the amount of data to process and the difficulty of image analysis. Therefore, in the technical solution of the present application, image blocking is performed on the noise-reduced hand-drawn curve image to obtain the sequence of hand-drawn curve image blocks; for example, in a specific example, the noise-reduced hand-drawn curve image is uniformly partitioned into image blocks to obtain the sequence.
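The uniform blocking step might look like the following, with the block size as a free parameter (the disclosure does not fix one):

```python
import numpy as np

def split_blocks(img, bh, bw):
    """Uniformly partition an image whose sides are multiples of the block
    size into a sequence of equally sized blocks, in raster order."""
    H, W = img.shape
    assert H % bh == 0 and W % bw == 0, "image must divide evenly into blocks"
    return [img[i:i + bh, j:j + bw]
            for i in range((0), H, bh) for j in range(0, W, bw)]
```

Every block in the returned sequence has the same (bh, bw) size, matching the "same size" requirement stated above.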
Next, each hand-drawn curve image block in the sequence is passed through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices. That is, in the technical solution of the present application, the convolutional neural network model is used as a feature extractor to capture the image features of the sub-curve segment in each hand-drawn curve image block. Those of ordinary skill in the art will appreciate that a convolutional neural network extracts image features as follows: shallow features capture edges, shapes, textures, and the like, while deep features are abstract features of objects, structures, and the like; as convolutional encoding deepens, the shallow features are progressively submerged or even vanish. Therefore, in the technical solution of the present application, the number of convolutional layers is strictly controlled; specifically, the convolutional neural network model comprises 3 to 5 convolutional layers, so that it can fully and accurately capture the shallow image features of the sub-curve segment in each hand-drawn curve image block.
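A minimal, dependency-light sketch of such a shallow extractor: three conv → ReLU → max-pool stages, within the 3-to-5-layer range the text prescribes. The kernel choice, the pooling, and the single-channel setup are simplifying assumptions:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D convolution of a single-channel image with one kernel."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0)  # nonlinear activation

def maxpool(x, s=2):
    H2, W2 = x.shape[0] // s, x.shape[1] // s
    return x[:H2 * s, :W2 * s].reshape(H2, s, W2, s).max(axis=(1, 3))

def shallow_extractor(block, kernels):
    """Apply conv -> ReLU -> max-pool once per kernel; with three kernels
    this stays inside the 3-to-5 convolutional-layer range, so only the
    shallow edge/shape/texture features of the sub-curve segment survive."""
    f = block
    for k in kernels:
        f = maxpool(relu(conv2d(f, k)))
    return f
```

In a real system the kernels would be learned; here fixed kernels only demonstrate the data flow from a block to its feature matrix.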
After the plurality of hand-drawn curve image block feature matrices are obtained, they are arranged according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix. That is, after the image features of each sub-curve segment are extracted, they are rearranged into the hand-drawn curve image global feature matrix according to the partition positions of the image blocks. The hand-drawn curve image global feature matrix can then be decoded by a decoder to generate an optimized hand-drawn curve image.
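Re-arranging the per-block feature matrices by block position can be sketched as follows (raster order and a rectangular grid are assumptions):

```python
import numpy as np

def assemble_global(feature_blocks, grid_rows, grid_cols):
    """Stitch per-block feature matrices (in raster order) back into one
    global feature matrix according to each block's original position."""
    rows = [np.concatenate(feature_blocks[r * grid_cols:(r + 1) * grid_cols], axis=1)
            for r in range(grid_rows)]
    return np.concatenate(rows, axis=0)
```

This is the inverse of the blocking step at the feature level: block (r, c) lands in row band r, column band c of the global matrix.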
In particular, the feature values at different positions of the hand-drawn curve image global feature matrix contribute differently, along its spatial dimension, to decoder-based generation. To fully exploit the spatial saliency of the feature distribution, in the technical solution of the present application, before the hand-drawn curve image global feature matrix is fed into the decoder, it is passed through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix. Here, the bidirectional attention module applies attention weighting along both the row-space and column-space dimensions of the feature matrix, strengthening the spatial distribution in the attention dimension and thereby improving the overall spatial distribution consistency of the hand-drawn curve image global feature matrix.
However, the overall spatial distribution consistency of the optimized hand-drawn curve image global feature matrix may in turn reduce the discrimination, in the probability density dimension, between its local distributions, affecting the accuracy of its decoding regression.
Therefore, Gaussian-probability-density manifold-surface-dimension orthogonalization is preferably performed on the optimized hand-drawn curve image global feature matrix, where μ and σ are the mean and standard deviation of the set of feature values at each position of the optimized hand-drawn curve image global feature matrix, f_i is the feature value at the i-th position of the optimized hand-drawn curve image global feature matrix, and f̂_i is the feature value at the i-th position of the re-optimized hand-drawn curve image global feature matrix.
Here, by characterizing the modulo lengths of the unit tangent vector and unit normal vector of the manifold surface with the mean and the square root of the standard deviation of the high-dimensional feature set representing that surface, the optimized hand-drawn curve image global feature matrix can be orthogonally projected, with unit modulo length, onto the tangent plane and normal plane of the manifold surface of the high-dimensional feature manifold. This reconstructs the probability density dimension of the high-dimensional features on the basic structure of the Gaussian feature manifold geometry, and the improved probability-density dimension orthogonalization increases the accuracy with which the optimized hand-drawn curve image global feature matrix is decoded by the decoder.
That is, the re-optimized hand-drawn curve image global feature matrix is then passed through a decoder to generate the optimized hand-drawn curve image. In a specific example of the present application, the decoder comprises a plurality of deconvolution layers that perform decoding generation through cascaded deconvolution operations. Finally, based on the shape of the curve in the optimized hand-drawn curve image, the interpolation change rate from the first keyframe to the second keyframe is determined.
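A single deconvolution (transposed convolution) layer of the kind such a decoder would cascade can be sketched as follows, for one channel and no padding; a real decoder stacks several such layers with learned kernels:

```python
import numpy as np

def conv_transpose2d(x, k, stride=2):
    """Minimal single-channel transposed convolution: each input value
    stamps a scaled copy of the kernel into the upsampled output grid,
    with overlapping stamps summed."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((stride * (H - 1) + kh, stride * (W - 1) + kw))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * k
    return out
```

With stride 2 and a 2 x 2 kernel the stamps tile without overlap, so a feature map doubles in each spatial dimension per layer, progressively growing the feature matrix back toward image resolution.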
Fig. 5 is a schematic diagram of a scenario of the keyframe-based custom light effect configuration method according to an embodiment of the present application. As shown in Fig. 5, in this application scenario, a light effect hand-drawn curve image (e.g., C in Fig. 5) input by a user is first accepted; the light effect hand-drawn curve image is then input to a server (e.g., S in Fig. 5) on which a keyframe-based custom light effect configuration algorithm is deployed, and the server processes the light effect hand-drawn curve image with that algorithm to determine the interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the optimized hand-drawn curve image.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
In one embodiment of the present application, Fig. 6 is a flowchart of the keyframe-based custom light effect configuration method according to an embodiment of the present application. As shown in Fig. 6, the keyframe-based custom light effect configuration method 100 according to an embodiment of the present application includes: 110, accepting a light effect hand-drawn curve image input by a user; 120, taking the start point and end point of the curve in the light effect hand-drawn curve image as a first keyframe and a second keyframe; and 130, determining an interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the light effect hand-drawn curve image.
Fig. 7 is a flowchart of the sub-steps of step 130 in the keyframe-based custom light effect configuration method according to an embodiment of the present application. As shown in Fig. 7, determining the interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the light effect hand-drawn curve image includes: 131, performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn curve image; 132, performing image blocking on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks; 133, passing each hand-drawn curve image block in the sequence through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices; 134, arranging the plurality of hand-drawn curve image block feature matrices according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix; 135, passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix; 136, performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix; 137, passing the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and 138, determining the interpolation change rate from the first keyframe to the second keyframe based on the shape of the curve in the optimized hand-drawn curve image.
Fig. 8 is a schematic diagram of the architecture of step 130 in the key frame-based custom light effect configuration method according to an embodiment of the present application. As shown in fig. 8, in the network architecture, first, image noise reduction is performed on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image; then, image blocking processing is performed on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks; then, each hand-drawn curve image block in the sequence is passed through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices; then, the plurality of hand-drawn curve image block feature matrices are arranged according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix; then, the hand-drawn curve image global feature matrix is passed through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix; then, class probability density discrimination enhancement is performed on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix; then, the re-optimized hand-drawn curve image global feature matrix is passed through a decoder to generate an optimized hand-drawn curve image; and finally, the interpolated varying curvature from the first key frame to the second key frame is determined based on the shape of the curve in the optimized hand-drawn curve image.
Specifically, in step 131, image noise reduction is performed on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image. In particular, when generating the interpolation varying curvature between key frame 1 and key frame 2 from the user's hand-drawn curve, the input curve may deviate from the user's intention because of input deviations in the drawing software and because of the user's hesitation or insufficient skill when drawing the curve. An image processing scheme is therefore desired that optimizes the user's hand-drawn curve toward the user's intention, so as to improve the final light effect customization.
Specifically, in the technical scheme of the present application, image noise reduction is first performed on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image. It should be appreciated that hesitation or insufficient skill when drawing the curve may introduce outlier pixels, which appear as image noise. Therefore, after the light effect hand-drawn curve image is received, it is first denoised; for example, in a specific embodiment, the light effect hand-drawn curve image may be bilinear filtered to achieve image noise reduction.
Performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image includes: performing bilinear filtering on the light effect hand-drawn curve image to obtain the noise-reduced hand-drawn image.
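As a rough illustration of the filtering-based noise reduction of step 131 (the embodiment names bilinear filtering; this sketch substitutes a plain 3x3 mean filter, with kernel size and border handling as assumptions):

```python
def denoise_mean3x3(img):
    """Smooth a grayscale image (list of lists of floats) with a 3x3 mean
    filter, a simple stand-in for the filtering-based noise reduction of
    step 131. Border pixels are clamped to the image edge."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc / 9.0
    return out
```

A stray outlier pixel (the "image noise" the text describes) is averaged down toward its neighborhood, while flat regions are left unchanged.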
Specifically, in step 132, image blocking processing is performed on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks. It should be understood that, in the noise-reduced hand-drawn curve image, the curve may be regarded as a concatenation of multiple sub-curve segments; optimizing these segments individually and then performing overall collaborative optimization reduces the data processing load and the difficulty of image analysis. Therefore, in the technical scheme of the present application, the noise-reduced hand-drawn curve image is divided into blocks to obtain the sequence of hand-drawn curve image blocks.
For example, in a specific example of the present application, the noise-reduced hand-drawn curve image is subjected to uniform image block segmentation to obtain the sequence of hand-drawn curve image blocks, where each hand-drawn curve image block in the sequence of hand-drawn curve image blocks has the same size.
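The uniform blocking described above can be sketched as follows (function name and row-major block ordering are assumptions):

```python
def split_into_blocks(img, bh, bw):
    """Split a grayscale image (list of lists) into a row-major sequence
    of equally sized bh x bw blocks, mirroring step 132's uniform image
    block segmentation; the image dimensions must tile evenly."""
    h, w = len(img), len(img[0])
    assert h % bh == 0 and w % bw == 0, "image must tile evenly"
    blocks = []
    for by in range(0, h, bh):
        for bx in range(0, w, bw):
            block = [row[bx:bx + bw] for row in img[by:by + bh]]
            blocks.append(block)
    return blocks
```

Every block in the returned sequence has the same bh x bw size, matching the requirement that each hand-drawn curve image block be equally sized.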
Specifically, in step 133, each hand-drawn curve image block in the sequence of hand-drawn curve image blocks is passed through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices. That is, in the technical solution of the present application, the convolutional neural network model is used as a feature extractor to capture the image features of the sub-curve segments in the hand-drawn curve image blocks.
Here, it should be appreciated by those of ordinary skill in the art that the convolutional neural network model has the following characteristic when extracting image features: shallow layers capture edges, shapes, textures, and the like, while deep layers capture abstract features of objects and structures; as convolutional encoding deepens, the shallow features are progressively diluted or even lost.
Therefore, in the technical scheme of the present application, the number of convolutional layers of the convolutional neural network model is strictly controlled. Specifically, the convolutional neural network model comprises 3-5 convolutional layers, so that it can fully and accurately capture the shallow image features of the sub-curve segments in each hand-drawn curve image block.
Wherein passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through the shallow feature extractor based on the convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices includes: using each layer of the shallow feature extractor based on the convolutional neural network model to perform convolution processing, pooling processing, and nonlinear activation processing on the input data in the forward pass of the layers, and outputting the shallow layer of the shallow feature extractor as the plurality of hand-drawn curve image block feature matrices.
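The per-layer pattern named above — convolution, pooling, and nonlinear activation in the forward pass — can be illustrated with a single hand-rolled stage. This is a toy sketch, not the patent's extractor; a real implementation would use a deep-learning framework:

```python
def conv_relu_pool(img, kernel):
    """One shallow feature-extraction stage on a grayscale image
    (list of lists): 3x3 valid convolution with ReLU activation,
    followed by 2x2 max pooling — the convolution / pooling /
    nonlinear-activation pattern described for each layer."""
    h, w = len(img), len(img[0])
    ch, cw = h - 2, w - 2  # valid convolution shrinks each side by 2
    conv = [[max(0.0, sum(img[y + i][x + j] * kernel[i][j]   # conv + ReLU
                          for i in range(3) for j in range(3)))
             for x in range(cw)] for y in range(ch)]
    pooled = [[max(conv[2 * y][2 * x], conv[2 * y][2 * x + 1],
                   conv[2 * y + 1][2 * x], conv[2 * y + 1][2 * x + 1])
               for x in range(cw // 2)] for y in range(ch // 2)]
    return pooled
```

Stacking 3-5 such stages, as the text prescribes, keeps the extractor shallow so edge and shape features of the sub-curve segments are preserved.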
A convolutional neural network (Convolutional Neural Network, CNN) is an artificial neural network widely applied in fields such as image recognition. A convolutional neural network may include an input layer, hidden layers, and an output layer, where the hidden layers may include convolutional layers, pooling layers, activation layers, fully connected layers, and the like; each layer performs its operation on the input it receives and passes the result to the next layer, and the final result is obtained after the initial input data has passed through all the layers.
By using convolution kernels as feature filters, the convolutional neural network model performs excellently at extracting local image features, and compared with traditional image feature extraction algorithms based on statistics or feature engineering, it has stronger feature extraction generalization and fitting capabilities.
Specifically, in step 134, the plurality of hand-drawn curve image block feature matrices are arranged according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix. That is, after the image features of each sub-curve segment are extracted, they are rearranged into the hand-drawn curve image global feature matrix according to the segmentation positions of the image blocks. Further, the hand-drawn curve image global feature matrix may be decoded by a decoder to generate an optimized hand-drawn curve image.
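Step 134's rearrangement of per-block feature matrices by block position can be sketched as follows (the row-major grid ordering is an assumption):

```python
def assemble_global(feature_mats, grid_rows, grid_cols):
    """Arrange per-block feature matrices (each a list of lists of the
    same size) back into one global matrix according to their block
    positions in row-major order, mirroring step 134."""
    bh, bw = len(feature_mats[0]), len(feature_mats[0][0])
    out = [[0.0] * (grid_cols * bw) for _ in range(grid_rows * bh)]
    for idx, mat in enumerate(feature_mats):
        gy, gx = divmod(idx, grid_cols)  # block's grid position
        for y in range(bh):
            for x in range(bw):
                out[gy * bh + y][gx * bw + x] = mat[y][x]
    return out
```

Because each block keeps its original position, spatial relations between neighboring sub-curve segments survive into the global feature matrix.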
Specifically, in step 135, the hand-drawn curve image global feature matrix is passed through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix. In particular, the feature values at different spatial positions of the hand-drawn curve image global feature matrix contribute differently to the decoder-based generation. To fully exploit the significance of the spatial feature distribution, in the technical scheme of the present application, before the hand-drawn curve image global feature matrix is input into the decoder, it is passed through a bidirectional attention mechanism to obtain the optimized hand-drawn curve image global feature matrix.
Here, the bidirectional attention mechanism module applies attention weighting along both the row and column dimensions of the feature matrix, strengthening the spatial distribution in the attention dimension and thereby improving the overall distribution consistency of the hand-drawn curve image global feature matrix in the spatial dimension.
Fig. 9 is a flowchart of the substeps of step 135 in the key frame-based custom light effect configuration method according to an embodiment of the present application. As shown in fig. 9, passing the hand-drawn curve image global feature matrix through the bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix includes: 1351, pooling the hand-drawn curve image global feature matrix along the horizontal direction and the vertical direction respectively to obtain a first pooled vector and a second pooled vector; 1352, performing association coding on the first pooled vector and the second pooled vector to obtain a bidirectional association matrix; 1353, inputting the bidirectional association matrix into a Sigmoid activation function to obtain a bidirectional association weight matrix; and 1354, calculating the position-wise point multiplication between the bidirectional association weight matrix and the hand-drawn curve image global feature matrix to obtain the optimized hand-drawn curve image global feature matrix.
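Substeps 1351-1354 can be sketched as below. The outer product standing in for the association coding is an assumption, since the patent does not spell out that operation:

```python
import math

def bidirectional_attention(m):
    """Sketch of substeps 1351-1354: average-pool along rows (horizontal)
    and columns (vertical), correlate the two pooled vectors into a
    bidirectional association matrix via an outer product (a simple
    stand-in for the association coding), squash with a Sigmoid to get
    weights, and reweight the input matrix position-wise."""
    h, w = len(m), len(m[0])
    row_pool = [sum(m[y]) / w for y in range(h)]                        # 1351 (horizontal)
    col_pool = [sum(m[y][x] for y in range(h)) / h for x in range(w)]   # 1351 (vertical)
    assoc = [[row_pool[y] * col_pool[x] for x in range(w)]              # 1352
             for y in range(h)]
    weights = [[1.0 / (1.0 + math.exp(-a)) for a in row]                # 1353 Sigmoid
               for row in assoc]
    return [[weights[y][x] * m[y][x] for x in range(w)]                 # 1354
            for y in range(h)]
```

Because the Sigmoid output lies strictly in (0, 1), each position of the feature matrix is attenuated in proportion to how salient its row and column are.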
The attention mechanism is a data processing method in machine learning that is widely applied in tasks such as natural language processing, image recognition, and speech recognition. On the one hand, the attention mechanism lets the network automatically learn which parts of an image or text sequence deserve attention; on the other hand, it generates a mask through neural network operations, where the values on the mask act as weights. In general, a spatial attention mechanism averages the different channels of each pixel, obtains spatial features through convolution and upsampling operations, and assigns a different weight to each pixel of the spatial feature map.
Specifically, in step 136, class probability density discrimination enhancement is performed on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix. However, the improved overall spatial distribution consistency of the optimized hand-drawn curve image global feature matrix may in turn reduce the discrimination, in the probability density dimension, between its local distributions, thereby affecting the accuracy of its decoding regression.
Thus, the optimized hand-drawn curve image global feature matrix is preferably subjected to manifold-surface dimension orthogonalization of its Gaussian probability density. Specifically, performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix includes: performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix using an optimization formula (given only as an image in the source publication) to obtain the re-optimized hand-drawn curve image global feature matrix, wherein μ and σ are the mean and standard deviation of the set of feature values of all positions in the optimized hand-drawn curve image global feature matrix, m(i,j) is the feature value at each position of the optimized hand-drawn curve image global feature matrix, and m'(i,j) is the feature value at the corresponding position of the re-optimized hand-drawn curve image global feature matrix.
Here, by characterizing the modulo lengths of the unit tangent vector and the unit normal vector of the manifold surface with the square root of the mean and the standard deviation of the high-dimensional feature set, the optimized hand-drawn curve image global feature matrix can be orthogonally projected, based on unit modulo length, onto the tangent plane and the normal plane of the manifold surface of the high-dimensional feature manifold. In this way, a dimension reconstruction of the probability density of the high-dimensional features is performed on the basic structure of the Gaussian feature manifold geometry, and the improved dimensional orthogonalization of the probability density improves the accuracy with which the optimized hand-drawn curve image global feature matrix is decoded and generated by the decoder.
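The patent's exact enhancement formula survives only as an image in the source; what does survive are its ingredients: the mean μ and standard deviation σ of all feature values, and a per-position mapping from m(i,j) to m'(i,j). The sketch below computes those ingredients and applies a plain z-score normalization purely as a placeholder — it is NOT the patented formula:

```python
import math

def per_position_stats_normalize(m):
    """Compute the mean mu and standard deviation sigma over every
    feature value of the matrix, then map each position's value
    through a z-score normalization. The statistics match the
    variables the text defines; the mapping itself is a placeholder
    for the formula that appears only as an image in the source."""
    vals = [v for row in m for v in row]
    n = len(vals)
    mu = sum(vals) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in vals) / n)
    if sigma == 0.0:                       # degenerate flat matrix
        return [[0.0 for _ in row] for row in m]
    return [[(v - mu) / sigma for v in row] for row in m]
```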
Specifically, in steps 137 and 138, the re-optimized hand-drawn curve image global feature matrix is passed through a decoder to generate an optimized hand-drawn curve image, and the interpolated varying curvature from the first key frame to the second key frame is determined based on the shape of the curve in the optimized hand-drawn curve image. In a specific example of the present application, the decoder comprises a plurality of deconvolution layers that perform decoding generation through mutually cascaded deconvolution operations.
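A single decoder stage built from a stride-2 transposed convolution ("deconvolution") can be sketched as follows; the 2x2 kernel and stride are assumptions, chosen so the patches do not overlap:

```python
def deconv2x2_stride2(m, kernel):
    """One stride-2 transposed-convolution stage, the building block of
    the decoder in step 137: each input value is multiplied by a 2x2
    kernel and written to a non-overlapping 2x2 output patch, doubling
    both spatial dimensions of the feature map."""
    h, w = len(m), len(m[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            for i in range(2):
                for j in range(2):
                    out[2 * y + i][2 * x + j] = m[y][x] * kernel[i][j]
    return out
```

Cascading several such stages progressively upsamples the re-optimized global feature matrix back to image resolution, yielding the optimized hand-drawn curve image.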
In summary, the key frame-based custom light effect configuration method 100 according to an embodiment of the present application has been illustrated, which receives a light effect hand-drawn curve image input by a user; takes the start point and the end point of the curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and determines an interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image. In this way, the user's hand-drawn input can be optimized toward the user's intention, so as to improve the final light effect customization.
In one embodiment of the present application, FIG. 10 is a block diagram of a key frame-based custom light effect configuration system according to an embodiment of the present application. As shown in fig. 10, the key frame-based custom light effect configuration system 200 according to an embodiment of the present application includes: an image receiving module 210 configured to receive a light effect hand-drawn curve image input by a user; a key frame generating module 220 configured to take the start point and the end point of the curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and an interpolated varying curvature generation module 230 configured to determine an interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image.
In a specific example, in the above key frame-based custom light effect configuration system, the interpolated varying curvature generation module includes: an image noise reduction unit for performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image; an image blocking processing unit for performing image blocking processing on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks; a shallow feature extraction unit for passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices; a matrix arrangement unit for arranging the plurality of hand-drawn curve image block feature matrices according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix; a bidirectional attention unit for passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix; an optimizing unit for performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix; a decoding unit for passing the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and an interpolated varying curvature determining unit for determining the interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the optimized hand-drawn curve image.
In a specific example, in the above key frame-based custom light effect configuration system, the image noise reduction unit is configured to: perform bilinear filtering on the light effect hand-drawn curve image to obtain the noise-reduced hand-drawn image.
In a specific example, in the above key frame-based custom light effect configuration system, the image blocking processing unit is configured to: uniformly divide the noise-reduced hand-drawn curve image into image blocks to obtain the sequence of hand-drawn curve image blocks, wherein each hand-drawn curve image block in the sequence has the same size.
In a specific example, in the above key frame-based custom light effect configuration system, the shallow feature extraction unit is configured to: use each layer of the shallow feature extractor based on the convolutional neural network model to perform convolution processing, pooling processing, and nonlinear activation processing on the input data in the forward pass of the layers, and output the shallow layer of the shallow feature extractor as the plurality of hand-drawn curve image block feature matrices.
In a specific example, in the above key frame-based custom light effect configuration system, the shallow feature extractor based on the convolutional neural network model comprises 3-5 convolutional layers.
In a specific example, in the above key frame-based custom light effect configuration system, the bidirectional attention unit includes: a pooling subunit for pooling the hand-drawn curve image global feature matrix along the horizontal direction and the vertical direction respectively to obtain a first pooled vector and a second pooled vector; an association coding subunit for performing association coding on the first pooled vector and the second pooled vector to obtain a bidirectional association matrix; an activation subunit for inputting the bidirectional association matrix into a Sigmoid activation function to obtain a bidirectional association weight matrix; and a matrix calculation subunit for calculating the position-wise point multiplication between the bidirectional association weight matrix and the hand-drawn curve image global feature matrix to obtain the optimized hand-drawn curve image global feature matrix.
In a specific example, in the above key frame-based custom light effect configuration system, the optimizing unit is configured to: perform class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix using an optimization formula (given only as an image in the source publication) to obtain the re-optimized hand-drawn curve image global feature matrix, wherein μ and σ are the mean and standard deviation of the set of feature values of all positions in the optimized hand-drawn curve image global feature matrix, m(i,j) is the feature value at each position of the optimized hand-drawn curve image global feature matrix, and m'(i,j) is the feature value at the corresponding position of the re-optimized hand-drawn curve image global feature matrix.
In a specific example, in the above key frame-based custom light effect configuration system, the decoder includes a plurality of deconvolution layers.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above key frame-based custom light effect configuration system have been described in detail in the above description of the key frame-based custom light effect configuration method with reference to figs. 1 to 9, and thus repetitive description thereof will be omitted.
As described above, the key frame-based custom light effect configuration system 200 according to the embodiment of the present application may be implemented in various terminal devices, for example, a server for key frame-based custom light effect configuration. In one example, the key frame-based custom light effect configuration system 200 may be integrated into a terminal device as a software module and/or a hardware module. For example, it may be a software module in the operating system of the terminal device, or an application developed for the terminal device; of course, it could also be one of many hardware modules of the terminal device.
Alternatively, in another example, the key frame-based custom light effect configuration system 200 and the terminal device may be separate devices, and the system 200 may be connected to the terminal device via a wired and/or wireless network and transmit interaction information in an agreed data format.
The present application also provides a computer program product comprising instructions which, when executed, cause an apparatus to perform operations corresponding to the above-described method.
In one embodiment of the present application, there is also provided a computer-readable storage medium storing a computer program for executing the above-described method.
It should be appreciated that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Methods, systems, and computer program products of embodiments of the present application are described in the flow diagrams and/or block diagrams. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not intended to be limiting, and these advantages, benefits, effects, etc. are not to be considered as essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, and systems referred to in the present application are only illustrative examples and are not intended to require or imply that connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," and "having" are open-ended words meaning "including but not limited to" and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to."
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or operation from another, and do not necessarily require or imply any actual such relationship or order between such entities or operations. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal device comprising the element.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A custom light effect configuration method based on key frames, characterized by comprising: receiving a light effect hand-drawn curve image input by a user; taking the start point and the end point of the curve in the light effect hand-drawn curve image as a first key frame and a second key frame; and determining an interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image.
2. The key frame-based custom light effect configuration method of claim 1, wherein determining an interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the light effect hand-drawn curve image comprises: performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn image; performing image blocking processing on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks; passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices; arranging the plurality of hand-drawn curve image block feature matrices according to the positions of the image blocks to obtain a hand-drawn curve image global feature matrix; passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix; performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix; passing the re-optimized hand-drawn curve image global feature matrix through a decoder to generate an optimized hand-drawn curve image; and determining the interpolated varying curvature from the first key frame to the second key frame based on the shape of the curve in the optimized hand-drawn curve image.
3. The key-frame-based custom light effect configuration method of claim 2, wherein performing image noise reduction on the light effect hand-drawn curve image to obtain a noise-reduced hand-drawn curve image comprises: performing bilinear filtering on the light effect hand-drawn curve image to obtain the noise-reduced hand-drawn curve image.
4. The key-frame-based custom light effect configuration method of claim 3, wherein performing image blocking on the noise-reduced hand-drawn curve image to obtain a sequence of hand-drawn curve image blocks comprises: uniformly partitioning the noise-reduced hand-drawn curve image into image blocks to obtain the sequence of hand-drawn curve image blocks, wherein each hand-drawn curve image block in the sequence of hand-drawn curve image blocks has the same size.
5. The key-frame-based custom light effect configuration method of claim 4, wherein passing each hand-drawn curve image block in the sequence of hand-drawn curve image blocks through a shallow feature extractor based on a convolutional neural network model to obtain a plurality of hand-drawn curve image block feature matrices comprises: using each layer of the shallow feature extractor based on the convolutional neural network model to perform, in the forward pass of the layers, convolution, pooling, and nonlinear activation on the input data, and outputting, from the shallow layers of the shallow feature extractor based on the convolutional neural network model, the plurality of hand-drawn curve image block feature matrices.
6. The key-frame-based custom light effect configuration method of claim 5, wherein the shallow feature extractor based on the convolutional neural network model comprises 3 to 5 convolutional layers.
7. The key-frame-based custom light effect configuration method of claim 6, wherein passing the hand-drawn curve image global feature matrix through a bidirectional attention mechanism to obtain an optimized hand-drawn curve image global feature matrix comprises: pooling the hand-drawn curve image global feature matrix along the horizontal direction and the vertical direction, respectively, to obtain a first pooling vector and a second pooling vector; performing association coding on the first pooling vector and the second pooling vector to obtain a bidirectional association matrix; inputting the bidirectional association matrix into a Sigmoid activation function to obtain a bidirectional association weight matrix; and computing the point-wise multiplication of the bidirectional association weight matrix and the hand-drawn curve image global feature matrix to obtain the optimized hand-drawn curve image global feature matrix.
8. The key-frame-based custom light effect configuration method of claim 7, wherein performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix to obtain a re-optimized hand-drawn curve image global feature matrix comprises: performing class probability density discrimination enhancement on the optimized hand-drawn curve image global feature matrix with an optimization formula to obtain the re-optimized hand-drawn curve image global feature matrix, wherein in the optimization formula μ and σ are the mean and standard deviation of the set of feature values at all positions of the optimized hand-drawn curve image global feature matrix, m_i is the feature value at the i-th position of the optimized hand-drawn curve image global feature matrix, and m_i′ is the feature value at the i-th position of the re-optimized hand-drawn curve image global feature matrix.
9. The key-frame-based custom light effect configuration method of claim 8, wherein the decoder comprises a plurality of deconvolution layers.
10. A key-frame-based custom light effect configuration system, characterized by comprising: an image receiving module for receiving a light effect hand-drawn curve image input by a user; a keyframe generation module for taking the start point and the end point of a curve in the light effect hand-drawn curve image as a first keyframe and a second keyframe; and an interpolated varying curvature generation module for determining an interpolated varying curvature from the first keyframe to the second keyframe based on the shape of the curve in the light effect hand-drawn curve image.
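A minimal sketch of the interpolation in claim 1, assuming the hand-drawn curve's horizontal axis is time and its vertical axis is the light parameter (both assumptions; the claim does not fix an axis convention). The start and end points act as the two keyframes, and the curve's shape supplies the per-frame weights between them:

```python
import numpy as np

def interpolation_weights(curve_xy, n_frames):
    """Map sampled (x, y) points of a hand-drawn curve to per-frame
    interpolation weights between the first and second keyframe.
    Hypothetical reading of claim 1: x is time, y is the light parameter."""
    xy = np.asarray(curve_xy, dtype=float)
    x, y = xy[:, 0], xy[:, 1]
    # Normalize: start point -> keyframe 1 (weight 0), end point -> keyframe 2 (weight 1).
    t = (x - x.min()) / (x.max() - x.min())
    v = (y - y.min()) / (y.max() - y.min() + 1e-9)
    frames = np.linspace(0.0, 1.0, n_frames)
    return np.interp(frames, t, v)

# A simple ease-in curve drawn by the user.
curve = [(0.0, 0.0), (0.5, 0.1), (1.0, 1.0)]
w = interpolation_weights(curve, 5)
```

A lighting controller could then blend the two keyframe states per frame, e.g. `color = (1 - w[k]) * color_kf1 + w[k] * color_kf2`.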
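Claim 3's "bilinear filtering" for noise reduction reads, in this translation, like a bilateral filter (the standard edge-preserving denoiser); that substitution is an assumption here. A brute-force single-channel bilateral filter:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a float grayscale image in [0, 1].
    Weights combine spatial closeness (sigma_s) and intensity similarity
    (sigma_r), smoothing noise while preserving the drawn curve's edges."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

rng = np.random.default_rng(0)
noisy = 0.5 * np.ones((8, 8)) + 0.05 * rng.standard_normal((8, 8))
denoised = bilateral_filter(noisy)
```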
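The uniform image blocking of claim 4 is straightforward: partition the image into equal-sized blocks and keep them in row-major order so the later "arrange by image-block position" step (claim 2) can rebuild the global layout. A sketch:

```python
import numpy as np

def split_into_blocks(img, block):
    """Uniformly partition an image into equal-sized square blocks,
    ordered row-major by block position (claim 4)."""
    h, w = img.shape[:2]
    if h % block or w % block:
        raise ValueError("image dimensions must be divisible by block size")
    return [img[i:i + block, j:j + block]
            for i in range(0, h, block)
            for j in range(0, w, block)]

img = np.arange(64).reshape(8, 8)
blocks = split_into_blocks(img, 4)   # four 4x4 blocks
```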
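One layer of the shallow feature extractor in claim 5 — convolution, then pooling, then nonlinear activation — can be sketched on a single channel with NumPy (the patented extractor would stack 3 to 5 such layers with learned kernels, per claim 6; the kernel here is a stand-in):

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D cross-correlation of a single-channel image."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def shallow_layer(x, k):
    """One layer of claim 5: convolution -> 2x2 max pooling -> ReLU."""
    c = conv2d_valid(x, k)
    h, w = c.shape[0] // 2 * 2, c.shape[1] // 2 * 2
    p = c[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
    return np.maximum(p, 0.0)

feat = shallow_layer(np.ones((6, 6)), np.ones((3, 3)))
```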
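The bidirectional attention mechanism of claim 7, sketched on a single-channel (H, W) feature matrix. The claim names the steps but not the "association coding"; an outer product of the two pooled vectors is assumed here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bidirectional_attention(F):
    """Claim 7 on a single-channel feature matrix F of shape (H, W)."""
    p_h = F.mean(axis=1)          # first pooling vector: along horizontal direction -> (H,)
    p_v = F.mean(axis=0)          # second pooling vector: along vertical direction  -> (W,)
    assoc = np.outer(p_h, p_v)    # association coding (assumed outer product) -> (H, W)
    weights = sigmoid(assoc)      # bidirectional association weight matrix
    return weights * F            # point-wise multiplication with the input

rng = np.random.default_rng(1)
F = rng.standard_normal((4, 6))
optimized = bidirectional_attention(F)
```

Because the Sigmoid output lies in (0, 1), the mechanism can only attenuate features, rescaling each position by how salient its row and column are under pooling.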
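The optimization formula of claim 8 survives only as its symbols (μ, σ, m_i, m_i′): a global mean and standard deviation over the feature matrix mapping each feature value to an enhanced one. The simplest mapping consistent with those symbols is a global standardization, shown below purely as a placeholder; the actual patented formula may differ:

```python
import numpy as np

def class_prob_density_enhance(F, eps=1e-6):
    """Placeholder for claim 8's optimization formula: standardize every
    feature value by the matrix-wide mean (mu) and standard deviation
    (sigma). The true formula is not reproduced in the source text."""
    mu, sigma = F.mean(), F.std()
    return (F - mu) / (sigma + eps)

rng = np.random.default_rng(2)
F = rng.standard_normal((4, 4)) * 3.0 + 5.0
reopt = class_prob_density_enhance(F)
```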
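The decoder of claim 9 stacks deconvolution (transposed convolution) layers to upsample the global feature matrix back into an image. A minimal single-channel transposed convolution, implemented as zero-stuffing followed by a full cross-correlation (the kernel here is a stand-in for a learned one):

```python
import numpy as np

def deconv2d(x, kernel, stride=2):
    """Minimal single-channel 'deconvolution': insert (stride-1) zeros
    between input pixels, then run a full cross-correlation, which
    upsamples the input roughly by the stride factor."""
    h, w = x.shape
    up = np.zeros((h * stride, w * stride))
    up[::stride, ::stride] = x                      # zero-stuff the input
    kh, kw = kernel.shape
    pad = np.pad(up, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    oh, ow = pad.shape[0] - kh + 1, pad.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (pad[i:i + kh, j:j + kw] * kernel).sum()
    return out

y = deconv2d(np.ones((4, 4)), np.ones((3, 3)))      # 4x4 -> 10x10
```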
CN202310592906.6A 2023-05-24 2023-05-24 Custom lamp effect configuration method and system based on key frame Active CN116580126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310592906.6A CN116580126B (en) 2023-05-24 2023-05-24 Custom lamp effect configuration method and system based on key frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310592906.6A CN116580126B (en) 2023-05-24 2023-05-24 Custom lamp effect configuration method and system based on key frame

Publications (2)

Publication Number Publication Date
CN116580126A true CN116580126A (en) 2023-08-11
CN116580126B CN116580126B (en) 2023-11-07

Family

ID=87539406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310592906.6A Active CN116580126B (en) 2023-05-24 2023-05-24 Custom lamp effect configuration method and system based on key frame

Country Status (1)

Country Link
CN (1) CN116580126B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001188605A (en) * 1999-12-28 2001-07-10 Yaskawa Electric Corp Method for interpolating curve
US20080136834A1 (en) * 2006-12-11 2008-06-12 Ruofei Zhang Automatically generating a content-based quality metric for digital images
CN104318600A (en) * 2014-10-10 2015-01-28 无锡梵天信息技术股份有限公司 Method for achieving role treading track animation by using Bezier curve
US20170109029A1 (en) * 2015-10-16 2017-04-20 Sap Se Dynamically-themed display utilizing physical ambient conditions
US20170270696A1 (en) * 2016-03-21 2017-09-21 Adobe Systems Incorporated Enhancing curves using non-uniformly scaled cubic variation of curvature curves
CN110827703A (en) * 2019-10-29 2020-02-21 杭州电子科技大学 Hand-drawn LED lamp board input display method based on similarity correction algorithm
US20200250528A1 (en) * 2017-10-25 2020-08-06 Deepmind Technologies Limited Auto-regressive neural network systems with a soft attention mechanism using support data patches
CN115937516A (en) * 2022-11-21 2023-04-07 北京邮电大学 Image semantic segmentation method and device, storage medium and terminal
CN116113125A (en) * 2023-02-14 2023-05-12 永林电子股份有限公司 Control method of LED atmosphere lamp group of decoration panel


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHILING GUO ET AL: "External-Internal Attention for Hyperspectral Image Super-Resolution", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, pages 1 - 8 *

Also Published As

Publication number Publication date
CN116580126B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
CN110163246B (en) Monocular light field image unsupervised depth estimation method based on convolutional neural network
US20210019866A1 (en) Real-time intelligent image manipulation system
CN108510456B (en) Sketch simplification method of deep convolutional neural network based on perception loss
CN108921926B (en) End-to-end three-dimensional face reconstruction method based on single image
CN110097609B (en) Sample domain-based refined embroidery texture migration method
CN111798400A (en) Non-reference low-illumination image enhancement method and system based on generation countermeasure network
CN108961303A (en) A kind of image processing method, device, electronic equipment and computer-readable medium
CN109816011A (en) Generate the method and video key frame extracting method of portrait parted pattern
CN109993820B (en) Automatic animation video generation method and device
CN113870124B (en) Weak supervision-based double-network mutual excitation learning shadow removing method
CN101286228B (en) Real-time vision frequency and image abstraction method based on characteristic
CN116744511B (en) Intelligent dimming and toning lighting system and method thereof
WO2013106984A1 (en) Learning painting styles for painterly rendering
CN110458247A (en) The training method and device of image recognition model, image-recognizing method and device
CN115205544A (en) Synthetic image harmony method and system based on foreground reference image
CN112598602A (en) Mask-based method for removing Moire of deep learning video
WO2019196718A1 (en) Element image generation method, device and system
CN109816659A (en) Image partition method, apparatus and system
CN116310693A (en) Camouflage target detection method based on edge feature fusion and high-order space interaction
WO2020190624A1 (en) High resolution real-time artistic style transfer pipeline
CN113807340A (en) Method for recognizing irregular natural scene text based on attention mechanism
CN116580126B (en) Custom lamp effect configuration method and system based on key frame
CN115222581A (en) Image generation method, model training method, related device and electronic equipment
CN108924528A (en) A kind of binocular stylization real-time rendering method based on deep learning
CN115908205B (en) Image restoration method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant