CN115884471A - Lamp effect control method and device, equipment, medium and product thereof - Google Patents

Info

Publication number
CN115884471A
Authority
CN
China
Prior art keywords
image
light effect
interface
matched
semantic similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211732617.3A
Other languages
Chinese (zh)
Inventor
黄家明
吴文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhiyan Technology Co Ltd
Shenzhen Qianyan Technology Co Ltd
Original Assignee
Shenzhen Zhiyan Technology Co Ltd
Shenzhen Qianyan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhiyan Technology Co Ltd, Shenzhen Qianyan Technology Co Ltd filed Critical Shenzhen Zhiyan Technology Co Ltd
Priority to CN202211732617.3A
Publication of CN115884471A
Legal status: Pending (Current)

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00 - Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40 - Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The application relates to a light effect control method, and a corresponding apparatus, device, medium and product. The method comprises the following steps: acquiring a displayed interface image; querying, from an image light effect library, the light effect type mapped to the pre-stored image feature that semantically matches the to-be-matched image feature of the interface image; and invoking the light effect control instruction corresponding to that light effect type to control the light effect display unit connected to the current embedded unit to play the corresponding light effect. Running on the embedded unit, the method can determine the light effect type according to the semantics of the interface image's features, so that the light effect display unit plays a light effect with matching semantics, the atmosphere rendered by the light effect is coordinated with the atmosphere of the interface image, the cost is controllable, and the implementation is convenient.

Description

Lamp effect control method and device, equipment, medium and product thereof
Technical Field
The present application relates to the field of lighting technologies, and in particular, to a light effect control method and apparatus, a device, a medium, and a product.
Background
As artificial intelligence matures, with the continuous iteration of deep learning algorithms and NPU chips, artificial intelligence applications are gradually reaching the consumer application market; for example, light effect devices have begun to adopt various artificial intelligence techniques to provide more intelligent services. Light effect devices serve functions such as information display and atmosphere decoration, are widely used, are increasingly intelligent, and their functions continue to develop to meet different needs.
Some of the latest light effect devices in the industry pick up the colors of an on-screen image through a camera, sampling the screen picture's colors region by region and mapping them to a light strip to produce ambient lighting. The disadvantage of such devices is that they attend only to the colors of the image and ignore other information such as its content and style, so the resulting light effect can hardly form an effective semantic association with the image's content and style, and often fails to render the true atmosphere. For example, when a sad scene appears in the image, the prior art may, because of a red area in it, map red to a warm family of light effects such as pink or orange, so that the rendered atmosphere clashes with the image's atmosphere, which is counterproductive.
Another technical obstacle preventing light effect devices from responding to the content and style of images is the limitation of the computing power, cost, and storage space of embedded chips such as NPUs. Their computational resources are relatively limited, and introducing too much of the related intelligent technology would sharply increase the implementation cost, raising the bar for improvement.
Therefore, under the dual constraints of hardware conditions and intelligence requirements, how to achieve a technical upgrade of light effect control equipment is a problem to be solved by those skilled in the art.
Disclosure of Invention
It is an object of the present application to solve the above problems by providing a light effect control method, together with a corresponding apparatus, device, non-volatile readable storage medium, and computer program product.
According to one aspect of the present application, a light effect control method is provided, comprising the following steps:
acquiring a displayed interface image;
querying, from an image light effect library, the light effect type mapped to the pre-stored image feature that semantically matches the to-be-matched image feature of the interface image;
and invoking the light effect control instruction corresponding to the light effect type to control the light effect display unit connected to the current embedded unit to play the corresponding light effect.
Optionally, querying, from the image light effect library, the light effect type mapped to the pre-stored image feature that semantically matches the to-be-matched image feature of the interface image comprises:
extracting the image feature of the interface image with an image coding model, as the image feature to be matched;
calculating the semantic similarity between the image feature to be matched and each pre-stored image feature in the image light effect library;
and judging whether the highest of these semantic similarities exceeds a preset similarity threshold, and when it does, selecting the light effect type mapped to the pre-stored image feature with the highest semantic similarity.
Optionally, calculating the semantic similarity between the image feature to be matched and each pre-stored image feature in the image light effect library comprises:
transposing the feature matrix formed by the vector representations of the pre-stored image features in the image light effect library;
calculating the matrix product between the vector representation of the interface image's to-be-matched image feature and the transposed matrix, as the semantic similarities of the corresponding pre-stored image features;
and normalizing the semantic similarity of each pre-stored image feature into a uniform numerical interval.
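The similarity computation just described can be sketched in a few lines. This is an illustrative stand-in, not the patent's actual implementation, and min-max normalization is one assumed choice for mapping scores into a uniform interval:

```python
from typing import List

def semantic_similarities(query: List[float],
                          prestored: List[List[float]]) -> List[float]:
    """Score a to-be-matched feature vector against every pre-stored
    feature: the matrix product of the query (1 x d) with the transpose
    of the (n x d) feature matrix yields one score per entry, which is
    then normalized into the uniform interval [0, 1]."""
    # Row-wise dot products are exactly the product with the transpose.
    scores = [sum(q * p for q, p in zip(query, feat)) for feat in prestored]
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0] * len(scores)      # degenerate case: all scores equal
    # Min-max normalization (an assumed choice of normalization).
    return [(s - lo) / (hi - lo) for s in scores]
```

With unit-normalized feature vectors the raw dot products are cosine similarities, which is why a single matrix product suffices for the whole library.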
Optionally, before acquiring the displayed interface image, the method comprises:
acquiring sample images and setting the light effect type of each sample image;
extracting the image features of the sample images, and storing them in the image light effect library as pre-stored image features mapped to their corresponding light effect types;
and initializing the similarity threshold of each pre-stored image feature in the image light effect library.
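A minimal sketch of this offline preparation step, assuming a simple per-entry record. The field names, the default threshold value, and the `extract_features` callable are illustrative, not taken from the patent:

```python
def build_light_effect_library(samples, extract_features, init_threshold=0.8):
    """Build the image light effect library from labeled sample images:
    each entry maps a pre-stored image feature to its set light effect
    type and carries an initialized similarity threshold."""
    library = []
    for image, effect_type in samples:
        library.append({
            "feature": extract_features(image),  # pre-stored image feature
            "effect_type": effect_type,          # mapped light effect type
            "threshold": init_threshold,         # initial similarity threshold
        })
    return library
```

Because the library is built offline, `extract_features` can be the full image coding model run outside the embedded unit, with only the resulting vectors shipped to the device.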
Optionally, in the step of initializing the similarity threshold of each pre-stored image feature in the image light effect library,
the similarity threshold is determined individually for the source image of the pre-stored image feature, or
the similarity threshold is determined individually for the light effect type mapped to the source image of the pre-stored image feature, or
the similarity threshold is determined individually for the source type of the interface image.
Optionally, before judging whether the highest semantic similarity exceeds a preset similarity threshold and, when it does, selecting the light effect type mapped to the pre-stored image feature with the highest semantic similarity, the method comprises:
identifying the source type of the interface from which the interface image comes, and invoking, according to that source type, the corresponding preset similarity threshold against which the highest semantic similarity is then judged.
Optionally, before invoking the light effect control instruction corresponding to the light effect type to control the light effect display unit connected to the current embedded unit to play the corresponding light effect, the method comprises:
judging whether the light effect types determined for several consecutive interface images are consistent, executing the subsequent steps to switch the light effect when they are consistent, and otherwise skipping the subsequent steps to maintain the original light effect;
or,
judging whether the semantic similarity between the to-be-matched image features of consecutive interface images falls below a preset inter-frame threshold, executing the subsequent steps to switch the light effect when it does, and otherwise skipping the subsequent steps to maintain the original light effect.
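The first of these debouncing strategies, switching only when several consecutive frames agree on a type, can be sketched as follows. The window size is an assumed parameter, not specified in the text:

```python
from collections import deque

class EffectDebouncer:
    """Switch the active light effect only when `window` consecutive
    interface images all map to the same light effect type; otherwise
    the original light effect is maintained."""

    def __init__(self, window=3):
        self.recent = deque(maxlen=window)   # most recent matched types
        self.current = None                  # currently playing effect type

    def update(self, matched_type):
        self.recent.append(matched_type)
        full = len(self.recent) == self.recent.maxlen
        # Only switch when the window is full and every entry agrees.
        if full and len(set(self.recent)) == 1 and matched_type != self.current:
            self.current = matched_type
        return self.current
```

This prevents a single transient frame, e.g. a scene cut or a flash, from toggling the light effect back and forth.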
According to another aspect of the present application, a light effect control apparatus is provided, comprising:
an image acquisition module for acquiring the displayed interface image;
a light effect matching module for querying, from the image light effect library, the light effect type mapped to the pre-stored image feature that semantically matches the to-be-matched image feature of the interface image;
and a light effect playing module for invoking the light effect control instruction corresponding to the light effect type to control the light effect display unit connected to the current embedded unit to play the corresponding light effect.
According to another aspect of the present application, a light effect control device is provided, comprising an embedded unit together with a sound pickup unit and a light effect display unit connected to it, wherein the embedded unit comprises a central processing unit and a memory, the central processing unit being configured to invoke and run a computer program stored in the memory so as to perform the steps of the light effect control method described in the present application.
According to another aspect of the present application, a non-transitory readable storage medium is provided, storing, in the form of computer-readable instructions, a computer program implementing the light effect control method; when invoked by a computer, the program executes the steps comprised by the method.
According to another aspect of the present application, a computer program product is provided, comprising computer programs/instructions which, when executed by a processor, implement the steps of the method described in any embodiment of the present application.
Compared with the prior art, the present application realizes light effect control on an embedded unit. The image light effect library represents the mapping between a number of pre-stored image features and their corresponding light effect types. When the light effect type for an interface image must be determined, the image feature of the interface image serves as the feature to be matched, the light effect type of the pre-stored image feature semantically similar to it is queried from the image light effect library by semantic matching, and the light effect display unit is then controlled, according to that type, to play the corresponding light effect. In this way, the light effect produced by the display unit is associated, through image semantics, with both the pre-stored and the to-be-matched image features, so the rendered light effect atmosphere corresponds semantically to the atmosphere shown by the interface image, and the consistency between the light effect and the content and style of the interface image is improved intelligently. At the same time, the balance between the hardware conditions of the embedded unit and the degree of intelligence is respected, so economies of scale can be obtained.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a lamp effect control apparatus according to the present application;
FIG. 2 is a schematic flow chart of an embodiment of the light effect control method of the present application;
fig. 3 is a schematic flow chart illustrating a process of determining a lamp effect type corresponding to an interface image according to an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating semantic similarity calculation according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of constructing an image lamp effect library according to an embodiment of the present application;
FIG. 6 is a functional block diagram of the lamp effect control apparatus of the present application;
fig. 7 is a schematic structural diagram of a computer device that can be used as a lamp effect control device according to the present application.
Detailed Description
Referring to fig. 1, which is a functional block diagram of a light effect control device exemplarily provided by the present application, the light effect control device includes an embedded unit, a camera unit, and a light effect display unit, where:
the embedded unit may be implemented by various embedded chips, for example Bluetooth SoC, WiFi SoC, MCU, DSP, and NPU chips. Such chips generally include a central processing unit and a memory, and are mainly used to store and execute program instructions to implement corresponding functions, for example to run a computer program product implementing the light effect control method of the present application, or to run the image feature extraction model that may be used in the present application, together with other related control logic.
The camera unit is a camera device suitable for collecting environmental images. It is an optional accessory of the light effect control device: the device may use the camera unit to capture the interface image output and displayed by another intelligent device, or it may obtain the displayed interface image through various other communication protocols, for example through a wired connection provided by the HDMI protocol, or through wireless transmission under protocols such as WiFi.
There may be a single light effect display unit or several; they execute light effect control instructions to play the corresponding light effects, achieving effects such as information display and atmosphere decoration. The light effects can be designed in advance by those skilled in the art, with the corresponding light effect control instructions compiled and stored in advance in the embedded unit and/or the light effect display unit, to be invoked and executed when the corresponding light effect is required.
When the light effect control device operates, the embedded unit can acquire an interface image from any source, obtain the image feature of that interface image, determine the corresponding light effect type by semantic matching between this feature and the pre-stored image features in the image light effect library, and then control one or more light effect display units to play the corresponding light effect using the light effect control instruction mapped to that type.
Based on the above principle, referring to fig. 2, an embodiment of the light effect control method of the present application includes the following steps:
Step S1100, acquiring a displayed interface image;
The interface image is an image displayed in the graphical user interface of any intelligent device. It may be an image output during the playing of a video stream, for example an image frame of a live network program's video stream or of another media stream; or it may be a content image displayed in the graphical user interface formed by an application program. An application program usually provides content images with a changing picture: a game program, for example, provides content images corresponding to the changing game screen; the graphical user interface may also present other pictures, such as dynamic or static screen-saver pictures.
Since the method of the present application is mainly applied in the light effect control device, acquiring the interface image usually means acquiring the image currently being displayed in the intelligent device; other images from the same source that are not displayed on the graphical user interface are generally processed only once they are displayed there.
The displayed interface image can be acquired in many ways, for example:
In one embodiment, the light effect control device of the present application may be connected by wire to the intelligent device displaying the interface image, based on wired display interface protocols such as HDMI, DVI, DP, or VGA and using the interfaces these protocols provide, so that the embedded unit of the light effect control device obtains, over the wired connection, the interface image being displayed on the intelligent device's display.
In another embodiment, provided that the intelligent device and the embedded unit of the light effect control device follow the same media data exchange protocol, the interface image being displayed in the intelligent device may be transmitted wirelessly to the embedded unit based on wireless communication protocols such as WiFi or Bluetooth.
In yet another embodiment, a camera unit may be configured in the light effect control device; it records the interface image on the intelligent device's display in real time and provides it to the embedded unit for processing. For each interface image frame recorded in real time by the camera unit, corresponding image optimization can be performed after the frame is fetched from its image space, for example image correction, smoothing, cropping, and scaling as needed, in order to adjust it to a standard picture of a specific resolution suitable for subsequent processing.
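The cropping and scaling steps for a captured frame can be sketched as a center crop followed by a nearest-neighbor resize. The frame representation (a 2-D list of pixel values) and the target size are illustrative choices, not from the patent:

```python
def preprocess_frame(frame, size=64):
    """Center-crop a captured frame to a square and nearest-neighbor
    resize it to size x size, as a stand-in for the correction, cropping
    and scaling steps that adjust a frame to a standard resolution."""
    h, w = len(frame), len(frame[0])
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    # Center crop to the largest square.
    cropped = [row[left:left + side] for row in frame[top:top + side]]
    # Nearest-neighbor scaling to the target resolution.
    return [[cropped[r * side // size][c * side // size]
             for c in range(size)] for r in range(size)]
```

In practice the camera pipeline would also correct lens and perspective distortion before cropping; that step is omitted here.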
Step S1200, querying, from an image light effect library, the light effect type mapped to the pre-stored image feature that semantically matches the to-be-matched image feature of the interface image;
For the present application, an image light effect library is prepared in advance. In it, the image feature of each preset image is stored as a pre-stored image feature, and a light effect type is set, by mapping, for each pre-stored image feature according to the content and style of its source image.
The light effect types are designed in advance and the corresponding light effect control instructions programmed, then stored in the memory of the embedded unit or in other memory the embedded unit can access, so that the embedded unit can invoke the instruction corresponding to a given light effect type. There are multiple light effect types, each corresponding to a different atmosphere style. For example, atmosphere styles can be defined according to emotion types such as sadness, peace, joy, excitement, surprise, and depression, with one light effect type set for each style; the pre-stored image feature of a source image can then be assigned the light effect type matching the style of that image.
The mapping between light effect types and the pre-stored image features of source images may be one-to-one or one-to-many; for example, the pre-stored image features of several source images may all be labeled as belonging to the same light effect type.
When a light effect type must be matched for the interface image, the image coding model is used to extract the image feature of the interface image as the image feature to be matched, so that semantic matching between the interface image and each source image in the image light effect library can be carried out on the basis of image features.
During semantic matching, the semantic similarity between the interface image's to-be-matched feature and each pre-stored image feature in the image light effect library is calculated; one pre-stored image feature is then screened out according to these similarities as the target image feature that semantically matches the to-be-matched feature, the interface image and the target feature's source image being similar in content and style; finally, the target light effect type mapped to the target image feature is determined as the light effect type corresponding to the interface image.
When screening for the target image feature, in one embodiment the pre-stored image feature with the highest semantic similarity may be selected, pursuing the most similar result. In another embodiment, to guard against mistaken features caused by overfitting of the underlying neural network model, the pre-stored image feature with the second-highest semantic similarity may be selected instead.
In one embodiment, the to-be-matched image feature of the interface image and the pre-stored image features of the source images can be extracted with the same image coding model. The image coding model may be chosen from basic models such as a convolutional neural network (CNN), a residual network (ResNet), or VGG, which are suited to performing convolution operations on an image to extract its deep semantic features. The image coding model may be a pre-trained model.
In another embodiment, the image coding model is trained to convergence in advance and then used. For training, a classifier is appended to the model, and a large number of training samples with supervision labels are employed: the training samples may be sample images expressing various atmosphere styles, and the corresponding supervision labels are the label information of those styles. In each training step, a training sample is input, the image coding model produces its feature representation, the classifier predicts the corresponding atmosphere style, the classification loss of the prediction is computed against the sample's supervision label, and the image coding model is gradient-updated according to that loss. When the classification loss reaches a preset target threshold, the image coding model has converged; training terminates, the classifier is removed, and the remaining model is used to extract image features.
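This training scheme can be sketched framework-agnostically. Every callable here is a stand-in for a real model component, and the epoch cap is an added safety bound not mentioned in the text:

```python
def train_encoder(encoder, classifier, samples, loss_fn, update,
                  target_loss, max_epochs=100):
    """Train 'encoder + temporary classifier head' on (image, style
    label) pairs until the mean classification loss reaches the target
    threshold, then discard the head and keep the encoder alone as the
    image coding model."""
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for image, label in samples:
            features = encoder(image)          # feature representation
            predicted = classifier(features)   # predicted atmosphere style
            epoch_loss += loss_fn(predicted, label)
        epoch_loss /= len(samples)
        if epoch_loss <= target_loss:
            break                              # converged
        update(epoch_loss)                     # gradient update stand-in
    return encoder                             # classifier head is dropped
```

The point of the sketch is the last line: only the encoder survives training, so inference on the embedded unit never runs the classifier.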
Because the image coding model is trained with supervision on atmosphere styles, the image features it produces represent the deep semantics describing atmosphere style, and matching images on deep semantics of the same nature ensures that the light effect type determined indirectly through that match is accurate. It can also be seen that the atmosphere styles represented by the supervision labels used in training are in fact the atmosphere styles used by the light effect types of the present application.
In the present application, similarity matching is performed on the image features of the interface image and the source images, and the light effect type is then determined indirectly, rather than predicted directly from the interface image's features by a classifier. This accommodates the relatively limited computing resources of the embedded unit: in principle, the pre-stored image features in the image light effect library can be computed in advance by running the image coding model outside the embedded unit, while the operations for matching the interface image against the pre-stored features involve a comparatively small amount of computation and are therefore better suited to deployment in the embedded unit.
Step S1300, invoking the light effect control instruction corresponding to the light effect type to control the light effect display unit connected to the current embedded unit to play the corresponding light effect.
The embedded unit pre-stores mapping data between light effect types and the corresponding light effect control instructions; this mapping data may, of course, also be stored in the image light effect library. After the light effect type corresponding to the interface image is determined, the light effect control instruction mapped to that type is obtained from the mapping data and transmitted to the light effect display unit connected to the embedded unit, thereby controlling it to play the corresponding light effect.
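The lookup-and-transmit step can be sketched as below. The instruction name and the modeling of each display unit as a callable sink are illustrative assumptions, not the embedded unit's real interfaces:

```python
def dispatch_effect(effect_type, instruction_map, display_units):
    """Look up the light effect control instruction mapped to the matched
    light effect type and transmit it to every connected display unit;
    each unit is modeled as a callable that accepts the instruction."""
    instruction = instruction_map[effect_type]
    for send in display_units:
        send(instruction)       # each unit plays its part of the effect
    return instruction
```

Keeping the type-to-instruction mapping as plain data is what lets the same matching logic drive arbitrarily many pre-programmed effects.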
In one embodiment, several light effect display units may cooperate to present the same light effect content: after the embedded unit transmits the light effect control instruction to the several units, each plays the part of the light effect display it is responsible for according to the instruction, and together the units display the light effect content.
In another embodiment, after a light effect control instruction of a given type, e.g. a classical type, is transmitted to the light effect display unit, the unit controls the light effect according to a predefined protocol, e.g. as a blinking display. If the subsequently determined light effect type does not change, the same instruction need not be transmitted again; once a different type is produced, e.g. a new rock type, the corresponding instruction is transmitted and the display unit is controlled, under the predefined protocol, to display the light effect content in a scrolling manner. In some embodiments, when switching between two light effect types, smoothing may be applied so that the brightness and color values of the light-emitting elements change gradually, achieving a natural transition.
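The smoothing mentioned above amounts to interpolating the light-emitting elements' color and brightness values over several update steps. Linear interpolation and the step count are assumed choices for this sketch:

```python
def fade_steps(old_rgb, new_rgb, steps=10):
    """Generate the intermediate colors for a gradual transition from
    the old light effect's color to the new one, so the switch does not
    happen as a single abrupt jump."""
    out = []
    for i in range(1, steps + 1):
        t = i / steps   # interpolation fraction, 0 -> 1
        out.append(tuple(round(o + (n - o) * t)
                         for o, n in zip(old_rgb, new_rgb)))
    return out
```

The same interpolation can be applied to a brightness channel alongside RGB; perceptually smoother curves (e.g. gamma-corrected fades) are a possible refinement.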
According to this embodiment, light effect control is realized on the embedded unit. The image light effect library represents the mapping between a number of pre-stored image features and their corresponding light effect types. When the light effect type for an interface image must be determined, the image feature of the interface image serves as the feature to be matched, the light effect type of the semantically similar pre-stored image feature is queried from the image light effect library by semantic matching, and the light effect display unit is controlled, according to that type, to play the corresponding light effect. The light effect is thus associated, through image semantics, with both the pre-stored and the to-be-matched image features, so the rendered light effect atmosphere corresponds semantically to the atmosphere shown by the interface image, and the consistency between the light effect and the interface image's content and style is improved intelligently, while the balance between the embedded unit's hardware conditions and the degree of intelligence is respected, so economies of scale can be obtained.
On the basis of any embodiment of the present application, referring to fig. 3, querying a light effect type mapped by a pre-stored image feature that matches a semantic meaning of an image feature to be matched of the interface image from an image light effect library includes:
step S1210, extracting image characteristics of the interface image as image characteristics to be matched by using an image coding model;
in this embodiment, the code of the image coding model trained in advance in the present application and its weight parameters may be installed in the embedded unit, and the corresponding image coding service enabled. When an interface image is transmitted to the embedded unit, the running image coding service calls the image coding model to perform feature representation on the interface image.
The interface image is processed successively by the convolution layers, pooling layers, batch normalization layers and the like in the image coding model to obtain the corresponding image features, namely the image features to be matched. To facilitate efficient subsequent operations, in an embodiment the extracted image features may be flattened into a high-dimensional vector within the image coding model; similarly, each pre-stored image feature in the present application may also be represented in high-dimensional vector form.
In one embodiment, the image coding model used for extracting image features is the convolutional neural network VGG16. The VGG16 model is composed of 5 convolutional blocks, 3 fully connected layers and a Softmax output layer, with max pooling between the blocks, and all activation units of the hidden layers adopt the ReLU function. In the online inference stage, only the convolution and pooling layers of the model are used to form the image coding model; the fully connected layers and the Softmax layer used in model training are cut away. The input image is then feature-represented by this image coding model to obtain the corresponding image features. Of course, this is only one choice of image coding model, and those skilled in the art may select other base models as needed.
Step S1220, calculating semantic similarity between the image features to be matched and each prestored image feature in the image light effect library;
in order to determine the similarity degree between the image features to be matched and each pre-stored image feature in the image light effect library, essentially determining a similarity program between the interface image and each source image in the image light effect library on the image content and the style, calculating a data distance between the image features to be matched and each pre-stored image feature by adopting any data distance algorithm, and converting each data distance into a representation of semantic similarity, so that the higher the semantic similarity is, the more similar the two image features are on the image content and the style are represented.
In the present application, the data distance algorithm may be implemented by any one of algorithms such as the matrix inner product, cosine similarity, the Pearson correlation coefficient, the Euclidean distance, and the Jaccard distance. Each of these data distance algorithms can effectively represent the semantic similarity between two image features, so the calculation result may be used as the measure of semantic similarity.
In one embodiment, because cosine similarity is simple and effective to compute, it may preferably be used to calculate the similarity. The cosine similarity algorithm measures the similarity between two vectors by the cosine of the angle between them. The cosine of a 0-degree angle is 1, the cosine of any other angle is less than 1, and the minimum value is -1. The cosine of the angle between two vectors thus indicates whether they point in approximately the same direction: when the two vectors have the same direction, the cosine similarity is 1; when the angle between them is 90 degrees, the cosine similarity is 0; when they point in completely opposite directions, the cosine similarity is -1. The result is independent of the lengths of the vectors and depends only on their directions. Cosine similarity is usually used in positive space, so although the value lies between -1 and 1, the negative case can be disregarded, giving a range of 0 to 1, which can be converted to a percentage of 0% to 100% to represent the image similarity.
Further, the cosine similarity formula can be optimized to improve calculation speed by computing the numerator and the denominator separately:
(1) The numerator x1·x2 + y1·y2, i.e. the dot product of the two feature vectors, is calculated as an NPU matrix product. A single matrix product yields the products of the image features of the input interface image with every image feature in the image light effect library, greatly improving calculation speed.
(2) For the denominator, the length of each feature vector in the rectangular coordinate system is calculated only once, at feature-extraction time, saving computation.
(3) With the numerator and denominator in hand, the cosine distance can be calculated according to the cosine distance formula. If the image library contains 1000 pictures, this calculation mode can improve the operation speed by roughly 1000 times compared with computing each similarity individually.
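The decomposition above can be sketched as follows, with NumPy standing in for the NPU matrix product; function and variable names are illustrative:

```python
import numpy as np

# Sketch of the optimized cosine computation: the denominator terms (vector
# lengths) are precomputed once at feature-extraction time, and all numerators
# are obtained with a single matrix product.
def build_library(features):
    """features: (M, K) matrix of pre-stored image features."""
    norms = np.linalg.norm(features, axis=1)     # denominator terms, computed once
    return features, norms

def cosine_all(query, features, norms):
    """query: (K,) image feature to be matched; returns M cosine similarities."""
    numerators = features @ query                # one matrix product for all M images
    return numerators / (norms * np.linalg.norm(query) + 1e-12)

lib, lib_norms = build_library(np.array([[1.0, 0.0],
                                         [0.0, 1.0],
                                         [1.0, 1.0]]))
sims = cosine_all(np.array([1.0, 1.0]), lib, lib_norms)
```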
Step S1230, determining whether the semantic similarity of the pre-stored image features corresponding to the highest semantic similarity exceeds a preset similarity threshold, and when the semantic similarity exceeds the preset similarity threshold, selecting the light effect type mapped with the pre-stored image features with the highest semantic similarity.
Considering that the light effect display unit can only play one light effect at a time, in this embodiment the target light effect type corresponding to the interface image is determined only from the highest semantic similarity. The pre-stored image features in the image light effect library may therefore be sorted in descending order of semantic similarity, so that the first pre-stored image feature in the ordering is the one with the highest semantic similarity. A preset similarity threshold is then compared with the highest semantic similarity. When the highest semantic similarity exceeds the threshold, the source image corresponding to it is highly similar to the interface image in content and style and thus matches the interface image, and the light effect type mapped by the pre-stored image feature with the highest semantic similarity can be determined as the target light effect type matched with the interface image.
In one embodiment, when the highest semantic similarity does not exceed the preset threshold, color analysis may be performed on the interface image to determine its average color gamut, and the corresponding target light effect type may then be determined from that average color gamut. Thus, even if the current interface image fails to yield a light effect type through semantic matching, a technical fallback can be implemented according to the average color gamut of the interface image, and the target light effect type so determined is used to invoke the corresponding light effect control instruction to control the ambience light effect.
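A minimal sketch of this decision step, with an illustrative fallback callback standing in for the average-color-gamut analysis; all names are invented for illustration:

```python
# Hypothetical sketch of step S1230 plus the fallback: pick the light effect
# type of the highest semantic similarity if it exceeds the threshold,
# otherwise fall back (e.g. to a type derived from the average color gamut).
def pick_effect_type(similarities, effect_types, threshold, fallback):
    """similarities[i] corresponds to effect_types[i]; fallback() handles misses."""
    best = max(range(len(similarities)), key=lambda i: similarities[i])
    if similarities[best] > threshold:
        return effect_types[best]
    return fallback()

chosen = pick_effect_type([0.42, 0.91, 0.30],
                          ["rock", "classical", "jazz"],
                          threshold=0.8,
                          fallback=lambda: "ambient")
```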
According to the above embodiments, semantic matching accurately determines the pre-stored image feature (in effect, the corresponding source image) that matches the interface image in content and style, and further determines the light effect type mapped to that feature so as to obtain the corresponding light effect control instruction. Since the light effect type is set precisely according to the style of the source image, the interface image and the matched light effect type correspond accurately in atmosphere style, and the generated light effect atmosphere remains highly coordinated with the atmosphere presented by the interface image.
On the basis of any embodiment of the present application, please refer to fig. 4, calculating semantic similarity between the image features to be matched and each pre-stored image feature in the image light effect library includes:
step S1221, converting a feature matrix formed by vector representation of each pre-stored image feature in the image light effect library into a transposed matrix of the image light effect library;
in this embodiment, each pre-stored image feature in the image light effect library is first expressed as a high-dimensional vector, so each pre-stored image feature is actually a high-dimensional image feature vector. Thus, when there are M pre-stored image features, each with K numerical dimensions, the pre-stored image features can collectively be expressed as an M × K feature matrix.
To implement the similarity operation as a matrix operation, the feature matrix may first be transposed into a K × M transposed matrix.
Step S1222, calculating a matrix product between the vector representation of the image features to be matched of the interface image and the transpose matrix as the semantic similarity of the corresponding pre-stored image features;
similarly, the vector representation of the image feature to be matched of the interface image is a single-row vector, i.e. a 1 × K vector. In this embodiment, considering the operation efficiency of the embedded unit, the semantic similarity between the image feature to be matched and each pre-stored image feature is calculated by the matrix inner product method; the matrix inner product is essentially a simplified form of the cosine similarity algorithm with the same effect. Based on the principles of matrix operation, multiplying the 1 × K image feature to be matched by the K × M transposed matrix yields a 1 × M product matrix, in which the values of the M elements respectively represent the semantic similarities corresponding to the M pre-stored image features in the image light effect library.
Step S1223, normalizing the semantic similarity of each pre-stored image feature to a uniform numerical interval.
To make the result more intuitive, the product matrix may further be normalized so that each semantic similarity value falls within a predetermined range, e.g. [0, 1]. Maximum-value normalization may be used: the semantic similarity of each pre-stored image feature is divided by the highest semantic similarity to give the final semantic similarity value.
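Steps S1221 to S1223 can be sketched with NumPy standing in for the embedded unit's matrix routines; the small feature values are invented for illustration:

```python
import numpy as np

# Illustrative sketch: transpose the M x K library feature matrix to K x M,
# take one 1 x K by K x M matrix product as the semantic similarities, then
# apply maximum-value normalization.
library = np.array([[0.0, 1.0],
                    [1.0, 0.0],
                    [0.6, 0.8]])             # M = 3 pre-stored features, K = 2
transposed = library.T                        # K x M transposed matrix (S1221)
query = np.array([[0.6, 0.8]])                # 1 x K image feature to be matched

similarities = query @ transposed             # 1 x M product matrix (S1222)
normalized = similarities / similarities.max()   # max normalization (S1223);
                                                 # values here are non-negative
```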
According to this embodiment, within the embedded unit, the semantic similarity of each pre-stored image feature is determined by a matrix multiplication between the image features to be matched and the pre-stored image features in the image light effect library, rather than by running a computation-heavy classification mapping directly on the image features to be matched. This processing mode occupies little system overhead and has high operation efficiency, making it better suited to the specific scenario of embedded-unit devices.
On the basis of any embodiment of the present application, please refer to fig. 5, before acquiring the displayed interface image, including:
step S2100, obtaining a sample image, and setting a light effect type of the sample image;
the image light effect library can be constructed in advance. To this end, a collection of sample images may be prepared, each with different content and style, to define different atmosphere styles. After the sample images are prepared, the light effect type manually labeled on each sample image is obtained; these light effect types belong to member types of a type set. Generally, each sample image belongs to exactly one light effect type, while one light effect type may correspond to multiple sample images.
Step S2200, extracting image characteristics of the sample image, and mapping and storing the image characteristics as pre-stored image characteristics and corresponding lamp effect types in the image lamp effect library;
further, the image coding model described above may be run on any computer device to extract the corresponding image features for each prepared sample image, and these image features may also be converted into corresponding vector representations. The image features of each sample image are then used as pre-stored image features, mapping relationship data between each pre-stored image feature and its light effect type is constructed, and the data is stored in the image light effect library. Of course, the image light effect library is eventually migrated to the embedded unit of the light effect control device of the present application for storage.
In one embodiment, in the image light effect library, all the pre-stored image features may be converted into a representation of a feature matrix, where each row is a high-dimensional vector representation of the pre-stored image features, or the feature matrix may be further directly converted into a transpose form thereof for storage. As for the index item of each pre-stored image feature, a mapping relationship can be additionally established with the corresponding light effect type so as to realize the associated mapping of each pre-stored image feature and the corresponding light effect type. Similarly, according to actual needs, a corresponding mapping relationship can be established between the lamp effect type and the corresponding lamp effect control instruction, and the mapping relationship and the corresponding lamp effect control instruction are stored in the image lamp effect library together.
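A toy sketch of the offline library construction described in steps S2100 and S2200, where `extract_feature` stands in for the image coding model and the sample data are invented for illustration:

```python
import numpy as np

# Illustrative sketch: extract a feature vector per labeled sample image and
# store the features as rows of a feature matrix mapped to light effect types.
def extract_feature(sample_pixels):
    """Stand-in for the image coding model: flatten and unit-normalize."""
    vec = np.asarray(sample_pixels, dtype=float).ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)

samples = {                                   # sample image -> (pixels, labeled type)
    "sunny_scene": ([0.9, 0.1], "cheerful"),
    "storm_scene": ([0.1, 0.9], "rock"),
}

feature_rows, effect_types = [], []
for name, (pixels, effect) in samples.items():
    feature_rows.append(extract_feature(pixels))   # pre-stored image feature
    effect_types.append(effect)                    # mapped light effect type

feature_matrix = np.stack(feature_rows)            # M x K; could also store its
                                                   # transpose directly, as above
```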
Step S2300, initializing and setting similar threshold values of all pre-stored image characteristics in the image light effect library.
In the subsequent step of deciding whether the pre-stored image feature with the highest semantic similarity is suitable for determining the light effect type of the interface image, a preset similarity threshold is used to decide whether the highest semantic similarity is valid; to that end, the similarity threshold can be initialized here. Initialization can be implemented in many different ways, for example:
in one embodiment, the similarity threshold is determined individually for the source image of each pre-stored image feature. Specifically, the content and style of each source image differ, and different source images also differ in the stability of the displayed image, so the similarity threshold of each source image may differ, realizing a personalized definition. For example, for a source image of a thunder-and-lightning scene, the image representation is relatively unstable, while for a source image of a sunny scene, the imaging quality is relatively stable; different similarity thresholds can therefore be assigned to the different cases, so that pre-stored image features correspond one-to-one with similarity thresholds. When a pre-stored image feature later obtains the highest semantic similarity, its corresponding similarity threshold is called to judge that highest semantic similarity.
In another embodiment, the similarity threshold is determined individually for the light effect type mapped by the source image of the pre-stored image feature. In particular, the similarity threshold may differ per light effect type; for example, for a cheerful-style light effect type with richer possible scenes, the threshold may be set relatively low, while for a surprise-style light effect type with fewer possible scenes, the threshold may be set relatively high. The similarity thresholds are thus set per light effect type: when a pre-stored image feature obtains the highest semantic similarity and a similarity threshold must be called for comparison, the light effect type to which that pre-stored image feature belongs is first determined, and the corresponding similarity threshold is then called according to that light effect type.
In yet another embodiment, the similarity threshold is determined individually for the source type of the interface image. Specifically, interface images from different sources may differ in image quality. For example, when the interface image is transmitted by the smart device in a wired or wireless manner, the image quality is generally stable, and a relatively high similarity threshold may be set for that source type; when the camera unit shoots the interface image displayed on the display device of the smart device, factors such as image distortion, pixel grids and reflections introduced by shooting may make the acquired image quality relatively ordinary, and a relatively low similarity threshold may be set for that source type. Accordingly, when the highest semantic similarity obtained by a pre-stored image feature needs a similarity threshold for comparison, the source type of the corresponding interface image is determined, and the corresponding similarity threshold is called according to that source type.
In other embodiments, the similarity threshold may be set by combining two of the above approaches. For example, a base threshold is set for each light effect type and a threshold ratio is set for each source type of interface image; when the similarity threshold is needed, the base threshold of the light effect type corresponding to the highest semantic similarity is determined, the threshold ratio corresponding to the source type of the interface image is determined, and the product of the two is taken as the similarity threshold for judging the highest semantic similarity. Such variations may be flexibly implemented on the basis of the above embodiments without departing from the scope of the inventive concept of the present application.
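The combined scheme can be sketched as follows; the threshold values, light effect type names, and source names are illustrative assumptions:

```python
# Hypothetical sketch of the combined initialization: a base threshold per
# light effect type multiplied by a ratio per interface image source type.
BASE_THRESHOLD = {"cheerful": 0.70, "surprise": 0.85}    # per light effect type
SOURCE_RATIO = {"hdmi": 1.0, "camera": 0.9}              # per image source type

def similarity_threshold(effect_type, source_type):
    """Threshold used to judge the highest semantic similarity."""
    return BASE_THRESHOLD[effect_type] * SOURCE_RATIO[source_type]
```

A camera-sourced frame matched against a surprise-style feature would thus be judged at 0.85 × 0.9 = 0.765, reflecting the lower expected image quality of that source.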
According to the above embodiments, during construction of the image light effect library, the pre-stored image features are determined from the sample images, the mapping relationship between the pre-stored image features and the light effect types labeled on the sample images is established, and the threshold used for judging the highest semantic similarity is then initialized. After the image light effect library is migrated to the embedded unit, the embedded unit can operate efficiently on it and quickly match the light effect type corresponding to an interface image. Because the similarity thresholds of the pre-stored image features are initialized in advance, deploying the image light effect library into the light effect control device enables standardization of the product, ensures the accuracy of playing a light effect matching the atmosphere of the interface image, and yields economic benefits in large-scale production of the device.
On the basis of any embodiment of the present application, before judging whether the semantic similarity of the pre-stored image feature corresponding to the highest semantic similarity exceeds a preset similarity threshold and, when it does, selecting the light effect type mapped to that pre-stored image feature, the method includes:
and identifying the source type of the source interface of the interface image, calling a preset similarity threshold corresponding to the source type according to the source type, and judging whether the semantic similarity of the pre-stored image characteristics corresponding to the highest semantic similarity exceeds the preset similarity threshold.
In this embodiment, before judging whether the highest semantic similarity exceeds a preset similarity threshold, the embedded device performs an identification step, specifically determining the source type according to the source interface through which the interface image was transmitted. For example, when the functional interface differs between an interface image acquired by the camera unit and one acquired through HDMI, the corresponding source type can be determined by identifying the functional interface. Once the source type is determined, the preset similarity threshold corresponding to it can be called, and the highest semantic similarity is judged against this threshold to determine whether it is valid; when it is valid, the light effect type mapped by the corresponding pre-stored image feature is taken as the target light effect type corresponding to the interface image.
According to this embodiment, interface images transmitted by different source interfaces generally differ in image quality. By identifying the source interface before judging the highest semantic similarity, determining the corresponding source type, and selecting the similarity threshold accordingly, the screening condition for the highest semantic similarity can be adjusted per source interface, so that the determination of the light effect type takes the image quality of the interface image into account, improving the intelligence of the light effect control device.
On the basis of any embodiment of the present application, before calling the light effect control instruction corresponding to the light effect type to control the light effect display unit connected to the current embedded unit to play the corresponding light effect, in an embodiment, the method includes: judging whether the light effect types corresponding to the continuous multiple interface images are consistent, if so, executing subsequent steps to implement light effect switching, otherwise, skipping the subsequent steps to maintain the original light effect;
in this application, steps S1100 and S1200 may be executed concurrently with step S1300; before step S1300 is executed, the determined light effect types corresponding to multiple continuous interface images may be obtained to examine the stability of those images' content.
Specifically, when the light effect types corresponding to a plurality of continuous interface images are all consistent, it usually indicates that the interface images share the same content, in which case the generated light effect type is relatively stable, so step S1300 may proceed, with this light effect type taken as the target light effect type for calling the corresponding light effect control instruction to control the light effect display unit. Otherwise, step S1300 is not executed; detection continues until the above condition is satisfied, and only then is step S1300 executed.
In another embodiment, before invoking the light effect control instruction corresponding to the light effect type to control the light effect display unit connected to the current embedded unit to play the corresponding light effect, the method includes: and judging whether the semantic similarity of the image features to be matched of the continuous interface images is lower than a preset interframe threshold, executing subsequent steps to implement light effect switching when the semantic similarity of the image features to be matched of the continuous interface images is lower than the interframe threshold, and skipping the subsequent steps to maintain the original light effect.
Unlike the previous embodiment, in this embodiment the image features of consecutive interface images, that is, the image features to be matched, are used directly to calculate the inter-frame similarity: for each pair of consecutive interface images, the semantic similarity between their image features is calculated, a specific algorithm for which may be as described earlier in this application. Each semantic similarity is then compared with a preset inter-frame threshold, which may be set as needed. When every semantic similarity of the consecutive interface images is lower than the inter-frame threshold, the content of these interface images has changed substantially; the light effect type corresponding to the last interface image is then valid and is taken as the target light effect type, step S1300 continues to be executed, and the corresponding light effect control instruction is invoked to control the light effect display unit. Otherwise, step S1300 is not executed; detection continues until the above condition is satisfied, and only then is step S1300 executed.
In the above two embodiments, the number of continuous interface images is generally counted in image frames, and preferably the total duration of those frames does not exceed 120 milliseconds. For example, at a frame rate of 25 fps each image frame corresponds to 40 milliseconds, in which case the consistency comparison of light effect types may be performed over 3 consecutive image frames. This processing buffers the determination of the light effect type, making the determined types smoother: the light effect type is not switched by a change in an individual image frame, and the buffering process is kept essentially within a range imperceptible to the human eye.
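The frame-stability buffering described in these embodiments can be sketched as follows, using an illustrative three-frame window (roughly 120 ms at 25 fps, as suggested above); the class and method names are invented for illustration:

```python
from collections import deque

# Hypothetical sketch: switch the light effect only when the last N
# consecutive frames agree on the same light effect type.
class EffectStabilizer:
    def __init__(self, window=3):
        self._recent = deque(maxlen=window)

    def submit(self, effect_type):
        """Return the type to switch to, or None to keep the current effect."""
        self._recent.append(effect_type)
        if (len(self._recent) == self._recent.maxlen
                and len(set(self._recent)) == 1):
            return effect_type          # stable across the whole window
        return None                     # unstable: maintain the original effect

stab = EffectStabilizer()
decisions = [stab.submit(t) for t in ["rock", "rock", "rock", "jazz", "rock"]]
```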
According to the above embodiments, the change relationship among multiple continuous interface images is used to examine the stability of the light effect type, and a keep-stable strategy controls light effect switching within limits essentially imperceptible to the human eye, so that short-term changes in individual interface images do not cause frequent light effect switching, ensuring a good user experience.
According to the embodiments disclosed above, the application uses an artificial intelligence deep learning algorithm and an embedded processor such as an NPU: the screen display content is picked up through a camera unit or another channel to obtain interface images, AI training and inference are performed, the degree of similarity between the image displayed on the screen and the images in the image light effect library is accurately identified, and the light effect type corresponding to the most similar image is then determined from the semantic similarity for light effect display and atmosphere fusion. For example, when the film Titanic is played on the screen, the screen content is picked up by the camera and inference is performed with the NPU chip and the AI model to obtain the features of the displayed content. Once the features of the current picture are obtained, they are compared with the image features in the image light effect library to find the image with the highest similarity to the current picture and a similarity value is output; for instance, when a classic scene of the film appears in the picture, it is matched against the corresponding classic image in the image light effect library, the similarity value is output, the mapped light effect type is invoked, and the light effect display unit is controlled accordingly, so that the user experience can be greatly improved.
Therefore, the method emphasizes recognition of the whole image: a result is output when the content of the captured picture played on the screen is highly similar to, or completely consistent with, a certain image in the image light effect library. This differs markedly from the prior art, which controls the light effect after identifying only a particular feature in the screen picture, and it can greatly improve the user experience.
Referring to fig. 6, a light effect control apparatus according to an aspect of the present application includes an image obtaining module 1100, a light effect matching module 1200, and a light effect playing module 1300, where the image obtaining module 1100 is configured to obtain an interface image being displayed; the light effect matching module 1200 is configured to query a light effect type mapped by pre-stored image features that are semantically matched with the image features to be matched of the interface image from an image light effect library; the light effect playing module 1300 is configured to call the light effect control instruction corresponding to the light effect type to control the light effect display unit connected to the current embedded unit to play the corresponding light effect.
On the basis of any embodiment of the present application, the lamp effect matching module 1200 includes: the characteristic extraction unit is used for extracting the image characteristics of the interface image as the image characteristics to be matched by using an image coding model; the similarity operation unit is used for calculating the semantic similarity between the image features to be matched and each pre-stored image feature in the image light effect library; and the type determining unit is used for judging whether the semantic similarity of the pre-stored image characteristics corresponding to the highest semantic similarity exceeds a preset similarity threshold, and selecting the light effect type mapped with the pre-stored image characteristics with the highest semantic similarity when the semantic similarity exceeds the preset similarity threshold.
On the basis of any embodiment of the present application, the similarity operation unit includes: a transposition processing subunit, configured to convert a feature matrix formed by the vector representations of the pre-stored image features in the image light effect library into its transposed matrix; a similarity calculation subunit, configured to calculate the matrix product between the vector representation of the image features to be matched of the interface image and the transposed matrix as the semantic similarity of each corresponding pre-stored image feature; and a normalization processing subunit, configured to normalize the semantic similarity of each pre-stored image feature to a uniform numerical interval.
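The transpose-and-multiply computation above can be sketched in plain Python: multiplying the query vector by the transposed feature matrix is the same as taking the dot product with each row of the feature matrix. This is an assumption-laden sketch, not the patented code; unit-length normalization of the vectors and min-max scaling into [0, 1] are just one plausible choice of normalization:

```python
def cosine_unit(vec):
    # Scale a feature vector to unit length so the dot product behaves
    # as a cosine-style semantic similarity.
    norm = sum(x * x for x in vec) ** 0.5
    return [x / norm for x in vec] if norm else vec

def match_similarities(query, library):
    # library: list of pre-stored feature vectors (rows of the feature
    # matrix). Dotting the query with each row is equivalent to the
    # matrix product with the transposed feature matrix.
    q = cosine_unit(query)
    sims = [sum(a * b for a, b in zip(q, cosine_unit(row))) for row in library]
    # Normalize all similarities into the uniform interval [0, 1].
    lo, hi = min(sims), max(sims)
    span = (hi - lo) or 1.0
    return [(s - lo) / span for s in sims]
```

In a production system the same computation would usually be done as a single matrix multiplication with a tensor library, but the row-by-row form makes the equivalence with the transposed-matrix product explicit.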
On the basis of any embodiment of the present application, the light effect control apparatus of the present application includes: a sample acquisition module, configured to obtain a sample image and set the light effect type of the sample image; a mapping construction module, configured to extract image features of the sample image and store them as pre-stored image features, mapped to the corresponding light effect types, in the image light effect library; and a threshold setting module, configured to initialize the similarity threshold of each pre-stored image feature in the image light effect library.
On the basis of any embodiment of the present application, in the threshold setting module, the similarity threshold is determined individually for the source image of the pre-stored image feature, or individually for the light effect type mapped by the source image of the pre-stored image feature, or individually for the source type of the interface image.
On the basis of any embodiment of the present application, the light effect control apparatus of the present application includes: a threshold invoking unit, configured to invoke the preset similarity threshold corresponding to the source type of the interface image and to determine whether the highest semantic similarity among the pre-stored image features exceeds that preset similarity threshold.
On the basis of any embodiment of the present application, the light effect control apparatus of the present application includes: a first buffer module, configured to determine whether the light effect types corresponding to a plurality of consecutive interface images are consistent, and if so, to execute the subsequent steps to implement light effect switching, or otherwise to skip the subsequent steps to maintain the original light effect; or, alternatively: a second buffer module, configured to determine whether the semantic similarity between the image features to be matched of consecutive interface images is lower than a preset inter-frame threshold, and if so, to execute the subsequent steps to implement light effect switching, or otherwise to skip the subsequent steps to maintain the original light effect.
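The first buffering strategy above (switch only after several consecutive frames agree, so transient frames do not cause flicker) can be sketched as a small debouncer. This is an illustrative sketch; the class name, state layout, and the three-frame default window are assumptions, not the patented design:

```python
class LightEffectDebouncer:
    # Sketch of the first buffer module: switch the light effect only after
    # the same effect type has been recognized for several consecutive
    # interface images.
    def __init__(self, required_consistent=3):  # window size is an assumption
        self.required = required_consistent
        self.current = None      # effect currently playing
        self.candidate = None    # effect seen in the most recent frames
        self.count = 0

    def observe(self, effect_type):
        # Returns the effect to switch to, or None to keep the current one.
        if effect_type == self.candidate:
            self.count += 1
        else:
            self.candidate, self.count = effect_type, 1
        if self.count >= self.required and effect_type != self.current:
            self.current = effect_type
            return effect_type
        return None
```

The second buffering strategy would be analogous, but keyed on the inter-frame semantic similarity of the feature vectors rather than on the recognized effect type.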
Another embodiment of the present application further provides a light effect control device implemented as a computer device, whose internal structure is shown schematically in fig. 7. The computer device includes a processor, a camera unit, a computer-readable storage medium, a memory, and a network interface connected by a system bus. The computer-readable non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database may store information sequences, and the computer-readable instructions, when executed by the processor, can cause the processor to implement the light effect control method of the present application.
The processor of the computer device provides computing and control capabilities and supports the operation of the entire computer device. The memory of the computer device may store computer-readable instructions that, when executed by the processor, cause the processor to perform the light effect control method of the present application. The network interface of the computer device is used to connect and communicate with the terminal.
It will be appreciated by those skilled in the art that the configuration shown in fig. 7 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In this embodiment, the processor is configured to execute the specific functions of each module implemented according to the steps of the light effect control method of the present application, and the memory stores the program code and the various types of data required to execute these modules. The non-volatile readable storage medium in this embodiment stores the program code and data of a computer program product implementing the light effect control method of the present application.
The present application further provides a non-transitory readable storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the light effect control method of any of the embodiments of the present application.
The present application also provides a computer program product comprising computer programs/instructions which, when executed by one or more processors, implement the steps of the light effect control method according to any of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing the relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or another computer-readable storage medium.
To sum up, the present application can determine, based on the embedded unit, the corresponding light effect type according to the semantics of the image features of the interface image, and thereby control the light effect display unit to play the light effect matching those semantics, so that the atmosphere rendered by the light effect is coordinated and consistent with the picture atmosphere of the interface image, while keeping costs under control and easing implementation.

Claims (10)

1. A light effect control method, comprising:
acquiring a displayed interface image;
querying, from an image light effect library, the light effect type mapped by a pre-stored image feature that semantically matches an image feature to be matched of the interface image;
and calling the light effect control instruction corresponding to the light effect type to control the light effect display unit connected with the current embedded unit to play the corresponding light effect.
2. The light effect control method according to claim 1, wherein the step of querying a light effect type mapped by a pre-stored image feature of the interface image, which is semantically matched with an image feature to be matched, from an image light effect library comprises the steps of:
extracting image features of the interface image as image features to be matched by using an image coding model;
calculating semantic similarity between the image features to be matched and each pre-stored image feature in the image light effect library;
determining whether the highest semantic similarity among the pre-stored image features exceeds a preset similarity threshold, and when it does, selecting the light effect type mapped to the pre-stored image feature with the highest semantic similarity.
3. The method according to claim 2, wherein calculating semantic similarity between the image features to be matched and each pre-stored image feature in the image light effect library comprises:
converting a feature matrix formed by the vector representations of the pre-stored image features in the image light effect library into a transposed matrix of the feature matrix;
calculating a matrix product between the vector representation of the image features to be matched of the interface image and the transposed matrix to obtain the semantic similarity of each pre-stored image feature;
and normalizing the semantic similarity of each pre-stored image characteristic to a uniform numerical value interval.
4. The light effect control method according to claim 2, wherein before the step of acquiring a displayed interface image, the method comprises:
acquiring a sample image, and setting the light effect type of the sample image;
extracting image features of the sample image, and storing them as pre-stored image features, mapped to the corresponding light effect types, in the image light effect library;
and initializing and setting a similarity threshold value of each pre-stored image characteristic in the image light effect library.
5. The light effect control method according to claim 4, wherein in the step of initializing a similarity threshold for each pre-stored image feature in the image light effect library,
the similarity threshold is determined individually for the source image of the pre-stored image feature, or,
the similarity threshold is determined individually for the light effect type mapped by the source image of the pre-stored image feature, or,
the similarity threshold is determined individually for the source type of the interface image.
6. The method according to claim 2, wherein before determining whether the highest semantic similarity among the pre-stored image features exceeds a preset similarity threshold and selecting the light effect type mapped to the pre-stored image feature with the highest semantic similarity, the method comprises:
identifying the source type of the source interface of the interface image, calling the preset similarity threshold corresponding to the source type, and determining whether the highest semantic similarity among the pre-stored image features exceeds the preset similarity threshold.
7. The light effect control method according to any one of claims 1 to 6, wherein before calling the light effect control instruction corresponding to the light effect type to control the light effect display unit connected to the current embedded unit to play the corresponding light effect, the method comprises:
determining whether the light effect types corresponding to a plurality of consecutive interface images are consistent; if so, executing the subsequent steps to implement light effect switching, and otherwise skipping the subsequent steps to maintain the original light effect;
or,
and determining whether the semantic similarity between the image features to be matched of consecutive interface images is lower than a preset inter-frame threshold; when it is lower than the inter-frame threshold, executing the subsequent steps to implement light effect switching, and otherwise skipping the subsequent steps to maintain the original light effect.
8. A light effect control apparatus, comprising:
an image acquisition module, configured to acquire a displayed interface image;
a light effect matching module, configured to query, from an image light effect library, the light effect type mapped by a pre-stored image feature that semantically matches an image feature to be matched of the interface image;
and a light effect playing module, configured to call the light effect control instruction corresponding to the light effect type to control the light effect display unit connected with the current embedded unit to play the corresponding light effect.
9. A light effect control device, comprising an embedded unit and a light effect display unit connected to the embedded unit, the embedded unit comprising a central processor and a memory, characterized in that the central processor is configured to invoke and execute a computer program stored in the memory to perform the steps of the method according to any one of claims 1 to 7.
10. A non-transitory readable storage medium storing, in the form of computer-readable instructions, a computer program implemented according to the method of any one of claims 1 to 7, wherein the computer program, when invoked by a computer, performs the steps included in the corresponding method.
CN202211732617.3A 2022-12-30 2022-12-30 Lamp effect control method and device, equipment, medium and product thereof Pending CN115884471A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211732617.3A CN115884471A (en) 2022-12-30 2022-12-30 Lamp effect control method and device, equipment, medium and product thereof


Publications (1)

Publication Number Publication Date
CN115884471A 2023-03-31

Family

ID=85757714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211732617.3A Pending CN115884471A (en) 2022-12-30 2022-12-30 Lamp effect control method and device, equipment, medium and product thereof

Country Status (1)

Country Link
CN (1) CN115884471A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116997062A (en) * 2023-09-22 2023-11-03 深圳市千岩科技有限公司 Control method, device, illumination structure and computer storage medium
CN116997062B (en) * 2023-09-22 2024-02-23 深圳市千岩科技有限公司 Control method, device, illumination structure and computer storage medium
CN117440184A (en) * 2023-12-20 2024-01-23 深圳市亿莱顿科技有限公司 Live broadcast equipment and control method thereof
CN117440184B (en) * 2023-12-20 2024-03-26 深圳市亿莱顿科技有限公司 Live broadcast equipment and control method thereof


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination