CN116630455A - Image generation method based on artificial intelligence drawing, display equipment and storage medium - Google Patents


Info

Publication number
CN116630455A
Authority
CN
China
Prior art keywords
artificial intelligence
elements
painting
weight
display device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310546621.9A
Other languages
Chinese (zh)
Inventor
张文晶
洪峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhai Shenlei Semiconductor Co ltd
Original Assignee
Shenzhen Qianhai Shenlei Semiconductor Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhai Shenlei Semiconductor Co ltd filed Critical Shenzhen Qianhai Shenlei Semiconductor Co ltd
Priority to CN202310546621.9A priority Critical patent/CN116630455A/en
Publication of CN116630455A publication Critical patent/CN116630455A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/001: Texturing; Colouring; Generation of texture or colour
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to artificial intelligence drawing technology and discloses an image generation method based on artificial intelligence drawing, which comprises the following steps: when a display device acquires drawing content information, extracting drawing elements from the drawing content information, wherein the drawing content information comprises a voice instruction and environment data of the environment in which the display device is located; generating a drawing image based on the drawing elements using an artificial intelligence model; and presenting the drawing image on the display device. The application also discloses a display device and a computer-readable storage medium. The application aims to improve the accuracy with which the display device captures the drawing instruction the user intends to express in the current environment, so that the artificial intelligence model can generate a drawing image that better matches the user's intent for the display device to present.

Description

Image generation method based on artificial intelligence drawing, display equipment and storage medium
This application is a divisional application of a prior application (title: Image generation method based on artificial intelligence drawing, display device and storage medium; filing date: 17 February 2023; application number: 202310131373.1).
Technical Field
The present application relates to the field of artificial intelligence painting, and in particular, to an image generating method, a display device, and a computer readable storage medium based on artificial intelligence painting.
Background
A digital photo frame (Digital Photo Frame) is a display device that shows digital photos instead of paper photos. A digital photo frame can fetch pictures from storage and display them in a loop; compared with an ordinary photo frame, it is more convenient for displaying pictures, and its display mode is flexible and changeable.
With the rapid development of AI (Artificial Intelligence) drawing technology, a user can now send drawing instructions to an AI, have the AI generate images, and display them on a digital photo frame. Conventional AI drawing generally relies on an intelligent device such as a computer to issue the drawing instructions, but the display device of a typical digital photo frame is far less full-featured than a computer (i.e., it has a relatively single function and is mainly used for displaying images), and conventional AI drawing instructions are usually entered as text. It is therefore inconvenient for a user to type text instructions directly into the display device of a digital photo frame, and users with weaker writing skills often find it difficult to convert what they think and feel intuitively and accurately into a corresponding text expression to input as a drawing instruction.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present application and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The main purpose of the application is to provide an image generation method based on artificial intelligence drawing, a display device and a computer-readable storage medium, aiming to improve the accuracy with which the display device captures the drawing instruction the user intends to express in the current environment, so that the artificial intelligence model can generate a drawing image that better matches the user's intent for the display device to display.
In order to achieve the above object, the present application provides an image generation method based on artificial intelligence drawing, comprising the steps of:
when drawing content information is acquired by display equipment, drawing elements are extracted from the drawing content information, wherein the drawing content information comprises a voice instruction and environment data of an environment where the display equipment is located, and the drawing elements comprise a first drawing element and a second drawing element; identifying semantic information and voice emotion information of the voice command, and generating the first drawing element according to the semantic information and the voice emotion information; and extracting the second drawing element according to the environmental data;
generating a drawing image based on the drawing elements using an artificial intelligence model, wherein the weight the artificial intelligence model assigns to the first drawing element is greater than the weight it assigns to the second drawing element;
the pictorial image is presented on the display device.
Optionally, the step of identifying semantic information and voice emotion information of the voice command and generating the first drawing element according to the semantic information and the voice emotion information includes:
identifying semantic information and voice emotion information of the voice command;
inquiring preset drawing elements matched with the semantic information, and screening the preset drawing elements by utilizing the voice emotion information;
and taking the screened preset drawing elements as the first drawing elements.
Optionally, the environmental data includes at least one of an environmental sound, an environmental image, and an environmental temperature; the step of extracting the second drawing element from the environmental data includes:
analyzing the environment data to obtain the scene type currently corresponding to the display equipment;
inquiring a preset drawing element matched with the scene type as the second drawing element.
Optionally, the drawing content information further includes a display size of the display device; the drawing elements further include a third drawing element; the image generation method based on the artificial intelligence drawing further comprises the following steps:
inquiring a preset drawing element matched with the display size to serve as the third drawing element;
wherein the weight the artificial intelligence model assigns to the third drawing element is smaller than the weight it assigns to the first drawing element.
Optionally, the drawing element further includes a fourth drawing element; the image generation method based on the artificial intelligence drawing further comprises the following steps:
extracting audio features from the voice instruction, and determining a user type according to the audio features;
inquiring a preset drawing element matched with the user type as the fourth drawing element;
wherein the weight the artificial intelligence model assigns to the fourth drawing element is smaller than the weight it assigns to the first drawing element.
Optionally, the drawing elements further include a fifth drawing element corresponding to the drawing image currently displayed by the display device, where the weight of the artificial intelligence model matched with the fifth drawing element is smaller than the weight of the artificial intelligence model matched with the first drawing element.
Optionally, before the step of generating the drawing image based on the drawing element using the artificial intelligence model, the method further includes:
acquiring a historical time point of generating the painting image last time;
detecting whether the interval duration between the current time point of receiving the voice command and the historical time point is smaller than a preset duration or not;
if yes, controlling the artificial intelligence model to increase the weight assigned to the first drawing element and decrease the weight assigned to the second drawing element.
Optionally, after the step of detecting whether the interval duration between the current time point when the voice command is received and the historical time point is less than the preset duration, the method further includes:
if not, controlling the artificial intelligence model to decrease the weight assigned to the first drawing element and increase the weight assigned to the second drawing element.
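The interval-based weight adjustment in the two optional steps above can be sketched as follows. The initial weight values, step size and field names are illustrative assumptions, since the application does not specify them:

```python
from dataclasses import dataclass

@dataclass
class ElementWeights:
    first: float = 0.6   # weight assigned to the voice-derived first drawing element
    second: float = 0.4  # weight assigned to the environment-derived second drawing element

def adjust_weights(weights, interval_seconds, preset_seconds, step=0.1):
    """If the interval since the last generated image is below the preset
    duration, favour the first drawing element; otherwise favour the second."""
    if interval_seconds < preset_seconds:
        weights.first = min(1.0, weights.first + step)
        weights.second = max(0.0, weights.second - step)
    else:
        weights.first = max(0.0, weights.first - step)
        weights.second = min(1.0, weights.second + step)
    return weights
```

The intuition is that a short interval suggests the user is iterating on the previous voice command, so the explicit voice content should outweigh the ambient environment data even more strongly.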
To achieve the above object, the present application also provides a display device, comprising: a memory, a processor, and an image generation program based on artificial intelligence drawing that is stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the image generation method based on artificial intelligence drawing described above.
To achieve the above object, the present application also provides a computer-readable storage medium having stored thereon an image generation program based on an artificial intelligence drawing, which when executed by a processor, implements the steps of the image generation method based on an artificial intelligence drawing as described above.
According to the image generation method, display device and computer-readable storage medium based on artificial intelligence drawing, a user can send a voice instruction to the display device to express the drawing content he or she wants, while the display device can also actively collect environment data of the environment in which the user is located as a supplement to the drawing content. Thus, in addition to learning the user's subjective drawing intent from the text corresponding to the voice instruction, the device can combine voice emotion information and environment data to capture the drawing intent in the user's subconscious. This makes it convenient for the user to issue drawing instructions to the display device, improves the accuracy of capturing the drawing instruction the user intends to express in the current environment, and allows the drawing image generated by the artificial intelligence model to reflect the user's thoughts and feelings when giving the voice instruction, so that the display device presents a drawing image that better matches the user's intent.
Drawings
FIG. 1 is a schematic diagram of steps of an image generation method based on artificial intelligence painting in an embodiment of the application;
fig. 2 is a schematic block diagram of an internal structure of a display device according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below are exemplary and intended to illustrate the present application and should not be construed as limiting the application, and all other embodiments, based on the embodiments of the present application, which may be obtained by persons of ordinary skill in the art without inventive effort, are within the scope of the present application.
Referring to fig. 1, in an embodiment, the artificial intelligence drawing-based image generation method includes:
step S10, when drawing content information is acquired by display equipment, drawing elements are extracted from the drawing content information, wherein the drawing content information comprises a voice instruction and environment data of an environment where the display equipment is located, and the drawing elements comprise a first drawing element and a second drawing element; identifying semantic information and voice emotion information of the voice command, and generating the first drawing element according to the semantic information and the voice emotion information; and extracting the second drawing element according to the environmental data;
Step S20, generating a drawing image based on the drawing elements using an artificial intelligence model, wherein the weight the artificial intelligence model assigns to the first drawing element is greater than the weight it assigns to the second drawing element;
and step S30, displaying the painting image on the display equipment.
In this embodiment, the executing terminal may be the display device itself, or another control device or apparatus that controls the display device. The following description takes the display device as the executing terminal.
Optionally, the display device may be an electronic device mainly used for displaying digital images, such as a digital photo frame. Its display size may be large, medium or small, determined by the user's display requirements in different scenes: for home scenes the display device is generally of medium or small specification, while for large display scenes such as malls and museums it is generally of medium or large specification.
As shown in step S10, the display device is provided with a microphone module in addition to basic function modules such as display, storage and communication, and the microphone module can be used for receiving voice instructions sent by a user.
Optionally, the microphone module may be further configured to detect an environmental sound of an environment in which the display device is located as the environmental data.
Alternatively, when the user wants to send a drawing instruction to the display device, so that the display device draws using artificial intelligence technology and generates a corresponding drawing image to display, the user may speak the voice instruction to the display device within its voice acquisition range. Besides keywords instructing the display device to perform artificial intelligence drawing, the voice instruction spoken by the user may contain drawing elements of the image the user wants to generate.
The basic elements of a painting include the basic principles of combining elements into a complete work, such as diversity, unity, proportion, symmetry, balance, rhythm, contrast and harmony. A drawing element may also be a painting style, such as abstract painting, oil painting, cartoon or ink painting; or the main content of the painting, such as a person (e.g., a public figure, a famous person, a specific person), a scene (e.g., a war scene, a sports scene, a four-seasons scene) or a landscape (e.g., mountains, waterfalls, deserts). That is, anything that contributes to forming the drawing content can be treated as a drawing element.
For example, the voice instruction spoken by the user may be "Please generate a picture in the oil painting style containing blue sky, white clouds, mountains and rivers." The terminal can extract the instruction to generate a picture and identify it as an artificial intelligence drawing instruction, and keywords such as "oil painting style", "blue sky and white clouds", "mountains" and "rivers" can be used as drawing elements.
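The keyword refinement in this example can be sketched with simple matching. The trigger words and element vocabulary below are hypothetical stand-ins for what would, in practice, be a trained semantic-recognition model:

```python
# Hypothetical vocabularies; a production system would use semantic
# recognition rather than substring matching.
DRAW_TRIGGERS = {"generate", "draw", "paint"}
KNOWN_ELEMENTS = {"oil painting", "blue sky", "white cloud", "mountain", "river"}

def parse_command(text):
    """Return (is_drawing_instruction, matched drawing elements)."""
    text = text.lower()
    is_drawing_instruction = any(t in text for t in DRAW_TRIGGERS)
    elements = sorted(e for e in KNOWN_ELEMENTS if e in text)
    return is_drawing_instruction, elements
```

For the example command above, the function flags it as a drawing instruction and returns the matched keywords as candidate drawing elements.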
Optionally, when the display device receives the voice command sent by the user, the display device may also collect the environmental data of the environment where the display device is located synchronously, and use the currently obtained voice command and the environmental data together as the obtained drawing content information.
Alternatively, the display device may collect and update the environment data of its surroundings at regular intervals, and when receiving the user's voice instruction, use the most recently collected environment data together with the voice instruction obtained at the current time as the acquired drawing content information. The interval for periodically collecting environment data may be one week, one month, one quarter, and so on.
Optionally, the environmental data collected by the display device includes at least one of an environmental sound, an environmental image, and an environmental temperature. The microphone module of the display device is used for receiving voice instructions of a user and collecting environmental sounds; and/or the display equipment is further provided with a camera module, and the camera module can be used for collecting environment images; and/or the display device is further provided with a temperature sensor, which may be used to collect the ambient temperature.
Alternatively, when the display device acquires the drawing content information, it may extract the drawing elements from it. If the display device is configured with sufficient computing capability, element extraction can be completed locally; if not, the computing power of a local-end device or cloud device communicatively connected to the display device can be used to complete the extraction.
Optionally, to extract the first drawing element corresponding to the voice instruction, the terminal may identify the semantic information and voice emotion information of the voice instruction and generate the first drawing element from them. The terminal may query preset drawing elements matching the semantic information, screen the queried preset drawing elements using the voice emotion information, and use the preset drawing elements remaining after screening as the first drawing element; or the terminal may separately query preset drawing elements matching the semantic information and preset drawing elements matching the voice emotion information, and use both together as the first drawing element.
It should be noted that the terminal may recognize the text of the voice instruction using speech recognition technology, and then identify semantic information and voice emotion information from the text using semantic recognition and speech emotion recognition technology. The semantic information obtained by the terminal contains at least one keyword, and the voice emotion information contains at least one emotion. Emotions can be roughly divided into positive, negative and neutral: positive emotions (such as pleasure, excitement, gratitude and celebration) arise when positive value increases or negative value decreases; negative emotions (such as pain, sadness and loss) arise when positive value decreases or negative value increases; and neutral emotions are those without a strong positive or negative expression.
Optionally, the terminal presets a plurality of preset drawing elements, and each preset drawing element is associated with at least one keyword. Since a word may have a plurality of paraphrasing, the same keyword may be associated with a plurality of preset drawing elements.
Optionally, after obtaining the semantic information and voice emotion information, the terminal can extract each keyword in the semantic information and query a database for the preset drawing elements associated with those keywords. Since the same word may carry different meanings when the user's emotion differs, when multiple preset drawing elements are associated with the same keyword, the terminal can assign each preset drawing element a corresponding emotion label in advance. After querying the preset drawing elements associated with each keyword, it further screens them using the voice emotion information, keeping only the preset drawing elements whose emotion label matches the emotion in the voice emotion information, and uses the remaining elements as the first drawing element. In this way, drawing elements that better fit the user's intent can be extracted from the user's semantics, and the drawing image later generated by artificial intelligence technology from the extracted elements will better match what the user has in mind.
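The emotion-label screening described above can be sketched as a two-stage lookup. The preset-element table below is a hypothetical example; the keyword, elements and labels are not taken from the application:

```python
# Hypothetical preset-element table: keyword -> [(element, emotion_label)].
# Labels follow the coarse split used above: positive / negative / neutral.
PRESET_ELEMENTS = {
    "rain": [("gentle spring drizzle", "positive"),
             ("grey storm clouds", "negative"),
             ("light rainfall", "neutral")],
}

def first_elements(keywords, emotion):
    """Query the elements for each semantic keyword, then keep only those
    whose emotion label matches the recognised voice emotion."""
    selected = []
    for kw in keywords:
        for element, label in PRESET_ELEMENTS.get(kw, []):
            if label == emotion:
                selected.append(element)
    return selected
```

The same keyword "rain" thus yields different first drawing elements depending on whether the user sounded happy or sad when speaking.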
Alternatively, after obtaining the semantic information and voice emotion information, the terminal can extract each keyword in the semantic information, query the database for the preset drawing elements associated with those keywords, then determine keywords corresponding to the emotion described by the voice emotion information and query the database for the preset drawing elements associated with them. Finally, the terminal uses the preset drawing elements matching the semantic information and those matching the voice emotion information together as the first drawing element. The drawing image subsequently generated by artificial intelligence technology from the extracted elements can thus incorporate the user's emotional elements and express the user's emotion in an artistic, painterly form.
The first drawing element and the second drawing element may each include a plurality of preset drawing elements.
Optionally, when extracting the second drawing element from the environment data, the terminal may analyze the environment data to obtain at least one scene type currently corresponding to the display device (i.e., the type of scene to which the environment where the display device is located belongs), and then query the preset drawing elements matching that scene type as the second drawing element.
Scene types can be varied: indoor scenes (which may be further divided into large, medium and small indoor scenes), outdoor scenes, weather scenes (such as rainy, cloudy and sunny days), noisy scenes (such as a crowded mall), quiet environments (such as a home or art gallery), four-seasons scenes (spring, summer, autumn and winter), temperature scenes (which may be divided into colder scenes such as museums, art galleries and ski halls, normal-temperature scenes such as homes, and hotter scenes such as saunas and hot springs), tone scenes (determined mainly by the dominant colours of the space where the display device is located, such as wall colours, layout theme colours and lamp colours indoors, or natural colours and exterior wall colours outdoors), and light scenes (which may be divided into bright, regular and dark lighting scenes), and so on.
Optionally, the terminal may be preconfigured with at least one preset drawing element for each scene type. For example, each season in the four-seasons scene may be associated with its characteristic scenery (such as spring wind, summer rain, the autumn moon and winter snow) or symbols (such as a kite for spring, ice cream for summer, maple leaves for autumn and a snowman for winter) as preset drawing elements. Indoor scenes can be associated with indoor decorations, furniture and household appliances as preset drawing elements, and outdoor scenes with outdoor scenery (such as animals and plants). Large and medium indoor scenes can be associated with preset drawing elements related to keywords such as "grand" and "broad", while small indoor scenes can be associated with elements related to keywords such as "warm", "fine" and "delicate". Indoor scenes, noisy scenes, hotter scenes and dark lighting scenes can each be associated with warm colour tones as preset drawing elements; outdoor scenes, quiet environments, colder scenes and bright lighting scenes with cool colour tones; and normal-temperature scenes and regular lighting scenes with neutral tones. A tone scene can be matched with tones of the same or similar colours as preset drawing elements according to its specific tone.
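The scene-to-element association above reduces to a lookup table. The entries below are illustrative; a real deployment would populate the table from engineering configuration or from the trained model described next:

```python
# Illustrative scene-to-element table; keys and values are assumptions
# drawn from the examples above, not an exhaustive configuration.
SCENE_PRESETS = {
    "spring": ["kite", "spring wind"],
    "winter": ["snowman", "winter snow"],
    "indoor_small": ["warm", "delicate"],
    "noisy": ["warm colour palette"],
    "quiet": ["cool colour palette"],
}

def second_elements(scene_types):
    """Collect the preset drawing elements for every recognised scene type."""
    elements = []
    for scene in scene_types:
        elements.extend(SCENE_PRESETS.get(scene, []))
    return elements
```

Unrecognised scene types simply contribute no elements, so the method degrades gracefully when the environment analysis is inconclusive.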
It should be understood that the above combinations of scene types and preset drawing elements are merely exemplary. Because the configurable scene types and drawing elements are both numerous, the possible combinations are difficult to enumerate exhaustively in limited space; in practice they can be set by the relevant engineers when programming the related functionality, and are therefore not listed one by one here. Of course, the engineers may also collect training samples of various scene types, label the corresponding preset drawing elements in the samples, and iteratively train a machine learning model on a sufficient number of samples (for example, one thousand), so that the terminal can autonomously learn to assign the appropriate preset drawing elements to different scene types, saving the cost of manual configuration.
Optionally, the microphone module of the display device is provided with a microphone array. When the display device collects environmental sound with the microphone array, it can analyze the sound data to obtain information such as whether the environmental sound contains echo, how near or far and how strong the echo is, and the intensity of the environmental sound, and then determine the scene type of the environment where the display device is currently located based on this analysis.
For example, indoor and outdoor scenes can be distinguished by whether the environmental sound contains echo (if it does, the device is likely in an indoor scene); large, medium and small indoor scenes can be distinguished by the distance and strength of the echo (for example, the stronger the echo, the more open the current indoor space, suggesting a large indoor scene). Whether the environment is noisy or quiet can likewise be determined from the intensity of the environmental sound.
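These heuristics can be sketched as a small classifier. The echo-strength and loudness thresholds are illustrative assumptions; the application gives no concrete values:

```python
def classify_by_sound(has_echo, echo_strength, sound_level_db,
                      noisy_threshold_db=70.0):
    """Rough scene inference from microphone-array analysis.
    echo_strength is a normalised 0..1 value; thresholds are assumptions."""
    scenes = []
    if has_echo:
        scenes.append("indoor")
        # A stronger echo suggests a more open (larger) indoor space.
        scenes.append("indoor_large" if echo_strength > 0.5 else "indoor_small")
    else:
        scenes.append("outdoor")
    scenes.append("noisy" if sound_level_db >= noisy_threshold_db else "quiet")
    return scenes
```

The returned scene types can then be fed directly into the scene-to-element lookup described earlier.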
Optionally, if the display device is provided with a camera module, one or more images of the environment where the display device is located can be captured through the camera module, and the terminal then determines the scene type currently corresponding to the display device by combining image analysis technology with a pre-stored scene graph library.
For example, the terminal may determine whether the device is currently in a noisy environment or a quiet scene according to the average flow of people across the several environment images; indoor and outdoor scenes, tone scenes, bright and dark scenes, four-seasons scenes and the like can be quickly recognized with conventional image recognition techniques, which are not repeated here.
Optionally, if the display device is provided with a temperature sensor, the ambient temperature of the environment where the display device is located can be collected through the temperature sensor, and the terminal can then determine the scene type currently corresponding to the display device by analyzing the ambient temperature over a certain period of time (in some cases combined with the latitude and longitude of the display device and the current date).
For example, various temperature scenes can be distinguished according to the temperature interval in which the ambient temperature falls; whether the device is currently in an indoor or outdoor environment can be judged according to whether the ambient temperature fluctuates greatly within one day; and the four seasons can be distinguished according to the ambient temperature together with the latitude and longitude or current date of the display device.
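The three temperature-based inferences above might be sketched as follows; the interval boundaries, the 5 °C daily-fluctuation threshold and the month-based season mapping are all assumptions not specified in the text.

```python
def temperature_scene(avg_temp_c: float) -> str:
    # Hypothetical interval boundaries; the embodiment leaves them open.
    if avg_temp_c < 5:
        return "cold"
    if avg_temp_c < 18:
        return "cool"
    if avg_temp_c < 28:
        return "warm"
    return "hot"

def infer_location(daily_temps_c: list[float], fluctuation_threshold: float = 5.0) -> str:
    # Indoor temperatures are regulated, so a small daily swing suggests indoors.
    swing = max(daily_temps_c) - min(daily_temps_c)
    return "indoor" if swing < fluctuation_threshold else "outdoor"

def infer_season(month: int, northern_hemisphere: bool = True) -> str:
    # Dec-Feb -> 0 (winter), Mar-May -> 1 (spring), and so on;
    # latitude decides which hemisphere's mapping applies.
    seasons = ["winter", "spring", "summer", "autumn"]
    idx = (month % 12) // 3
    if not northern_hemisphere:
        idx = (idx + 2) % 4
    return seasons[idx]
```

Each returned label would map to a scene type and, from there, to preset drawing elements in the database.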
Optionally, the terminal may identify one or more scene types according to at least one of the environmental sound, the environmental image and the environmental temperature, and then query a database for preset drawing elements matched with each scene type obtained by the identification, so as to obtain the second drawing element.
As described in step S20, this embodiment constructs in advance, based on artificial intelligence techniques and a machine learning model (which may be a diffusion probability model), an artificial intelligence model dedicated to artificial intelligence drawing, which may be deployed in the display device. Of course, to save the storage and computing power of the display device, the artificial intelligence model may instead be deployed in a local-end device or cloud device that is communicatively connected to the display device, and the display device may invoke the model deployed there through data interaction with that device.
It should be understood that, since open-source artificial intelligence drawing technology already exists, the logic and manner of training the related models are not described here; the relevant engineer may optimize and improve an existing trained artificial intelligence drawing model and adjust some related parameters to obtain an artificial intelligence model suitable for this embodiment, or assemble a model based on a progressively refined deep learning framework to the same end. Of course, conditions permitting, the relevant engineers may also write and train an artificial intelligence model suitable for this embodiment from scratch.
Of course, compared with existing artificial intelligence painting models, one of the main improvements made by the artificial intelligence model provided in this embodiment is to assign corresponding weights to the various painting elements: the first painting elements corresponding to the voice instruction and the second painting elements corresponding to the environmental data are each matched with corresponding weights (existing artificial intelligence painting does not distinguish painting-element weights in this way; more commonly, the weights of all painting elements are identical).
The weight rule set by the artificial intelligence model provided in this embodiment includes: the weight of the first drawing element (the first weight) is greater than that of the second drawing element (the second weight); for example, the ratio of the first weight to the second weight may be 0.7:0.3, 0.6:0.4, 0.8:0.2, etc. Even after the relevant engineer sets the initial ratio of the first weight to the second weight in the artificial intelligence model, the model may automatically adjust and optimize that ratio in subsequent operation according to feedback from training or learning results, but the adjusted ratio must not violate the weight rule; for example, the first weight may be constrained to remain greater than or equal to 0.51.
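A minimal sketch of this weight rule, assuming two weights normalized to sum to 1 and using the 0.51 floor mentioned as an example above:

```python
FIRST_WEIGHT_FLOOR = 0.51   # example floor taken from the text

def adjust_weights(first: float, second: float, delta: float) -> tuple:
    """Shift `delta` of weight from the first element to the second,
    clamping so the first (voice-derived) weight never drops below
    the floor, which keeps first weight > second weight."""
    new_first = max(first - delta, FIRST_WEIGHT_FLOOR)
    new_second = 1.0 - new_first   # weights stay normalized to 1
    return new_first, new_second
```

Any automatic optimization driven by training feedback would call such a clamp after each proposed update, so the rule survives every adjustment.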
In the training and learning logic of the artificial intelligence model, the larger the weight assigned to an object, the more the model focuses on and learns the object's associated features, so that those features are reflected more strongly in the subsequently output result.
Since the voice command is, after all, the user's most subjective expression of drawing intention, assigning more weight to the first drawing element makes the drawing image subsequently generated by the artificial intelligence model focus on expressing the features associated with the first drawing element, yielding a drawing image that better matches what the user has in mind. Moreover, rather than simply extracting all drawing elements from the text obtained by speech-to-text conversion, this embodiment adds voice emotion analysis and environmental data analysis to extract drawing elements the user has not directly "spoken". People are, after all, emotional beings whose moods are easily affected by the environment (that is, different environments produce different drawing intentions; for example, a person in a noisy environment may want to see a drawing image based on warm colors, while in a quiet environment the same person may want one based on cold colors), yet their power of expression may be limited, or some drawing intentions may remain subconscious (often produced by environmental influence but never consciously articulated), so that descriptions of these intentions are missing from the text corresponding to the voice command. By adding voice emotion analysis and environmental data analysis as supplementary inputs, the drawing elements fed to the artificial intelligence model better reflect the user's actual drawing intention in the current environment, and AI painting images that better fit the user's mind can be obtained more easily. In contrast, this embodiment's scheme of supplementing the user's painting intent from voice emotion and environmental data is generally absent from, or poorly realized in, existing artificial intelligence painting technology.
Optionally, after extracting the first drawing element and the second drawing element, the terminal inputs these drawing elements into the pre-deployed and trained artificial intelligence model, so that the model matches the drawing features associated with each element (i.e., image features related to drawing), fits the distribution of those features, and finally outputs the generated drawing image.
As described in step S30, after the display device obtains the drawing image generated by the artificial intelligence model based on the drawing elements, it can display the generated drawing image on its display screen for the user to view.
In an embodiment, the user can issue a voice command to the display device to express the desired drawing content, while the display device also actively acquires environmental data of the user's surroundings as a supplement to that content. In addition to learning the user's subjective drawing intention from the text description corresponding to the voice command, the system can also capture the user's subconscious drawing intention by combining voice emotion information and the environmental data. This makes it convenient for the user to issue a drawing command, improves the accuracy with which the drawing intention the user wishes to express in the current environment is captured, and allows the drawing image generated by the artificial intelligence model to better reflect that intention, yielding a drawing image closer to the user's mind for the display device to display.
In an embodiment, based on the above embodiment, the drawing content information further includes a display size of the display device; the drawing elements further include a third drawing element; the image generation method based on the artificial intelligence drawing further comprises the following steps:
inquiring a preset drawing element matched with the display size to serve as the third drawing element;
the weight matched by the artificial intelligence model for the third drawing element is smaller than the weight matched for the first drawing element.
In this embodiment, a plurality of preset size intervals (with mutually different numerical ranges) are divided in advance, and each preset size interval is associated with corresponding preset drawing elements. For example, preset size intervals with large values can be associated with preset drawing elements related to keywords such as magnificent and broad, while preset size intervals with small values can be associated with preset drawing elements related to keywords such as warm, fine and exquisite.
Optionally, the drawing content information acquired by the display device includes a display size of a local end of the display device in addition to the voice command and the environmental data.
Optionally, when extracting drawing elements from the drawing content information, the terminal matches a corresponding preset drawing element as the third drawing element according to the preset size interval in which the display size of the display device falls. After obtaining the first, second and third drawing elements, the terminal inputs them together into the artificial intelligence model to generate the drawing image.
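The size-interval matching step above could look like the following sketch; the interval boundaries (in inches) and the keyword lists are hypothetical stand-ins for the preset drawing elements the text describes.

```python
# Each tuple: (lower bound inclusive, upper bound exclusive, preset elements).
# Boundaries and keywords are invented for illustration.
SIZE_INTERVALS = [
    (0.0, 32.0, ["warm", "delicate", "exquisite"]),        # small screens
    (32.0, 65.0, ["balanced composition"]),                # medium screens
    (65.0, float("inf"), ["magnificent", "broad vista"]),  # large screens
]

def third_element_for(display_size_inches: float) -> list:
    """Return the preset drawing elements for the interval containing the size."""
    for low, high, elements in SIZE_INTERVALS:
        if low <= display_size_inches < high:
            return elements
    return []
```

The returned keywords then join the first and second drawing elements as input to the artificial intelligence model.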
Note that existing artificial intelligence painting technology pays no attention to the inherent relationship between the content of an AI painting image and the display size of the device that displays it. Even where existing technology lets the user set the size of the output AI painting image, that setting does not affect the specific painting content; different size settings merely crop the same painted content to different dimensions. Compared with such prior art, this embodiment fully captures the association between the generated AI painting content and the display size of the display device (for example, large display sizes can generally be associated with preset painting elements related to magnificent painting scenes, expressed by adding more characters and scenery, arranging grander settings, or depicting famous historical scenes, while small display sizes can be associated with exquisite settings or single characters and objects as preset painting elements, so as to highlight the main subject within a limited space). By allocating corresponding preset painting elements to different display sizes, the artificial intelligence model can learn the painting features corresponding to each display size and, when generating the painting image, fit them together with the painting features corresponding to the first and second painting elements, so that the finally generated painting image also reflects the painting features corresponding to the third painting element (i.e., the content of the generated painting image is influenced by the display size of the display device, and changes when the display size changes), and display devices of different display sizes can thus display painting images better suited to them.
Considering that the user's painting intention is sometimes also affected by the display size of the display device (if the display size is large, the user may want a grander, more imposing scene; if it is small, the user may want a more exquisite painting focused on a single figure or object), a user standing in front of display devices of different display sizes may hold different painting intentions. Users with some painting training may express this readily, but ordinary users or those lacking a painting foundation often struggle to articulate the corresponding intention, or it remains only at the subconscious level. By fully capturing the association between AI painting content and the display size of the display device, this embodiment supplements the user's painting intention in this respect, further improving the accuracy with which the painting intention the user wishes to express in the current environment is captured, so that the artificial intelligence model can more easily generate a painting image conforming to that intention.
Of course, since the voice command is after all the most subjective painting intention expression of the user, the artificial intelligence model should also satisfy the set weight rule for the weight (which may be marked as the third weight) matched by the third painting element: the third weight is less than the first weight. As for the third weight and the second weight, both may be set equal, or the third weight may be set smaller or larger than the second weight.
For example, the ratio between the first weight, the second weight, and the third weight may be 0.5:0.3:0.2, 0.6:0.2:0.2, 0.6:0.1:0.3.
Alternatively, the second weight and the third weight may be allocated according to the number of drawing elements each corresponds to, with more elements receiving a larger weight. For example, since the number of second drawing elements extracted from the environmental data is often greater than the number of third drawing elements, the weights may be set so that the first weight > the second weight > the third weight.
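The count-proportional allocation described above can be sketched as follows, assuming the first weight has been fixed beforehand and the remainder is split between the second and third weights in proportion to their element counts:

```python
def split_remaining_weight(first_weight: float, n_second: int, n_third: int) -> tuple:
    """Divide the weight left over after the first weight between the
    second and third weights, proportionally to how many drawing
    elements each group contributed."""
    remainder = 1.0 - first_weight
    total = n_second + n_third
    if total == 0:
        return 0.0, 0.0  # no supplementary elements to weight
    second = remainder * n_second / total
    third = remainder * n_third / total
    return second, third
```

With a first weight of 0.5 and three second versus two third elements, this yields the 0.5:0.3:0.2 split used as an example earlier in the text.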
In an embodiment, on the basis of the above embodiment, the drawing element further includes a fourth drawing element; the image generation method based on the artificial intelligence drawing further comprises the following steps:
Extracting audio features from the voice instruction, and determining a user type according to the audio features;
inquiring a preset drawing element matched with the user type as the fourth drawing element;
the weight matched by the artificial intelligence model for the fourth drawing element is smaller than the weight matched for the first drawing element.
In this embodiment, the drawing elements may include a fourth drawing element in addition to the first drawing element and the second drawing element. Of course, in some embodiments, the drawing elements may also include a third drawing element.
Optionally, the terminal may extract audio features from the voice command through audio analysis, and then construct a user portrait from the extracted features, so as to obtain the user type of the user who issued the voice command.
It should be understood that people of different ages and sexes generally exhibit corresponding audio characteristics, and big data analysis can yield representative audio features for each such group. After the preset audio features corresponding to each group are stored in the terminal, user type identification proceeds as follows: the audio features of the user to be identified are extracted, the matching preset audio features are queried in the database, and the user type is obtained from the group associated with the queried audio features.
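One way to realize the profile matching above is a nearest-neighbor lookup against stored per-group feature vectors. Everything concrete here is hypothetical: the feature layout (e.g. scaled mean pitch and speaking rate) and the reference values are invented for illustration, not taken from any real dataset.

```python
import math

# Hypothetical reference profiles: [mean pitch in Hz / 100, speaking rate].
REFERENCE_PROFILES = {
    "child": [3.0, 1.2],
    "adult_female": [2.2, 1.0],
    "adult_male": [1.2, 1.0],
    "elderly": [1.4, 0.7],
}

def identify_user_type(features: list) -> str:
    """Return the user type whose stored profile is nearest (Euclidean)
    to the extracted audio feature vector."""
    return min(REFERENCE_PROFILES,
               key=lambda t: math.dist(features, REFERENCE_PROFILES[t]))
```

A production system would of course use a trained classifier over many more features, but the query-and-match structure the text describes is the same.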
Since different types of users generally have corresponding drawing preferences, a corresponding drawing style can be configured in advance for each user type as a preset drawing element; after the user type corresponding to the voice command is determined, the preset drawing element matched with that user type can be queried in the database as the fourth drawing element. For example, when the user type is elderly, the associated preset drawing element may be an ink-wash or landscape painting style; when the user type is adult female, it may be an oil painting or impressionist style; when the user type is child, it may be a cartoon or simple line-drawing style.
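The user-type lookup above amounts to a small table query; the style table below mirrors the examples given in the text (elderly, adult female, child) and is illustrative rather than exhaustive.

```python
# Style associations taken from the examples in the text.
PRESET_STYLE_BY_USER_TYPE = {
    "elderly": ["ink-wash painting", "landscape painting"],
    "adult_female": ["oil painting", "impressionist style"],
    "child": ["cartoon", "simple line drawing"],
}

def fourth_element_for(user_type: str) -> list:
    """Return the preset drawing elements for a user type, or no extra
    style constraint when the type is unrecognized."""
    return PRESET_STYLE_BY_USER_TYPE.get(user_type, [])
```

The returned styles are appended to the other drawing elements before the artificial intelligence model is invoked.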
Optionally, after obtaining the first drawing element, the second drawing element, and the fourth drawing element (which may further include a third drawing element in some embodiments), the terminal inputs the drawing elements together into the artificial intelligence model to generate the drawing image.
It should be noted that existing artificial intelligence drawing technology pays no attention to the inherent relationship between the person issuing a drawing instruction and the drawing content, ignoring the fact that different types of users may want different drawing content even when issuing the same instruction. In this embodiment, by fully capturing the association between user type and AI painting content and allocating corresponding preset painting elements to different user types, the artificial intelligence model can learn the painting features corresponding to each user type and fit them, when generating the painting image, together with the features corresponding to the first and second painting elements (and, in some embodiments, the third). The finally generated painting image thus also reflects the painting features corresponding to the fourth painting element (i.e., the content of the generated image is influenced by the user type and changes when the user type changes), so that the AI painting image better matches the mind of the corresponding type of user.
Of course, since the voice command is, after all, the user's most subjective expression of painting intention, the weight matched by the artificial intelligence model for the fourth painting element (which may be denoted the fourth weight) should likewise satisfy the set weight rule: the fourth weight is less than the first weight.
Alternatively, since the number of second drawing elements generally extracted from the environmental data is often greater than the number of fourth drawing elements, the first weight > the second weight > the fourth weight may be set. As for the third weight and the fourth weight, both may be set equal, or the third weight may be set smaller or larger than the fourth weight.
In an embodiment, on the basis of the foregoing embodiments, the drawing elements further include a fifth drawing element corresponding to the drawing image currently displayed by the display device, where the weight matched by the artificial intelligence model for the fifth drawing element is smaller than the weight matched for the first drawing element.
In this embodiment, when the display device obtains drawing content information, if the image currently displayed is itself a previous drawing image produced by steps S10-S30, the terminal will have stored the drawing elements corresponding to that image (i.e., the elements used when generating it), and those elements can be reused as the fifth drawing element.
Optionally, after the terminal obtains the first drawing element and the second drawing element (and may further include a third drawing element and/or a fourth drawing element in some embodiments), these drawing elements are input into the artificial intelligence model together with the fifth drawing element to generate the drawing image.
Alternatively, after the latest generated drawing image is obtained, the display device may switch the currently displayed drawing image to the latest generated drawing image. Of course, since the voice command is after all the most subjective painting intent expression of the user, the artificial intelligence model should also satisfy the set weight rule for the weight (which may be labeled as the fifth weight) matched for the fifth painting element: the fifth weight is less than the first weight.
And since the fifth drawing element is, after all, derived from the drawing image about to be replaced, there is no need to give it a higher weight than the other types of drawing elements: the fifth weight is set smaller than the first weight, and may further be set smaller than all the other weights (e.g., the second, third and fourth weights).
It should be noted that, since the drawing image currently displayed by the display device evidently holds some appeal for the user, giving the fifth drawing element (derived from that image) a certain weight and including it in the machine learning of the new drawing image allows the finally generated image to retain, to some extent, some of the painting style and content of the currently displayed image, thereby providing the display device with a drawing image that fits the user's mind as closely as possible.
In an embodiment, in addition to the foregoing embodiment, before the step of generating a drawing image based on the drawing element using an artificial intelligence model, the method further includes:
acquiring a historical time point of generating the painting image last time;
detecting whether the interval duration between the current time point of receiving the voice command and the historical time point is smaller than a preset duration or not;
if yes, controlling the artificial intelligence model to increase the weight matched for the first drawing element and decrease the weight matched for the second drawing element.
In this embodiment, when the display device receives a voice command (or when drawing content information is acquired), the historical time point at which a drawing image was last generated by executing steps S10-S30 is obtained. The interval duration between the current time point at which the voice command is received and that historical time point is then determined, and whether this interval is shorter than a preset duration is detected. The preset duration measures the length of the interval and can be set according to actual conditions, e.g. 3 days, 7 days, 10 days or 15 days.
Optionally, if the terminal detects that the interval duration is shorter than the preset duration, it may control the artificial intelligence model to increase the first weight and decrease the second weight by a certain proportion (e.g., increase the first weight by 0.1 and decrease the second weight by 0.1). If the drawing elements include others besides the first and second (such as at least one of the third, fourth and fifth drawing elements), the weights of those other elements may be decreased at the same time. Then, after the artificial intelligence model stores the updated weights, the terminal inputs the currently extracted drawing elements into the model to generate a new drawing image.
Optionally, if the interval duration detected by the terminal is greater than or equal to the preset duration, the weights corresponding to each type of drawing element may be kept unchanged, and the terminal may input the currently extracted drawing elements into the artificial intelligence model so that it continues to generate new drawing images with the current weights.
Alternatively, if the interval duration detected by the terminal is greater than or equal to the preset duration, the artificial intelligence model may be controlled to decrease the first weight and increase the second weight by a certain proportion (e.g., decrease the first weight by 0.1 and increase the second weight by 0.1), provided the weight rule is still satisfied (i.e., the first weight must remain greater than the second weight). If the drawing elements include others besides the first and second (e.g., at least one of the third, fourth and fifth drawing elements), the weights of those other elements may be increased at the same time. Then, after the artificial intelligence model stores the updated weights, the terminal inputs the currently extracted drawing elements into the model to generate a new drawing image.
It should be noted that if the interval between two successive generations of painting images is too short (i.e., shorter than the preset duration), the user was likely dissatisfied with the previously generated image; in this case, the weight of the first painting element may be appropriately increased and the weights of the other elements decreased, so that the subsequently generated image places more emphasis on the painting intention the user directly voiced, yielding an image better matching the user's subjective intention for display. If the interval is long (i.e., greater than or equal to the preset duration), the user was likely satisfied with the previous image; the current weights of the various painting elements can then continue to be used, or the weights of elements other than the first can be appropriately increased, so that subsequently generated images continue to reflect, to a greater degree, painting intentions the user did not directly voice (such as subconscious ones), producing images closer to the user's mind for display.
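The interval-based adjustment in this embodiment can be condensed into the following sketch. The 0.1 step comes from the text; the 7-day preset duration and the exact guard that keeps the first weight above the second after a decrease are assumptions.

```python
def adjust_for_interval(weights: dict, interval_days: float,
                        preset_days: float = 7.0, step: float = 0.1) -> dict:
    """Return an updated copy of the per-element weights based on how long
    it has been since the previous drawing image was generated."""
    w = dict(weights)
    others = [k for k in w if k != "first"]
    if not others:
        return w
    if interval_days < preset_days:
        # Short gap: the user likely disliked the last image, so lean
        # harder on what was explicitly spoken (the first element).
        w["first"] += step
        for k in others:
            w[k] -= step / len(others)
    elif w["first"] - step > w["second"] + step / len(others):
        # Long gap: shift weight toward the implicit elements, but only
        # if the rule "first weight > second weight" still holds after.
        w["first"] -= step
        for k in others:
            w[k] += step / len(others)
    return w
```

When neither branch fires (a long gap where a decrease would violate the weight rule), the weights are simply kept as they are, matching the "keep unchanged" option described above.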
Referring to fig. 2, a display device is further provided in an embodiment of the present application, and the internal structure of the display device may be as shown in fig. 2. The display device includes a processor, a memory, a communication interface, and a database connected by a system bus. Wherein the processor is configured to provide computing and control capabilities. The memory of the display device includes a nonvolatile storage medium, an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the display device is used for storing an image generation program based on artificial intelligence drawing. The communication interface of the display device is used for data communication with an external terminal. The input device of the display device is used for receiving signals input by an external device. The computer program is executed by a processor to implement an artificial intelligence painting-based image generation method as described in the above embodiments.
It will be appreciated by those skilled in the art that the structure shown in fig. 2 is merely a block diagram of a portion of the structure related to the present application and does not constitute a limitation of the display device to which the present application is applied.
Furthermore, the present application also proposes a computer-readable storage medium including an artificial intelligence drawing-based image generation program which, when executed by a processor, implements the steps of the artificial intelligence drawing-based image generation method as described in the above embodiments. It is understood that the computer readable storage medium in this embodiment may be a volatile readable storage medium or a nonvolatile readable storage medium.
In summary, in the image generation method, display device and computer-readable storage medium based on artificial intelligence drawing provided in the embodiments of the present application, the user can issue a voice command to the display device to express the desired drawing content, and the display device also actively acquires environmental data of the user's surroundings as a supplement to that content. Besides learning the user's subjective drawing intention from the text description corresponding to the voice command, the system can also capture the user's subconscious drawing intention by combining voice emotion information and the environmental data, making it convenient for the user to issue drawing commands and improving the accuracy with which the drawing intention the user wishes to express in the current environment is captured. The drawing image generated by the artificial intelligence model can thus better reflect what the user had in mind when issuing the voice command, yielding a more suitable drawing image for the display device to display.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium provided by the present application and used in embodiments may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), dual speed data rate SDRAM (SSRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, apparatus, article, or method. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, apparatus, article, or method that comprises that element.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit the scope of the application. All equivalent structures or equivalent processes derived from the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of protection of the present application.

Claims (7)

1. An artificial intelligence drawing-based image generation method, comprising the following steps:
when a display device acquires drawing content information, extracting drawing elements from the drawing content information, wherein the drawing content information comprises a voice instruction and environmental data of the environment where the display device is located, the environmental data comprises at least an ambient temperature, and the drawing elements comprise a first drawing element and a second drawing element; recognizing semantic information and voice emotion information of the voice instruction, and generating the first drawing element according to the semantic information and the voice emotion information; and analyzing the environmental data to obtain a scene type currently corresponding to the display device, and querying a preset drawing element matching the scene type as the second drawing element, wherein the scene type comprises at least one of an indoor scene, an outdoor scene, a temperature scene, and a seasonal scene;
generating a drawing image based on the drawing elements by using an artificial intelligence model, wherein the weight the artificial intelligence model assigns to the first drawing element is greater than the weight it assigns to the second drawing element; and
displaying the drawing image on the display device.
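The weighting requirement in claim 1 can be illustrated with a prompt-assembly sketch. The claim does not specify a particular model or syntax; the "(text:weight)" attention notation below is the one accepted by several popular diffusion front ends and is used here purely as an example, with illustrative weight values.

```python
# Illustrative only: build a weighted prompt in which the first
# (voice-derived) elements carry a greater weight than the second
# (environment-derived) elements, as claim 1 requires.

def assemble_prompt(first_elements, second_elements,
                    w_first=1.2, w_second=0.8):
    """Join elements into a single prompt string using "(text:weight)"
    emphasis syntax; w_first must exceed w_second per claim 1."""
    assert w_first > w_second
    parts = [f"({e}:{w_first})" for e in first_elements]
    parts += [f"({e}:{w_second})" for e in second_elements]
    return ", ".join(parts)

prompt = assemble_prompt(["a lake at dusk"], ["winter, snow"])
# → "(a lake at dusk:1.2), (winter, snow:0.8)"
```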
2. The artificial intelligence drawing-based image generation method according to claim 1, wherein the step of recognizing semantic information and voice emotion information of the voice instruction and generating the first drawing element according to the semantic information and the voice emotion information comprises:
recognizing the semantic information and the voice emotion information of the voice instruction;
querying preset drawing elements matching the semantic information, and screening the preset drawing elements by using the voice emotion information; and
taking the screened preset drawing elements as the first drawing element.
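The query-then-screen step of claim 2 can be sketched as a table lookup followed by an emotion filter. The preset table and the emotion tags below are invented for illustration; the patent does not disclose its actual element library.

```python
# Hypothetical sketch of claim 2: query presets matching the semantics,
# then keep only those whose emotion tags match the recognized emotion.

PRESET_ELEMENTS = {
    "ocean": [("stormy sea", {"angry", "sad"}),
              ("calm turquoise water", {"calm", "happy"}),
              ("sunlit beach", {"happy"})],
}

def first_drawing_elements(semantic_key, emotion):
    """Return the preset element names that survive emotion screening."""
    candidates = PRESET_ELEMENTS.get(semantic_key, [])
    return [name for name, tags in candidates if emotion in tags]

first_drawing_elements("ocean", "happy")
# → ['calm turquoise water', 'sunlit beach']
```

The same query for a different emotion yields a different first drawing element, which is how the subconscious intention carried by the voice emotion reshapes the image.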
3. The artificial intelligence drawing-based image generation method according to claim 1, wherein the drawing content information further comprises a display size of the display device, and the drawing elements further comprise a third drawing element; the artificial intelligence drawing-based image generation method further comprises:
querying a preset drawing element matching the display size as the third drawing element;
wherein the weight the artificial intelligence model assigns to the third drawing element is smaller than the weight it assigns to the first drawing element.
4. The artificial intelligence drawing-based image generation method according to claim 1, wherein the drawing elements further comprise a fourth drawing element; the artificial intelligence drawing-based image generation method further comprises:
extracting audio features from the voice instruction, and determining a user type according to the audio features;
querying a preset drawing element matching the user type as the fourth drawing element;
wherein the weight the artificial intelligence model assigns to the fourth drawing element is smaller than the weight it assigns to the first drawing element.
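Claim 4's user-type step can be sketched with a deliberately simplified classifier. A real system would use a trained model over richer audio features; the single pitch threshold, the user-type labels, and the preset table here are all illustrative assumptions.

```python
# Hypothetical sketch of claim 4: infer a user type from an audio
# feature, then look up the matching preset (fourth) drawing element.

def user_type_from_audio(mean_pitch_hz):
    """Children's voices typically have a higher fundamental frequency;
    the 250 Hz threshold is an illustrative assumption, not a standard."""
    return "child" if mean_pitch_hz > 250.0 else "adult"

PRESETS_BY_USER_TYPE = {"child": "cartoon style, bright colors",
                        "adult": "realistic style"}

fourth_element = PRESETS_BY_USER_TYPE[user_type_from_audio(300.0)]
# → "cartoon style, bright colors"
```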
5. The artificial intelligence drawing-based image generation method according to claim 1, wherein the drawing elements further comprise a fifth drawing element corresponding to the drawing image currently displayed by the display device, and the weight the artificial intelligence model assigns to the fifth drawing element is smaller than the weight it assigns to the first drawing element.
6. A display device, comprising a memory, a processor, and an artificial intelligence drawing-based image generation program stored on the memory and executable on the processor, wherein the artificial intelligence drawing-based image generation program, when executed by the processor, implements the steps of the artificial intelligence drawing-based image generation method according to any one of claims 1 to 5.
7. A computer-readable storage medium, on which an artificial intelligence drawing-based image generation program is stored, which when executed by a processor, implements the steps of the artificial intelligence drawing-based image generation method according to any one of claims 1 to 5.
CN202310546621.9A 2023-02-17 2023-02-17 Image generation method based on artificial intelligence drawing, display equipment and storage medium Pending CN116630455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310546621.9A CN116630455A (en) 2023-02-17 2023-02-17 Image generation method based on artificial intelligence drawing, display equipment and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310131373.1A CN115830171B (en) 2023-02-17 2023-02-17 Image generation method based on artificial intelligence drawing, display equipment and storage medium
CN202310546621.9A CN116630455A (en) 2023-02-17 2023-02-17 Image generation method based on artificial intelligence drawing, display equipment and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202310131373.1A Division CN115830171B (en) 2023-02-17 2023-02-17 Image generation method based on artificial intelligence drawing, display equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116630455A true CN116630455A (en) 2023-08-22

Family

ID=85521807

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310546621.9A Pending CN116630455A (en) 2023-02-17 2023-02-17 Image generation method based on artificial intelligence drawing, display equipment and storage medium
CN202310131373.1A Active CN115830171B (en) 2023-02-17 2023-02-17 Image generation method based on artificial intelligence drawing, display equipment and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202310131373.1A Active CN115830171B (en) 2023-02-17 2023-02-17 Image generation method based on artificial intelligence drawing, display equipment and storage medium

Country Status (1)

Country Link
CN (2) CN116630455A (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116433825B (en) * 2023-05-24 2024-03-26 北京百度网讯科技有限公司 Image generation method, device, computer equipment and storage medium
CN117150066B (en) * 2023-10-27 2024-01-23 北京朗知网络传媒科技股份有限公司 Intelligent drawing method and device in automobile media field

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101825487B1 (en) * 2017-06-08 2018-03-22 주식회사 엘팩토리 Service system for providing digital photo frame with digital rights management service
US10891969B2 (en) * 2018-10-19 2021-01-12 Microsoft Technology Licensing, Llc Transforming audio content into images
CN111368609B (en) * 2018-12-26 2023-10-17 深圳Tcl新技术有限公司 Speech interaction method based on emotion engine technology, intelligent terminal and storage medium
WO2021112365A1 (en) * 2019-12-02 2021-06-10 삼성전자 주식회사 Method for generating head model animation from voice signal, and electronic device for implementing same
CN111243101B (en) * 2019-12-31 2023-04-18 浙江省邮电工程建设有限公司 Method, system and device for increasing AR environment immersion degree of user based on artificial intelligence
CN112699257A (en) * 2020-06-04 2021-04-23 华人运通(上海)新能源驱动技术有限公司 Method, device, terminal, server and system for generating and editing works
CN111651231B (en) * 2020-06-04 2022-11-11 华人运通(上海)云计算科技有限公司 Work generation method and device, vehicle end and mobile terminal
CN113793398A (en) * 2020-07-24 2021-12-14 北京京东尚科信息技术有限公司 Drawing method and device based on voice interaction, storage medium and electronic equipment
US11477292B1 (en) * 2021-07-30 2022-10-18 Maykis Technology Limited Digital photo frame, a system thereof, and a method thereof
CN114332286B (en) * 2022-03-11 2023-02-07 珠海视熙科技有限公司 Artificial intelligent drawing method and device and computer storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274421A (en) * 2023-11-06 2023-12-22 北京中数文化科技有限公司 Interactive scene photo making method based on AI intelligent terminal
CN117274421B (en) * 2023-11-06 2024-04-02 北京中数文化科技有限公司 Interactive scene photo making method based on AI intelligent terminal

Also Published As

Publication number Publication date
CN115830171A (en) 2023-03-21
CN115830171B (en) 2023-05-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination