CN113298896A - Picture generation method and device, electronic equipment and storage medium - Google Patents

Picture generation method and device, electronic equipment and storage medium

Info

Publication number
CN113298896A
CN113298896A (application CN202010112938.8A)
Authority
CN
China
Prior art keywords
picture
target
information
color
color information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010112938.8A
Other languages
Chinese (zh)
Inventor
李羿瑺
陈德银
赵文慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010112938.8A priority Critical patent/CN113298896A/en
Priority to US17/183,298 priority patent/US20210264191A1/en
Priority to PCT/CN2021/077439 priority patent/WO2021169945A1/en
Priority to EP21158731.6A priority patent/EP3869466A1/en
Publication of CN113298896A publication Critical patent/CN113298896A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a picture generation method and apparatus, an electronic device, and a storage medium, relating to the technical field of electronic devices. The method is applied to an electronic device and includes: acquiring image information; extracting texture information and color information from the image information; acquiring, based on a preset mapping relationship, a picture corresponding to the texture information as a picture to be processed; and processing the picture to be processed based on the color information to generate a target picture. Because the picture to be processed is selected according to the texture information of the image information and then processed with the color information of the image information, the generated target picture is more consistent with the characteristics of the image information, which meets users' personalized requirements and improves user experience.

Description

Picture generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to a method and an apparatus for generating a picture, an electronic device, and a storage medium.
Background
With the development of science and technology, electronic devices are used ever more widely, offer ever more functions, and have become one of the necessities of people's daily life. At present, the display content of an electronic device, such as the desktop and the wallpaper, can be replaced automatically or according to the user's selection, but the number of replaceable pictures is small and fixed, the playability is poor, and the user experience is poor.
Disclosure of Invention
In view of the above problems, the present application provides a picture generation method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides a picture generation method, which is applied to an electronic device, and the method includes: acquiring image information, extracting texture information of the image information, and extracting color information of the image information; acquiring a picture corresponding to the texture information as a picture to be processed based on a preset mapping relation; and processing the picture to be processed based on the color information to generate a target picture.
In a second aspect, an embodiment of the present application provides a picture generating apparatus, which is applied to an electronic device, and the apparatus includes: the image information acquisition module is used for acquiring image information, extracting texture information of the image information and extracting color information of the image information; the to-be-processed picture acquisition module is used for acquiring a picture corresponding to the texture information as the to-be-processed picture based on a preset mapping relation; and the target picture generation module is used for processing the picture to be processed based on the color information to generate a target picture.
In a third aspect, an embodiment of the present application provides an electronic device including a memory and a processor, the memory being coupled to the processor and storing instructions which, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be invoked by a processor to execute the above method.
The picture generation method and apparatus, the electronic device, and the storage medium provided by the embodiments of the application acquire image information, extract texture information and color information from the image information, acquire a picture corresponding to the texture information as a picture to be processed based on a preset mapping relationship, and process the picture to be processed based on the color information to generate a target picture. Because the picture to be processed is obtained through the texture information of the image information and then processed with the color information of the image information, the generated target picture is more consistent with the characteristics of the image information, which meets users' personalized requirements and improves user experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart illustrating a picture generation method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating a picture generation method according to another embodiment of the present application;
FIG. 3 is a flow chart illustrating a method for generating a picture according to still another embodiment of the present application;
FIG. 4 is a schematic interface diagram of an electronic device according to an embodiment of the present disclosure;
fig. 5 shows a flowchart of step S303 of the picture generation method shown in fig. 3 of the present application;
FIG. 6 is a schematic diagram illustrating yet another interface of an electronic device provided by an embodiment of the present application;
fig. 7 shows a flowchart of step S307 of the picture generation method shown in fig. 3 of the present application;
FIG. 8 is a schematic interface diagram of an electronic device according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart illustrating a picture generation method according to another embodiment of the present application;
FIG. 10 is a timing diagram illustrating a picture generation method according to yet another embodiment of the present application;
FIG. 11 shows a block diagram of a picture generation apparatus provided in an embodiment of the present application;
fig. 12 is a block diagram of an electronic device for executing a picture generation method according to an embodiment of the present application;
fig. 13 illustrates a storage unit for storing or carrying program codes for implementing a picture generation method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
However, research shows that when the display content of an electronic device is replaced, the pictures available for replacement are those stored locally on the device; their number is small, the content is fixed, and the user experience is poor.
In view of the above problems, through long-term research the inventors proposed the picture generation method and apparatus, electronic device, and storage medium provided in the embodiments of the present application: a picture to be processed is obtained from the texture information of image information, and the picture to be processed is processed with the color information of the image information to generate a target picture for display replacement, so that the generated target picture is more consistent with the characteristics of the image information, which meets users' personalized requirements and improves user experience. The specific picture generation method is described in detail in the following embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a picture generation method according to an embodiment of the present application. The picture generation method obtains a picture to be processed from the texture information of image information and processes the picture to be processed with the color information of the image information to generate a target picture, so that the generated target picture is more consistent with the characteristics of the image information, which meets users' personalized requirements and improves user experience. In a specific embodiment, the picture generation method is applied to the picture generation apparatus 200 shown in fig. 11 and to the electronic device 100 (fig. 12) equipped with the picture generation apparatus 200. The following describes the specific flow of this embodiment by taking an electronic device as an example; it can be understood that the electronic device applied in this embodiment may include a desktop computer, a smart phone, a tablet computer, a wearable electronic device, and the like, which is not limited herein. As will be explained in detail with respect to the flow shown in fig. 1, the picture generation method may specifically include the following steps:
step S101: acquiring image information, extracting texture information of the image information, and extracting color information of the image information.
In this embodiment, the electronic device may acquire image information. In some embodiments, the electronic device may obtain the image information locally; for example, it may obtain the image information from a local album, where the pictures in the album may have been captured by a camera and then saved, or downloaded from a network and then saved, which is not limited herein. In some embodiments, the electronic device may obtain the image information from a server; for example, it may download the image information from the server through a data network or a wireless network, which is not limited herein. In some embodiments, the electronic device may acquire the image information in real time; for example, it may capture the image information through a camera, which is not limited herein. Of course, in this embodiment, the electronic device may also obtain the image information in other manners, which will not be repeated herein.
In this embodiment, after acquiring the image information, the electronic device may extract texture information from the image information. Texture information generally refers to image texture, a visual feature reflecting the homogeneity phenomenon in an image; it represents the slowly or periodically changing structural arrangement of an object's surface. Texture has three main characteristics: a local sequence that repeats continuously, a non-random arrangement, and a roughly uniform continuum within the textured region. In some embodiments, after acquiring the image information, the electronic device may extract the texture information by a statistical method, a geometric method, a model-based method, a signal-processing method, or a structural method, which is not limited herein. As one mode, after acquiring the image information, the electronic device may sequentially perform graying processing and normalization processing on the image information and then match it against an SVM model, whose output is taken as the texture information of the image information.
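The graying and normalization steps of the pipeline above can be sketched as follows. This is a minimal illustration only: the final SVM matching step is replaced by a hypothetical variance-threshold rule, since the patent does not disclose the model's features or parameters.

```python
# Sketch of the texture pipeline: graying -> normalization -> matching.
# The SVM step is stubbed out with a hypothetical variance threshold.

def to_grayscale(pixels):
    """Convert a list of (R, G, B) tuples to luminance values (ITU-R BT.601)."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]

def normalize(values):
    """Scale values linearly into [0, 1]; a constant image maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def match_texture(features):
    """Placeholder for the SVM matching step (hypothetical threshold rule)."""
    mean = sum(features) / len(features)
    variance = sum((f - mean) ** 2 for f in features) / len(features)
    return "complex" if variance > 0.1 else "solid"

pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(match_texture(normalize(to_grayscale(pixels))))  # complex
```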
In this embodiment, after acquiring the image information, the electronic device may extract color information from the image information, where the color information may include red, yellow, green, blue, white, black, and the like, which is not limited herein. In some embodiments, the electronic device may extract the color information by a general histogram method, a global accumulation histogram method, a local accumulation histogram method, a statistical-feature method of color parameters, the first-order and second-order moments of the colors, a wavelet-based block image method, and the like, which are not limited herein. As one mode, after obtaining the image information, the electronic device may sequentially perform HSV model conversion, optimize the H-channel values, and then count and sort the optimized H-channel values, taking the three most frequent colors as the color information of the image information.
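The HSV-based color extraction above can be roughly sketched as follows, assuming a simple quantization of the H channel into hue buckets as a stand-in for the unspecified H-channel optimization step; the bucket count and the example pixels are illustrative only.

```python
import colorsys
from collections import Counter

# Sketch of the color-extraction path: convert RGB pixels to HSV, quantize
# the H channel into coarse buckets (an assumed stand-in for the patent's
# "color optimization"), then keep the three most frequent hue buckets as
# the image's color information.

def dominant_hues(pixels, buckets=12, top=3):
    counts = Counter()
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        counts[int(h * buckets) % buckets] += 1
    return [bucket for bucket, _ in counts.most_common(top)]

# Mostly-red image with some green and a little blue.
pixels = [(250, 10, 10)] * 5 + [(10, 250, 10)] * 3 + [(10, 10, 250)] * 2
print(dominant_hues(pixels))  # buckets sorted by frequency; red bucket first
```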
In some embodiments, after acquiring the image information, the electronic device may extract texture information of the image information and then extract color information of the image information; color information of the image information can be extracted first, and then texture information of the image information is extracted; the color information and the texture information of the image information may be extracted at the same time, which is not limited herein.
Step S102: and acquiring a picture corresponding to the texture information as a picture to be processed based on a preset mapping relation.
In some embodiments, the electronic device may preset and store a mapping relationship table as a preset mapping relationship, where the preset mapping relationship may include a correspondence relationship between a plurality of texture information and a plurality of pictures, and in the preset mapping relationship, the plurality of texture information and the plurality of pictures may correspond to each other one by one, the plurality of texture information may correspond to one picture, one texture information may correspond to a plurality of pictures, and the like, which is not limited herein.
For example, as shown in table 1, the texture information of the image information may include first texture information, second texture information, third texture information, and fourth texture information, the plurality of pictures may include a first picture, a second picture, a third picture, and a fourth picture, and the first texture information corresponds to the first picture, the second texture information corresponds to the second picture, the third texture information corresponds to the third picture, and the fourth texture information corresponds to the fourth picture.
TABLE 1

Texture information          Picture
First texture information    First picture
Second texture information   Second picture
Third texture information    Third picture
Fourth texture information   Fourth picture
In some embodiments, after the texture information of the image information is obtained, a picture corresponding to that texture information may be obtained as the picture to be processed according to the preset mapping relationship. Specifically, the texture information of the image information may be compared with the plurality of pieces of texture information in the preset mapping relationship to determine which of them it matches; the picture corresponding to the matched texture information is then obtained from the preset mapping relationship and used as the picture to be processed. For example, after the comparison, when it is determined that the texture information of the image information matches the first texture information, the first picture corresponding to the first texture information in the preset mapping relationship may be obtained and determined as the picture to be processed.
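A minimal sketch of the lookup against the preset mapping relationship of Table 1; the string keys and picture identifiers are hypothetical placeholders, not values from the patent.

```python
# The preset mapping relationship as a lookup table from texture-information
# keys to picture identifiers (placeholder names, following Table 1).

PRESET_MAPPING = {
    "first texture": "first_picture.png",
    "second texture": "second_picture.png",
    "third texture": "third_picture.png",
    "fourth texture": "fourth_picture.png",
}

def picture_to_process(texture_info):
    """Return the picture matching the extracted texture, or None if no match."""
    return PRESET_MAPPING.get(texture_info)

print(picture_to_process("first texture"))    # first_picture.png
print(picture_to_process("unknown texture"))  # None
```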
Step S103: and processing the picture to be processed based on the color information to generate a target picture.
In this embodiment, after acquiring the to-be-processed picture, the electronic device may process the to-be-processed picture based on the color information extracted from the image information, so as to generate the target picture. In some embodiments, the background color of the to-be-processed picture may be replaced with the color information extracted from the image information to generate the target picture, for example, if the background color of the to-be-processed picture is yellow and the color information extracted from the image information is blue, the yellow background of the to-be-processed picture may be replaced with the blue background to generate the target picture. In some embodiments, the color information extracted from the image information may be added to the foreground image of the picture to be processed, and the background color of the picture to be processed is kept unchanged, for example, if the background color of the picture to be processed is yellow, the foreground image is a puppy, and the color information extracted from the image information is blue, then blue may be added to the puppy of the picture to be processed, and the yellow background is kept unchanged, so as to generate the target picture. Of course, in this embodiment, other manners for processing the picture to be processed based on the color information may also be included, which is not described herein again.
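The first processing strategy above (replacing the background color of the picture to be processed with the extracted color information) can be sketched as follows; exact pixel-value matching is a simplification, and a real implementation would likely use a tolerance or a segmentation mask.

```python
# Replace every background-colored pixel with the color extracted from the
# image information, leaving foreground pixels unchanged. Exact-match
# comparison is an assumption made for this sketch.

YELLOW = (255, 255, 0)
BLUE = (0, 0, 255)

def replace_background(pixels, background, new_color):
    return [new_color if p == background else p for p in pixels]

# A 2x2 "picture": three yellow background pixels and one foreground pixel.
picture = [YELLOW, YELLOW, (120, 60, 30), YELLOW]
target = replace_background(picture, YELLOW, BLUE)
print(target)  # [(0, 0, 255), (0, 0, 255), (120, 60, 30), (0, 0, 255)]
```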
The picture generation method provided by the above embodiment of the application acquires image information, extracts texture information and color information from the image information, acquires a picture corresponding to the texture information as a picture to be processed based on a preset mapping relationship, and processes the picture to be processed based on the color information to generate a target picture. Because the picture to be processed is obtained through the texture information of the image information and then processed with the color information of the image information, the generated target picture is more consistent with the characteristics of the image information, which meets users' personalized requirements and improves user experience.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a picture generation method according to another embodiment of the present application. The method is applied to the electronic device, and will be described in detail with respect to the flow shown in fig. 2, where the picture generation method may specifically include the following steps:
step S201: acquiring image information, extracting texture information of the image information, and extracting color information of the image information.
For detailed description of step S201, please refer to step S101, which is not described herein again.
Step S202: and inputting the texture information into a trained classification model, and acquiring the type of the texture information output by the trained classification model.
In some embodiments, after extracting the texture information of the obtained image information, the electronic device may input the texture information into a trained classification model. The trained classification model is obtained through machine learning: first, a training data set is collected, in which the attributes or features of one type of data differ from those of the other type; then a neural network is trained and modeled on the collected training data set according to a preset algorithm, so that rules are learned from the training data to obtain the trained classification model. In this embodiment, one type of data in the training data set may comprise, for example, the texture information of image information, and the other type may comprise, for example, the type of that texture information. The type of texture information may include: solid color, lattice, complex, and the like, which is not limited herein.
In some embodiments, the trained classification model may be stored locally on the electronic device after pre-training is completed. Based on this, after the electronic device obtains the texture information of the image information, it may invoke the trained classification model directly on the device: for example, it may send an instruction instructing the trained classification model to read the texture information in a target storage area, or it may input the texture information into the locally stored model directly. This effectively avoids the slowdown that network factors would otherwise cause, increases the speed at which the trained classification model obtains the texture information, and improves user experience.
In some embodiments, the trained classification model may also be stored in a server communicatively coupled to the electronic device after being trained in advance. Based on this, after the electronic device acquires the texture information of the image information, the electronic device may send an instruction to the trained classification model stored in the server through the network to instruct the trained classification model to read the texture information through the network, or the electronic device may send the texture information to the trained classification model stored in the server through the network, so that the occupation of the storage space of the electronic device is reduced and the influence on the normal operation of the electronic device is reduced by storing the trained classification model in the server.
In some embodiments, the present application further includes a training method for the classification model. The classification model may be trained in advance on an acquired training data set; afterwards, each time the type of texture information needs to be determined, it can be obtained with the already trained classification model, and there is no need to retrain the classification model every time.
In the embodiment of the present application, a training data set of the electronic device may be trained by using a machine learning algorithm to obtain the classification model. The machine learning algorithm adopted may include: neural networks, Long Short-Term Memory (LSTM) networks, gated recurrent units, simple recurrent units, autoencoders, decision trees, random forests, feature mean classification, classification and regression trees, hidden Markov models, the K-Nearest Neighbor (KNN) algorithm, logistic regression models, Bayesian models, Gaussian models, KL (Kullback-Leibler) divergence, and the like. The specific machine learning algorithm is not limited herein.
The training of the initial model based on the training data set is described below using a neural network as an example.
The texture information of the image information in a set of data in the training data set is used as the input sample (input data) of the neural network, and the type of texture information labeled in that set of data is used as the output sample (output data). The neurons of the input layer are fully connected to the neurons of the hidden layer, and the neurons of the hidden layer are fully connected to the neurons of the output layer, so that potential features at different granularities can be effectively extracted. There may also be multiple hidden layers, which better fits nonlinear relationships and makes the trained model more accurate.
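As an illustrative sketch only (the patent does not fix an architecture, feature format, or training algorithm), the fully connected input–hidden–output training described above might look like the following, using random toy "texture feature" vectors as input samples and their labeled types as output samples:

```python
import numpy as np

# Hypothetical toy setup: 8-dim texture feature vectors, 4 texture types.
rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, hidden=16, epochs=300, lr=0.5):
    """One fully connected hidden layer, softmax output, full-batch gradient descent."""
    n, d = X.shape
    k = y.max() + 1
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, k)); b2 = np.zeros(k)
    Y = np.eye(k)[y]                       # one-hot output samples
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # input layer -> hidden layer (fully connected)
        P = softmax(H @ W2 + b2)           # hidden layer -> output layer (fully connected)
        G = (P - Y) / n                    # softmax cross-entropy gradient
        W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
        GH = (G @ W2.T) * (1 - H ** 2)     # backpropagate through tanh
        W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)

# Toy training set: one cluster of feature vectors per texture type.
centers = rng.normal(0, 3, (4, 8))
X = np.vstack([c + rng.normal(0, 0.3, (30, 8)) for c in centers])
y = np.repeat(np.arange(4), 30)
params = train(X, y)
```

In a real deployment the feature extraction, layer sizes, and stopping criterion would all differ; the sketch only shows the input-sample/output-sample wiring described above.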
It is understood that the training process of the classification model may or may not be performed by the electronic device. When it is not performed by the electronic device, the electronic device acts only as a direct or indirect user of the trained model.
In some embodiments, the classification model may be trained and updated by periodically or aperiodically acquiring new training data.
Step S203: and acquiring a picture corresponding to the type of the texture information as a picture to be processed based on a preset mapping relation.
In some embodiments, the electronic device may preset and store a mapping relationship table as the preset mapping relationship. The preset mapping relationship may include correspondences between multiple types of texture information and multiple pictures; within it, the types and the pictures may correspond one to one, multiple types may correspond to one picture, or one type may correspond to multiple pictures, which is not limited here.
For example, as shown in table 2, the plurality of types may include a first type, a second type, a third type, and a fourth type, the plurality of pictures may include a first picture, a second picture, a third picture, and a fourth picture, and the first type corresponds to the first picture, the second type corresponds to the second picture, the third type corresponds to the third picture, and the fourth type corresponds to the fourth picture.
TABLE 2

Type           Picture
First type     First picture
Second type    Second picture
Third type     Third picture
Fourth type    Fourth picture
In some embodiments, after the type of the texture information is obtained, the picture corresponding to that type may be obtained as the picture to be processed according to the preset mapping relationship. Specifically, the type of the texture information is compared with the types in the preset mapping relationship to determine a matching type; the picture corresponding to the matching type is then obtained and taken as the picture to be processed. For example, if, after comparison, the type of the texture information is determined to match the first type, the first picture corresponding to the first type in the preset mapping relationship may be obtained and determined as the picture to be processed.
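As a minimal sketch (the key and file names are illustrative, not from the patent), the preset mapping relationship of Table 2 and the matching step above reduce to a dictionary lookup:

```python
# Hypothetical preset mapping relationship (Table 2) stored on the device.
PRESET_MAPPING = {
    "first type": "first_picture.svg",
    "second type": "second_picture.svg",
    "third type": "third_picture.svg",
    "fourth type": "fourth_picture.svg",
}

def picture_for_type(texture_type, mapping=PRESET_MAPPING):
    """Compare the classified type against the preset mapping and return
    the matching picture to be processed, or None when no type matches."""
    return mapping.get(texture_type.lower())
```

One-to-many or many-to-one mappings, which the passage also allows, would simply store lists or shared values in the same table.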
Step S204: and processing the picture to be processed based on the color information to generate a target picture.
For detailed description of step S204, please refer to step S103, which is not described herein again.
Step S205: and acquiring a target display picture based on the target picture, and displaying the target display picture on the equipment to be replaced.
In this embodiment, after acquiring the target picture, the electronic device may acquire the target display picture based on it. In some embodiments, the target display picture may use the target picture as an element or material, or may simply be the target picture itself. In the former case, the target display picture may include multiple elements, of which the target picture is one; for example, the target display picture may include grassland, cattle, and sheep, with the target picture being the cattle. In the latter case, the target display picture and the target picture are identical; for example, if the target display picture includes a cow and the target picture is that cow, the target display picture is the target picture.
In this embodiment, after acquiring the target display screen, the electronic device may display the target display screen on the device to be replaced. The device to be replaced may be the electronic device itself, or another electronic device, which is not limited herein.
In some embodiments, when the device to be replaced is the electronic device itself, the display screen of the electronic device may be directly replaced with the target display screen, for example, if the display screen of the electronic device is sea and the target display screen is grassland, the display screen of the electronic device may be replaced from sea to grassland to change the display screen of the electronic device.
In some embodiments, when the device to be replaced is another electronic device, the electronic device may send the target display screen to the device to be replaced by using a short-distance communication technology to instruct the device to be replaced to display the target display screen. For example, if the display screen of the other electronic device is the sea and the target display screen is the grassland, the electronic device may send the grassland to the other electronic device to instruct the other electronic device to replace the display screen from the sea to the grassland to change the display screen of the other electronic device.
In some embodiments, when the device to be replaced is another electronic device, the other electronic device may store multiple pictures to be processed in advance, and the electronic device may send the RGB color parameters to the device to be replaced using a short-distance communication technology, so as to instruct it to process its pictures to be processed based on the received RGB color parameters, obtain the target picture, and generate and display the target display picture based on the target picture.
In another embodiment of the present application, a method for generating a picture includes obtaining image information, extracting texture information and color information of the image information, inputting the texture information into a trained classification model, obtaining a type of the texture information output by the trained classification model, obtaining a picture corresponding to the type of the texture information as a picture to be processed based on a preset mapping relationship, processing the picture to be processed based on the color information to generate a target picture, obtaining a target display picture based on the target picture, and displaying the target display picture on a device to be replaced. Compared with the picture generation method shown in fig. 1, in the embodiment, the target display picture is further obtained through the texture information and the color information of the image information to be displayed on the device to be replaced, so that the diversity of the display picture is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a picture generation method according to still another embodiment of the present application. The method is applied to the electronic device, and will be described in detail with respect to the flow shown in fig. 3, where the picture generation method may specifically include the following steps:
step S301: and acquiring image information and displaying the image information.
Referring to fig. 4, fig. 4 is a schematic interface diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 4, after the electronic device acquires the image information, the electronic device may display the image information, where a in fig. 4 represents the displayed image information, and the electronic device may display the image information in a full screen, and may display the image information in a non-full screen, which is not limited herein.
Step S302: a selection operation acting on the image information is received.
In some embodiments, the electronic device may detect a selection operation applied to the image information during displaying the image information, and may receive the selection operation applied to the image information when the selection operation applied to the image information is detected. In this embodiment, the selection operation applied to the image information may be triggered by a finger of a user, a stylus, a peripheral device, and the like, which is not limited herein. When the selection operation performed on the image information is triggered by a finger of a user, the selection operation performed on the image information may include a single-finger click operation, a multi-finger click operation, a single-finger press operation, a multi-finger press operation, a single-finger slide operation, a multi-finger slide operation, and the like, which is not limited herein.
Step S303: and determining a target image area in the image information based on the selection operation.
In some embodiments, the electronic device, upon receiving a selection operation acting on the image information, may determine a target image area in the image information based on the selection operation. The target image area can be smaller than or equal to the area corresponding to the image information, so that the purpose that the user selects a favorite area from the acquired image information is achieved.
Referring to fig. 5, fig. 5 is a flowchart illustrating step S303 of the image generating method illustrated in fig. 3 according to the present application. As will be explained in detail with respect to the flow shown in fig. 5, the method may specifically include the following steps:
step S3031: and generating and displaying a selection control in the image information.
Referring to fig. 6, fig. 6 is a schematic view illustrating another interface of an electronic device according to an embodiment of the present application. As shown in fig. 6, the electronic device may further generate and display a selection control in the image information when starting to display, or while displaying, the image information, where B in fig. 6 represents the selection control. The shape of the selection control may be a square, rectangle, circle, triangle, or the like; its size may be large or small; and both shape and size may be changed as needed during use, which is not limited here.
Step S3032: responding to the dragging operation acted on the selection control, and taking the corresponding area of the dragged selection control in the image information as the target image area.
In this embodiment, the electronic device may detect a dragging operation acting on the selection control in the process of displaying the image information and displaying the selection control in the image information, wherein when the dragging operation acting on the selection control is detected, a region corresponding to the dragged selection control in the image information may be determined as the target image region in response to the dragging operation.
In some embodiments, when the electronic device generates and displays the selection control in the image information, the selection control may by default be displayed at a preset position in the image information and select a region of a preset size. When the default display position and selected region meet the user's needs, the user need not drag the selection control, and the region selected by default is taken as the target image area; when they do not, the user may drag the selection control, and the region corresponding to the dragged selection control in the image information is taken as the target image area.
In some embodiments, when the electronic device generates and displays the selection control in the image information, the electronic device may identify the image information and control the display position and the selected object of the selection control according to the identification result; for example, it may identify a foreground image in the image information and control the selection control to select that foreground image. When the object selected by the control meets the user's needs, the user need not drag the control, and the region corresponding to that object is taken as the target image area; when it does not, the user may drag the control, and the region corresponding to the dragged selection control in the image information is taken as the target image area.
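A minimal sketch of taking the region under the dragged selection control as the target image area, assuming the image is a row-major grid of pixels and the control reports a (left, top, width, height) rectangle (both representation choices are assumptions, not fixed by the patent):

```python
def crop_target_area(image, rect):
    """image: 2-D list of pixel rows; rect: (left, top, width, height) of the
    dragged selection control, clamped to the image bounds."""
    left, top, w, h = rect
    left, top = max(0, left), max(0, top)
    right = min(len(image[0]), left + w)    # clamp to image width
    bottom = min(len(image), top + h)       # clamp to image height
    return [row[left:right] for row in image[top:bottom]]

# A 6x4 toy image whose pixels encode their own (x, y) coordinates.
img = [[(x, y, 0) for x in range(6)] for y in range(4)]
area = crop_target_area(img, (1, 1, 3, 2))
```

The clamping implements the constraint above that the target image area is smaller than or equal to the area of the image information.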
Step S304: and extracting texture information of the target image area and extracting color information of the target image area.
In some embodiments, the electronic device may acquire the number of types of color information in the image information, and when that number is greater than a specified number of types, extract the specified number of types of color information from the image information in order of the amount of each color.
Specifically, the electronic device may identify the types of color information in the image information and count them. For example, when the color information is identified to include red, yellow, green, and blue, the number of types is determined to be 4; when it includes red, yellow, and green, the number of types is determined to be 3; and so on, which is not limited here. The electronic device may preset and store a specified number of types as the criterion for how many types of color information to extract from the image information. Therefore, in this embodiment, after the number of types of color information is acquired, it is compared with the specified number to determine whether it is greater. When it is greater, there are considered to be too many color types, and the specified number of types of color information may be extracted from the image information in order of the amount of each color; when it is not greater, the number of color types is considered appropriate, and the color information may be extracted directly. In some embodiments, the specified number of types may be 3.
In some embodiments, when the number of types of color information is greater than the specified number of types, the number of pixel points of each color in the image information may be obtained, and the specified number of types of color information may be extracted from the image information in descending order of pixel count; for example, when the specified number is 3, 3 types of color information may be extracted in descending order of pixel count.
In some embodiments, when the number of types of color information is greater than the specified number of types, the size of the area occupied by each color in the image information may alternatively be acquired, and the specified number of types of color information may be extracted from the image information in descending order of occupied area; for example, when the specified number is 3, 3 types of color information may be extracted in descending order of occupied area.
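The pixel-count ordering above can be sketched with a counter; this is an illustrative sketch, assuming color information is represented as per-pixel RGB tuples:

```python
from collections import Counter

def dominant_colors(pixels, specified_number=3):
    """Count pixels per color type; if there are more color types than the
    specified number, keep only the most frequent ones, otherwise keep all."""
    counts = Counter(pixels)
    if len(counts) <= specified_number:
        return [color for color, _ in counts.most_common()]
    return [color for color, _ in counts.most_common(specified_number)]
```

Replacing the counter's key with a region-area measure would give the occupied-area variant instead.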
Step S305: and inputting the texture information into a trained classification model, and acquiring the type of the texture information output by the trained classification model.
Step S306: and acquiring a picture corresponding to the type of the texture information as a picture to be processed based on a preset mapping relation.
For detailed description of steps S305 to S306, please refer to steps S202 to S203, which are not described herein again.
Step S307: and processing the color information based on a color optimization algorithm to obtain target color information.
In this embodiment, after the electronic device obtains the color information of the image information, it may process the color information with a color optimization algorithm to obtain target color information that is more visually pleasing than the original color information, improving the display effect.
In some embodiments, after acquiring the color information of the image information, the electronic device may search a preset color optimization table for a color optimization mode corresponding to the color information of the image information, and process the color information based on the color optimization mode to obtain the target color information.
Specifically, the electronic device may preset and store a preset color optimization table, which may include correspondences between multiple pieces of color information and multiple color optimization modes. The color information in the table may be a pure color or a combination of colors; within the table, the pieces of color information and the optimization modes may correspond one to one, multiple pieces of color information may correspond to one optimization mode, or one piece of color information may correspond to multiple optimization modes, which is not limited here.
For example, as shown in table 3, the plurality of color information may include first color information, second color information, third color information, and fourth color information, the plurality of color optimization manners may include a first color optimization manner, a second color optimization manner, a third color optimization manner, and a fourth color optimization manner, and the first color information corresponds to the first color optimization manner, the second color information corresponds to the second color optimization manner, the third color information corresponds to the third color optimization manner, and the fourth color information corresponds to the fourth color optimization manner.
TABLE 3

Color information         Color optimization mode
First color information   First color optimization mode
Second color information  Second color optimization mode
Third color information   Third color optimization mode
Fourth color information  Fourth color optimization mode
In some embodiments, after the color information of the image information is acquired, it may be compared with the pieces of color information in the preset color optimization table to determine a matching entry; the color optimization mode corresponding to the matched entry is then taken as the color optimization mode for the color information of the image information. For example, if the color information of the image information is determined to match the first color information, the first color optimization mode corresponding to the first color information may be obtained and determined as the color optimization mode corresponding to the color information of the image information.
In some embodiments, the preset color optimization table may instead include correspondences between multiple pieces of color information and multiple pieces of optimized color information; these may correspond one to one, multiple pieces of color information may correspond to one piece of optimized color information, or one piece of color information may correspond to multiple pieces of optimized color information, which is not limited here.
For example, as shown in table 4, the plurality of color information may include first color information, second color information, third color information, and fourth color information, the plurality of optimized color information may include fifth color information, sixth color information, seventh color information, and eighth color information, and the first color information and the fifth color information correspond, the second color information and the sixth color information correspond, the third color information and the seventh color information correspond, and the fourth color information and the eighth color information correspond.
TABLE 4

Color information         Optimized color information
First color information   Fifth color information
Second color information  Sixth color information
Third color information   Seventh color information
Fourth color information  Eighth color information
In some embodiments, after obtaining the color information of the image information, it may be compared with the pieces of color information in the preset color optimization table to determine a matching entry; the optimized color information corresponding to the matched entry is then determined as the optimized color information for the color information of the image information. For example, if the color information of the image information is determined to match the first color information, the fifth color information corresponding to the first color information may be acquired and determined as the optimized color information corresponding to the color information of the image information.
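As an illustrative sketch (the concrete colors and optimization modes are invented for the example, not taken from the patent), Tables 3 and 4 can both be modeled as one lookup table whose entries are either an optimization function (Table 3 style) or a precomputed optimized color (Table 4 style):

```python
def brighten(rgb, factor=1.3):
    """An example color optimization mode: scale each channel up, capped at 255."""
    return tuple(min(255, round(c * factor)) for c in rgb)

# Hypothetical preset color optimization table.
OPTIMIZATION_TABLE = {
    (10, 10, 10): brighten,        # dark gray -> optimization mode (Table 3 style)
    (200, 0, 0): (255, 80, 80),    # red -> precomputed optimized color (Table 4 style)
}

def optimize_color(rgb):
    entry = OPTIMIZATION_TABLE.get(rgb)
    if entry is None:
        return rgb                 # no matching entry: keep the color as-is
    return entry(rgb) if callable(entry) else entry
```

A production table would match on color distance rather than exact equality; exact keys keep the sketch short.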
Referring to fig. 7, fig. 7 is a flowchart illustrating step S307 of the image generating method illustrated in fig. 3 according to the present application. As will be explained in detail with respect to the flow shown in fig. 7, the method may specifically include the following steps:
step S3071: and acquiring the brightness corresponding to the color information.
In this embodiment, after the electronic device acquires the color information of the image information, it may acquire the brightness corresponding to the color information. In some embodiments, the mean and variance of the image information on the grayscale map may be calculated to obtain the brightness; the image information may be converted from RGB to HSL or HSV to obtain the brightness; or the image information may be converted into a grayscale picture and the pixel mean computed (for example with OpenCV's cvAvg) as the brightness, which is not limited here. Of course, other ways of acquiring the brightness of the color information may also be used in this embodiment, and are not detailed here.
Step S3072: and when the brightness is smaller than the preset brightness, processing the color information based on the color optimization algorithm to obtain target color information, wherein the corresponding brightness of the target color information is larger than the preset brightness.
In some embodiments, the electronic device may preset and store a preset brightness as the basis for judging the brightness corresponding to the color information. Therefore, in this embodiment, after the brightness corresponding to the color information is obtained, it may be compared with the preset brightness to determine whether it is smaller. When it is smaller than the preset brightness, the color information is considered too dark and not pleasing, and may be processed by the color optimization algorithm to obtain target color information whose brightness is greater than the preset brightness, thereby obtaining a more pleasing color. When it is not smaller than the preset brightness, the color information is considered bright and pleasing enough, and need not be processed.
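Steps S3071–S3072 can be sketched as follows; the Rec. 601 luma weights and the scale-up strategy are illustrative choices, since the patent does not fix how brightness is computed or what the color optimization algorithm does:

```python
def luminance(rgb):
    """Brightness of an RGB color on a 0-255 scale (Rec. 601 luma weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def ensure_bright(rgb, preset_brightness=128):
    """Step S3072 sketch: if brightness is below the preset brightness, scale
    the color up past the threshold; otherwise return it unchanged."""
    lum = luminance(rgb)
    if lum >= preset_brightness:
        return rgb
    factor = (preset_brightness + 1) / max(lum, 1)
    return tuple(min(255, round(c * factor)) for c in rgb)
```

Uniform scaling preserves the hue of the original color, which matches the intent of keeping the target color similar to the extracted one while making it brighter.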
Step S308: and processing the picture to be processed based on the target color information to generate a target picture.
For detailed description of step S308, please refer to step S103, which is not described herein again.
Step S309: and displaying a plurality of target pictures.
Referring to fig. 8, fig. 8 is a schematic view illustrating another interface of an electronic device according to an embodiment of the present disclosure. As shown in fig. 8, when multiple target pictures are generated, they may all be displayed for selection by the user. C in fig. 8 represents a target picture; the number of target pictures C shown in fig. 8 is 5, and the display order, display layout, and the like of the target pictures are not limited here.
Step S310: and acquiring selection operation acting on the plurality of target pictures, and selecting one target picture from the plurality of target pictures based on the selection operation.
In some embodiments, during displaying of a plurality of target pictures, the electronic device may detect a selection operation performed on the plurality of target pictures, and when a selection operation performed on one of the plurality of target pictures is detected, the electronic device may obtain the selection operation performed on the one target picture, and select and determine one target picture from the plurality of target pictures based on the selection operation. In this embodiment, the selecting operation applied to the plurality of target pictures may be triggered by a finger of the user, may be triggered by a stylus, may be triggered by a peripheral device, and the like, which is not limited herein. When the selecting operation for the plurality of target pictures is triggered by the user's finger, the selecting operation for the plurality of target pictures may include a single-finger click operation, a multi-finger click operation, a single-finger press operation, a multi-finger press operation, a single-finger slide operation, a multi-finger slide operation, and the like, which is not limited herein.
Step S311: and acquiring a target display picture corresponding to the target picture based on the target picture, and displaying the target display picture on equipment to be replaced.
For detailed description of step S311, please refer to step S205, which is not described herein again.
A still further embodiment of the present application provides a method for generating a picture, including obtaining and displaying image information, receiving a selection operation applied to the image information, determining a target image region in the image information based on the selection operation, extracting texture information and color information of the target image region, inputting the texture information into a trained classification model, obtaining a type of the texture information output by the trained classification model, obtaining a picture corresponding to the type of the texture information as a picture to be processed based on a preset mapping relationship, processing the color information based on a color optimization algorithm to obtain target color information, processing the picture to be processed based on the target color information to generate a target picture, displaying a plurality of target pictures, obtaining a selection operation applied to the plurality of target pictures, selecting a target picture from the plurality of target pictures based on the selection operation, obtaining a target display picture corresponding to the target picture based on the target picture, and displaying the target display picture on the equipment to be replaced. Compared with the picture generation method shown in fig. 1, in the embodiment, the target image area is determined based on the selection operation acting on the image information, and one target picture is selected based on the selection operation acting on a plurality of target pictures to generate the target display picture, so that the interaction with the user is improved, and the user experience is improved.
Referring to fig. 9, fig. 9 is a schematic flowchart illustrating a picture generation method according to another embodiment of the present application. The method is applied to the electronic device, and as will be described in detail below with respect to the flow shown in fig. 9, the picture generation method may specifically include the following steps:
Step S401: Acquiring image information, extracting texture information of the image information, and extracting color information of the image information.
Step S402: Inputting the texture information into a trained classification model, and acquiring the type of the texture information output by the trained classification model.
Step S403: Acquiring, based on a preset mapping relationship, the picture corresponding to the type of the texture information as the picture to be processed.
For detailed description of steps S401 to S403, refer to steps S201 to S203, which are not described herein again.
Step S404: Setting the path channel of the picture to be processed to the color information to generate the target picture.
In some embodiments, the picture to be processed is an SVG (Scalable Vector Graphics) picture. After the electronic device acquires the picture to be processed, the path channels of the picture to be processed can be set to the color information extracted from the image information, so as to generate a target picture whose texture and colors are similar to those of the acquired image information.
In this embodiment, the SVG picture may include a primary channel, a secondary channel, and a third channel, and the color information may include only a primary color (a solid color), or may include a primary color, a second color, and a third color. When the color information is a solid color, the primary color may be filled into the primary channel of the SVG picture, and a second color converted from the primary color may be filled into both the secondary channel and the third channel; as one approach, the second color may be obtained by looking it up in the aforementioned preset color optimization table, which is not repeated here. When the color information is not a solid color, the primary color may be filled into the primary channel of the SVG picture; if the SVG picture includes a secondary channel and the color information includes a second color, the second color may be filled into the secondary channel, and if the SVG picture does not include a secondary channel and/or the color information does not include a second color, the second color is not filled. Likewise, if the SVG picture includes a third channel and the color information includes a third color, the third color may be filled into the third channel, and if the SVG picture does not include a third channel and/or the color information does not include a third color, the third color is not filled.
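As one illustration of the channel-filling rule above, the sketch below fills named path channels of an SVG template with colors and silently skips any channel or color that is absent, mirroring the "do not fill when the channel or color is missing" behavior. Identifying channels by `id` attributes (`"primary"`, `"secondary"`, `"third"`) is an assumption made for this sketch, not the naming used by the embodiment.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def fill_svg_channels(svg_text, colors):
    """Fill each named path channel of an SVG template with a color.

    `colors` maps hypothetical channel ids ("primary", "secondary",
    "third") to hex color strings; channels missing from either the SVG
    or `colors` are left untouched.
    """
    ET.register_namespace("", SVG_NS)  # keep the default SVG namespace on output
    root = ET.fromstring(svg_text)
    for path in root.iter(f"{{{SVG_NS}}}path"):
        channel = path.get("id")
        if channel in colors:
            path.set("fill", colors[channel])
    return ET.tostring(root, encoding="unicode")
```

For example, filling only the primary channel of a two-channel template sets one `fill` attribute and leaves the secondary path unchanged.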
Step S405: Acquiring a target display picture based on the target picture, and sending the target display picture to the wearable device to instruct the wearable device to display the target display picture.
In some embodiments, the device to be replaced is a wearable device. The electronic device can transmit the target display picture to the wearable device through a short-range communication technology to instruct the wearable device to display the target display picture. Alternatively, the electronic device can send the RGB color parameters of the target display picture to the wearable device through the short-range communication technology to instruct the wearable device to process the picture to be processed based on the received RGB color parameters, so as to obtain the target picture and to generate and display the target display picture based on the target picture.
In some embodiments, the wearable device may include a smart watch. After the electronic device acquires the target picture, it may acquire the target display picture based on the target picture and transmit the target display picture to the smart watch to instruct the smart watch to replace the dial with the target display picture. The electronic device may also send the RGB color parameters of the target display picture to the smart watch to instruct the smart watch to process the picture to be processed based on the received RGB color parameters, so as to obtain the target picture and to generate and display the target display picture based on the target picture. In some embodiments, when displaying the target display picture, the smart watch may further composite information such as the watch hands, the time, and the battery level onto it, which is not limited herein.
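The alternative of sending only the RGB color parameters (rather than the rendered picture) over the short-range link could look like the following minimal sketch, where the watch recolors a locally stored template. The JSON message shape, the field names, and the template identifier are invented for illustration; the application does not define a wire protocol.

```python
import json

def build_dial_update_payload(template_id, rgb):
    """Build a hypothetical message asking the watch to recolor a
    locally stored template. Only the small color parameters travel
    over the short-range link instead of a full rendered picture."""
    r, g, b = rgb
    if not all(0 <= c <= 255 for c in (r, g, b)):
        raise ValueError("RGB components must be in 0..255")
    return json.dumps({"template": template_id, "rgb": [r, g, b]}).encode("utf-8")
```

A payload like this is a few dozen bytes, which is one plausible reason to prefer sending color parameters over transferring the whole display picture.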
The picture generation method provided by another embodiment of the present application obtains image information; extracts texture information and color information of the image information; inputs the texture information into a trained classification model and obtains the type of the texture information output by the trained classification model; obtains, based on a preset mapping relationship, the picture corresponding to the type of the texture information as the picture to be processed; sets the path channels of the picture to be processed to the color information to generate a target picture; and obtains a target display picture based on the target picture and sends it to a wearable device to instruct the wearable device to display the target display picture. Compared with the picture generation method shown in fig. 1, in this embodiment the coloring is realized by setting the path channels of the picture to be processed to the acquired color information, and the target display picture is sent to the wearable device for display, which improves the display diversity of the wearable device.
Referring to fig. 10, fig. 10 is a timing chart illustrating a picture generation method according to still another embodiment of the present application. The flow shown in fig. 10 will be explained in detail below; the picture generation method may specifically include the following steps:
Step S501: The electronic device acquires image information, extracts texture information of the image information, and extracts color information of the image information.
Step S502: The electronic device acquires, based on a preset mapping relationship, the picture corresponding to the texture information as the picture to be processed.
Step S503: The electronic device processes the picture to be processed based on the color information to generate a target picture.
Step S504: The electronic device acquires a target display picture based on the target picture and sends the target display picture to the device to be replaced.
Step S505: The device to be replaced receives the target display picture and displays it.
In the picture generation method provided by another embodiment of the application, the electronic device obtains image information, extracts texture information and color information of the image information, obtains the picture corresponding to the texture information as the picture to be processed based on a preset mapping relationship, processes the picture to be processed based on the color information to generate a target picture, obtains a target display picture based on the target picture, and sends the target display picture to the device to be replaced; the device to be replaced receives and displays the target display picture. In this way, the target display picture is obtained from the texture information and color information of the image information and displayed on the device to be replaced, which improves the diversity of displayed pictures.
Referring to fig. 11, fig. 11 is a block diagram illustrating a picture generating apparatus 200 according to an embodiment of the present disclosure. The picture generating apparatus 200 is applied to the above-mentioned electronic device, and will be explained with reference to the block diagram shown in fig. 11, where the picture generating apparatus 200 includes: an image information obtaining module 210, a to-be-processed picture obtaining module 220, and a target picture generating module 230, wherein:
The image information obtaining module 210 is used for obtaining image information, extracting texture information of the image information, and extracting color information of the image information.
Further, the image information obtaining module 210 includes an image information display sub-module, a selection operation receiving sub-module, a target image area determining sub-module, and an information extraction sub-module, wherein:
The image information display sub-module is used for displaying the image information.
The selection operation receiving sub-module is used for receiving a selection operation applied to the image information.
The target image area determining sub-module is used for determining a target image area in the image information based on the selection operation.
Further, the target image area determining sub-module includes a selection control display unit and a target image area determining unit, wherein:
The selection control display unit is used for generating and displaying a selection control in the image information.
The target image area determining unit is used for responding to a dragging operation acting on the selection control and taking the area corresponding to the dragged selection control in the image information as the target image area.
The information extraction sub-module is used for extracting texture information of the target image area and extracting color information of the target image area.
Further, the image information obtaining module 210 includes a category number obtaining sub-module and a color information extraction sub-module, wherein:
The category number obtaining sub-module is used for obtaining the number of types of color information in the image information.
The color information extraction sub-module is used for extracting a specified number of types of color information from the image information, in order of the amount of each type of color information, when the number of types of color information is greater than the specified number of types.
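These two sub-modules can be illustrated together: count the distinct colors in the image and, when there are more color types than the specified number, keep only the most prevalent ones. Reading "in order of the amount of each type of color information" as frequency order is an interpretation for this sketch, not the patent's stated algorithm.

```python
from collections import Counter

def extract_dominant_colors(pixels, specified_number=3):
    """Count each distinct RGB value in a pixel grid and keep at most
    `specified_number` colors, ordered by how much of the image each
    color covers (most frequent first)."""
    counts = Counter(p for row in pixels for p in row)
    return [color for color, _ in counts.most_common(specified_number)]
```

A real implementation would likely quantize nearby shades into buckets before counting, so that slight sensor noise does not split one perceived color into many types.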
The to-be-processed picture obtaining module 220 is used for obtaining, based on a preset mapping relationship, the picture corresponding to the texture information as the picture to be processed.
Further, the to-be-processed picture obtaining module 220 includes a texture information type obtaining sub-module and a to-be-processed picture obtaining sub-module, wherein:
The texture information type obtaining sub-module is used for inputting the texture information into the trained classification model and obtaining the type of the texture information output by the trained classification model.
The to-be-processed picture obtaining sub-module is used for obtaining, based on the preset mapping relationship, the picture corresponding to the type of the texture information as the picture to be processed.
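A toy stand-in for this pair of sub-modules: a nearest-prototype "classifier" in place of the trained classification model, plus a plain dictionary as the preset mapping from texture type to template picture. The prototype feature vectors, type names, and template file names are all hypothetical.

```python
def classify_texture(feature, prototypes):
    """Nearest-prototype stand-in for the trained classification model:
    return the texture type whose prototype feature vector is closest
    (squared Euclidean distance) to the extracted feature."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda t: dist2(feature, prototypes[t]))

# Hypothetical preset mapping from texture type to a template picture.
PRESET_MAPPING = {
    "stripes": "stripes_template.svg",
    "dots": "dots_template.svg",
}

def pick_picture_to_process(feature, prototypes, mapping=PRESET_MAPPING):
    """Classify the texture, then look the type up in the preset mapping."""
    return mapping[classify_texture(feature, prototypes)]
```

In the embodiment the classifier is a trained model rather than a distance rule, but the mapping step is the same shape: type in, template picture out.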
The target picture generation module 230 is used for processing the picture to be processed based on the color information to generate a target picture.
Further, the target picture generation module 230 includes a target color information acquisition sub-module and a first target picture generation sub-module, wherein:
The target color information acquisition sub-module is used for processing the color information based on a color optimization algorithm to obtain target color information.
Further, the target color information acquisition sub-module includes a color optimization mode searching unit and a first target color information acquisition unit, wherein:
The color optimization mode searching unit is used for searching a preset color optimization table for the color optimization mode corresponding to the color information.
The first target color information acquisition unit is used for processing the color information based on the color optimization mode to obtain the target color information.
Further, the target color information acquisition sub-module includes a brightness acquisition unit and a second target color information acquisition unit, wherein:
The brightness acquisition unit is used for acquiring the brightness corresponding to the color information.
The second target color information acquisition unit is used for processing the color information based on the color optimization algorithm to obtain target color information when the brightness is less than a preset brightness, wherein the brightness corresponding to the target color information is greater than the preset brightness.
The first target picture generation sub-module is used for processing the picture to be processed based on the target color information to generate the target picture.
Further, the picture to be processed is an SVG picture, and the target picture generation module includes a second target picture generation sub-module, wherein:
The second target picture generation sub-module is used for setting the path channel of the picture to be processed to the color information to generate the target picture.
Further, the picture generation apparatus 200 includes a target display picture acquisition module, wherein:
The target display picture acquisition module is used for acquiring a target display picture based on the target picture and displaying the target display picture on the device to be replaced.
Further, the number of target pictures is multiple, and the target display picture acquisition module includes a target picture display sub-module, a target picture selection sub-module, and a first picture replacement sub-module, wherein:
The target picture display sub-module is used for displaying the plurality of target pictures.
The target picture selection sub-module is used for obtaining a selection operation acting on the plurality of target pictures and selecting one target picture from them based on the selection operation.
The first picture replacement sub-module is used for acquiring the target display picture corresponding to the selected target picture and displaying the target display picture on the device to be replaced.
Further, the target display picture acquisition module includes a target display picture acquisition sub-module, wherein:
The target display picture acquisition sub-module is used for acquiring a target display picture based on the target picture and sending the target display picture to the wearable device to instruct the wearable device to display it.
Further, the wearable device includes a smart watch, and the target display picture acquisition sub-module includes a target display picture acquisition unit, wherein:
The target display picture acquisition unit is used for acquiring a target display picture based on the target picture and sending it to the smart watch to instruct the smart watch to use the target display picture as the dial background and display it.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 12, a block diagram of an electronic device 100 according to an embodiment of the present application is shown. The electronic device 100 may be a smart phone, a tablet computer, an electronic book reader, or another electronic device capable of running applications. The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more applications being configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects various parts of the electronic device 100 using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is used for rendering and drawing the content to be displayed; and the modem is used for handling wireless communications. It can be understood that the modem may also not be integrated into the processor 110 but instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like. The data storage area may store data created by the electronic device 100 during use (such as a phone book, audio and video data, and chat log data), and the like.
Referring to fig. 13, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 300 has stored therein a program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 300 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 300 has storage space for program code 310 for performing any of the method steps described above. The program code can be read from or written into one or more computer program products. The program code 310 may be compressed, for example, in a suitable form.
To sum up, the picture generation method and apparatus, electronic device, and storage medium provided by the embodiments of the present application obtain image information, extract texture information and color information of the image information, obtain the picture corresponding to the texture information as the picture to be processed based on a preset mapping relationship, and process the picture to be processed based on the color information to generate a target picture. In this way, the picture to be processed is obtained from the texture information of the image information and is processed with the color information of the image information to generate the target picture, so that the generated target picture is more consistent with the characteristics of the image information, which meets users' personalized needs and improves the user experience.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (16)

1. A picture generation method is applied to an electronic device, and comprises the following steps:
acquiring image information, extracting texture information of the image information, and extracting color information of the image information;
acquiring a picture corresponding to the texture information as a picture to be processed based on a preset mapping relation;
and processing the picture to be processed based on the color information to generate a target picture.
2. The method according to claim 1, wherein the obtaining the picture corresponding to the texture information as the picture to be processed based on the preset mapping relationship comprises:
inputting the texture information into a trained classification model, and acquiring the type of the texture information output by the trained classification model;
and acquiring a picture corresponding to the type of the texture information as the picture to be processed based on the preset mapping relation.
3. The method according to claim 1, wherein the processing the picture to be processed based on the color information to generate a target picture comprises:
processing the color information based on a color optimization algorithm to obtain target color information;
and processing the picture to be processed based on the target color information to generate a target picture.
4. The method of claim 3, wherein the processing the color information based on the color optimization algorithm to obtain target color information comprises:
searching a color optimization mode corresponding to the color information from a preset color optimization table;
and processing the color information based on the color optimization mode to obtain target color information.
5. The method of claim 3, wherein the processing the color information based on the color optimization algorithm to obtain target color information comprises:
acquiring the brightness corresponding to the color information;
and when the brightness is smaller than the preset brightness, processing the color information based on the color optimization algorithm to obtain target color information, wherein the corresponding brightness of the target color information is larger than the preset brightness.
6. The method of claim 1, wherein extracting texture information and extracting color information of the image information comprises:
displaying the image information;
receiving a selection operation acting on the image information;
determining a target image area in the image information based on the selection operation;
and extracting texture information of the target image area and extracting color information of the target image area.
7. The method of claim 6, wherein receiving a selection operation applied to the image information, and determining a target image area in the image information based on the selection operation comprises:
generating and displaying a selection control in the image information;
responding to the dragging operation acted on the selection control, and taking the corresponding area of the dragged selection control in the image information as the target image area.
8. The method according to claim 1, further comprising, after the processing the picture to be processed based on the color information to generate a target picture:
acquiring a target display picture based on the target picture, and displaying the target display picture on a device to be replaced.
9. The method according to claim 8, wherein the number of the target pictures is multiple, and the acquiring a target display picture based on the target picture and displaying the target display picture on the device to be replaced comprises:
displaying the plurality of target pictures;
acquiring a selection operation acting on the plurality of target pictures, and selecting one target picture from the plurality of target pictures based on the selection operation; and
acquiring the target display picture corresponding to the selected target picture, and displaying the target display picture on the device to be replaced.
10. The method according to claim 8, wherein the acquiring a target display picture based on the target picture and displaying the target display picture on the device to be replaced comprises:
acquiring a target display picture based on the target picture, and sending the target display picture to a wearable device to instruct the wearable device to display the target display picture.
11. The method of claim 10, wherein the wearable device comprises a smart watch, and the acquiring a target display picture based on the target picture and sending the target display picture to the wearable device to instruct the wearable device to display the target display picture comprises:
acquiring a target display picture based on the target picture, and sending the target display picture to the smart watch to instruct the smart watch to use the target display picture as a dial background and display the target display picture.
12. The method of claim 1, wherein the extracting color information of the image information comprises:
acquiring the number of types of color information in the image information;
and when the number of types of the color information is greater than a specified number of types, extracting the specified number of types of color information from the image information in order of the amount of each type of color information.
13. The method according to any one of claims 1-12, wherein the picture to be processed is an SVG picture, and the processing the picture to be processed based on the color information to generate a target picture comprises:
and setting the path channel of the picture to be processed as the color information, and generating the target picture.
14. A picture generation device applied to an electronic device, the device comprising:
the image information acquisition module is used for acquiring image information, extracting texture information of the image information and extracting color information of the image information;
the to-be-processed picture acquisition module is used for acquiring a picture corresponding to the texture information as the to-be-processed picture based on a preset mapping relation;
and the target picture generation module is used for processing the picture to be processed based on the color information to generate a target picture.
15. An electronic device comprising a memory and a processor, the memory being coupled to the processor and storing instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1-13.
16. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 13.
CN202010112938.8A 2020-02-24 2020-02-24 Picture generation method and device, electronic equipment and storage medium Pending CN113298896A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010112938.8A CN113298896A (en) 2020-02-24 2020-02-24 Picture generation method and device, electronic equipment and storage medium
US17/183,298 US20210264191A1 (en) 2020-02-24 2021-02-23 Method and device for picture generation, electronic device, and storage medium
PCT/CN2021/077439 WO2021169945A1 (en) 2020-02-24 2021-02-23 Method and device for picture generation, electronic device, and storage medium
EP21158731.6A EP3869466A1 (en) 2020-02-24 2021-02-23 Method and device for picture generation, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN113298896A true CN113298896A (en) 2021-08-24

Family

ID=77317847


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114491112A (en) * 2022-02-16 2022-05-13 浙江网商银行股份有限公司 Information processing method and device
CN114582301A (en) * 2022-03-08 2022-06-03 康键信息技术(深圳)有限公司 Information display method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706793A (en) * 2009-11-16 2010-05-12 中兴通讯股份有限公司 Method and device for searching picture
CN105447846A (en) * 2014-08-25 2016-03-30 联想(北京)有限公司 Image-processing method and electronic device
KR20160044285A (en) * 2014-10-15 2016-04-25 엘지전자 주식회사 Watch-type mobile terminal
US20180357800A1 (en) * 2017-06-09 2018-12-13 Adobe Systems Incorporated Multimodal style-transfer network for applying style features from multi-resolution style exemplars to input images
US20190026870A1 (en) * 2017-07-19 2019-01-24 Petuum Inc. Real-time Intelligent Image Manipulation System
CN109325903A (en) * 2017-07-31 2019-02-12 北京大学 The method and device that image stylization is rebuild

Similar Documents

Publication Publication Date Title
CN111368893B (en) Image recognition method, device, electronic equipment and storage medium
US10853987B2 (en) Generating cartoon images from photos
US9501724B1 (en) Font recognition and font similarity learning using a deep neural network
CN108961303A (en) Image processing method and device, electronic equipment and computer-readable medium
CN109117760B (en) Image processing method, image processing device, electronic equipment and computer readable medium
US11861769B2 (en) Electronic device and operating method thereof
US11436436B2 (en) Data augmentation system, data augmentation method, and information storage medium
CN114155543A (en) Neural network training method, document image understanding method, device and equipment
WO2021164550A1 (en) Image classification method and apparatus
CN112308144A (en) Method, system, equipment and medium for screening samples
US11138699B2 (en) Utilizing context-aware sensors and multi-dimensional gesture inputs to efficiently generate enhanced digital images
CN110097616B (en) Combined drawing method and device, terminal equipment and readable storage medium
CN108876751A (en) Image processing method, device, storage medium and terminal
CN113298896A (en) Picture generation method and device, electronic equipment and storage medium
WO2021169945A1 (en) Method and device for picture generation, electronic device, and storage medium
CN112418327A (en) Training method and device of image classification model, electronic equipment and storage medium
CN113469092A (en) Character recognition model generation method and device, computer equipment and storage medium
CN113821296B (en) Visual interface generation method, electronic equipment and storage medium
CN114548218A (en) Image matching method, device, storage medium and electronic device
CN116229188B (en) Image processing display method, classification model generation method and equipment thereof
WO2024088269A1 (en) Character recognition method and apparatus, and electronic device and storage medium
US11250542B2 (en) Mosaic generation apparatus and method
CN113313066A (en) Image recognition method, image recognition device, storage medium and terminal
WO2023174063A1 (en) Background replacement method and electronic device
CN114363528B (en) Dial generation method, dial generation device, terminal equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination