CN111882501A - Image acquisition method and device - Google Patents

Image acquisition method and device Download PDF

Info

Publication number
CN111882501A
CN111882501A (application CN202010739119.6A)
Authority
CN
China
Prior art keywords
image
dish
information
real
dishes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010739119.6A
Other languages
Chinese (zh)
Inventor
高德迎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202010739119.6A priority Critical patent/CN111882501A/en
Publication of CN111882501A publication Critical patent/CN111882501A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0641 Shopping interfaces
    • G06Q 30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The specification discloses an image acquisition method and an image acquisition device. Upon receiving a request to start an image sensor, the image sensor is controlled to start and a real-time dish image is acquired through it. The dishes in the real-time dish image are identified, their environmental characteristics are determined, the image parameters of the real-time dish image are adjusted according to the information of the dishes, and the dishes are rendered according to the environmental characteristics and the image parameters. If an image acquisition request is received, the final dish image is determined from the rendering result. In this way, the user obtains a high-quality dish image conveniently and efficiently, without spending extra time or cost on beautifying it.

Description

Image acquisition method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image acquisition method and apparatus.
Background
Currently, in the process of publishing information by a user, particularly when a merchant publishes commodity information or the user evaluates the merchant, dish images can be uploaded to increase the browsing amount.
Taking a merchant publishing commodity information as an example: when the merchant publishes commodity information related to dishes, a smart device such as a mobile phone can be used to capture dish images, and image processing software installed on the device can be used to beautify them until a satisfactory dish image is obtained. Alternatively, the merchant can purchase physical props and place them around the dishes to beautify the surroundings, so that higher-quality dish images can be shot; the physical props may include a decorative background stand, a cartoon model, and the like.
Beautifying dish images with image processing software consumes a large amount of time, and purchasing physical props increases the cost of acquiring dish images. How to acquire high-quality dish images conveniently and efficiently has therefore become a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the specification provides an image acquisition method and an image acquisition device, so as to partially solve the problems in the prior art.
The embodiment of the specification adopts the following technical scheme:
the image acquisition method provided by the present specification comprises:
receiving a request for starting an image sensor;
responding to the request, controlling the image sensor to start, and acquiring a real-time dish image through the image sensor;
identifying dishes in the real-time dish image and determining environmental characteristics of the dishes;
adjusting the image parameters of the real-time dish image according to the information of the dishes;
rendering the dishes according to the environmental features and the image parameters;
and if an image acquisition request is received, determining a final dish image according to a rendering result.
Optionally, adjusting an image parameter of the real-time dish image according to the information of the dish specifically includes:
determining attribute information of the dishes;
determining the picture style corresponding to the dish according to the corresponding relation between the attribute information of the dish and the picture style which is determined in advance;
and adjusting the image parameters of the real-time dish image according to the picture style corresponding to the dish.
Optionally, determining in advance the corresponding relationship between the attribute information of the dish and the picture style specifically includes:
acquiring a historical image containing the dish;
for each historical image, determining attribute information of the dishes contained in the historical image and image parameters of the historical image;
acquiring image description information of the historical image;
determining the picture style of the historical image according to at least one of the image parameter and the image description information of the historical image;
and determining the corresponding relation between the attribute information and the picture style of the dishes according to the attribute information of the dishes contained in the historical images and the picture style of the historical images.
Optionally, rendering the dishes according to the environmental features and the image parameters specifically includes:
determining illumination information of the real-time dish image according to the environmental characteristics, wherein the illumination information comprises at least one of illumination direction and illumination intensity;
setting information of a virtual light source according to the illumination information;
determining an intermediate image of the dish under the irradiation of the virtual light source according to the position information of the dish in the real-time dish image and the virtual light source;
and rendering the dish according to the intermediate image and the image parameter.
Optionally, rendering the dishes according to the environmental features and the image parameters specifically includes:
determining a virtual prop matched with the dish in prestored virtual props according to the information of the dish;
providing the virtual item matched with the dish for the user to select;
determining a virtual prop selected by the user;
and rendering the dishes and the virtual props selected by the user according to the environment characteristics and the image parameters.
Optionally, determining a final dish image according to the rendering result, specifically including:
determining position information of the virtual item placed in the real-time dish image by the user aiming at each virtual item selected by the user;
and determining a final dish image containing the dishes and the virtual props selected by the user according to a rendering result.
Optionally, identifying the dishes in the real-time dish image specifically includes:
inputting the real-time dish image into a pre-trained recognition model to obtain the information of the dishes in the real-time dish image determined by the recognition model;
wherein the recognition model is trained in advance by:
acquiring a sample dish image, and determining real dish information in the sample dish image;
inputting the sample dish image into a recognition model to be trained to obtain predicted dish information output by the recognition model to be trained;
and training the recognition model to be trained by taking the minimum difference between the real dish information and the predicted dish information as a training target.
The present specification provides an image acquisition apparatus, the apparatus comprising:
the receiving module is used for receiving a request for starting the image sensor;
the response module is used for responding to the request, controlling the image sensor to start and acquiring a real-time dish image through the image sensor;
the identification module is used for identifying the dishes in the real-time dish image and determining the environmental characteristics of the dishes;
the adjusting module is used for adjusting the image parameters of the real-time dish images according to the information of the dishes;
the rendering module is used for rendering the dishes according to the environment characteristics and the image parameters;
and the determining module is used for determining the final dish image according to the rendering result if the image acquisition request is received.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described image acquisition method.
The electronic device provided by the present specification includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the image acquisition method when executing the program.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
in this specification, a request to start an image sensor may be received, the image sensor may be controlled to start, and a real-time dish image may be acquired through the image sensor. The dishes in the real-time dish image are identified, their environmental characteristics are determined, the image parameters of the real-time dish image are adjusted according to the information of the dishes, and the dishes are rendered according to the environmental characteristics and the image parameters. If an image acquisition request is received, the final dish image is determined from the rendering result. In this way, the user does not need to spend extra time or cost on beautifying the dish image, and a high-quality dish image is obtained conveniently and efficiently.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and, together with the description, serve to explain the specification; they do not constitute an undue limitation of the specification. In the drawings:
FIG. 1 is a flowchart of an image acquisition method provided in an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a method for determining a correspondence between attribute information of dishes and a style of a picture according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a real-time dish image and a final dish image provided in an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an image acquisition apparatus provided in an embodiment of the present specification;
fig. 5 is a schematic diagram of an electronic device corresponding to fig. 1 provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure clearer, the technical solutions of the present disclosure will be clearly and completely described below with reference to specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without creative effort fall within the protection scope of the present specification.
In many scenarios, a user needs to photograph dishes: for example, a merchant uploads images of a commodity (a dish) to an online platform, or a user publishes comment information about a dish. Taking the first case as an example, the merchant can upload commodity information to a third-party service platform in order to increase its browsing volume, and the commodity information can include images of the dishes. Most current image sensors with an Augmented Reality (AR) photographing function can beautify human faces but neglect the beautification of dishes. Therefore, in order to obtain a high-quality dish image, a merchant can purchase physical props and place them around the dish so as to shoot a more exquisite dish image, or the merchant can first shoot the dish image and then beautify it with image processing software to obtain a higher-quality dish image.
However, acquiring a high-quality dish image with the prior art requires the user to spend considerable effort and cost, so the user cannot obtain such an image conveniently and efficiently.
Therefore, the present specification provides an image acquisition method. When a request to start an image sensor is received, the image sensor is controlled to start in response to the request, and a real-time dish image is collected through the image sensor. The dishes in the real-time dish image are then identified, their environmental characteristics are determined, and the image parameters of the real-time dish image are adjusted according to the information of the dishes. The dishes are then rendered according to the environmental characteristics and the image parameters, and when an image acquisition request is received, the final dish image is determined according to the rendering result. In this way, the present specification achieves the effect of automatically acquiring high-quality dish images without the user spending additional cost and effort.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image obtaining method provided in an embodiment of the present disclosure, which may specifically include the following steps:
s100: a request to start an image sensor is received.
S102: and responding to the request, controlling the image sensor to start, and acquiring a real-time dish image through the image sensor.
In this specification, a user may acquire a dish image using an electronic device equipped with an image sensor, or using a server capable of communicating with an image sensor. The electronic device may be a smart or portable device such as a mobile phone or a tablet computer, and the server may be a single device or a distributed server composed of multiple devices. For convenience of description, this specification takes the electronic device as an example to describe the process of acquiring a dish image.
The electronic device can monitor the user's operations, and when it detects that the user has clicked the camera control, it receives the request to start the image sensor. Alternatively, the electronic device may issue the request by jumping from another program while executing another service. Other sources are also possible; for example, a server in communication with the electronic device may send a request to start the image sensor, so that the electronic device receives it. The specification does not limit the source of the request, as long as the electronic device can receive it.
After receiving the request, the electronic device controls the image sensor to start in response to it. The electronic device can then collect dish images in real time through the image sensor as real-time dish images. A real-time dish image may be an image collected in real time by the image sensor and stored in the cache: while the sensor's underlying hardware is working, it outputs line and field signals in real time to acquire one frame of real-time dish image, which is stored in the cache immediately. Because the line and field signals are output continuously during operation, the image sensor keeps acquiring new frames, and the real-time dish image in the cache is continuously overwritten.
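By way of illustration only, the following Python sketch shows how such a continuously overwritten frame cache might be kept alongside a preview loop; the use of OpenCV, the FrameCache class and the camera index are assumptions of the example and are not prescribed by this specification.

```python
# Minimal sketch of a preview loop that keeps only the latest frame in a cache,
# mirroring the "continuously overwritten real-time dish image" described above.
# OpenCV, the class name and camera index 0 are assumptions of this example.
import threading
import cv2  # pip install opencv-python


class FrameCache:
    """Holds the most recent frame; older frames are overwritten."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def update(self, frame):
        with self._lock:
            self._frame = frame

    def latest(self):
        with self._lock:
            return None if self._frame is None else self._frame.copy()


def preview_loop(cache: FrameCache, stop_event: threading.Event):
    sensor = cv2.VideoCapture(0)          # "start the image sensor"
    try:
        while not stop_event.is_set():
            ok, frame = sensor.read()     # one frame per capture cycle
            if ok:
                cache.update(frame)       # the cache is overwritten in place
    finally:
        sensor.release()
```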
The real-time dish image may also be an image retrieved from the cache and stored in non-volatile memory in response to a user operation; in that case the electronic device has received an image acquisition request and stores, in non-volatile memory, the real-time dish image that was in the cache at the moment the request was received. The real-time dish image may be shown on the display interface of the electronic device, in which case the specification can display two dish images to the user: the real-time dish image itself, and the final dish image obtained by rendering it. Alternatively, the real-time dish image may not be displayed at all, so the user never actually sees it; the electronic device renders it to obtain the final dish image, and in this case only the final dish image is displayed to the user.
S104: identifying the dishes in the real-time dish image and determining the environmental characteristics of the dishes.
After the real-time dish image is acquired, the electronic device can identify the dishes in it automatically, or it can monitor the user's operations and identify the dishes according to those operations.
Specifically, the electronic device may input the real-time dish image into a pre-trained recognition model to obtain information of the dishes in the real-time dish image determined by the recognition model, where the information of the dishes may include names of the dishes, positions of the dishes in the real-time dish image, feature information of the dishes, and the like, and the feature information of the dishes may include information of colors, shapes, and the like of the dishes.
When the electronic device trains the recognition model in advance, it can obtain a sample dish image, determine the real dish information in the sample dish image, input the sample dish image into the recognition model to be trained to obtain the predicted dish information output by that model, and train the model with minimizing the difference between the real dish information and the predicted dish information as the training target.
Specifically, the electronic device may label in advance the actual name of the dish in the sample dish image, the actual position of the dish in the sample image, the actual feature information of the dish, and so on. It then inputs the sample dish image into the recognition model to be trained and obtains the predicted name, predicted position and predicted feature information of the dish output by the model. Next, the electronic device determines the difference between the predicted dish information and the actual dish information, that is, the difference between the predicted and actual names, between the predicted and actual positions, and between the predicted and actual feature information. According to at least one of these differences it determines the loss of the recognition model to be trained, and trains the model with minimizing this loss as the training target.
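By way of a non-limiting example, the following PyTorch sketch shows a training step that jointly minimizes the differences between predicted and real dish names and positions; the backbone, the tensor shapes and the equal weighting of the two loss terms are assumptions of the example, not requirements of the recognition model described here.

```python
# Sketch: a model predicting a dish name (classification) and a dish position
# (box regression), trained by summing and minimizing the two differences.
import torch
import torch.nn as nn


class DishRecognizer(nn.Module):
    def __init__(self, num_dishes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.name_head = nn.Linear(32, num_dishes)   # predicted dish name
        self.box_head = nn.Linear(32, 4)             # predicted position (x, y, w, h)

    def forward(self, images):
        feats = self.backbone(images)
        return self.name_head(feats), self.box_head(feats)


def train_step(model, optimizer, images, true_names, true_boxes):
    """One step: minimize the gap between predicted and real dish information."""
    pred_names, pred_boxes = model(images)
    loss = (nn.functional.cross_entropy(pred_names, true_names)
            + nn.functional.smooth_l1_loss(pred_boxes, true_boxes))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```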
In addition, when identifying the dishes in the real-time dish image, the electronic device can also obtain a large number of sample dish images in advance and determine the standard features of each dish by extracting the dish features from those sample images; it then extracts the features of the dishes in the real-time dish image, compares them with the standard features, and determines the dish information from the comparison result. Of course, the electronic device may also identify the dishes in the real-time dish image in other ways, the details of which are not repeated in this specification.
After identifying the dish in the real-time dish image, the electronic device may also determine an environmental characteristic of the dish. The environment characteristics comprise illumination characteristics, wind direction characteristics, wind speed characteristics and ornament characteristics, and of course, the environment characteristics can also comprise characteristics of other aspects, such as container characteristics and the like, wherein the illumination characteristics represent illumination information when a real-time dish image is collected, so on, the wind direction characteristics represent wind direction information, the wind speed characteristics represent wind speed information, the ornament characteristics represent information of ornaments around the dish, wherein the ornament can comprise a background decorative picture, a decorative flower and the like, and the container characteristics represent information of a container for containing dishes.
S106: and adjusting the image parameters of the real-time dish image according to the information of the dishes.
After identifying the dish in the real-time dish image, the electronic device may determine attribute information of the dish according to the information of the dish.
Specifically, the attribute information of the dish includes the cuisine to which the dish belongs and whether the dish is a meat or a vegetable dish. The electronic device can determine the cuisine to which the dish belongs according to the name of the dish and a predetermined knowledge graph. The cuisines may include an X1 cuisine, an X2 cuisine and so on, and a cuisine may further contain sub-cuisines; for example, the sub-cuisines of the X1 cuisine may include an X11 sub-cuisine, an X12 sub-cuisine and the like. Meanwhile, the electronic device can determine the meat or vegetable category of the dish according to the feature information of the dish. The attribute information of the dish may also include other information, such as taste information.
For example, if the name of the dish is YY dish, its attribute information may be determined as follows: the dish belongs to the X1 cuisine, and within it to the X11 sub-cuisine. If the feature information of the dish is green and slender in shape, its meat or vegetable category can be determined to be vegetable, and so on.
The electronic device may determine a knowledge graph about dishes in advance. The knowledge graph may describe the relationship between the information of a dish and its attributes; for example, the entities in the knowledge graph may include the name of the dish, its cuisine, sub-cuisine, dish characteristics, meat or vegetable category, taste, and so on. The electronic device can collect a large amount of dish knowledge and then process it through knowledge extraction, knowledge representation, knowledge fusion and the like to obtain the knowledge graph. The specific process of building the knowledge graph is not set forth in detail in this specification.
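As a purely illustrative sketch, the dish knowledge graph might be held as (entity, relation, entity) triples and queried as follows; the entity and relation names are placeholders, and the storage format is an assumption of the example.

```python
# An illustrative, hand-built fragment of a dish knowledge graph stored as
# (head, relation, tail) triples. The names are placeholders only.
TRIPLES = [
    ("YY dish", "belongs_to_cuisine", "X1 cuisine"),
    ("YY dish", "belongs_to_sub_cuisine", "X11 sub-cuisine"),
    ("YY dish", "has_feature", "green"),
    ("YY dish", "has_feature", "slender"),
    ("YY dish", "meat_or_vegetable", "vegetable"),
]


def query(head: str, relation: str):
    """Return every tail linked to `head` by `relation`."""
    return [t for h, r, t in TRIPLES if h == head and r == relation]


if __name__ == "__main__":
    print(query("YY dish", "belongs_to_cuisine"))    # ['X1 cuisine']
    print(query("YY dish", "meat_or_vegetable"))     # ['vegetable']
```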
And then, determining the picture style corresponding to the dish according to the corresponding relation between the attribute information of the dish and the picture style determined in advance.
Specifically, the electronic device may determine a corresponding relationship between attribute information of a dish and a picture style in advance, as shown in fig. 2, fig. 2 is a schematic flow chart of a method for determining a corresponding relationship between attribute information of a dish and a picture style provided in an embodiment of this specification, and specifically, the method may include the following steps:
s1060: and acquiring a historical image containing the dish.
Specifically, the electronic device may obtain user evaluation information, which includes a historical image containing the dish and text content. The user evaluation information is information published by users about the dish; in other words, the electronic device can obtain, from the third-party service platform, historical images containing dishes, in particular historical images containing the same dish as the one in the real-time dish image. Of course, the electronic device may also obtain commodity information published by merchants, which includes a commodity image containing the dish (i.e. a historical image) and text content.
S1062: for each history image, attribute information of the dishes included in the history image and image parameters of the history image are determined.
Specifically, for each historical image, the information of the dishes in the historical image can be determined through the recognition model, and the attribute information of the dishes contained in the historical image is determined according to the information of the dishes and the knowledge graph. At the same time, the image parameters of the historical image may be determined, where the image parameters include contrast, brightness, sharpness and saturation. In addition, the electronic device may further determine the virtual prop information and the image tag information contained in the historical image, where the image tag information includes the text and border information in the historical image.
For example, for the history image, the electronic device may determine that the attribute information of the dishes included in the history image is X1 cuisine, the image parameters of the history image are contrast 50, brightness 70, sharpness 70 and saturation 50, the virtual props in the history image are flowers, the image label information includes text content "zzzz", text borders are thick borders, and the border color is black.
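As an illustrative sketch under assumed definitions (standard-deviation contrast, Laplacian-variance sharpness, mean HSV saturation), the four image parameters named above could be measured from a historical image as follows; the specification names the parameters but does not prescribe how they are computed.

```python
# Sketch: measuring brightness, contrast, sharpness and saturation of an image.
# The concrete metrics below are assumptions made for this example.
import cv2
import numpy as np


def image_parameters(bgr: np.ndarray) -> dict:
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return {
        "brightness": float(gray.mean()),                       # mean intensity, 0..255
        "contrast": float(gray.std()),                          # spread of intensities
        "sharpness": float(cv2.Laplacian(gray, cv2.CV_64F).var()),
        "saturation": float(hsv[:, :, 1].mean()),               # mean S channel, 0..255
    }
```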
S1064: image description information of the history image is acquired.
Besides the historical image itself, the electronic device can also obtain the text content corresponding to the historical image; that is, the image description information is the text content in the user comment information. Alternatively, the electronic device may extract keywords from that text content and use the keywords as the image description information, for example keywords such as "emerald green" or "hot and spicy".
S1066: and determining the picture style of the historical image according to at least one of the image parameter and the image description information of the historical image.
The electronic device may preset several picture styles, for example an H1 style, an H2 style and so on, and determine for each style its corresponding image parameters and image description information. Given the image parameters and image description information of a historical image, it compares the image parameters of the historical image with the image parameters of each picture style, and the image description information of the historical image with the image description information of each picture style. From the comparison results it determines the matching degree between the historical image and each picture style, and selects the picture style corresponding to the historical image according to the matching degrees. For example, the picture styles may be sorted by matching degree and the highest-ranked style selected as the picture style corresponding to the historical image.
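A minimal sketch of such a matching-degree computation is given below; the preset styles, their reference parameters and keywords, and the equal weighting of the two similarity terms are all assumptions of the example.

```python
# Sketch: score a historical image against each preset picture style by
# comparing image parameters and description keywords, then pick the best.
STYLES = {
    "H1 style": {"params": {"brightness": 70, "contrast": 50,
                            "sharpness": 70, "saturation": 50},
                 "keywords": {"fresh", "emerald green"}},
    "H2 style": {"params": {"brightness": 40, "contrast": 80,
                            "sharpness": 60, "saturation": 85},
                 "keywords": {"hot and spicy", "rich"}},
}


def match_degree(image_params: dict, image_keywords: set, style: dict) -> float:
    # Parameter similarity: smaller average distance -> higher score.
    dist = sum(abs(image_params[k] - v) for k, v in style["params"].items())
    param_score = 1.0 / (1.0 + dist / len(style["params"]))
    # Keyword similarity: fraction of the style's keywords present in the description.
    kw_score = (len(image_keywords & style["keywords"]) / len(style["keywords"])
                if style["keywords"] else 0.0)
    return 0.5 * param_score + 0.5 * kw_score


def best_style(image_params: dict, image_keywords: set) -> str:
    ranked = sorted(STYLES,
                    key=lambda s: match_degree(image_params, image_keywords, STYLES[s]),
                    reverse=True)
    return ranked[0]   # the highest-ranked style is chosen
```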
S1068: and determining the corresponding relation between the attribute information and the picture style of the dishes according to the attribute information of the dishes contained in the historical images and the picture style of the historical images.
Through steps S1062-S1066 the electronic device can determine the attribute information of the dishes contained in each historical image and the picture style corresponding to each historical image, so it can select at least one of those picture styles as the picture style corresponding to the dish's attribute information. Specifically, a picture style may be selected at random, or the number of historical images corresponding to each picture style may be counted and the style with the largest number selected as the picture style corresponding to the attribute information of the dish, as sketched below.
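For example, the "largest number of historical images" rule can be sketched with a simple frequency count; the attribute value and style names below are placeholders.

```python
# Sketch: pick, for one dish attribute, the picture style that appears in the
# most historical images. The data is illustrative only.
from collections import Counter

# (dish attribute, picture style of the historical image), gathered in S1062-S1066
history = [
    ("X1 cuisine", "H1 style"),
    ("X1 cuisine", "H1 style"),
    ("X1 cuisine", "H2 style"),
]

counts = Counter(style for attr, style in history if attr == "X1 cuisine")
default_style = counts.most_common(1)[0][0]   # -> "H1 style"
```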
In addition, since the electronic device can determine the virtual item information and the image tag information included in each historical image, the electronic device can determine the picture style corresponding to each virtual item information and each image tag information. And determining the specific process of the picture style corresponding to each virtual item information and image label information, and referring to the process of the picture style corresponding to the attribute information of the dishes.
Therefore, after determining the attributes of the dish, the electronic device can select, from the candidate picture styles, the picture style corresponding to the dish according to the predetermined correspondence between dish attribute information and picture styles. Specifically, the electronic device can determine the image parameters of the real-time dish image, compare them with the image parameters corresponding to each candidate picture style, and select one picture style as the style corresponding to the dish according to the comparison result.
And finally, adjusting the image parameters of the real-time dish image according to the picture style corresponding to the dish.
After the picture style corresponding to the dish has been selected, the image parameters of the real-time dish image can be adjusted according to the image parameters of that style. For example, the style's image parameters may directly replace those of the real-time dish image, or the image parameters of the real-time dish image may be determined and the average of the style's parameters and the image's own parameters used as the adjusted image parameters, as in the sketch below.
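A minimal sketch of the averaging adjustment, assuming Pillow's ImageEnhance and treating the stored parameter values as multiplicative enhancement factors (an assumption of this example), is given below.

```python
# Sketch: average the style's parameters with the live image's parameters and
# apply them as Pillow enhancement factors (1.0 means "leave unchanged").
from PIL import Image, ImageEnhance


def adjust_toward_style(img: Image.Image, style_params: dict, live_params: dict) -> Image.Image:
    merged = {k: (style_params[k] + live_params.get(k, 1.0)) / 2.0 for k in style_params}
    img = ImageEnhance.Brightness(img).enhance(merged.get("brightness", 1.0))
    img = ImageEnhance.Contrast(img).enhance(merged.get("contrast", 1.0))
    img = ImageEnhance.Sharpness(img).enhance(merged.get("sharpness", 1.0))
    img = ImageEnhance.Color(img).enhance(merged.get("saturation", 1.0))
    return img


# Usage (illustrative file name and factors):
# frame = Image.open("realtime_dish.jpg")
# adjusted = adjust_toward_style(
#     frame,
#     {"brightness": 1.2, "contrast": 1.1, "sharpness": 1.3, "saturation": 1.1},
#     {"brightness": 1.0, "contrast": 1.0, "sharpness": 1.0, "saturation": 1.0})
```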
Alternatively, the electronic device can randomly select one of the candidate picture styles and adjust the image parameters of the real-time dish image according to that style's image parameters. At the same time, it can display every picture style corresponding to the dish's attribute information so that the user can choose one; it then readjusts the image parameters of the real-time dish image according to the style chosen by the user, and takes that style as the default picture style so that it can be used directly the next time a dish image containing this dish is acquired.
S108: and rendering the dishes according to the environment characteristics and the image parameters.
After adjusting the image parameters of the real-time dish image, the electronic device may determine the illumination information of the real-time dish image according to the environmental characteristics, where the illumination information may include at least one of an illumination direction and an illumination intensity. It then sets a virtual light source according to the illumination information, determines an intermediate image of the dish under the illumination of the virtual light source according to the position information of the dish in the real-time dish image and the virtual light source, and renders the dish according to the intermediate image and the image parameters.
Specifically, the electronic device may determine, from the illumination characteristic in the environmental characteristics, the illumination information at the time the real-time dish image was collected, and then place a virtual light source in the real-time dish image. The position of the virtual light source is determined from the illumination direction and illumination intensity in the illumination characteristic, and the illumination information of the virtual light source is set according to the relationship between the virtual light source's direction and intensity and those of the real-time dish image; for example, the illumination direction of the virtual light source may be set to the illumination direction in the illumination characteristic, and its intensity to the illumination intensity in the illumination characteristic.
Meanwhile, the electronic device can select the area where the dish is located according to the dish's position information in the real-time dish image, where that area includes both the dish and the container holding it. A shadow area corresponding to the dish area is determined according to the illumination direction of the virtual light source, and the illumination intensity of the shadow area is determined according to the intensity of the virtual light source, thereby determining the intermediate image of the dish under the illumination of the virtual light source. In other words, the shadow information corresponding to the dish is determined according to the dish's position in the real-time dish image and the virtual light source, and the intermediate image contains both the dish information and the shadow information of the dish under the virtual light source.
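The following greatly simplified 2D sketch illustrates the idea: the dish region is offset away from the assumed light direction and darkened in proportion to the light intensity. A real AR renderer would compute shadows from scene geometry, so the function and its constants are illustrative assumptions only.

```python
# Sketch: a flat 2D approximation of the shadow cast by the dish area under a
# virtual light source. Offsets and darkening factors are arbitrary examples.
import numpy as np


def add_shadow(image: np.ndarray, dish_box: tuple, light_dir: tuple, intensity: float) -> np.ndarray:
    """image: HxWx3 uint8; dish_box: (x, y, w, h); light_dir: unit vector (dx, dy)."""
    x, y, w, h = dish_box
    dx, dy = light_dir
    # The shadow falls on the side opposite the light, offset by a fraction of the box size.
    sx = int(x - dx * 0.3 * w)
    sy = int(y - dy * 0.3 * h)
    out = image.astype(np.float32)
    h_img, w_img = out.shape[:2]
    x0, x1 = max(sx, 0), min(sx + w, w_img)
    y0, y1 = max(sy, 0), min(sy + h, h_img)
    darkening = 1.0 - 0.4 * min(intensity, 1.0)   # stronger light -> darker shadow
    out[y0:y1, x0:x1] *= darkening
    return out.clip(0, 255).astype(np.uint8)
```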
In addition, the electronic device can also determine, among the pre-stored virtual props, the virtual props that match the dish according to the information of the dish, offer the matching virtual props to the user for selection, determine the virtual props selected by the user, and render the dish together with the selected virtual props according to the environmental characteristics and the image parameters. For example, when rendering the dish and the virtual props, the electronic device may render information such as their colors and depth of field.
In step S106, the electronic device may determine the virtual item information and the image tag information included in each historical image, so that the electronic device may sort the virtual items included in each historical image, select a plurality of virtual items from the virtual items included in each historical image according to the sorting result, and provide the selected virtual items and the virtual items matched with the dish information to the user at the same time.
After determining the virtual item selected by the user, for each virtual item selected by the user, the electronic device may determine location information where the user placed the virtual item in the real-time dish image. Specifically, the electronic device may monitor the operation of the user, and determine the position of the virtual item according to the position information of the user placing the virtual item in the real-time dish image.
In addition, the electronic device can preset a default position for each virtual prop; after the user selects a virtual prop, its position in the real-time dish image is determined from that default position. When presetting default positions, position rules may also be defined. For example, if a virtual prop is a piece of tableware, its default position may be set to the right of the dish, with the rule that tableware positions must not overlap; if a virtual prop is a decoration, its default position may be set to the upper-left corner of the area where the dish is located, with the rule that decorations are placed within the dish area.
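A minimal sketch of such preset-position rules might look as follows; the prop types, offsets and data structures are assumptions of the example.

```python
# Sketch: tableware defaults to the right of the dish and must not overlap
# other tableware; decorations default to the top-left of the dish area.
def default_position(prop_type: str, dish_box: tuple, placed: list) -> tuple:
    """dish_box: (x, y, w, h); placed: positions already used; returns (x, y)."""
    x, y, w, h = dish_box
    if prop_type == "tableware":
        pos = (x + w + 10, y)                 # right of the dish
        while pos in placed:                  # "tableware positions do not overlap"
            pos = (pos[0] + 40, pos[1])
        return pos
    if prop_type == "decoration":
        return (x, y)                         # upper-left corner of the dish area
    return (x, y + h + 10)                    # fallback: below the dish
```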
Therefore, the virtual item selected by the user can be added to the real-time dish image according to the position information of the virtual item selected by the user. And rendering the dishes according to the intermediate images and the image parameters, and simultaneously rendering the virtual props selected by the user, for example, determining shadow information of the virtual props selected by the user under a virtual light source.
In addition, the electronic device can sort the image tags found in the historical images according to the image tag information of those images and, based on the sorting result, offer the image tags to the user for selection. Alternatively, an image tag may be chosen from the image tags corresponding to the picture style of the dish. According to the preset position of the image tag, or the position where the user places it, the position of the selected image tag in the real-time dish image is determined, the tag is added to the real-time dish image, and it is rendered together with the dish.
S110: and if an image acquisition request is received, determining a final dish image according to a rendering result.
When the electronic equipment receives the image acquisition request, the electronic equipment can determine a final dish image containing dishes and virtual props selected by the user according to the rendering result.
Specifically, the electronic device may receive the image acquisition request at step S102, in which case it stores both the real-time dish image and the final dish image in non-volatile memory, or at the current step, in which case it stores only the final dish image in non-volatile memory. In other words, in this specification the image acquisition request may be received at any time before the final dish image is determined from the rendering result. The process of receiving the image acquisition request may refer to the process of receiving the request to start the image sensor in step S100, which is not repeated here.
The electronic device can determine, from the position information of the dish, of the virtual props selected by the user and of the image tags selected by the user, a final dish image containing the dish, those virtual props and those image tags. For example, if the virtual prop selected by the user is a virtual table, the dish and its container can be matted out of the real-time dish image and placed on the virtual table; the dish, the container and the virtual table are then rendered, and when an image acquisition request is received the final dish image is obtained from the rendering result. In other words, the final dish image contains only the dish from the real-time dish image, the virtual props selected by the user and the image tags selected by the user.
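A simplified sketch of this matting-and-compositing step is given below; how the dish mask is obtained (for example from the dish position output by the recognition model) is outside the sketch, and all names are illustrative.

```python
# Sketch: cut the dish out of the real-time image with a binary mask and paste
# it onto the virtual table image, so the result keeps only the dish and props.
import numpy as np


def composite(realtime: np.ndarray, dish_mask: np.ndarray, virtual_table: np.ndarray,
              offset: tuple = (0, 0)) -> np.ndarray:
    """realtime/virtual_table: HxWx3 uint8; dish_mask: HxW bool; offset: (dy, dx)."""
    out = virtual_table.copy()
    ys, xs = np.nonzero(dish_mask)
    dy, dx = offset
    ty, tx = ys + dy, xs + dx
    keep = (ty >= 0) & (ty < out.shape[0]) & (tx >= 0) & (tx < out.shape[1])
    out[ty[keep], tx[keep]] = realtime[ys[keep], xs[keep]]   # paste dish pixels only
    return out
```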
Fig. 3 is a schematic diagram of a real-time dish image and a final dish image provided in an embodiment of the present specification. In fig. 3, the left side is the real-time dish image, in which the dish is placed on a real table with sundries around it. In the final dish image obtained through steps S104 to S110, the dish is placed on a virtual table selected by the user, and a virtual fork selected by the user lies beside the dish; both the virtual table and the virtual fork are virtual props, and to achieve a more realistic effect, corresponding shadows are rendered around the dish and around the virtual fork.
Since the real-time dish image may be the image that the image sensor collects in real time and stores in the cache in step S102, after the electronic device receives the image acquisition request it can render the image currently in the cache to obtain the final dish image, and store the final dish image in non-volatile memory. Alternatively, the real-time dish image may be an image that was retrieved from the cache and stored in non-volatile memory in response to a user operation; in that case the image obtained by rendering the real-time dish image is taken as the final dish image and stored in non-volatile memory.
Based on the image acquisition method shown in fig. 1, an embodiment of the present specification further provides a schematic structural diagram of an image acquisition apparatus, as shown in fig. 4.
Fig. 4 is a schematic structural diagram of an image capturing apparatus provided in an embodiment of the present specification, where the apparatus includes:
a receiving module 401, configured to receive a request for starting an image sensor;
a response module 402, configured to respond to the request, control the image sensor to start, and acquire a real-time dish image through the image sensor;
an identifying module 403, configured to identify a dish in the real-time dish image, and determine an environmental characteristic of the dish;
an adjusting module 404, configured to adjust an image parameter of the real-time dish image according to the information of the dish;
a rendering module 405, configured to render the dish according to the environmental feature and the image parameter;
a determining module 406, configured to determine a final dish image according to a rendering result if an image obtaining request is received.
Optionally, the adjusting module 404 is specifically configured to determine attribute information of the dish; determining the picture style corresponding to the dish according to the corresponding relation between the attribute information of the dish and the picture style which is determined in advance; and adjusting the image parameters of the real-time dish image according to the picture style corresponding to the dish.
Optionally, the adjusting module 404 is specifically configured to obtain a history image including the dish; for each historical image, determining attribute information of the dishes contained in the historical image and image parameters of the historical image; acquiring image description information of the historical image; determining the picture style of the historical image according to at least one of the image parameter and the image description information of the historical image; and determining the corresponding relation between the attribute information and the picture style of the dishes according to the attribute information of the dishes contained in the historical images and the picture style of the historical images.
Optionally, the rendering module 405 is specifically configured to determine, according to the environment characteristic, illumination information of the real-time dish image, where the illumination information includes at least one of an illumination direction and an illumination intensity; setting information of a virtual light source according to the illumination information; determining an intermediate image of the dish under the irradiation of the virtual light source according to the position information of the dish in the real-time dish image and the virtual light source; and rendering the dish according to the intermediate image and the image parameter.
Optionally, the rendering module 405 is specifically configured to determine, according to the information of the dish, a virtual item matched with the dish from pre-stored virtual items; providing the virtual item matched with the dish for the user to select; determining a virtual prop selected by the user; and rendering the dishes and the virtual props selected by the user according to the environment characteristics and the image parameters.
Optionally, the determining module 406 is specifically configured to, for each virtual item selected by the user, determine position information of the virtual item, which is placed in the real-time dish image by the user; and determining a final dish image containing the dishes and the virtual props selected by the user according to a rendering result.
Optionally, the identification module 403 is specifically configured to input the real-time dish image into a pre-trained identification model, so as to obtain information of dishes in the real-time dish image determined by the identification model; wherein the recognition model is trained in advance by: acquiring a sample dish image, and determining real dish information in the sample dish image; inputting the sample dish image into a recognition model to be trained to obtain predicted dish information output by the recognition model to be trained; and training the recognition model to be trained by taking the minimum difference between the real dish information and the predicted dish information as a training target.
Embodiments of the present specification further provide a computer-readable storage medium, where the storage medium stores a computer program, and the computer program can be used to execute the image acquisition method provided in fig. 1.
Based on the image acquisition method shown in fig. 1, the embodiment of the present specification further provides a schematic structural diagram of the electronic device shown in fig. 5. As shown in fig. 5, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, but may also include hardware required for other services. The processor reads a corresponding computer program from the non-volatile memory into the memory and then runs the computer program to implement the image acquisition method described in fig. 1 above.
Of course, besides the software implementation, the present specification does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of making integrated circuit chips by hand, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g. software or firmware) executable by that (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the same functions can be realized by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and so on. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component. Indeed, the means for implementing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the description of the system embodiment is brief because it is substantially similar to the method embodiment; for relevant details, refer to the corresponding parts of the method embodiment.
The above description is only an example of the present specification and is not intended to limit it. Various modifications and alterations will be apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present specification shall fall within the scope of its claims.

Claims (10)

1. An image acquisition method, characterized in that the method comprises:
receiving a request for starting an image sensor;
responding to the request, controlling the image sensor to start, and acquiring a real-time dish image through the image sensor;
identifying dishes in the real-time dish image and determining environmental characteristics of the dishes;
adjusting the image parameters of the real-time dish image according to the information of the dishes;
rendering the dishes according to the environmental features and the image parameters;
and if an image acquisition request is received, determining a final dish image according to a rendering result.
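As a non-authoritative illustration of how the flow of claim 1 might be wired together in software, the following Python sketch runs a preview loop until a capture request arrives. Every helper here (identify_dish, estimate_environment, adjust_parameters, render_dish) is a stub invented for this sketch and is not defined by the claim; the frame is assumed to be an H x W x 3 uint8 array from the image sensor.

```python
"""Illustrative sketch of the claimed acquisition flow; all helpers are stubs."""

import numpy as np


def identify_dish(frame: np.ndarray) -> dict:
    # Stub: a real system would run a trained recognition model (cf. claim 7).
    return {"name": "unknown dish", "bbox": (0, 0, frame.shape[1], frame.shape[0])}


def estimate_environment(frame: np.ndarray) -> dict:
    # Stub: mean brightness as a crude stand-in for environmental features.
    return {"brightness": float(frame.mean())}


def adjust_parameters(dish_info: dict) -> dict:
    # Stub: map dish information to image parameters (cf. claim 2).
    return {"contrast": 1.1, "saturation": 1.05}


def render_dish(frame: np.ndarray, env: dict, params: dict) -> np.ndarray:
    # Stub: apply the parameters; a real renderer would also relight the dish.
    return np.clip(frame.astype(np.float32) * params["contrast"], 0, 255).astype(np.uint8)


def acquire_dish_image(frames, capture_requested):
    """Preview loop: process each real-time frame until capture is requested."""
    for frame in frames:
        dish_info = identify_dish(frame)
        env = estimate_environment(frame)
        params = adjust_parameters(dish_info)
        preview = render_dish(frame, env, params)
        if capture_requested():
            # The final dish image is taken from the rendering result.
            return preview
    return None
```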
2. The method of claim 1, wherein adjusting the image parameters of the real-time dish image according to the information of the dish comprises:
determining attribute information of the dishes;
determining the picture style corresponding to the dish according to the corresponding relation between the attribute information of the dish and the picture style which is determined in advance;
and adjusting the image parameters of the real-time dish image according to the picture style corresponding to the dish.
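For concreteness, the attribute-to-style correspondence of claim 2 can be pictured as a simple two-level lookup. The attribute names, style names, and parameter values below are invented for illustration only.

```python
# Invented attribute names, style names, and parameter values (illustration only).
STYLE_BY_ATTRIBUTE = {
    "soup":    "warm_soft",
    "dessert": "bright_pastel",
    "grilled": "high_contrast",
}

PARAMS_BY_STYLE = {
    "warm_soft":     {"color_temperature": +300, "contrast": 0.95, "saturation": 1.05},
    "bright_pastel": {"color_temperature": +100, "contrast": 1.00, "saturation": 1.15},
    "high_contrast": {"color_temperature": -100, "contrast": 1.20, "saturation": 1.10},
}


def image_parameters_for(dish_attribute: str) -> dict:
    """Look up the picture style for a dish attribute, then that style's parameters."""
    style = STYLE_BY_ATTRIBUTE.get(dish_attribute, "warm_soft")  # arbitrary default
    return PARAMS_BY_STYLE[style]
```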
3. The method of claim 2, wherein the pre-determining the correspondence between the attribute information of the dish and the style of the picture comprises:
acquiring a historical image containing the dish;
for each historical image, determining attribute information of the dishes contained in the historical image and image parameters of the historical image;
acquiring image description information of the historical image;
determining the picture style of the historical image according to at least one of the image parameter and the image description information of the historical image;
and determining the corresponding relation between the attribute information and the picture style of the dishes according to the attribute information of the dishes contained in the historical images and the picture style of the historical images.
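The offline step in claim 3 can be sketched as a voting procedure over historical images: each record contributes one (attribute, style) observation, and the most frequent style wins. The record fields and the infer_style heuristic below are assumptions made for the sketch, not the method the claim prescribes.

```python
from collections import Counter, defaultdict


def infer_style(image_params: dict, description: str) -> str:
    # Stub style classifier: a real system might cluster parameters or parse text.
    if "warm" in description or image_params.get("color_temperature", 0) > 0:
        return "warm_soft"
    return "high_contrast"


def build_correspondence(historical_records) -> dict:
    """Vote per dish attribute; the most frequently observed style wins."""
    votes = defaultdict(Counter)
    for record in historical_records:
        style = infer_style(record["image_params"], record["description"])
        votes[record["attribute"]][style] += 1
    return {attr: counter.most_common(1)[0][0] for attr, counter in votes.items()}
```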
4. The method of claim 1, wherein rendering the dish according to the environmental features and the image parameters comprises:
determining illumination information of the real-time dish image according to the environmental characteristics, wherein the illumination information comprises at least one of illumination direction and illumination intensity;
setting information of a virtual light source according to the illumination information;
determining an intermediate image of the dish under the irradiation of the virtual light source according to the position information of the dish in the real-time dish image and the virtual light source;
and rendering the dish according to the intermediate image and the image parameter.
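One way to picture the relighting in claim 4 is the sketch below, which derives a per-pixel gain from an assumed illumination direction and intensity and applies it only to the dish region. The linear ramp used as shading is an illustrative choice made for this sketch, not the shading model required by the claim.

```python
import numpy as np


def relight_dish(frame: np.ndarray, bbox, light_dir, light_intensity) -> np.ndarray:
    """Shade the dish region as if lit by a directional virtual light.

    frame: H x W x 3 uint8 image; bbox: (x0, y0, x1, y1) of the dish;
    light_dir: (dx, dy) with components in [-1, 1]; light_intensity: >= 0.
    """
    x0, y0, x1, y1 = bbox
    out = frame.astype(np.float32)
    region = out[y0:y1, x0:x1]

    # Per-pixel gain that increases along the assumed illumination direction.
    h, w = region.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    ramp = (xs * light_dir[0] + ys * light_dir[1]) / max(h + w, 1)
    gain = 1.0 + light_intensity * (0.5 + 0.5 * ramp)[..., None]

    out[y0:y1, x0:x1] = region * gain
    return np.clip(out, 0, 255).astype(np.uint8)
```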
5. The method of claim 1, wherein rendering the dish according to the environmental features and the image parameters comprises:
determining a virtual prop matched with the dish in prestored virtual props according to the information of the dish;
providing the virtual prop matched with the dish for the user to select;
determining a virtual prop selected by the user;
and rendering the dishes and the virtual props selected by the user according to the environment characteristics and the image parameters.
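The prop-matching step in claim 5 can be illustrated with a small tag-overlap search. The prop catalogue, tag scheme, and index-based selection below are all invented for the sketch; a real system would draw the pre-stored props from its own backend.

```python
# Hypothetical prop catalogue; names and tags are invented for illustration.
PROP_LIBRARY = [
    {"name": "chopsticks",    "tags": {"noodles", "dumplings"}},
    {"name": "steam_overlay", "tags": {"soup", "hotpot"}},
    {"name": "wooden_tray",   "tags": {"dessert", "bread"}},
]


def matching_props(dish_tags: set) -> list:
    """Return the pre-stored props whose tags overlap the recognised dish's tags."""
    return [prop for prop in PROP_LIBRARY if prop["tags"] & dish_tags]


def choose_prop(candidates: list, user_choice: int) -> dict:
    """The matched props are offered to the user, who picks one by index."""
    return candidates[user_choice]
```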
6. The method of claim 5, wherein determining a final dish image based on the rendering results comprises:
for each virtual prop selected by the user, determining position information of the virtual prop as placed by the user in the real-time dish image;
and determining a final dish image containing the dishes and the virtual props selected by the user according to a rendering result.
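A minimal sketch of the compositing implied by claim 6, assuming each user-selected prop is available as an RGBA sprite together with the (x, y) position chosen by the user; alpha blending is an illustrative choice, and bounds checking is omitted for brevity.

```python
import numpy as np


def composite_final_image(rendered: np.ndarray, placed_props) -> np.ndarray:
    """Paste each user-placed prop into the rendered frame.

    placed_props is assumed to be a list of (rgba_sprite, (x, y)) pairs, where
    (x, y) is the position the user chose; sprites must fit inside the frame.
    """
    out = rendered.astype(np.float32)
    for sprite, (x, y) in placed_props:
        h, w = sprite.shape[:2]
        alpha = sprite[..., 3:4].astype(np.float32) / 255.0
        patch = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = alpha * sprite[..., :3] + (1.0 - alpha) * patch
    return np.clip(out, 0, 255).astype(np.uint8)
```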
7. The method of claim 1, wherein identifying the dishes in the real-time dish image specifically comprises:
inputting the real-time dish image into a pre-trained recognition model to obtain the information of the dishes in the real-time dish image determined by the recognition model;
wherein the recognition model is trained in advance by:
acquiring a sample dish image, and determining real dish information in the sample dish image;
inputting the sample dish image into a recognition model to be trained to obtain predicted dish information output by the recognition model to be trained;
and training the recognition model to be trained by taking the minimum difference between the real dish information and the predicted dish information as a training target.
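The training procedure in claim 7 maps naturally onto a standard supervised loop. The sketch below uses PyTorch for concreteness; the backbone model, the data loader, and the use of cross-entropy as the measure of the difference between real and predicted dish information are assumptions of the sketch, not requirements of the claim.

```python
import torch
from torch import nn


def train_recognition_model(model: nn.Module, loader, epochs: int = 10) -> nn.Module:
    """Train so that predicted dish information approaches the real dish labels."""
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()  # stand-in for the real/predicted difference
    model.train()
    for _ in range(epochs):
        for sample_images, real_dish_labels in loader:
            optimiser.zero_grad()
            predicted_dish_info = model(sample_images)
            loss = loss_fn(predicted_dish_info, real_dish_labels)
            loss.backward()  # minimise the difference, per the training target
            optimiser.step()
    return model
```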
8. An image acquisition apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving a request for starting the image sensor;
the response module is used for responding to the request, controlling the image sensor to start and acquiring a real-time dish image through the image sensor;
the identification module is used for identifying the dishes in the real-time dish image and determining the environmental characteristics of the dishes;
the adjusting module is used for adjusting the image parameters of the real-time dish images according to the information of the dishes;
the rendering module is used for rendering the dishes according to the environment characteristics and the image parameters;
and the determining module is used for determining the final dish image according to the rendering result if the image acquisition request is received.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-7 when executing the program.
CN202010739119.6A 2020-07-28 2020-07-28 Image acquisition method and device Pending CN111882501A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010739119.6A CN111882501A (en) 2020-07-28 2020-07-28 Image acquisition method and device

Publications (1)

Publication Number Publication Date
CN111882501A 2020-11-03

Family

ID=73201816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010739119.6A Pending CN111882501A (en) 2020-07-28 2020-07-28 Image acquisition method and device

Country Status (1)

Country Link
CN (1) CN111882501A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103606182A (en) * 2013-11-19 2014-02-26 华为技术有限公司 Method and device for image rendering
CN107798653A (en) * 2017-09-20 2018-03-13 北京三快在线科技有限公司 A kind of method of image procossing and a kind of device
CN110490960A (en) * 2019-07-11 2019-11-22 阿里巴巴集团控股有限公司 A kind of composograph generation method and device
CN111353532A (en) * 2020-02-26 2020-06-30 北京三快在线科技有限公司 Image generation method and device, computer-readable storage medium and electronic device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927321A (en) * 2021-03-17 2021-06-08 北京太火红鸟科技有限公司 Intelligent image design method, device, equipment and storage medium based on neural network
CN112927321B (en) * 2021-03-17 2022-04-22 北京太火红鸟科技有限公司 Intelligent image design method, device, equipment and storage medium based on neural network

Similar Documents

Publication Publication Date Title
US11126922B2 (en) Extracting live camera colors for application to a digital design
US10109051B1 (en) Item recommendation based on feature match
US10956784B2 (en) Neural network-based image manipulation
CN106569763B (en) Image display method and terminal
US10083521B1 (en) Content recommendation based on color match
US20180047200A1 (en) Combining user images and computer-generated illustrations to produce personalized animated digital avatars
US8990672B1 (en) Flexible design architecture for designing media-based projects in a network-based platform
US9940745B2 (en) Image manipulation for electronic display
US9633462B2 (en) Providing pre-edits for photos
US8237819B2 (en) Image capture method with artistic template design
CN109690617A (en) System and method for digital vanity mirror
CN107621966B (en) Graphical user interface display method and device and terminal equipment
US20150332622A1 (en) Automatic Theme and Color Matching of Images on an Ambient Screen to the Surrounding Environment
US10871884B1 (en) Product image characteristic detection and manipulation
CN106203286A (en) The content acquisition method of a kind of augmented reality, device and mobile terminal
CN102473318A (en) Processing digital templates for image display
CN107943924B (en) Method for automatically generating webpage theme, storage medium and electronic equipment
CN109274891B (en) Image processing method, device and storage medium thereof
US20220174237A1 (en) Video special effect generation method and terminal
US20210027539A1 (en) Mobile device image item replacements
CN110084871B (en) Image typesetting method and device and electronic terminal
US20150178955A1 (en) Digital art systems and methods
CN113986407A (en) Cover generation method and device and computer storage medium
CN111882501A (en) Image acquisition method and device
CN111448847B (en) Illumination control system for controlling a plurality of light sources based on source image and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination