CN107622473B - Image rendering method, device, terminal and computer readable storage medium


Info

Publication number: CN107622473B (application number CN201710868728.XA)
Authority: CN (China)
Prior art keywords: information, rendering, image, target, machine learning
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN107622473A
Inventor: 梁昆
Assignee (current and original): Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd; priority to CN201710868728.XA
Publication of CN107622473A; application granted; publication of CN107622473B


Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses an image rendering method, an image rendering device, a terminal and a computer-readable storage medium. The method comprises the following steps: acquiring a target rendering mode of a target image according to a machine learning model, wherein the target image is an image obtained by a photographing function; rendering the target image according to the target rendering mode; and outputting the rendered target image. Because a suitable rendering mode is selected automatically rather than through manual trial and error, the method can improve both the rendering efficiency of the rendering function and the utilization rate of the rendering function.

Description

Image rendering method, device, terminal and computer readable storage medium
Technical Field
The embodiment of the application relates to an electronic device application technology, and in particular relates to an image rendering method, an image rendering device, a terminal and a computer-readable storage medium.
Background
With the development of intelligent terminals, the photographing function on an intelligent terminal is widely used. At present, a rendering function is nested in the photographing function of the camera. The rendering function adjusts the color temperature and tone of a picture so that the picture takes on styles such as vintage, black-and-white or nostalgic. In the related art, the rendering style of a picture is selected manually, that is, the user selects the rendering tone he or she requires. However, a new user who does not know which photo style corresponds to each named rendering function may fail to find a suitable rendering mode even after trying several different styles, which wastes time; and because users repeatedly fail to find a suitable mode, the utilization rate of the rendering function is low.
Disclosure of Invention
The application provides an image rendering method, an image rendering device, a terminal and a computer readable storage medium, which can improve rendering efficiency of a rendering function and improve utilization rate of the rendering function.
In a first aspect, an embodiment of the present application provides an image rendering method, including:
acquiring a target rendering mode of a target image according to a machine learning model, wherein the target image is an image obtained by a photographing function;
rendering the target image according to the target rendering mode;
and outputting the rendered target image.
In a second aspect, an embodiment of the present application further provides an image rendering apparatus, including:
the machine learning module is used for acquiring a target rendering mode of a target image according to a machine learning model, wherein the target image is an image obtained by a photographing function;
the rendering module is used for rendering the target image according to the target rendering mode obtained by the machine learning module;
and the output module is used for outputting the target image rendered by the rendering module.
In a third aspect, an embodiment of the present application further provides a terminal, where the terminal includes:
one or more processors;
a storage device, configured to store one or more programs; and
a data transceiver, configured to perform data interaction with a server;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image rendering method according to the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the image rendering method according to the first aspect.
According to the image rendering method provided by the embodiments of the application, a target rendering mode of a target image is first obtained according to a machine learning model, the target image being an image obtained by a photographing function; the target image is then rendered according to the target rendering mode; and finally the rendered target image is output. In this way, both the rendering efficiency of the rendering function and the utilization rate of the rendering function can be improved.
Drawings
Fig. 1 is a flowchart of an image rendering method in an embodiment of the present application;
fig. 2 is a flowchart of another image rendering method in an embodiment of the present application;
FIG. 3 is a flow chart of another image rendering method in an embodiment of the present application;
FIG. 4 is a flow chart of another image rendering method in an embodiment of the present application;
FIG. 5 is a flow chart of another image rendering method in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image rendering apparatus in an embodiment of the present application;
fig. 7 is a schematic structural diagram of another image rendering apparatus in the embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal in an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
At present, the camera function or an image processing application in a terminal can render a picture, but the user must manually select which rendering mode to use. However, the user usually does not know the most suitable rendering mode for the current scene, so he or she has to try different rendering modes many times, which is time-consuming and labor-intensive. Alternatively, the user sticks to the one rendering mode he or she is used to and ignores other rendering modes that may be better. The application therefore provides an automatic image rendering method combined with machine learning, which offers the user a suitable rendering mode and improves both the rendering efficiency of the rendering function and the utilization rate of the image rendering function.
Fig. 1 is a flowchart of an image rendering method according to an embodiment of the present application, where the method is applied to a terminal, and the terminal may be an electronic device with a photographing function or an image processing function, such as a smart phone, a wearable device, a tablet computer, and a notebook computer. The method is suitable for rendering the photographed image, and specifically comprises the following steps:
and 110, acquiring a target rendering mode of the target image according to the machine learning model.
Wherein, the target image is an image obtained by the photographing function. Optionally, when the user inputs a photographing instruction, the terminal generates a target image, and obtains a target rendering mode of the target image according to the machine learning model when the target image is obtained. Optionally, after the target image is generated according to the photographing instruction input by the user, the target image is stored in the album. When a user browses images in an album or browses images using other image processing applications including an image rendering function, rendering operations may be performed on the images. And when the user starts the rendering function, acquiring a target rendering mode of the target image according to the machine learning model.
The machine learning model may be an artificial neural network (ANN). The rendering modes chosen by multiple users for their images may be collected first, and the artificial neural network may then be optimized on this data by machine learning.
In one implementation, image analysis is performed on a target image to obtain attribute information of the target image, and the attribute information of the target image together with the rendering mode selected by the user is input into the artificial neural network for training. The attribute information may include one or more of the following: image orientation, image color temperature, image brightness, image pixels, or image subject. When the target image is to be rendered, image analysis is performed on it to obtain its attribute information, and the attribute information is then input into the trained artificial neural network to obtain the target rendering mode corresponding to that attribute information.
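As an illustrative sketch only (the patent does not specify the model architecture), the mapping from attribute information to a rendering mode could be learned with a simple classifier. The sketch below uses a nearest-centroid model over hypothetical attribute vectors of [color temperature, brightness, subject-is-person]; all function names, attribute choices and toy values are assumptions, not part of the disclosure.

```python
# Hypothetical sketch: map image attributes to a rendering mode with a
# nearest-centroid classifier trained on (attributes, chosen_mode) pairs.
import math

def train(samples):
    """samples: list of (attribute_vector, mode) pairs collected from users."""
    sums, counts = {}, {}
    for vec, mode in samples:
        acc = sums.setdefault(mode, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[mode] = counts.get(mode, 0) + 1
    # One centroid (mean attribute vector) per rendering mode.
    return {m: [v / counts[m] for v in acc] for m, acc in sums.items()}

def predict(centroids, vec):
    """Return the rendering mode whose centroid is closest to vec."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, vec)))
    return min(centroids, key=lambda m: dist(centroids[m]))

# Toy data: warm dim person shots -> "nostalgia", cool bright scenes -> "monochrome".
samples = [
    ([3200, 0.3, 1], "nostalgia"),
    ([3400, 0.4, 1], "nostalgia"),
    ([6500, 0.8, 0], "monochrome"),
    ([6700, 0.9, 0], "monochrome"),
]
centroids = train(samples)
print(predict(centroids, [3300, 0.35, 1]))  # -> nostalgia
```

An actual implementation would replace the centroid model with the artificial neural network named above, trained on the same (attribute information, selected rendering mode) pairs.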
Step 120, rendering the target image according to the target rendering mode.
The rendering modes comprise styles such as monochrome, tone, black-and-white, fade, chrome yellow, print, time (vintage) and nostalgic, and each style is used for adjusting the color, saturation and color temperature of the target image.
Step 130, outputting the rendered target image.
Optionally, when the target image is obtained by photographing, the image is rendered and the rendered target image is displayed to the user immediately. Alternatively, when the target image is obtained by photographing, the image is rendered and the rendered image is stored; the rendered target image is then displayed when the user browses it in the album.
Further, after step 130, the method further includes: receiving feedback information of a user on a target image; and adjusting the machine learning model according to the feedback information.
After the rendered target image is displayed, feedback information from the user on the rendering result is received. The feedback information may be a save operation, a delete operation, or a replacement of the rendering mode. The feedback information is input into the machine learning model, which can then adjust its strategy for determining the rendering mode accordingly.
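A minimal sketch of how the feedback described above ("save", "delete", "replace") might be folded back into the model's training data; the function and parameter names are assumptions, not from the disclosure.

```python
# Hypothetical sketch: turn user feedback on a rendered image into a
# new training example for the rendering-mode model.

def apply_feedback(samples, attr_vec, recommended_mode, feedback, chosen_mode=None):
    """samples: list of (attribute_vector, rendering_mode) training pairs."""
    if feedback == "save":
        # Saving the result confirms the recommendation.
        samples.append((attr_vec, recommended_mode))
    elif feedback == "replace" and chosen_mode is not None:
        # The user picked a different mode: learn the correction.
        samples.append((attr_vec, chosen_mode))
    # "delete" adds no positive example here; a real system might log it
    # as a negative signal instead.
    return samples
```

After enough feedback accumulates, the model would be retrained (or incrementally updated) on the extended sample list.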
According to the image rendering method provided by this embodiment, a target rendering mode of a target image is obtained according to a machine learning model, where the target image is an image obtained by a photographing function; the target image is rendered according to the target rendering mode; and the rendered target image is output. The machine learning model can determine the target rendering mode of the target image according to the historical rendering modes of the current user or of other users, which spares the user from blindly trying multiple rendering modes and improves both the rendering efficiency of the rendering function and the utilization rate of the image rendering function.
Fig. 2 is a flowchart of an image rendering method according to an embodiment of the present application, which is further described in the foregoing embodiment, and includes:
step 210, obtaining first position information of a plurality of first users when shooting a first image and a rendering mode of the first image.
The first users comprise the current user or users other than the current user. A first image is any image photographed by a first user. When a first image is obtained according to a photographing instruction input by the user, the first position information corresponding to the first image is acquired. The first position information may be coordinate information acquired by the Global Positioning System (GPS); alternatively, it may be an identifier of the current scenic spot or of the current store. When the first user feeds back the rendering mode of the first image, the correspondence between the first position information and the first image is recorded.
Further, whether the character characteristic region exists in the first image or not is judged. And if the character characteristic region exists in the first image, performing machine learning on first position information and rendering modes corresponding to a plurality of first users to obtain a first machine learning model. And if the character characteristic region does not exist in the first image, cancelling the machine learning of the first position information and the rendering mode corresponding to the first user.
Specifically, it is judged whether a face region or a human body region exists in the first image; if either exists, it is judged that the first image has a character feature region. Machine learning is performed on the rendering modes of pictures having character feature regions to obtain the first machine learning model, so that the first machine learning model can make more accurate rendering recommendations for images of people.
Step 220, performing machine learning on the first position information and the rendering modes corresponding to the plurality of first users to obtain a first machine learning model.
The first users may be any other users, or may be other users whose attributes are similar to those of the current user; the user attributes include any one of age, gender or interest. Machine learning is performed on the first position information and the rendering modes corresponding to the first users, and the obtained first machine learning model can determine a suitable rendering mode according to position information.
In one usage scenario, a user takes photos at a scenic spot, and different scenic spots call for renderings of different styles. For example, when taking a group photo with a distant mountain peak, rendering mode A is used; when photographing a cultural relic or a collection piece, rendering mode B is used. In general, the scenic feature visible from a given position in the scenic area is unique, so the rendering mode used at that position is also essentially unique. The rendering mode corresponding to a position can therefore be determined from the rendering modes selected by first users at that same position.
Step 230, acquiring current position information, and substituting the current position information into the first machine learning model to obtain a target rendering mode corresponding to the current position.
When the current user triggers a photographing instruction, the current position information is acquired. The current position information is then input (also called substituted) into the first machine learning model to obtain the target rendering mode corresponding to the current position information.
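The position-to-mode behavior described above can be sketched as follows, assuming GPS coordinates and a majority vote among first users who photographed nearby. The radius, the rough distance formula and the names are illustrative assumptions; the patent's first model is a learned model, not a lookup.

```python
# Hypothetical sketch of the first model's behavior: recommend the mode
# most often chosen by earlier (first) users near the current GPS position.
import math
from collections import Counter

def target_mode(history, position, radius_m=200.0):
    """history: list of ((lat, lon), mode) pairs; position: (lat, lon)."""
    def meters(p, q):
        # Rough equirectangular distance, adequate for short ranges.
        dlat = (p[0] - q[0]) * 111_000
        dlon = (p[1] - q[1]) * 111_000 * math.cos(math.radians(p[0]))
        return math.hypot(dlat, dlon)
    nearby = [mode for pos, mode in history if meters(pos, position) <= radius_m]
    return Counter(nearby).most_common(1)[0][0] if nearby else None
```

When no first user has photographed near the current position, the sketch returns `None`; a real system would fall back to a default mode or to another model.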
Step 240, rendering the target image according to the target rendering mode.
Step 240 is the same as step 120, and reference may be made to the description of step 120.
Step 250, outputting the rendered target image.
Step 250 is the same as step 130, and reference may be made to the description of step 130.
The image rendering method provided by this embodiment can determine the target rendering mode suitable for the current position according to the user's position and render the target image accordingly. Because the rendering mode is determined from the user's position together with the machine learning model, the recommended rendering mode is more accurate and the rendering efficiency is improved.
Fig. 3 is a flowchart of an image rendering method according to an embodiment of the present application, which is further described in the foregoing embodiment, and includes:
Step 310, acquiring first position information of a plurality of first users when photographing the first image and a rendering mode of the first image.
Step 320, acquiring the photographing time of the plurality of first users when the first images are photographed.
The photographing time includes the photographing time period or the specific photographing time. The photographing time period may be morning, afternoon or evening; the specific photographing time is expressed as year-month-day hour:minute:second. The specific time can be acquired from the system clock, and the photographing time period is determined from the specific time.
Step 330, performing machine learning on the first position information, the photographing time and the rendering mode corresponding to the plurality of first users to obtain a second machine learning model.
The second machine learning model can determine a proper rendering mode according to the photographing position and the photographing time. For example, for the same scene, a rendering mode a is required during shooting in the morning, and a rendering mode B is required during shooting in the afternoon.
Step 340, acquiring current position information and current time information, and substituting the current position information and the current time information into the second machine learning model to obtain a target rendering mode corresponding to the current position information and the current time information.
The current position information and the current time information are acquired through the GPS and the system clock, and the trained second machine learning model is then used to obtain the target rendering mode corresponding to the current position information and the current time information.
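A minimal sketch of how the photographing time could be reduced to the period-of-day feature described above before being combined with the position for the second model; the bucket boundaries and names are assumptions.

```python
# Hypothetical sketch: derive the (position, time-period) feature pair
# that the second model would be trained on.
from datetime import datetime

def time_bucket(ts: datetime) -> str:
    """Map a specific photographing time to a coarse time period."""
    h = ts.hour
    if 5 <= h < 12:
        return "morning"
    if 12 <= h < 18:
        return "afternoon"
    return "evening"

def feature_key(place_id: str, ts: datetime) -> tuple:
    """Combine a position identifier with the time period as a model input."""
    return (place_id, time_bucket(ts))
```

With such a key, the same scenic spot can map to rendering mode A for morning shots and mode B for afternoon shots, as in the example above.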
Step 350, rendering the target image according to the target rendering mode.
Step 350 is the same as step 120, and reference may be made to the description of step 120.
Step 360, outputting the rendered target image.
Step 360 is the same as step 130, and reference may be made to the description of step 130.
The image rendering method provided by the embodiment can combine the shooting position and the shooting time to generate the second machine learning model. When a user takes a photo, determining a target rendering mode applicable to the current time and the current position according to the second machine learning model, and rendering a target image according to the target rendering mode, so that the rendering mode is more accurate, and the rendering efficiency is improved.
Fig. 4 is a flowchart of an image rendering method according to an embodiment of the present application, which is further described in the foregoing embodiment, and includes:
Step 410, acquiring first position information of a plurality of first users when the first images are photographed and a rendering mode of the first images.
Step 420, acquiring weather information of the plurality of first users when the first images are photographed.
The weather information can be acquired through a weather application or from a weather server on the network side. The weather information includes temperature information, sunshine information or wind information. The temperature information includes the maximum air temperature, the minimum air temperature and the real-time air temperature. The sunshine information includes the sunrise time, the sunset time and the sunshine intensity. The wind information comprises wind direction information and wind intensity information. The weather information may also describe conditions such as cloudy, sunny, overcast, showers, light rain, moderate rain, heavy rain, and the like.
Step 430, performing machine learning on the first position information, the weather information and the rendering modes corresponding to the plurality of first users to obtain a third machine learning model.
The third machine learning model can determine a proper rendering mode according to the photographing position and the photographing weather. Furthermore, the position information, the weather information, the time information and the rendering mode can be input into the machine learning model to obtain a third machine learning model. The third machine learning model can determine a suitable rendering mode based on the photographing time, the photographing place, and the weather information during photographing.
Step 440, acquiring current position information and current weather information, and substituting the current position information and the current weather information into the third machine learning model to obtain a target rendering mode corresponding to the current position and the current weather information.
Step 450, rendering the target image according to the target rendering mode.
Step 450 is the same as step 120, and reference may be made to the description of step 120.
Step 460, outputting the rendered target image.
Step 460 is the same as step 130, and reference is made to the description of step 130.
The image rendering method provided by the embodiment can determine the third machine learning model according to the weather information and the position information. When a user takes a photo, determining a target rendering mode applicable to the current weather information and the current position according to the third machine learning model, and rendering a target image according to the target rendering mode, so that the rendering mode is more accurate, and the rendering efficiency is improved.
Fig. 5 is a flowchart of an image rendering method according to an embodiment of the present application, which is further described in the foregoing embodiment, and includes:
Step 510, obtaining character attribute information of a plurality of first users and rendering modes of the plurality of first users.
Wherein the person attribute information includes at least one or more of the following attribute information: age information, gender information, occupation information, or interest information.
The character attributes of different first users may be the same or different. The age information may be a specific age or an age group; the age group may be juvenile, adolescent, adult, middle-aged or elderly. The gender information includes male or female. The occupation information may be, for example, artistic, literary or outdoor occupations. The interest information may include sports, literature, politics and the like.
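The age-group attribute above can be derived from a specific age; the sketch below shows one possible bucketing, with the exact cut-off ages being assumptions not stated in the disclosure.

```python
# Hypothetical sketch: map a numeric age onto the age-group attribute
# (juvenile, adolescent, adult, middle-aged, elderly) used as a feature.

def age_group(age: int) -> str:
    if age < 13:
        return "juvenile"
    if age < 18:
        return "adolescent"
    if age < 45:
        return "adult"
    if age < 65:
        return "middle-aged"
    return "elderly"
```

The coarse group, rather than the exact age, would be fed to the fourth model together with gender, occupation and interest features.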
Step 520, performing machine learning on the character attribute information and the rendering modes corresponding to the plurality of first users to obtain a fourth machine learning model.
Through training, the fourth machine learning model can determine the rendering mode corresponding to input character attribute information.
Step 530, acquiring target attribute information of the current user, and substituting the target attribute information into the fourth machine learning model to obtain a target rendering mode corresponding to the target attribute information.
Step 540, rendering the target image according to the target rendering mode.
Step 540 is the same as step 120, and reference may be made to the description of step 120.
Step 550, outputting the rendered target image.
Step 550 is the same as step 130, and reference may be made to the description of step 130.
The image rendering method provided by the embodiment can determine the rendering mode according to the attribute information of the user, so that the rendering mode is more accurate, and the rendering efficiency is improved.
Fig. 6 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present application, where the apparatus is configured to implement the method according to the embodiment, and the apparatus is located in a mobile terminal, and includes:
the machine learning module 610 is configured to obtain a target rendering manner of a target image according to a machine learning model, where the target image is an image obtained by a photographing function;
a rendering module 620, configured to render the target image according to the target rendering manner obtained by the machine learning module 610;
an output module 630, configured to output the target image rendered by the rendering module 620.
Further, the machine learning module 610 is further configured to:
acquiring first position information of a plurality of first users when shooting a first image and a rendering mode of the first image;
performing machine learning on the first position information and the rendering mode corresponding to the plurality of first users to obtain a first machine learning model;
and acquiring current position information, and substituting the current position information into the first machine learning model to obtain a target rendering mode corresponding to the current position.
Further, the machine learning module 610 is further configured to: acquiring photographing time of a plurality of first users when the first images are photographed;
performing machine learning on the first position information, the photographing time and the rendering mode corresponding to the plurality of first users to obtain a second machine learning model;
and acquiring current position information and current time information, and substituting the current position information and the current time information into the second machine learning model to obtain a target rendering mode corresponding to the current position and the current time information.
Further, the machine learning module 610 is further configured to: judging whether a character characteristic region exists in the first image or not;
and if the character characteristic region exists in the first image, performing machine learning on the first position information and the rendering mode corresponding to the plurality of first users to obtain a first machine learning model.
Further, the machine learning module 610 is further configured to: acquiring weather information of a plurality of first users when shooting a first image;
performing machine learning on the first position information, the weather information and the rendering mode corresponding to the plurality of first users to obtain a third machine learning model;
and acquiring current position information and current weather information, and substituting the current position information and the current weather information into the third machine learning model to obtain a target rendering mode corresponding to the current position and the current weather information.
Further, the machine learning module 610 is further configured to: acquiring character attribute information of a plurality of first users and rendering modes of the plurality of first users, wherein the character attribute information comprises at least one or more of the following attribute information: age information, gender information, occupation information, or interest information;
performing machine learning on the character attribute information and the rendering mode corresponding to the plurality of first users to obtain a fourth machine learning model;
and acquiring target attribute information of the current user, and substituting the target attribute information into the fourth machine learning model to obtain a target rendering mode corresponding to the target attribute information.
Further, as shown in fig. 7, the apparatus further includes a feedback module 710, where the feedback module 710 is configured to:
receiving feedback information of a user on the target image;
and adjusting the machine learning model according to the feedback information.
In the image rendering apparatus provided in this embodiment, the machine learning module 610 obtains a target rendering manner of a target image according to a machine learning model, where the target image is an image obtained by a photographing function; the rendering module 620 renders the target image according to the target rendering mode; the output module 630 outputs the rendered target image. The machine learning can determine the target rendering mode of the target image according to the historical rendering mode of the current user or other users, so that the user is prevented from blindly trying multiple rendering modes, and the rendering efficiency of the rendering function and the utilization rate of the image rendering function are improved.
The device can execute the methods provided by all the embodiments of the application, and has corresponding functional modules and beneficial effects for executing the methods. For details of the technology not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of the present application.
Fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 8, the terminal may include: a housing (not shown), a first memory 801, a first central processing unit (CPU) 802 (also called a first processor, hereinafter referred to as the CPU), a computer program stored in the first memory 801 and executable on the first processor 802, a circuit board (not shown), and a power supply circuit (not shown). The circuit board is arranged in a space enclosed by the housing; the CPU 802 and the first memory 801 are arranged on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the terminal; the first memory 801 is used for storing executable program code; and the CPU 802 reads the executable program code stored in the first memory 801 to run a program corresponding to the executable program code, so as to execute:
acquiring a target rendering mode of a target image according to a machine learning model, wherein the target image is an image obtained by a photographing function;
rendering the target image according to the target rendering mode;
and outputting the rendered target image.
The above terminal further includes: peripheral interface 803, RF (Radio Frequency) circuitry 805, audio circuitry 806, speakers 811, power management chip 808, input/output (I/O) subsystem 809, touch screen 812, other input/control devices 810, and external port 804, which communicate over one or more communication buses or signal lines 807.
In addition, the terminal further includes a camera and an RGB light sensor. The RGB light sensor may be located beside the camera, arranged adjacent to it. The camera may be a front camera or a rear camera. Alternatively, the RGB light sensor may be arranged separately from the camera, for example on a narrow side of the terminal.
It should be understood that the illustrated terminal 800 is merely one example of a terminal and that the terminal 800 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The terminal provided in this embodiment is described in detail below, taking a smartphone as an example.
The first memory 801 may be accessed by the CPU 802, the peripheral interface 803, and so on. The first memory 801 may include high-speed random access memory, and may further include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The peripheral interface 803 may connect the input and output peripherals of the device to the CPU 802 and the first memory 801.
The I/O subsystem 809 may connect the input and output peripherals on the device, such as the touch screen 812 and the other input/control devices 810, to the peripheral interface 803. The I/O subsystem 809 may include a display controller 8091 and one or more input controllers 8092 for controlling the other input/control devices 810. The one or more input controllers 8092 receive electrical signals from, or send electrical signals to, the other input/control devices 810, which may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that an input controller 8092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse. In addition, the other input/control devices 810 may include cameras, fingerprint sensors, gyroscopes, and the like.
Classified by operating principle and by the medium used to transmit information, the touch screen 812 may be a resistive, capacitive, infrared, or surface acoustic wave type. Classified by installation method, the touch screen 812 may be external, internal, or integrated. Classified by technical principle, the touch screen 812 may be a vector pressure sensing touch screen, a resistive touch screen, a capacitive touch screen, an infrared touch screen, or a surface acoustic wave touch screen.
The touch screen 812 is the input and output interface between the terminal and the user, and displays visual output to the user; the visual output may include graphics, text, icons, video, and the like. Optionally, the touch screen 812 sends electrical signals (e.g., electrical signals from the touch surface) triggered by the user on the touch screen to the first processor 802.
The display controller 8091 in the I/O subsystem 809 receives electrical signals from the touch screen 812 or sends electrical signals to the touch screen 812. The touch screen 812 detects contact on the screen, and the display controller 8091 converts the detected contact into interaction with a user interface object displayed on the touch screen 812, thereby implementing human-computer interaction. A user interface object displayed on the touch screen 812 may be an icon for running a game, an icon for connecting to a corresponding network, or the like. It is worth mentioning that the device may also comprise a light mouse, which is a touch-sensitive surface that does not show visual output, or an extension of the touch-sensitive surface formed by the touch screen.
The RF circuit 805 is mainly used to establish communication between the terminal and a wireless network (i.e., the network side), and to send and receive data between the terminal and the wireless network, such as short messages and e-mails.
The audio circuit 806 is mainly used to receive audio data from the peripheral interface 803, convert the audio data into an electrical signal, and send the electrical signal to the speaker 811.
The speaker 811 is used to convert the voice signals received by the terminal from the wireless network through the RF circuit 805 into sound and play the sound to the user.
The power management chip 808 supplies power to, and manages the power of, the hardware connected to the CPU 802, the I/O subsystem 809, and the peripheral interface 803.
In this embodiment, the first processor (CPU) 802 is configured to:
acquiring a target rendering mode of a target image according to a machine learning model, wherein the target image is an image obtained by a photographing function;
rendering the target image according to the target rendering mode;
and outputting the rendered target image.
Further, the obtaining of the target rendering mode of the target image according to the machine learning model includes:
acquiring first position information of a plurality of first users when shooting a first image and a rendering mode of the first image;
performing machine learning on the first position information and the rendering mode corresponding to the plurality of first users to obtain a first machine learning model;
and acquiring current position information, and substituting the current position information into the first machine learning model to obtain a target rendering mode corresponding to the current position.
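The three steps above amount to learning a mapping from shooting position to preferred rendering mode. As an illustration only (the patent does not name a specific learning algorithm, and all function and variable names below are invented for this sketch), a minimal majority-vote realization in Python:

```python
from collections import Counter, defaultdict

def train_first_model(samples):
    """Build a first machine learning model from (location, rendering mode)
    pairs gathered when the first users shot their first images.
    Here the "model" is simply the majority rendering mode per location."""
    by_location = defaultdict(Counter)
    for location_id, mode in samples:
        by_location[location_id][mode] += 1
    return {loc: counts.most_common(1)[0][0] for loc, counts in by_location.items()}

def predict_target_mode(model, current_location, default="none"):
    """Substitute the current position information into the model to obtain
    the target rendering mode corresponding to the current position."""
    return model.get(current_location, default)
```

A production system would more likely train a classifier over richer position features (coordinates rather than place identifiers), but the lookup above captures the claimed data flow: historical (position, rendering mode) pairs in, a position-conditioned target rendering mode out.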
Further, after acquiring the position information of the plurality of first users when shooting the first image and the rendering mode of the first image, the method further includes:
acquiring photographing time of a plurality of first users when the first images are photographed;
performing machine learning on the first position information, the photographing time and the rendering mode corresponding to the plurality of first users to obtain a second machine learning model;
and acquiring current position information and current time information, and substituting the current position information and the current time information into the second machine learning model to obtain a target rendering mode corresponding to the current position and the current time information.
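Adding photographing time as a second feature can be sketched the same way, with time coarsened into buckets so that nearby shooting times share a prediction. The bucketing scheme below is an assumption; the patent only says the time is learned jointly with position:

```python
from collections import Counter, defaultdict

def train_second_model(samples):
    """samples: (location_id, hour, rendering_mode) triples, hour in 0-23.
    Builds a majority table keyed on (location, time-of-day bucket)."""
    def bucket(hour):
        return hour // 6  # night / morning / afternoon / evening

    table = defaultdict(Counter)
    for location_id, hour, mode in samples:
        table[(location_id, bucket(hour))][mode] += 1
    majority = {key: counts.most_common(1)[0][0] for key, counts in table.items()}

    def predict(location_id, hour, default="none"):
        # Substitute the current position and current time into the second model.
        return majority.get((location_id, bucket(hour)), default)
    return predict
```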
Further, the performing machine learning on the first location information and the rendering manner corresponding to the plurality of first users to obtain a first machine learning model includes:
judging whether a character characteristic region exists in the first image or not;
and if the character characteristic region exists in the first image, performing machine learning on the first position information and the rendering mode corresponding to the plurality of first users to obtain a first machine learning model.
Further, after acquiring the position information of the plurality of first users when shooting the first image and the rendering mode of the first image, the method further includes:
acquiring weather information of a plurality of first users when shooting a first image;
performing machine learning on the first position information, the weather information and the rendering mode corresponding to the plurality of first users to obtain a third machine learning model;
and acquiring current position information and current weather information, and substituting the current position information and the current weather information into the third machine learning model to obtain a target rendering mode corresponding to the current position and the current weather information.
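The weather-conditioned third model follows the same pattern, keying the learned table on (position, weather). The coarse weather labels are an assumption, since the patent only says "weather information":

```python
from collections import Counter, defaultdict

def train_third_model(samples):
    """samples: (location_id, weather, rendering_mode) triples, where
    weather is a coarse label such as "sunny" or "rainy"."""
    table = defaultdict(Counter)
    for location_id, weather, mode in samples:
        table[(location_id, weather)][mode] += 1
    majority = {key: counts.most_common(1)[0][0] for key, counts in table.items()}

    def predict(location_id, weather, default="none"):
        # Substitute the current position and current weather into the third model.
        return majority.get((location_id, weather), default)
    return predict
```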
Further, the obtaining of the target rendering mode of the target image according to the machine learning model includes:
acquiring character attribute information of a plurality of first users and rendering modes of the plurality of first users, wherein the character attribute information comprises at least one or more of the following attribute information: age information, gender information, occupation information, or interest information;
performing machine learning on the character attribute information and the rendering mode corresponding to the plurality of first users to obtain a fourth machine learning model;
and acquiring target attribute information of the current user, and substituting the target attribute information into the fourth machine learning model to obtain a target rendering mode corresponding to the target attribute information.
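The fourth model conditions on who is shooting rather than where. One illustrative realization (not specified by the patent) scores each historical record by how many attribute fields it shares with the current user and returns the best match's rendering mode:

```python
def train_fourth_model(records):
    """records: (attributes, rendering_mode) pairs, where attributes is a
    dict with keys such as "age", "gender", "occupation", "interest"."""
    def predict(target_attributes, default="none"):
        best_score, best_mode = 0, default
        for attributes, mode in records:
            # Count matching attribute fields (a crude similarity measure).
            score = sum(1 for key, value in target_attributes.items()
                        if attributes.get(key) == value)
            if score > best_score:
                best_score, best_mode = score, mode
        return best_mode
    return predict
```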
Further, after outputting the rendered target image, the method further includes:
receiving feedback information of a user on the target image;
and adjusting the machine learning model according to the feedback information.
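The feedback step closes the loop: an accepted rendering reinforces the model, a rejected one weakens it. A minimal count-based update, assuming the model is backed by per-(position, mode) counts (the patent does not fix an update rule; names are illustrative):

```python
def apply_feedback(mode_counts, location_id, mode, accepted):
    """Adjust the per-(location, mode) counts that back the learned model
    after the user accepts or rejects the rendered target image."""
    key = (location_id, mode)
    delta = 1 if accepted else -1
    mode_counts[key] = max(0, mode_counts.get(key, 0) + delta)
    return mode_counts
```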
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, can implement the following steps:
acquiring a target rendering mode of a target image according to a machine learning model, wherein the target image is an image obtained by a photographing function;
rendering the target image according to the target rendering mode;
and outputting the rendered target image.
Further, the obtaining of the target rendering mode of the target image according to the machine learning model includes:
acquiring first position information of a plurality of first users when shooting a first image and a rendering mode of the first image;
performing machine learning on the first position information and the rendering mode corresponding to the plurality of first users to obtain a first machine learning model;
and acquiring current position information, and substituting the current position information into the first machine learning model to obtain a target rendering mode corresponding to the current position.
Further, after acquiring the position information of the plurality of first users when shooting the first image and the rendering mode of the first image, the method further includes:
acquiring photographing time of a plurality of first users when the first images are photographed;
performing machine learning on the first position information, the photographing time and the rendering mode corresponding to the plurality of first users to obtain a second machine learning model;
and acquiring current position information and current time information, and substituting the current position information and the current time information into the second machine learning model to obtain a target rendering mode corresponding to the current position and the current time information.
Further, the performing machine learning on the first location information and the rendering manner corresponding to the plurality of first users to obtain a first machine learning model includes:
judging whether a character characteristic region exists in the first image or not;
and if the character characteristic region exists in the first image, performing machine learning on the first position information and the rendering mode corresponding to the plurality of first users to obtain a first machine learning model.
Further, after acquiring the position information of the plurality of first users when shooting the first image and the rendering mode of the first image, the method further includes:
acquiring weather information of a plurality of first users when shooting a first image;
performing machine learning on the first position information, the weather information and the rendering mode corresponding to the plurality of first users to obtain a third machine learning model;
and acquiring current position information and current weather information, and substituting the current position information and the current weather information into the third machine learning model to obtain a target rendering mode corresponding to the current position and the current weather information.
Further, the obtaining of the target rendering mode of the target image according to the machine learning model includes:
acquiring character attribute information of a plurality of first users and rendering modes of the plurality of first users, wherein the character attribute information comprises at least one or more of the following attribute information: age information, gender information, occupation information, or interest information;
performing machine learning on the character attribute information and the rendering mode corresponding to the plurality of first users to obtain a fourth machine learning model;
and acquiring target attribute information of the current user, and substituting the target attribute information into the fourth machine learning model to obtain a target rendering mode corresponding to the target attribute information.
Further, after outputting the rendered target image, the method further includes:
receiving feedback information of a user on the target image;
and adjusting the machine learning model according to the feedback information.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (7)

1. An image rendering method, comprising:
obtaining a target rendering mode of a target image according to a machine learning model, comprising: obtaining character attribute information of a plurality of first users, and first position information, photographing time, and a rendering mode of a first image when the plurality of first users photograph the first image, wherein the first position information is coordinate information or place identification information; the character attribute information comprises at least one or more of the following attribute information: age information, gender information, occupation information, or interest information; and the target rendering mode is a historical rendering mode of the current user or other users;
performing machine learning on the character attribute information, the first position information, the photographing time and the rendering mode corresponding to the plurality of first users to obtain a second machine learning model;
acquiring target attribute information, current position information and current time information of a current user, and substituting the target attribute information, the current position information and the current time information into the second machine learning model to obtain a target rendering mode corresponding to the target attribute information, the current position and the current time information;
rendering the target image according to the target rendering mode, wherein the target image is an image obtained by a photographing function;
and outputting the rendered target image.
2. The image rendering method of claim 1, wherein the performing machine learning on the person attribute information, the first position information, the photographing time, and the rendering manner corresponding to the plurality of first users to obtain a second machine learning model comprises:
judging whether a character characteristic region exists in the first image or not;
and if the character characteristic region exists in the first image, performing machine learning on the first position information, the photographing time and the rendering mode corresponding to the plurality of first users to obtain a second machine learning model.
3. The image rendering method according to claim 1, further comprising, after acquiring the position information, the photographing time, and the rendering manner of the first image when the plurality of first users photograph the first image:
acquiring weather information of a plurality of first users when shooting a first image;
performing machine learning on the first position information, the weather information and the rendering mode corresponding to the plurality of first users to obtain a third machine learning model;
and acquiring current position information and current weather information, and substituting the current position information and the current weather information into the third machine learning model to obtain a target rendering mode corresponding to the current position and the current weather information.
4. The image rendering method according to claim 1, further comprising, after outputting the rendered target image:
receiving feedback information of a user on the target image;
and adjusting the machine learning model according to the feedback information.
5. An image rendering apparatus, comprising:
the machine learning module is used for obtaining a target rendering mode of a target image according to a machine learning model, and is configured to: obtain character attribute information of a plurality of first users, and first position information, photographing time, and a rendering mode of a first image when the plurality of first users photograph the first image, wherein the first position information is coordinate information or place identification information; the character attribute information comprises at least one or more of the following attribute information: age information, gender information, occupation information, or interest information; and the target rendering mode is a historical rendering mode of the current user or other users;
performing machine learning on the character attribute information, the first position information, the photographing time and the rendering mode corresponding to the plurality of first users to obtain a second machine learning model;
acquiring target attribute information, current position information and current time information of a current user, and substituting the target attribute information, the current position information and the current time information into the second machine learning model to obtain a target rendering mode corresponding to the target attribute information, the current position and the current time information;
the rendering module is used for rendering the target image according to the target rendering mode, and the target image is an image obtained by a photographing function;
and the output module is used for outputting the target image rendered by the rendering module.
6. A terminal, characterized in that the terminal comprises:
one or more processors;
a storage device for storing one or more programs,
and a data transceiver for performing data interaction with a server;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image rendering method of any one of claims 1-4.
7. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the image rendering method according to any one of claims 1 to 4.
CN201710868728.XA 2017-09-22 2017-09-22 Image rendering method, device, terminal and computer readable storage medium Active CN107622473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710868728.XA CN107622473B (en) 2017-09-22 2017-09-22 Image rendering method, device, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710868728.XA CN107622473B (en) 2017-09-22 2017-09-22 Image rendering method, device, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN107622473A CN107622473A (en) 2018-01-23
CN107622473B true CN107622473B (en) 2020-01-21

Family

ID=61090211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710868728.XA Active CN107622473B (en) 2017-09-22 2017-09-22 Image rendering method, device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107622473B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108540824B (en) * 2018-05-15 2021-01-19 北京奇虎科技有限公司 Video rendering method and device
CN110570502A (en) * 2019-08-05 2019-12-13 北京字节跳动网络技术有限公司 method, apparatus, electronic device and computer-readable storage medium for displaying image frame
CN114861073B (en) * 2022-07-06 2022-09-20 哈尔滨工业大学(威海) Clothing personalized customization method and system based on big data and customer portrait

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7372595B1 (en) * 2001-08-20 2008-05-13 Foveon, Inc. Flexible image rendering system utilizing intermediate device-independent unrendered image data
CN103731660A (en) * 2012-10-12 2014-04-16 辉达公司 System and method for optimizing image quality in a digital camera
CN106569763A (en) * 2016-10-19 2017-04-19 华为机器有限公司 Image displaying method and terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533241B (en) * 2013-10-14 2017-05-10 厦门美图网科技有限公司 Photographing method of intelligent filter lens
CN103929594A (en) * 2014-04-28 2014-07-16 深圳市中兴移动通信有限公司 Mobile terminal and shooting method and device thereof
CN106408603B (en) * 2016-06-21 2023-06-02 北京小米移动软件有限公司 Shooting method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7372595B1 (en) * 2001-08-20 2008-05-13 Foveon, Inc. Flexible image rendering system utilizing intermediate device-independent unrendered image data
CN103731660A (en) * 2012-10-12 2014-04-16 辉达公司 System and method for optimizing image quality in a digital camera
CN106569763A (en) * 2016-10-19 2017-04-19 华为机器有限公司 Image displaying method and terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"RENDERING GREYSCALE IMAGE USING COLOR FEATURE ";Ye Ji 等;《2008 International Conference on Machine Learning and Cybernetics》;20080815;第3017-3021页 *
"人脸图像的自适应美化与渲染研究";梁凌宇;《中国博士学位论文全文数据库 信息科技辑》;20141115(第 11 期);第I138-19页 *

Also Published As

Publication number Publication date
CN107622473A (en) 2018-01-23

Similar Documents

Publication Publication Date Title
EP3579544B1 (en) Electronic device for providing quality-customized image and method of controlling the same
CN107622281B (en) Image classification method and device, storage medium and mobile terminal
US11138434B2 (en) Electronic device for providing shooting mode based on virtual character and operation method thereof
EP2742723B1 (en) Zero-click photo upload
CN107484231B (en) Screen parameter adjusting method, device, terminal and computer readable storage medium
WO2019120016A1 (en) Image processing method and apparatus, storage medium, and electronic device
WO2019183775A1 (en) Intelligent assistant control method and terminal device
CN108881875B (en) Image white balance processing method and device, storage medium and terminal
KR20190035116A (en) Method and apparatus for displaying an ar object
CN109040523B (en) Artifact eliminating method and device, storage medium and terminal
CN109327691B (en) Image shooting method and device, storage medium and mobile terminal
CN107622473B (en) Image rendering method, device, terminal and computer readable storage medium
CN107402625B (en) Touch screen scanning method and device, terminal and computer readable storage medium
CN109120864B (en) Light supplement processing method and device, storage medium and mobile terminal
CN107292817B (en) Image processing method, device, storage medium and terminal
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
US20230262321A1 (en) Electronic device and operating method thereof
CN112116690A (en) Video special effect generation method and device and terminal
CN113110731B (en) Method and device for generating media content
CN109040729B (en) Image white balance correction method and device, storage medium and terminal
CN108055461B (en) Self-photographing angle recommendation method and device, terminal equipment and storage medium
CN113609358A (en) Content sharing method and device, electronic equipment and storage medium
CN113190307A (en) Control adding method, device, equipment and storage medium
CN109218620B (en) Photographing method and device based on ambient brightness, storage medium and mobile terminal
CN109089042B (en) Image processing mode identification method and device, storage medium and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant