CN114422682A - Photographing method, electronic device, and readable storage medium


Info

Publication number
CN114422682A
CN114422682A (application CN202210104514.6A)
Authority
CN
China
Prior art keywords
image
image data
scene
style
parameter
Prior art date
Legal status
Granted
Application number
CN202210104514.6A
Other languages
Chinese (zh)
Other versions
CN114422682B (en)
Inventor
郑晓明
Current Assignee
ARM Technology China Co Ltd
Original Assignee
ARM Technology China Co Ltd
Priority date
Filing date
Publication date
Application filed by ARM Technology China Co Ltd filed Critical ARM Technology China Co Ltd
Priority to CN202210104514.6A priority Critical patent/CN114422682B/en
Publication of CN114422682A publication Critical patent/CN114422682A/en
Application granted granted Critical
Publication of CN114422682B publication Critical patent/CN114422682B/en
Legal status: Active

Classifications

    • H04N 23/50: Cameras or camera modules comprising electronic image sensors; constructional details
    • H04N 23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/80: Camera processing pipelines; components thereof
    • H04N 23/88: Camera processing pipelines for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N 9/646: Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters

Abstract

The present application relates to the field of shooting technologies, and in particular to a shooting method, an electronic device, and a readable storage medium. The method includes: during shooting, an image signal processor acquires image data collected by the lens; the image signal processor acquires parameter values of image style parameters corresponding to the scene type to which the image data belongs, where the parameter values of the image style parameters of each scene type are determined based on images stored on the electronic device; the image signal processor processes the image data according to the parameter values of the image style parameters to obtain target image data; and the electronic device displays the target image corresponding to the target image data to the user as the shooting result. With this photographing method, the image style type the user prefers is obtained without manual operation by the user, improving the user experience.

Description

Photographing method, electronic device, and readable storage medium
Technical Field
The invention relates to the technical field of shooting, in particular to a shooting method, electronic equipment and a readable storage medium.
Background
With the development of the image processing industry, people have become increasingly accustomed to adding a filter to images shot by terminal electronic devices such as mobile phones and tablet computers, to obtain an image style type they like. Accordingly, various shooting applications and image processing applications have appeared. For example, referring to fig. 1a, when a user shoots with the mobile phone 100, the user may open the interface 10 of a shooting application, select a favorite filter 3 among the various filters provided by the application, adjust the viewing area 101 of the shooting application to the desired range, and click the shooting button 102. The image style type of the photo 103 displayed by the mobile phone 100 is then the same as that corresponding to filter 3. For another example, referring to fig. 1b, the user operates the mobile phone 100 to open the camera interface 11, adjusts the camera viewing area 101 to the desired range, and clicks the shooting button 102 to obtain the image 104. The user can then open the interface 12 of an image processing application and select a favorite filter 3 among the filters it provides, so that the image style type of the shot picture matches that of filter 3, obtaining the picture 105 with the filter added.
However, in the image processing flows shown in figs. 1a to 1b, whether the filter is added by the shooting application at capture time or applied after the image is shot, the user must manually add the filter to obtain the preferred image style type. The operation is cumbersome, and the user experience is not high.
Disclosure of Invention
To solve the problem that the user must manually select a filter, which degrades the experience, the embodiments of the present application provide a shooting method, an electronic device, and a readable storage medium.
In a first aspect, an embodiment of the present application provides a shooting method applied to an electronic device, where the electronic device includes a lens and an image signal processor, and the method includes: during shooting, the image signal processor acquires image data collected by the lens; the image signal processor acquires parameter values of image style parameters corresponding to the scene type to which the image data belongs, where the parameter values of the image style parameters of each scene type are determined based on images stored on the electronic device; the image signal processor processes the image data according to the parameter values of the image style parameters to obtain target image data; and the electronic device displays the target image corresponding to the target image data to the user as the shooting result.
It can be understood that a plurality of scene types are preset in the electronic device, and the scene type to which the image data belongs is one of them. The preset scene types may be divided by shooting object, for example landscape, portrait, and animal scene types; by weather, for example rain, sunny, and snow scene types; or by combining the two, for example further dividing the landscape scene type by weather. The present application does not limit this.
It is to be understood that the shooting result may be an image obtained from image data of a single image, or may be a video or a motion picture obtained from image data of a plurality of consecutive images, and the present application is not limited thereto.
It can be understood that the image style parameters are ISP parameters used when the image signal processor processes the image acquired by the lens, and different parameter values of the image style parameters correspond to different image style types. Before the electronic device shoots, the parameter values of the image style parameters corresponding to each scene type can be determined from the images stored on the device; then, during shooting, the image signal processor can process the image data collected by the lens according to the parameter values corresponding to its scene type, obtaining the image style type the user prefers under that scene type. With the shooting method in the embodiment of the application, the electronic device can process the image data into the user's preferred image style type without manual operation by the user, improving the user experience.
In a possible implementation manner of the first aspect, the image style parameter includes at least one of the following: black level compensation parameters, dead pixel correction parameters, automatic white balance parameters, lens correction parameters, noise reduction parameters, edge enhancement parameters, brightness adjustment parameters, contrast adjustment parameters, chromaticity adjustment parameters, color correction parameters, and gamma correction parameters.
It is understood that black level compensation, dead pixel correction, automatic white balance, lens correction, noise reduction, edge enhancement, brightness adjustment, contrast adjustment, chromaticity adjustment, color correction, and gamma correction correspond to different sub-modules in a general function module of the image signal processor, and the different sub-modules may adjust different parameters of the image data. The image style parameters corresponding to the scene type of the image data may be any one, several, or all of the above parameters.
It is to be understood that the image style parameters mentioned above are examples in the embodiment of the present application; the image style parameters may also include more or fewer parameters than listed, and are not limited here.
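As a concrete illustration, the parameter set listed above could be grouped into a single structure. The following is a minimal sketch only; the patent does not define a data layout, and all field names, types, and default values here are assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical container for the image style parameters listed above.
# Every field name, type, and default is illustrative, not from the patent.
@dataclass
class ImageStyleParams:
    black_level_compensation: float = 0.0
    dead_pixel_correction: float = 0.0
    auto_white_balance: tuple = (1.0, 1.0, 1.0)  # per-channel R/G/B gains
    lens_correction: float = 0.0                 # e.g. shading strength
    noise_reduction: float = 0.0
    edge_enhancement: float = 0.0
    brightness: float = 1.0
    contrast: float = 1.0
    chroma: float = 1.0
    color_correction: list = field(default_factory=lambda: [1, 0, 0,
                                                            0, 1, 0,
                                                            0, 0, 1])  # 3x3 CCM
    gamma: float = 2.2
```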
In a possible implementation manner of the first aspect, the acquiring, by the image signal processor, an image style parameter corresponding to the scene type to which the image data belongs includes: the image signal processor extracts first feature information of the image data and determines the scene type to which the image data belongs according to the first feature information.
It is to be understood that the first feature information is obtained by feature extraction on the image data when the image signal processor determines its scene type, and may include object feature information, environment feature information, light feature information, and the like. The object feature information may include object type information (for example people, scenery, animals, plants, or objects), object color information (for example color or black-and-white items), and object proportion information, i.e., the ratio of the object to the whole picture. The environment features may be information such as indoors, outdoors, grassland, sea, or desert, and the light features may be information such as daytime, cloudy day, or dark night. In some embodiments, the first feature information may further include other feature information; the above is only an example, and the present application is not limited to this.
It can be understood that a plurality of scene types are preset in the electronic device; determining the scene type to which the image data belongs according to the first feature information means selecting, among the preset scene types, the scene type corresponding to the first feature information.
In one possible implementation manner of the first aspect, the electronic device further includes a neural network processor, and the parameter values of the image style parameters are predetermined as follows: the neural network processor obtains a plurality of images stored on the electronic device; the neural network processor determines the scene type to which each of the images belongs; and the neural network processor determines the parameter values of the image style parameters of the images and, for each scene type, determines the corresponding parameter values from the parameter values of the image style parameters of the images under that scene type.
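A minimal sketch of these three steps follows, assuming hypothetical helpers `classify_scene`, `estimate_style_params`, and `aggregate_params`; none of these names come from the patent, and one possible aggregation rule is sketched after a later paragraph:

```python
from collections import defaultdict

def precompute_style_params(stored_images, classify_scene,
                            estimate_style_params, aggregate_params):
    """Offline flow on the neural network processor: derive one
    parameter-value set per scene type from images already stored
    on the device. All callables are hypothetical placeholders."""
    per_scene = defaultdict(list)
    for image in stored_images:
        scene = classify_scene(image)           # scene type of this image
        params = estimate_style_params(image)   # per-image style params
        per_scene[scene].append(params)
    # Reduce each scene's parameter list to one representative value set.
    return {scene: aggregate_params(values)
            for scene, values in per_scene.items()}
```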
It can be understood that the neural network processor determining the scene type to which each of the images belongs means determining the correspondence between each image and the scene types preset in the electronic device.
In a possible implementation manner of the first aspect, the determining, by the neural network processor, a scene type to which each of the plurality of images belongs includes: and the neural network processor extracts second characteristic information of each image in the plurality of images through a machine learning algorithm and determines the corresponding relation between the second characteristic information and each scene type.
It can be understood that the second feature information is obtained by feature extraction on the image data when the neural network processor determines the scene types of the multiple images. It may be the same kind of feature information as the first feature information, the difference being that the corresponding images differ; details are not repeated here.
In a possible implementation manner of the first aspect, the machine learning algorithm includes a scene classification learning algorithm or a scene clustering algorithm.
It can be understood that, in some embodiments, when the amount of acquired image data is large, for example when the acquired images are all images stored on the electronic device, a scene clustering algorithm may first cluster the second feature information of the images and determine the scene type corresponding to each category. In other embodiments, when the amount of acquired image data is small, for example one or two of the stored images, a scene classification learning algorithm may determine the scene type corresponding to the second feature information.
In one possible implementation manner of the first aspect, the determining, by the neural network processor, the parameter values of the image style parameters of the plurality of images includes: and the neural network processor extracts third characteristic information of each image in the multiple images through a machine classification learning algorithm and determines the corresponding relation between the third characteristic information and the parameter value of each image style parameter.
It is to be understood that the third feature information is obtained by feature extraction on the image data when the neural network processor determines the parameter values of the image style parameters, and may include at least one of saturation information, contrast information, color histogram information, luminance histogram information, and subject category information. The third feature information may also include other information, which is not limited in this application.
In some embodiments, the first characteristic information, the second characteristic information, and the third characteristic information may include the same content, and in other embodiments, the first characteristic information, the second characteristic information, and the third characteristic information may include different content.
In a possible implementation manner of the first aspect, the determining, by the neural network processor, the parameter value of the image style parameter corresponding to each scene type includes: for each scene type, the neural network processor takes the preset image style parameter value corresponding, in the second correspondence, to the third feature information that occurs most often under that scene type as the parameter value of the image style parameter for that scene type.
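One plausible reading of this rule is to pick the parameter-value set occurring most often among the images of a scene type. A minimal sketch under that assumption (the patent does not spell the rule out further):

```python
from collections import Counter

def most_common_params(param_value_sets):
    """Return the parameter-value set that occurs most often within one
    scene type; value sets must be hashable, e.g. tuples of numbers."""
    counts = Counter(param_value_sets)
    best_set, _ = counts.most_common(1)[0]
    return best_set

# e.g. most_common_params([(1.2, 2.2), (1.2, 2.2), (1.0, 2.4)]) -> (1.2, 2.2)
```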
In a second aspect, embodiments of the present application provide an electronic device comprising one or more processors and one or more memories, where the one or more memories store one or more programs that, when executed by the one or more processors, cause the electronic device to perform the above shooting method.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where instructions are stored on the storage medium, and when executed on a computer, the instructions cause the computer to execute the above-mentioned shooting method.
In a fourth aspect, the present application provides a computer program product, which includes instructions that, when executed, cause a computer to execute the above-mentioned shooting method.
Drawings
figs. 1a to 1b are interface diagrams of capturing images with added filters;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram illustrating a shooting method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart illustrating a shooting method according to an embodiment of the present disclosure;
fig. 5a to 5c are interface diagrams illustrating some photographing methods provided by embodiments of the present application;
fig. 6 is a schematic flowchart illustrating another shooting method according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an ISP according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a process of processing image data by a general function module according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a mobile phone adapted to the shooting method according to the present application.
Detailed Description
Illustrative embodiments of the present application include, but are not limited to, a photographing method, an electronic device, and a readable storage medium. Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In the following description, numerous technical details are set forth in order to provide a better understanding of the present invention. However, it will be understood by those skilled in the art that the claimed embodiments of the present invention may be practiced without these specific details and with various changes and modifications based on the following embodiments.
To address the problems that, to obtain a desired image style, the terminal device must post-process the shot image, which imposes a heavy system workload and occupies significant memory, the present application provides a shooting method. The method determines the image style types the user prefers in different scenes from reference images the user likes. When the user shoots, the electronic device can adjust the parameter values of the ISP parameters in its Image Signal Processor (ISP) to the parameter values corresponding to the user's preferred image style type in the current shooting scene, so that the ISP processes the image collected by the electronic device based on the adjusted ISP parameters and obtains an image matching the user's preferred image style type.
Specifically, after acquiring a reference image the user likes, the electronic device may analyze it with a machine learning algorithm preset in the electronic device, determine the scene type corresponding to the reference image (hereinafter, the reference scene type), and determine, among multiple candidate image style types preset in the electronic device, the reference image style type that matches the reference image. That reference image style type is taken as the image style type the user prefers under the reference scene type, and the correspondence between each reference scene type and its reference image style type is sent to the ISP of the electronic device. When the user shoots with the terminal device, the device determines the scene type being shot; if it matches a reference scene type, the parameter values in the ISP, for example those of the general function module, are adjusted to match the reference ISP parameter values corresponding to that reference image style type. Under a reference scene type, the image style type of the shot photo thus conforms to the user's preference.
According to the shooting method provided by the embodiment of the application, during shooting the electronic device adjusts the parameter values of the ISP parameters and processes the images it collects, so that images with the user's preferred image style type in different scenes are shot directly; the preferred image style type is obtained without manual operation by the user, improving the user experience.
It can be understood that a plurality of scene types are preset in the electronic device, and the scene type to which the image data belongs is one of them. The preset scene types may be divided by shooting object, for example landscape, portrait, and animal scene types; by weather, for example rain, sunny, and snow scene types; or by combining the two, for example further dividing the landscape scene type by weather. The present application does not limit this.
It is to be understood that the ISP parameters are the image processing parameters used by the ISP when processing images, and may include, for example, parameters for black level compensation, dead pixel correction, automatic white balance, lens correction, noise reduction, edge enhancement, brightness adjustment, contrast adjustment, chromaticity adjustment, color correction, and gamma correction, which is not limited in this application.
It is to be understood that, in other embodiments, the ISP parameters may further include parameters related to adjusting the lens, such as parameters for adjusting an aperture, a shutter, a focal length, and the like of the lens, which is not limited in this application.
It can be understood that the image style type and the corresponding parameter value of the ISP parameter are pre-stored in the ISP of the electronic device or the memory of the electronic device, and when the ISP adjusts the parameter value of the ISP parameter according to the preferred image style type of the user in different scene types, the ISP may obtain the parameter value of the ISP parameter corresponding to the image style type, and then complete the adjustment of the ISP parameter according to the obtained parameter value of the ISP parameter.
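A minimal sketch of such a pre-stored mapping and the adjustment step follows; the style names, parameter names, and numeric values below are invented for illustration, as is the `set_param` setter:

```python
# Hypothetical pre-stored table: image style type -> ISP parameter values.
STYLE_TO_ISP_PARAMS = {
    "vivid": {"contrast": 1.2, "chroma": 1.3, "gamma": 2.2},
    "warm":  {"awb_gains": (1.4, 1.0, 0.8), "brightness": 1.05},
}

def adjust_isp_params(isp, style_type):
    """Fetch the stored parameter values for a style type and apply
    them to the ISP, completing the adjustment described above."""
    for name, value in STYLE_TO_ISP_PARAMS[style_type].items():
        isp.set_param(name, value)  # assumed ISP parameter setter
```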
It can be understood that the shooting result obtained by applying the shooting method provided by the present application may be a single image (i.e., a photo), or may be a video or a motion picture composed of multiple frames of images, which is not limited in this application.
In order to better understand the solution of the embodiment of the present application, the following first describes an electronic device that may be involved in the embodiment of the present application. It is understood that the electronic device 200 may include, but is not limited to: laptop computers, desktop computers, tablet computers, cell phones, servers, wearable devices, head-mounted displays, mobile email devices, portable gaming devices, reader devices, televisions, and the like.
Fig. 2 is a schematic structural diagram of an electronic device 200 according to an embodiment of the present application.
As shown in fig. 2, the electronic device 200 may include a lens 201, an image sensor 202, a display screen 207, and a System On Chip (SOC) 2000. The lens 201 is connected to the image sensor 202, and the image sensor 202 is connected to the system-on-chip 2000.
Specifically, the lens 201 is used to collect the light signal reflected by the scene and present the light signal on the image sensor 202, and the lens 201 may be a fixed focus lens, a zoom lens, a fish-eye lens, a panoramic lens, or the like.
The image sensor 202 is configured to convert an optical signal reflected by a scene collected through the lens 201 into an electrical signal, generate RAW image (RAW) data, which may be Bayer-formatted data, for example, and transmit the RAW image data to the system-on-chip 2000. The image sensor may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
The system-on-chip 2000 may include an ISP203, a Neural Network Processing Unit (NPU) 204, and a Central Processing Unit (CPU) 206, which may be coupled by a bus. In other embodiments, the system-on-chip 2000 may comprise the ISP203 and the NPU204 coupled via a bus, with the CPU 206 a separate device in the electronic device. In still other embodiments, the system-on-chip 2000 may further comprise a memory 205, for example a double data rate synchronous dynamic random access memory, with the ISP203, the NPU204, the CPU 206, and the memory 205 coupled by a bus. In yet other embodiments, the ISP203, the NPU204, and the CPU 206 may all be separate devices in the electronic device 200.
The display screen 207 is used to display images, videos, and the like. The display screen 207 includes a display panel. The display panel may be a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 200 may include 1 or N display screens 207, N being a positive integer greater than 1.
It is understood that the system-on-chip 2000 shown in fig. 2 is only an exemplary illustration, and those skilled in the art will understand that in other embodiments, some components may be added or reduced, for example, a bus control unit, an interrupt management unit, a coprocessor, etc., and some components may be split or combined, for example, the ISP203 and the CPU 206 are integrated, and the embodiments of the present application are not limited thereto.
The ISP203 is an application-specific integrated circuit (ASIC) for image data processing, which is used to further process the image data formed by the image sensor 202 for better image quality.
The NPU204 is an ASIC designed for deep learning. In some embodiments, the NPU204 may determine the reference scene type and the reference image style type of a reference image obtained from the memory 205, for example by performing deep learning model inference; its workloads may include neural network model training, image recognition, face recognition, and the like.
The CPU 206 may include one or more processing units, for example processing modules or processing circuits such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller unit (MCU), an artificial intelligence (AI) processor, or a programmable logic device such as a field-programmable gate array (FPGA). The different processing units may be separate devices or may be integrated into one or more processors.
ISP203, NPU204, CPU 206 are coupled by a bus. The bus may be an advanced high-performance bus (AHB) or other type of data bus.
The system-on-chip 2000 provided in the embodiment of the present application can determine, through the NPU204, an image style type preferred by a user in each scene type, and send a correspondence between a reference scene type and a reference image style type in each reference scene type to the ISP 203. When a user operates the electronic device 200 to shoot, the ISP203 may determine an image style type preferred by the user under the scene type according to the scene type of the image acquired by the electronic device 200, and adjust a parameter value of an ISP parameter of the ISP203 according to the determined image style type. The ISP203 may process the acquired image according to the adjusted parameter value of the ISP parameter to obtain a photo meeting the user's preference for the style of the image.
It is understood that the structure of the electronic device 200 shown in fig. 2 is only an example, and may be any electronic device 200 including the ISP203, the NPU204, and the CPU 206, and does not constitute a specific limitation to the electronic device 200, and in other embodiments, the electronic device 200 may include more or fewer modules, and may also combine or split some modules, and the embodiments of the present application are not limited.
It is understood that in some embodiments, when the terminal device performs shooting, the images acquired by the lens 201 and the image sensor 202 need to be processed by the ISP 203.
Fig. 3 is a flowchart illustrating shooting by the electronic device 200 in the embodiment of the present application.
As shown in fig. 3, the lens collects light reflected by a scene within a viewing range and projects the resulting light signal to a photosensitive area of the image sensor. The image sensor may perform photoelectric conversion, convert an optical signal into an original image, form image data in a RAW format, and transmit the image data to the ISP203 of the system-on-chip 2000. The ISP203 may process the image through a general function module therein. The general functional modules may include, for example, functional modules such as an automatic aperture, an automatic exposure, an automatic white balance, and a color tone adjustment. After the ISP performs image processing, the processed image is fed back to a Central Processing Unit (CPU), and the CPU performs interface rendering and displays the image on the display screen 207.
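The general function modules can be pictured as a sequence of stages applied to each RAW frame. The sketch below shows one plausible ordering only; the patent does not fix an order, every stage function here is a hypothetical placeholder, and demosaicing is added as a standard step the text does not mention:

```python
def isp_pipeline(raw_bayer_frame, params):
    """Illustrative general-function-module flow for one RAW frame."""
    x = black_level_compensation(raw_bayer_frame, params)
    x = dead_pixel_correction(x, params)
    x = auto_white_balance(x, params)
    x = demosaic(x)                       # Bayer RAW -> RGB (assumed stage)
    x = color_correction(x, params)
    x = gamma_correction(x, params)
    x = noise_reduction(x, params)
    x = edge_enhancement(x, params)
    x = tone_adjustment(x, params)        # brightness / contrast / chroma
    return x                              # target image data for the CPU
```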
It can be understood that, in some embodiments of the shooting method provided in the embodiments of the present application, the ISP may store the reference scene type of the acquired reference image and the corresponding reference image style type. When the user shoots with the terminal device and the ISP determines that the shooting scene type matches the reference scene type of a reference image, the parameter values of at least some ISP parameters in the ISP203 are adjusted according to the reference ISP parameter values corresponding to the reference image style type, and the shot image is processed according to the adjusted parameter values. Because the ISP parameters are adjusted during shooting and imaging, the image style type preferred by the user is obtained without manual operation by the user, improving the user experience.
Taking the electronic device 200 as a mobile phone as an example, a shooting method in the embodiment of the present application is described with reference to fig. 4.
Fig. 4 is a flowchart illustrating a photographing method in an embodiment of the present application.
It is understood that, in the embodiment corresponding to fig. 4, the image style type preferred by the user under a reference scene type is determined from a reference image selected by the user, and smart shooting is performed with that image style type. As shown in fig. 4, the shooting method in the embodiment of the present application includes:
401: the NPU204 reads reference picture data corresponding to the reference picture from the memory.
It will be appreciated that the reference image data may characterize the image style types of the user's preferred photographs under the reference scene type of the reference image. The reference image data acquired by the NPU204 may be data corresponding to one reference image, or image data corresponding to multiple reference images.
It is understood that the image data in the memory 205 of the cell phone 200 may be displayed in an album application, gallery application, photo application, etc. of the cell phone 200.
In some embodiments, the reference image may be an image marked by the user in an album application, gallery application, photo application, or the like, and the NPU204 may retrieve the reference image data of the marked reference image from the memory 205 after the user marks a favorite image. The mark may be, for example, marking a photo as a favorite or "liking" it, which is not limited in this application.
In some embodiments, the reference image may be an image with the user's favorite image style type that the user selects after opening an album application, gallery application, photo application, or the like, and then imports into the shooting application. After detecting that the image has been imported into the shooting application, the mobile phone 200 takes the image as the reference image, and the NPU204 may then obtain the reference image data of the reference image from the memory 205.
In other embodiments, the reference image determination method may also be other operations for marking an image and importing the image into a shooting application, which is not limited in this application.
402: the NPU204 determines a reference scene type of the reference image data.
Illustratively, the NPU204 may determine the reference scene type of the reference image data using a scene classification learning algorithm.
It is understood that the scene classification learning algorithm is a classification algorithm among machine learning algorithms. Classification learning is a supervised learning approach that models or predicts discrete random variables: its purpose is to learn a classification function or classification model, often referred to as a classifier, from a given set of manually labeled training samples. When new data arrives, a prediction can be made with this function, mapping the new data item to one of the given classes.
Specifically, training the scene classification learning algorithm means constructing a rule between image data and scene types, using a large amount of image data and the corresponding scene types as the training data set; the NPU204 can then predict the reference scene type of the input reference image data according to the constructed rule, that is, execute step 402. The scene classification learning algorithm takes the feature information of image data and the scene type of that data as input and finds the relationship between the feature information and the scene type. When new image data (for example, reference image data) is input, the algorithm may determine, through the constructed relationship, the scene type matching the new data among its existing scene types. The feature information may include object feature information, environment feature information, light feature information, and the like. The object feature information may include object type information (for example people, scenery, animals, plants, or objects), object color information (for example color or black-and-white items), and object proportion information, i.e., the ratio of the object to the whole picture. The environment features may be information such as indoors, outdoors, grassland, sea, or desert, and the light features may be information such as daytime, cloudy day, or dark night.
In some embodiments, the scene classification learning algorithm may use, for example, a logistic regression algorithm, an ordinary least squares regression algorithm, a Bayesian classification algorithm, a decision tree algorithm, or a deep learning algorithm, where the deep learning algorithm may include a convolutional neural network (CNN) algorithm, a generative adversarial network (GAN) algorithm, and the like, which is not limited in this application.
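As a toy example of the training-then-prediction flow described above, using logistic regression from the list (scikit-learn's implementation; `extract_features`, `labeled_images`, `scene_label_of`, and `reference_image` are hypothetical names standing for the numeric feature vectors and manual scene labels the patent describes):

```python
from sklearn.linear_model import LogisticRegression

# Training: feature information plus manually labeled scene types.
X_train = [extract_features(img) for img in labeled_images]
y_train = [scene_label_of(img) for img in labeled_images]
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Prediction: map a new (reference) image to one of the learned scene types.
reference_scene = classifier.predict([extract_features(reference_image)])[0]
```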
It is understood that in other embodiments, the NPU204 may determine the reference scene of the reference image data based on other manners, which is not limited in this application.
403: The NPU204 determines a reference image style type of the reference image data.
Illustratively, the NPU204 may determine the reference image style type of the reference image data using a style classification learning algorithm.
The style classification learning algorithm is a classification algorithm in a machine learning algorithm. It can be understood that the style classification learning algorithm is a similar type of algorithm to the above-mentioned scene classification learning algorithm, and the difference is that: the scene classification learning algorithm is to construct a rule between image data and a scene type by using a large amount of image data and the scene type corresponding to the image data as a training data set, and then the NPU204 can predict the reference scene type of the input reference image data through the constructed rule. The style classification learning algorithm is to construct a rule between image data and image style types by using a large amount of image data and the image style types corresponding to the image data as a training data set, and then the NPU204 can predict the reference image style types of the input reference image data through the constructed rule. That is, the scene classification learning algorithm and the style classification learning algorithm may output different results for the same input reference image data: a reference scene type and a reference image style type.
Specifically, the style classification learning algorithm finds the relationship between feature information and image style types by taking the feature information of image data and the image style type of that data as input. When new image data (for example, reference image data) is input into the style classification learning algorithm, the algorithm may determine, through the constructed relationship, the image style type matching the new data among its existing image style types.
In some embodiments, the extracted feature information may be different when training the scene classification learning algorithm and the style classification learning algorithm. In training the style classification learning algorithm, the extracted feature information may include: at least one of saturation information, contrast information, color histogram information, luminance histogram information, and subject category information. Of course, the feature information may also include other information, which is not limited in this application.
In other embodiments, when training the scene classification learning algorithm and the style classification learning algorithm, the extracted feature information may be the same, and further, the same feature information set may be used as a part of training data sets for training the scene classification learning algorithm and the style classification learning algorithm.
The style classification learning algorithm may be, for example, any of the classification algorithms listed above for scene classification. In some embodiments, the style classification learning algorithm may use the same classification algorithm as the scene classification learning algorithm; in other embodiments, it may use a different one, which is not limited in this application.
It is understood that, in some embodiments, the reference image style type is one of a plurality of image style types preset in the mobile phone 200, that is, multiple image style types under different scene types are predefined in the mobile phone 200. Executing step 403 means that the NPU204 performs image analysis processing on the reference image data, for example extracting image feature information, and matches it against the multiple image style types predefined in the mobile phone 200 under the reference scene type; the image style type with the highest matching degree may be used as the reference image style type.
An image style type can be understood as a filter of an image, representing a particular display effect. Different image styles can be distinguished and named according to different feature information of the images. For example, according to the parameter value of a tone parameter that varies gradually from warm to cool tones, image style types may be divided into a bright style type and a warm style type, each of which may be subdivided further. As another example, image style types such as "movie", "nostalgic", and "film camera" may be defined by combining saturation information and contrast information; the present application is not limited thereto.
It is understood that the reference image style type determined in step 403 is the image style type the user prefers under the reference scene type determined in step 402.
It is understood that in other embodiments, the NPU204 may determine the reference image style type of the reference image data based on other manners, which is not limited in this application.
404: the NPU204 sends the correspondence between the reference scene type and the reference image style type to the ISP 203.
It will be appreciated that the data sent by the NPU204 to the ISP203 may not need to include a specific reference scene type and a reference image style type. In some embodiments, the correspondence may include a reference scene type label and a corresponding reference image style type label, for example, the reference scene type label is a, the reference image style type label is a, the correspondence may be a-a, and the data sent by NPU204 to ISP203 may be a-a. In some embodiments, the correspondence may include a reference number of correspondence between a reference scene type and a reference image style type, for example, the reference scene type is denoted by a, the reference image style type is denoted by a, the correspondence between a and a is denoted by 01, and the data sent by NPU204 to ISP203 may be 01.
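Rendered as Python values, the two encodings sketched above might look like this (the labels and codes are the example ones from the paragraph):

```python
# Option 1: send the label pair itself, e.g. "A-a".
scene_label, style_label = "A", "a"
message = f"{scene_label}-{style_label}"                    # -> "A-a"

# Option 2: send a pre-agreed code standing for the pair.
CORRESPONDENCE_CODES = {("A", "a"): "01"}
message = CORRESPONDENCE_CODES[(scene_label, style_label)]  # -> "01"
```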
It can be understood that after the NPU204 determines the correspondence between the reference scene type and the reference image style type of the reference image data, the correspondence can be sent to the ISP203 through the bus, and further, the ISP203 can adjust the parameter value of the ISP parameter of the shot photo according to the received information when the user uses the camera application of the mobile phone 200 to shoot the photo conforming to the image style type preferred by the user.
405: the ISP203 acquires the image data converted by the image sensor 202 and determines the scene type of the image data.
For example, the user may operate the mobile phone 200 to open the camera application; the light signal collected by the lens 201 is converted into image data in RAW format by the image sensor 202, which transmits the image data to the ISP203. The ISP of the mobile phone 200 thereby acquires the image data and may determine the scene type of the image data, that is, execute step 405.
It is to be appreciated that in some embodiments, ISP203 may determine the scene type of the image data by invoking a scene classification algorithm in NPU204 when determining the scene type. In other embodiments, the ISP203 may include a scene classification algorithm, and the ISP203 may directly analyze the scene type of the image data and obtain a corresponding result.
It can be understood that the image data is image data corresponding to an image acquired by the lens 201 when a user opens a shooting application such as a camera application, and the image data needs to be processed by the ISP203 before being displayed on the display screen 207 of the mobile phone 200.
406: The ISP203 determines that the scene type of the image data matches the scene type of the reference image, adjusts the parameter values of the ISP parameters according to the reference image style type, and performs image processing on the image data to obtain target image data.
The scene type is matched with the reference scene type, which indicates that the scene shot by the user can adopt the reference image style type analyzed by the NPU204, and the ISP203 can directly obtain the parameter value of the ISP parameter corresponding to the reference image style type, and adjust the parameter value of the ISP parameter in the ISP203, so that the parameter value of the ISP parameter in the ISP203 is matched with the reference image style type. Further, the ISP203 may process the image data according to the adjusted ISP parameters, and/or control the lens 201 and the image sensor 202 to obtain target image data.
It is understood that the ISP203 may store the ISP parameter values corresponding to each of the image style types used in the style classification learning algorithm. After determining that the scene of the image data matches the reference scene, the ISP203 determines the parameter values of the ISP parameters according to the reference image style type.
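Putting steps 405 and 406 together, a minimal runtime sketch (the `isp` object and its methods are hypothetical placeholders; `adjust_isp_params` is the lookup sketch shown earlier):

```python
def on_raw_frame(isp, raw_frame, reference_scene, reference_style):
    """Step 405: classify the live frame; step 406: if its scene type
    matches the reference scene type, switch the ISP to the reference
    image style before processing the frame."""
    scene = isp.classify_scene(raw_frame)
    if scene == reference_scene:
        adjust_isp_params(isp, reference_style)
    return isp.process(raw_frame)   # target image data sent to the CPU
```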
It can be understood that the target image data is the image data output to the CPU 206 of the mobile phone 200 after the ISP203 of the mobile phone 200 processes the image data, and the CPU 206 may render a display interface according to the received target image data and control the display screen 207 to display the target image data. The image style type of the target image data is matched with the reference image style type of the reference image, namely the image style type of the target image data is the image style type preferred by the user under the corresponding scene type.
According to the shooting method in the embodiment of the application, during shooting the electronic device has the ISP adjust the parameter values of the ISP parameters and process the collected images, obtaining images with the user's preferred image style type in different scenes; the preferred image style type is obtained without manual operation by the user, improving the user experience.
The following describes a photographing method in the embodiment of the present application with reference to the interface diagrams shown in figs. 5a to 5c.
Figs. 5a to 5c are schematic diagrams illustrating interface changes of a shooting method in an embodiment of the present application.
Illustratively, the user operates the mobile phone 200 to open the gallery application, and the display screen of the mobile phone 200 shows the interface in fig. 5a. The user finds in the gallery application that the image style type of the image 501 is a favorite one, clicks on the image 501, and the mobile phone displays the interface shown in fig. 5b. The user may click the like button 502 on that interface, marking the image 501 as liked.
At this time, the mobile phone 200 may use the image 501 as a reference image, and the NPU204 may obtain reference image data of the reference image from the memory 205, i.e., execute step 401.
After the NPU204 of the mobile phone 200 completes step 401, it executes steps 402, 403, and 404 in sequence. Specifically, the NPU204 classifies the scene of the image 501 with the scene classification learning algorithm and determines its scene type. The algorithm may extract feature information from the image 501, for example that it contains trees, birds, and snowflakes and that the environment is daytime, and conclude that the scene type of the image 501 is a landscape scene type. The NPU204 may then use the style classification learning algorithm to make a further decision on the image 501. The style classification learning algorithm may extract feature information such as saturation information, contrast information, color histogram information, and brightness histogram information from the image 501, match this feature information against the multiple image style types preset in the algorithm, and determine the image style type of the image 501. Assuming that the image style type of the image 501 is the vivid image style, the NPU204 may transmit the landscape scene type and the vivid image style to the ISP203.
When the user operates the mobile phone 200 to open the camera application, the optical signal acquired through the lens 201 is photoelectrically converted by the image sensor into image data in the corresponding RAW format and transmitted to the ISP203. After receiving the image data, the ISP203 analyzes its scene type and determines that it is a landscape scene type matching the scene type of the image 501. The ISP203 may then determine, from the landscape scene type and vivid image style received from the NPU204, that it should process the image data and/or control the lens 201 and the image sensor 202 with the parameter values of the ISP parameters corresponding to the vivid image style. The ISP203 may obtain the stored parameter values corresponding to the vivid image style and adjust its own ISP parameters accordingly, for example the parameters for histogram equalization and exposure time. Based on the adjusted ISP parameters, the ISP203 may adjust the contrast of the image data so that it reaches the vivid style of the image 501 while avoiding overexposure. The ISP203 may output the adjusted target image data (i.e., the image data corresponding to the image 504) to the CPU 206, and the CPU 206 may render the display interface according to the received target image data and control the display screen 207 to display the image 504, as shown in fig. 5c.
The user can adjust the viewing area of the lens 201 so that the image displayed on the display screen 207 shows the desired angle and viewing range, and then click the shooting button 503 to shoot and obtain a photo corresponding to the image 504.
It is understood that, in some embodiments, after the image 501 is evaluated by the scene classification learning algorithm of the NPU204, the output scene type may instead be a snow scene type, and after the image 501 is evaluated by the style classification learning algorithm of the NPU204, the output image style type may instead be a fresh style type. That is, the landscape scene type and the vivid style type are only examples in the embodiment of the present application, and the shooting method is not limited to these outputs.
It can be understood that, in the above shooting process, the mobile phone 200 may directly adjust the parameter value of the ISP parameter in the ISP, so as to adjust the image style type of the image data to the image style type preferred by the user in the scene type, and the system does not need to execute the image processing algorithm after shooting, thereby reducing the workload of the system and the memory occupied during running.
In the above embodiment, the image style types preferred by the user in the reference scene are determined by using the reference images of the same reference scene type, and the intelligent shooting is performed by using the determined image style types preferred by the user in the reference scene. The embodiment of the application further provides another shooting method, which is characterized in that images with different scene types stored in the mobile phone 200 are used as reference images, image style types preferred by a user under different scene types are determined, and intelligent shooting is carried out according to the determined image style types.
Specifically, taking the electronic device 200 as a mobile phone as an example, another shooting method in the embodiment of the present application is described below with reference to fig. 6.
Fig. 6 is a flowchart illustrating another photographing method in the embodiment of the present application.
601: the NPU204 reads reference image data corresponding to a plurality of reference images of different scene types from the memory.
It is understood that step 601 is similar to step 401, except that the reference image data corresponding to the plurality of reference images acquired in step 601 correspond to different scene types.
602: the NPU204 determines a reference scene type corresponding to each reference image data.
Illustratively, the NPU204 may determine a scene type corresponding to each piece of reference image data using a scene clustering algorithm.
The scene clustering algorithm is a clustering algorithm among machine learning algorithms. It can be understood that clustering is an unsupervised learning method: a clustering algorithm can divide the image data in a training set into a plurality of mutually disjoint subsets, each of which may be called a "cluster", so that similar data are grouped together, that is, the similarity between clusters is low (low inter-cluster similarity) and the similarity within clusters is high (high intra-cluster similarity). Clustering may be performed according to the similarity and distance of the data, dividing it into different clusters, so that each cluster may correspond to some potential concept or category.
Specifically, the scene clustering algorithm takes a large amount of image data, together with candidate scene types, as input to the algorithm; it divides the image data into a plurality of mutually disjoint subsets, where the potential concept or category corresponding to each subset is a scene type, and it matches a corresponding scene type to each subset. The scene clustering algorithm in the embodiment of the present application can divide the image data stored in the mobile phone 200 into a plurality of image data sets, and each image data set can be matched with a corresponding scene type.
In some embodiments, the scene clustering algorithm may be, for example, a prototype-based clustering algorithm, a density-based clustering algorithm, a hierarchical clustering algorithm, or the like, which is not limited in this application.
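By way of illustration only, the following sketch clusters stored images by scene, assuming each image has already been reduced to a small feature vector; k-means is used here as one possible prototype-based clustering algorithm, and all feature values are invented for the example.

```python
# Sketch of step 602: group stored images into disjoint scene clusters.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-image features, e.g. (brightness, saturation, blue ratio).
features = np.array([
    [0.80, 0.70, 0.60],   # bright, saturated, blue-heavy -> landscape-like
    [0.75, 0.65, 0.65],
    [0.95, 0.10, 0.20],   # bright, desaturated           -> snow-like
    [0.90, 0.15, 0.25],
    [0.20, 0.40, 0.10],   # dark                          -> night-like
    [0.25, 0.35, 0.15],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
# One cluster index per image; each cluster ("subset") is then matched
# to a scene type such as "landscape", "snow", or "night".
print(kmeans.labels_)
```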
603: the NPU204 determines a reference image style type corresponding to each reference image data.
Illustratively, the NPU204 may determine the image style type corresponding to each reference image using a style classification learning algorithm.
It is understood that step 603 is similar to the implementation process in step 403, and is not described herein again.
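By way of illustration, a supervised classifier may stand in for the style classification learning algorithm of step 603; the following sketch uses k-nearest neighbors over hypothetical two-dimensional features (e.g., contrast and saturation statistics), which is only an assumed substitute and not the algorithm of this embodiment.

```python
# Sketch of step 603: predict an image style type from image features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

train_features = np.array([[1.30, 1.20], [1.25, 1.15],   # vivid-looking
                           [1.00, 0.80], [1.05, 0.85]])  # fresh-looking
train_styles = ["vivid", "vivid", "fresh", "fresh"]

clf = KNeighborsClassifier(n_neighbors=3).fit(train_features, train_styles)
print(clf.predict([[1.28, 1.18]]))   # -> ['vivid']
```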
604: The NPU204 determines the image style type preferred by the user under each scene type according to the operation results of the scene clustering algorithm and the style classification learning algorithm.
In step 604, the NPU204 determines the image style type preferred by the user in each scene type according to the operation results of the scene clustering algorithm and the style classification learning algorithm. Specifically, for each scene type, the NPU204 determines the preferred image style type according to the plurality of reference image data in the subset of that scene type obtained by the scene clustering algorithm and the image style type corresponding to each of those reference image data.
In some embodiments, the NPU204 may determine that the most frequent image style type within each scene type is the image style type preferred by the user in that scene type.
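As a hypothetical illustration of this majority vote, the following sketch counts the style labels inside each scene cluster and keeps the most frequent one; the scene and style labels are invented for the example.

```python
# Sketch of step 604: per scene type, pick the most frequent style type.
from collections import Counter

# (scene type, style type) pairs as produced by steps 602 and 603.
labeled = [
    ("landscape", "vivid"), ("landscape", "vivid"), ("landscape", "fresh"),
    ("night", "soft"), ("night", "soft"), ("snow", "fresh"),
]

preferred = {}
for scene in {s for s, _ in labeled}:
    styles = [style for s, style in labeled if s == scene]
    preferred[scene] = Counter(styles).most_common(1)[0][0]

print(preferred)   # {'landscape': 'vivid', 'night': 'soft', 'snow': 'fresh'}
```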
605: the NPU transmits the style types of the image data preferred by the user under each scene type to the ISP.
It is to be understood that step 605 is similar to step 404, except that in step 605, the user's preferred image style types for the various scene types may be transmitted.
606: the ISP203 acquires the image data converted by the image sensor 202 and determines the scene type of the image data.
It is understood that step 606 is similar to step 405, and is not described in detail herein.
607: the ISP203 adjusts the parameter value of the ISP parameter according to the preferred image style type of the user under the corresponding scene type, and performs image processing on the image data to obtain target image data.
It can be understood that, in step 607, after the ISP203 receives the image style type preferred by the user under each scene type sent by the NPU204, the ISP203 determines the image style type corresponding to the scene type determined in step 606, obtains the parameter values of the ISP parameters corresponding to the determined image style type, and adjusts the parameter values of the ISP parameters in the ISP203 so that they match the determined image style type. Further, the ISP203 may process the image data according to the adjusted ISP parameters, and/or control the lens 201 and the image sensor 202, to obtain the target image data.
According to this further shooting method provided by the embodiment of the present application, the image style types preferred by the user under different scene types can be determined from the image data stored in the mobile phone 200. Furthermore, during shooting, an image of the image style type preferred by the user can be generated directly in the ISP, realizing intelligent shooting; the terminal device is not required to process the captured image through a processor after shooting, which reduces the workload of the system and the memory occupied at runtime.
Fig. 7 illustrates a schematic diagram of the structure of an ISP203, according to some embodiments of the present application. As shown in fig. 7, the ISP203 is an application-specific integrated circuit (ASIC) for image data processing, which is used to further process the image data formed by the image sensor 202 for better image quality.
The ISP203 includes a processor 2031, an image transmission interface 2032, general purpose peripherals 2033, a padding module 2034, and general function modules 2035.
Among other things, processor 2031 is used for logic control and scheduling in ISP 203.
The image transmission interface 2032 is used for transmission of image data.
General purpose peripherals 2033 include, but are not limited to: a bus and bus controller for coupling the various modules of the ISP203; a bus for coupling to other devices, such as an advanced high-performance bus (AHB), which enables the ISP to communicate with other devices (e.g., the DSP, the CPU, etc.) at high performance; and a watchdog unit (WATCHDOG) for monitoring the working state of the ISP.
The padding module 2034 is configured to perform a padding operation on the image data according to the input requirements of the image processing model (for example, a deep learning model) in the NPU.
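As an illustration, the following sketch pads a RAW frame so that its height and width meet an assumed model input requirement (multiples of 32 here, an assumption for the example; the actual requirement depends on the deep learning model deployed in the NPU).

```python
# Sketch of the padding module: zero-pad the bottom/right edges of the
# image data until both dimensions are multiples of the required size.
import numpy as np

def pad_to_multiple(image: np.ndarray, multiple: int = 32) -> np.ndarray:
    h, w = image.shape[:2]
    pad_h = (-h) % multiple
    pad_w = (-w) % multiple
    return np.pad(image, ((0, pad_h), (0, pad_w)), mode="constant")

raw = np.zeros((1080, 1920), dtype=np.uint16)   # RAW frame in a 16-bit container
print(pad_to_multiple(raw).shape)               # (1088, 1920)
```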
The general function modules 2035 are used for processing images input to the ISP203, including but not limited to: dead pixel correction (BPC), black level compensation (BLC), automatic white balance (AWB), gamma correction, color correction, noise reduction, edge enhancement, and brightness, contrast, and chromaticity adjustment. When the image sensor transmits the image data in RAW format to the ISP203, the image data is processed by the general function modules. The processing of the image data by the general function modules 2035 will be described in detail below with reference to fig. 8, and will not be described further here. In the embodiment of the present application, the adjustment of the parameter values of the ISP parameters in step 406 in fig. 4 and step 607 in fig. 6 may be the adjustment of the parameter values of the general function modules 2035, and the image processing of the image data may be performed by the general function modules 2035.
It is understood that the structure of the ISP203 shown in fig. 7 is only an example; those skilled in the art should understand that the ISP may include more or fewer modules, and some modules may be combined or split, which is not limited in the embodiment of the present application.
The general function modules 2035 may include a RAW domain processing module 2035a, an RGB domain processing module 2035b, and a YUV domain processing module 2035c. Fig. 8 shows a schematic diagram of the process by which the general function modules process the image data; the processing proceeds as follows:
the RAW domain processing module 2035a performs dead pixel correction, black level compensation, and automatic white balance on the image data.
The image data processed in the RAW domain is subjected to RGB interpolation to obtain image data in RGB domain, and then the RGB domain processing module 2035b performs gamma correction and color correction on the image data in RGB domain.
The image data processed in the RGB domain is subjected to color gamut conversion to obtain image data in the YUV domain, and the YUV domain processing module 2035c then performs noise reduction, edge enhancement, and brightness/contrast/chromaticity adjustment on the image data in the YUV domain. In the embodiment of the present application, the denoising of the image data in the YUV domain may be performed at this stage.
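For ease of understanding, the following sketch strings the three domains together in the order of fig. 8. Every stage body is a deliberately trivial stand-in (a toy black level, a replicate-channel demosaic, a toy gamma) rather than a real ISP algorithm; only the BT.601 RGB-to-YUV matrix is standard.

```python
# Sketch of the fig. 8 order: RAW domain -> RGB interpolation ->
# RGB domain -> color gamut conversion -> YUV domain.
import numpy as np

def raw_domain(raw):    # stands in for BPC, BLC, AWB
    return np.clip(raw.astype(np.float32) - 64.0, 0.0, None)

def demosaic(raw):      # stands in for RGB interpolation
    return np.stack([raw, raw, raw], axis=-1)

def rgb_domain(rgb):    # stands in for gamma and color correction
    return 255.0 * (rgb / 255.0) ** (1.0 / 2.2)

def rgb_to_yuv(rgb):    # BT.601 color gamut conversion
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.169, -0.331, 0.500],
                  [0.500, -0.419, -0.081]])
    return rgb @ m.T

def yuv_domain(yuv):    # stands in for noise reduction, edge enhancement, etc.
    return yuv

raw = np.full((4, 4), 200, dtype=np.uint16)
out = yuv_domain(rgb_to_yuv(rgb_domain(demosaic(raw_domain(raw)))))
print(out.shape)        # (4, 4, 3)
```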
It is understood that the processing flow shown in fig. 8 is only an example; those skilled in the art should understand that the ISP may include more or fewer modules, and some modules may be combined or split, which is not limited in the embodiment of the present application.
Fig. 9 is a schematic structural diagram of a mobile phone 200 adapted to the shooting method according to some embodiments of the present application. As shown in fig. 9, the mobile phone 200 may include a processor 910, a power supply module 940, a memory 980, a mobile communication module 930, a wireless communication module 920, a sensor module 990, an audio module 950, a camera 970, an interface module 960, buttons 901, a display 902, and the like.
It is to be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the mobile phone 200. In other embodiments of the present application, the mobile phone 200 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 910 may include one or more processing units, for example, processing modules or processing circuits including a central processing unit (CPU), an image signal processor (ISP), a video processing unit (VPU), a graphics processing unit (GPU), a digital signal processor (DSP), a micro-programmed control unit (MCU), an artificial intelligence (AI) processor, or a field programmable gate array (FPGA), and so on. The different processing units may be separate devices or may be integrated into one or more processors. A storage unit may be provided in the processor 910 for storing instructions and data. In some embodiments, the storage unit in the processor 910 is a cache memory. The ISP, the VPU, and the memory 980 may be coupled by a bus to form a system on chip (SoC); in other embodiments, the ISP, the VPU, and the memory 980 may be separate devices.
The memory 980 may be used to store data, software programs, and modules, and may be a volatile memory, such as a random-access memory (RAM) or a double data rate synchronous dynamic random access memory (DDR SDRAM).
The power module 940 may include a power supply, power management components, and the like. The power supply may be a battery. The power management components are used to manage charging of the power supply and the supply of power to other modules. In some embodiments, the power management components include a charging management module and a power management module. The charging management module is used to receive charging input from a charger; the power management module is used to connect the power supply, the charging management module, and the processor 910. The power management module receives input from the power supply and/or the charging management module, and supplies power to the processor 910, the display screen 902, the camera 970, the wireless communication module 920, and the like.
The mobile communication module 930 may include, but is not limited to, an antenna, a power amplifier, a filter, a low noise amplifier (LNA), and the like. The mobile communication module 930 may provide solutions for 2G/3G/4G/5G wireless communication applied to the handset 200. The mobile communication module 930 may receive electromagnetic waves through the antenna, perform filtering, amplification, and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 930 may also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna. In some embodiments, at least some of the functional modules of the mobile communication module 930 may be disposed in the processor 910. In some embodiments, at least some of the functional modules of the mobile communication module 930 may be disposed in the same device as at least some of the modules of the processor 910. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), Bluetooth (BT), global navigation satellite system (GNSS), wireless local area network (WLAN), near field communication (NFC), frequency modulation (FM), infrared (IR), and the like. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The wireless communication module 920 may include an antenna, and implement transceiving of electromagnetic waves via the antenna. The wireless communication module 920 may provide solutions for wireless communication applied to the mobile phone 200, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The handset 200 may communicate with the network and other devices via wireless communication techniques.
In some embodiments, the mobile communication module 930 and the wireless communication module 920 of the handset 200 may also be located in the same module.
The display screen 902 is used for displaying human-computer interaction interfaces, images, videos, and the like. The display screen 902 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a quantum dot light-emitting diode (QLED), or the like.
The sensor module 990 may include a proximity light sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
The audio module 950 is used to convert digital audio information into an analog audio signal for output, or convert an analog audio input into a digital audio signal. The audio module 950 may also be used to encode and decode audio signals. In some embodiments, the audio module 950 may be disposed in the processor 910, or some functional modules of the audio module 950 may be disposed in the processor 910. In some embodiments, audio module 950 may include speakers, an earpiece, a microphone, and a headphone interface.
The camera 970 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element; the photosensitive element converts the optical signal into an electrical signal and transmits it to the ISP to be converted into a digital image signal. The mobile phone 200 can implement the shooting function through the ISP, the camera 970, the VPU, the GPU, the display screen 902, the application processor, and the like. The camera 970 may use a fixed-focus lens, a zoom lens, a fisheye lens, a panoramic lens, or the like.
The interface module 960 includes an external memory interface, a universal serial bus (USB) interface, a subscriber identity module (SIM) card interface, and the like. The external memory interface may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone 200. The external memory card communicates with the processor 910 through the external memory interface to implement a data storage function. The USB interface is used for the mobile phone 200 to communicate with other electronic devices. The SIM card interface is used to communicate with a SIM card installed in the mobile phone 200, for example, to read a telephone number stored in the SIM card or to write a telephone number into the SIM card.
In some embodiments, the mobile phone 200 further includes keys 901, a motor, indicators, and the like. The keys 901 may include a volume key, a power key, and the like. The motor is used to cause the mobile phone 200 to produce a vibration effect, for example, to prompt the user to answer when the mobile phone 200 receives an incoming call. The indicators may include laser indicators, radio frequency indicators, LED indicators, and the like.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system, including but not limited to OpenCL, C, C++, Java, and the like. For languages such as C++ and Java, those skilled in the art may adapt the implementation to the specific high-level language used, which may differ from the examples in the embodiments of the present application, without departing from the scope of the embodiments of the present application.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed via a network or via other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable storage used in transmitting information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some features of the structures or methods may be shown in a particular arrangement and/or order. However, it is to be understood that such specific arrangement and/or ordering may not be required. Rather, in some embodiments, the features may be arranged in a manner and/or order different from that shown in the illustrative figures. In addition, the inclusion of a structural or methodical feature in a particular figure is not meant to imply that such feature is required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the apparatus embodiments of the present application, each unit/module is a logical unit/module. Physically, a logical unit/module may be one physical unit/module, a part of one physical unit/module, or a combination of multiple physical units/modules; the physical implementation of the logical unit/module itself is not the most important, and the combination of functions implemented by these logical units/modules is the key to solving the technical problem addressed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above apparatus embodiments do not introduce units/modules that are not closely related to solving the technical problem addressed by the present application; this does not mean that no other units/modules exist in the above apparatus embodiments.
It is noted that, in the examples and descriptions of this patent, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application.

Claims (11)

1. A photographing method applied to an electronic device including a lens and an image signal processor, the method comprising:
in the shooting process of the electronic equipment, the image signal processor acquires image data acquired by the lens;
the image signal processor acquires a parameter value of an image style parameter corresponding to a scene type to which the image data belongs, wherein the parameter value of the image style parameter of the scene type is determined based on an image stored on the electronic equipment;
the image signal processor processes the image data according to the parameter value of the image style parameter to obtain target image data;
and the electronic equipment takes the target image corresponding to the target image data as a shooting result and displays the shooting result to a user.
2. The photographing method according to claim 1, wherein the image style parameter includes at least one of:
black level compensation parameters, dead pixel correction parameters, automatic white balance parameters, lens correction parameters, noise reduction parameters, edge enhancement parameters, brightness adjustment parameters, contrast adjustment parameters, chromaticity adjustment parameters, color correction parameters, and gamma correction parameters.
3. The photographing method according to claim 1, wherein the image signal processor acquires an image style parameter corresponding to a scene type to which the image data belongs, including:
the image signal processor extracts first characteristic information of the image data and judges the scene type of the image data according to the first characteristic information.
4. The photographing method according to claim 1, wherein the electronic device further includes a neural network processor, and the parameter value of the image style parameter is predetermined by:
the neural network processor acquires a plurality of images stored by the electronic equipment;
the neural network processor determines the scene type of each image in the multiple images;
and the neural network processor determines the parameter values of the image style parameters of the plurality of images, and determines the parameter values of the image style parameters corresponding to each scene type according to the parameter values of the image style parameters of each image under each scene type.
5. The shooting method according to claim 4, wherein the neural network processor determining the scene type to which each image of the plurality of images belongs comprises:
and the neural network processor extracts second characteristic information of each image in the plurality of images through a machine learning algorithm and determines a first corresponding relation between the second characteristic information and each scene type.
6. The photographing method according to claim 5, wherein the machine learning algorithm includes a scene classification learning algorithm or a scene clustering algorithm.
7. The shooting method according to claim 4, wherein parameter values of a plurality of preset image style parameters are stored in the electronic device;
the neural network processor determining parameter values for image style parameters of the plurality of images comprises:
and the neural network processor extracts third characteristic information of each image in the plurality of images through a machine classification learning algorithm, and determines a second corresponding relation between the third characteristic information and parameter values of the plurality of preset image style parameters.
8. The shooting method according to claim 7, wherein the determining, by the neural network processor, the parameter value of the image style parameter corresponding to each scene type according to the parameter value of the image style parameter of each image in each scene type includes:
and the neural network processor determines that the parameter value of the preset image style parameter corresponding to the maximum third feature information in each scene type in the second corresponding relationship is the parameter value of the image style parameter corresponding to the scene type.
9. An electronic device, comprising:
a memory for storing instructions for execution by one or more processors of the electronic device, an
A processor, which is one of processors of an electronic device, for controlling execution of the photographing method of any one of claims 1 to 8.
10. A computer-readable storage medium having stored thereon instructions that, when executed on a computer, cause the computer to execute the photographing method of any one of claims 1 to 8.
11. A computer program product, characterized in that it comprises instructions which, when executed, cause a computer to carry out the shooting method according to any one of claims 1 to 8.
CN202210104514.6A 2022-01-28 2022-01-28 Shooting method, electronic device and readable storage medium Active CN114422682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210104514.6A CN114422682B (en) 2022-01-28 2022-01-28 Shooting method, electronic device and readable storage medium

Publications (2)

Publication Number Publication Date
CN114422682A true CN114422682A (en) 2022-04-29
CN114422682B CN114422682B (en) 2024-02-02

Family

ID=81278364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210104514.6A Active CN114422682B (en) 2022-01-28 2022-01-28 Shooting method, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN114422682B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080088710A1 (en) * 2006-10-16 2008-04-17 Casio Computer Co., Ltd. Imaging apparatus, continuous imaging method, and recording medium for recording a program
CN105450923A (en) * 2014-09-25 2016-03-30 索尼公司 Image processing method, image processing device and electronic device
CN105323459A (en) * 2015-05-25 2016-02-10 维沃移动通信有限公司 Image processing method and mobile terminal
CN105812553A (en) * 2015-09-24 2016-07-27 维沃移动通信有限公司 Rapid shooting method and mobile terminal
CN105979238A (en) * 2016-07-05 2016-09-28 深圳市德赛微电子技术有限公司 Method for controlling global imaging consistency of multiple cameras
WO2018174648A1 (en) * 2017-03-23 2018-09-27 삼성전자 주식회사 Electronic device, and method for processing image according to camera photographing environment and scene by using same
CN107995415A (en) * 2017-11-09 2018-05-04 深圳市金立通信设备有限公司 A kind of image processing method, terminal and computer-readable medium
CN107948529A (en) * 2017-12-28 2018-04-20 北京麒麟合盛网络技术有限公司 Image processing method and device
CN112840376A (en) * 2018-10-15 2021-05-25 华为技术有限公司 Image processing method, device and equipment
WO2020238775A1 (en) * 2019-05-28 2020-12-03 华为技术有限公司 Scene recognition method, scene recognition device, and electronic apparatus
WO2021185374A1 (en) * 2020-03-20 2021-09-23 华为技术有限公司 Image capturing method and electronic device
CN113518210A (en) * 2020-04-10 2021-10-19 华为技术有限公司 Method and device for automatic white balance of image
CN113905182A (en) * 2020-06-22 2022-01-07 华为技术有限公司 Shooting method and equipment
CN111901520A (en) * 2020-06-26 2020-11-06 深圳蚂里奥技术有限公司 Scene self-adaptive system, method and terminal based on image processing
CN113727025A (en) * 2021-08-31 2021-11-30 荣耀终端有限公司 Photographing method, photographing device, storage medium and program product

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115334234A (en) * 2022-07-01 2022-11-11 北京讯通安添通讯科技有限公司 Method and device for supplementing image information by taking pictures in dark environment
CN115334234B (en) * 2022-07-01 2024-03-29 北京讯通安添通讯科技有限公司 Method and device for taking photo supplementary image information in dim light environment
CN115439307A (en) * 2022-08-08 2022-12-06 荣耀终端有限公司 Style conversion method, style conversion model generation method, and style conversion system
CN116843583A (en) * 2023-09-01 2023-10-03 荣耀终端有限公司 Image processing method, device, electronic equipment and storage medium
CN117119316A (en) * 2023-10-25 2023-11-24 荣耀终端有限公司 Image processing method, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN114422682B (en) 2024-02-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant